Ethics in the Era of Artificial Intelligence

Ethics can be fascinating because of the subtle nature of its solutions in ambiguous situations. Articles on ethical issues and college courses on ethics rarely present answers to the questions they pose. That is because ethical responses are highly situational and depend heavily on commonly accepted, but not codified, beliefs and attitudes.

Ethics involves imagining consequences and deciding which of several choices is most acceptable, or least unacceptable. As ethicists may tell us, there are no right or wrong answers to ethical questions; or there may be only wrong choices, with degrees of wrongness among which to choose. Ethical choices are nebulous at best.

One reason that ethical questions do not have definite answers is that ethics reflects the generally accepted societal thinking of the day (so-called “community standards”), and what is accepted at one time may change drastically at another. As an example of how ethical concepts shift with social norms, consider that back in the idealistic 1960s and early 1970s, ZPG was highly regarded. ZPG stood for zero population growth, and couples were supposed to conceive and raise no more than two children. There was an awareness that the world was threatened with overpopulation, and ZPG was seriously considered to be the answer to the problem. Young couples were expected by most of their peers to adhere to it.

There is no such imperative today. ZPG is never mentioned anymore, which is ironic, because there is concern these days that world population could outstrip food production and the resources available to support all the people who will inhabit our planet in a few years. Even amid the present concern about climate change generated by human activities, ZPG goes unmentioned.

More recently, we have seen the denunciation of brave, gallant, and valorous military leaders of the past because they were associated with movements not acceptable today; they served on the wrong sides of wars. We have seen entertainers who were idolized in the past condemned because they recorded or performed songs perfectly acceptable in their time but not now [1]. We have also seen politicians and other public figures held to today’s standards for actions taken at other times, under other limits of acceptability. This tendency toward revisionist thinking makes one wonder what ethical standards will be in the future, and whether individual actions taken now will be found deficient at some future date.

Not all artificial intelligence (AI) is the same. Some AI is programmed, and the actions of the AI system are determined solely by the choices of the programmer. Other AI, notably machine learning or deep learning, allows the system itself to adjust, so that responses to situations never explicitly anticipated emerge from comparisons between trial responses and the “correct” responses in the training data. Deep learning is so powerful that the programmer is often unaware of the method by which the AI system arrives at the correct answer. The problem, however, is that, in situations with an ethical dimension, the “correct” answer may change, and is even likely to change, from one case to another.
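
As a minimal illustration of this distinction (the braking scenario, numbers, and function names below are invented for this sketch, not drawn from any cited system), the first function behaves exactly as its author programmed it to, while the loop that follows adjusts a trial parameter by comparing its outputs against labeled “correct” responses, so the final behavior is learned rather than explicitly chosen:

```python
def programmed_brake(distance_m: float) -> bool:
    """'Programmed' AI: the behavior is exactly the rule its author wrote."""
    return distance_m < 10.0  # the 10-m threshold is the programmer's explicit choice

# Learned AI: labeled examples of (distance in m, 1.0 = should brake, 0.0 = should not).
examples = [(2.0, 1.0), (5.0, 1.0), (12.0, 0.0), (30.0, 0.0)]

threshold = 0.0        # trial parameter; learning adjusts it
learning_rate = 0.05
for _ in range(200):   # repeated passes over the training data
    for distance, correct in examples:
        trial = 1.0 if distance < threshold else 0.0
        error = correct - trial   # comparison of trial vs. "correct" response
        threshold += learning_rate * error * distance  # nudge the parameter

print(programmed_brake(8.0))                          # True: the fixed, authored rule
print(f"learned braking threshold: {threshold:.1f} m")  # emerges from the data
```

In the learned case, the final threshold was never written down by anyone; it emerged from the examples, which is exactly why the programmer may be unable to say how the system reached its answer.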

Deep learning AI systems are programmed to reach their own conclusions from the available data; even the programmers may not know exactly how a final conclusion has been reached. If the AI system is programmed to achieve a certain objective, and that objective is the only criterion for action, then the system may be driven to achieve it without regard for any ethical issue outside it, including the basic tenet of protecting human life. Russell [2] warns against letting AI systems become that sophisticated and single-minded.
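
Russell’s concern can be made concrete with a deliberately toy example (the actions, scores, and flags below are hypothetical): an optimizer whose only criterion is a single objective will select an action that endangers people unless the ethical constraint is written into the criterion itself:

```python
# Hypothetical single-objective optimizer; all values below are invented.
actions = [
    {"name": "route_fast", "task_score": 9.0, "endangers_people": True},
    {"name": "route_safe", "task_score": 7.5, "endangers_people": False},
    {"name": "route_slow", "task_score": 4.0, "endangers_people": False},
]

# The objective as the only criterion for action: safety is simply invisible to it.
best_unconstrained = max(actions, key=lambda a: a["task_score"])
print(best_unconstrained["name"])  # route_fast, despite the danger

# The same optimizer, with the ethical constraint made part of the criterion.
safe_actions = [a for a in actions if not a["endangers_people"]]
best_constrained = max(safe_actions, key=lambda a: a["task_score"])
print(best_constrained["name"])    # route_safe
```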

In view of the gross imprecision of ethical choices, what is going to happen to ethical engineering decision-making in the era of AI? Can we trust AI systems to make the right decision in all situations? Will AI systems make the same choices that ethical humans would make? Can we count on AI programmers to anticipate the many possible situations in which an AI system may be used?

From a different perspective, if AI code could be scrutinized by the public, the choices programmers made for ambiguous situations could well come under severe criticism because of the actions the AI system would take. Thus, much AI code is likely to remain opaque to almost everyone. That could make the situation worse: a system’s actions could not be anticipated until a situation actually occurred, perhaps with unpopular consequences, leaving little chance to correct programming faults before they are exposed in an emergency.

The trolley problem illustrates the dilemma in programming fully autonomous vehicles, one very important application of AI [3]. In this hypothetical example, a trolley speeds toward a group of at least five people tied to the track, who would die immediately upon impact. On an alternative side track, a single person is tied down. Should the vehicle switch to the side track, saving many lives but sacrificing one? And how soon, and under what conditions, should the switch be made? This is but one example of the moral dilemmas a fully autonomous vehicle could face. In this case, the ethical solution must be programmed into the vehicle control system before the situation ever arises. As with many ethical issues, there is no perfect solution. But, unlike a spur-of-the-moment decision made by a human operator, which may be excused no matter which course of action was chosen, the self-driving vehicle’s decision is not made under duress and can be criticized once the programmed course of action is made known.
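
To see why such a choice amounts to code fixed in advance, consider this deliberately oversimplified, hypothetical decision rule (not any manufacturer’s actual logic). Whatever policy is encoded here can be inspected, and criticized, long before an emergency occurs:

```python
# Deliberately oversimplified, hypothetical rule; not any vendor's real logic.
def choose_track(people_on_main: int, people_on_side: int) -> str:
    """Return which track to take; the policy is an ethical choice made in advance."""
    # Policy encoded here: minimize the number of lives lost.
    # A different team might instead encode "never actively divert."
    if people_on_side < people_on_main:
        return "side"
    return "main"

print(choose_track(people_on_main=5, people_on_side=1))  # "side"
```

Either policy would be defensible to some and objectionable to others; the point is that the decision sits in plainly written code long before any trolley is in motion.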

Not only are societal norms subject to change with time, but community standards also differ from place to place. There are certainly variations in the opportunities given to members of different races, ages, genders, religions, or positions of power in different parts of the world; there are differences even among regions of the USA. These differences can certainly affect ethical decisions. The answer to the trolley problem, for instance, may depend on personal attributes of the people tied to the two tracks: some people may be considered more valuable than others.

An article by Adib [4] in IEEE Spectrum described a system by which people standing behind an opaque wall could be detected with radio waves. The system could even detect the heart rate, breathing movements, and emotional state of a person hidden behind the wall. It is not a large leap to imagine that further developments could lead to exact identification of the person standing there. Such a scenario would totally eliminate privacy for someone unwilling to identify him- or herself. Would it be ethical to identify a reluctant witness in a trial, a witness to a crime, or anyone else whose personal safety could be at risk, when that person would normally be considered safely hidden behind a solid obstacle?

Industrial products and processes are often governed by technical standards of manufacture. These are usually arrived at by consensus among interested parties, at least in the USA, and serve to standardize products or processes across different producers. Consumers benefit because products made in compliance with standards are compatible and, essentially, interchangeable. AI products and processes are likewise subject to standards development; these standards will incorporate choices on ethical and social issues so that there is consistency of action across all AI products similarly produced. The Chinese government has already started developing such standards [5].

Even the patenting process in various countries is influenced, more or less, by moral and ethical considerations [6]. Limits have been placed on patents at various times based on the morality of an invention’s potential use; in the USA of the early 1800s, if the use of an innovation was perceived as pernicious rather than beneficial, a patent would not be issued (although that consideration seems not to be in effect today). In modern Europe, patents are understood to have ethical, social, and economic implications, and patents are issued with these considerations incorporated. Although such considerations have mostly arisen in patent applications for discoveries involving parts of living beings, especially the identification of specific gene functions, AI patent applications are bound to be examined with the same policies in mind.

Most engineers, scientists, and technologists think of AI as applied to objects and mechanisms. However, Cassell [7] has written about applications of AI to social interaction. She described humans as interactive, rather than autonomous, beings who could benefit from AI-based machines that work in tight interdependence with people. These kinds of AI machines must feel natural to the people with whom they interact. Applications she described included various AI teaching machines capable of building rapport with young students, and AI machines helping children with high-functioning autism cope with interpersonal relations. She concludes by writing: “It is in our power to bring about a future where social interaction is preserved and where social interaction is even enhanced.”

Still, the same AI approaches that are effective at teaching young students valuable lessons can also be used to instill the wrong kinds of messages. Radical groups can use the same technologies to teach extreme views, or to teach suspicion, disrespect, and dishonesty. In these situations, there is a need to ensure that ethical behaviors are, at least, discussed and understood by AI developers.

Deep learning techniques have been used to predict physical and biochemical properties of materials without having to produce them in reality [8]. Such techniques could be used to devise better health care treatments, discover new medicines, and improve preventive health care. At issue in these applications are their effects on data privacy, intellectual property, and even the employment of scientists, medical personnel, and technicians, all matters of ethical concern.

Why are ethical decisions in AI different from those in other situations? AI systems do not act spontaneously; their ethical decisions are likely to be made beforehand, probably from a theoretical outlook, so they are not subject to situational variations that were not considered a priori. Also, AI systems that are widely disseminated and used need to conform to standards agreed upon ahead of time. Once incorporated in AI systems, ethical choices will be difficult to change, so AI systems are unlikely to reflect local or temporal ethical standards. And if AI systems could be modified to conform to different social norms, they might well become incompatible with other, unmodified AI systems. This realm of ethical decision-making is qualitatively different from decisions made in the design or operation of products or processes with more limited applications.

Awareness of ethical choices is important for any engineer, programmer, or other designer who will participate in the AI movement as it gathers momentum into the future. However, anticipating all possible scenarios, and the choices to be made in each, will not be a simple task. Ethics courses and ethical discussions, both in educational institutions and in postgraduate continuing education, are important for making participants aware of the issues, even if not the answers. Nevertheless, AI systems pose particular difficulties because they cannot easily be held accountable for making improper ethical choices under a particular set of circumstances, the way human operators, who are expected to know the best choices to make under those same circumstances, can be. The bottom line is that anyone involved with the design or operation of AI systems will need to be made aware of ethical considerations in the broadest possible sense.

References

  1. C. Thomas, “In defense of Kate Smith,” Baltimore Sun, vol. 182, no. 118, p. A23, Apr. 28, 2019.
  2. S. Russell, “It’s not too soon to be wary of AI,” IEEE Spectr., vol. 56, no. 10, pp. 47–51, Oct. 2019.
  3. D. E. Bailey and I. Erickson, “Selling AI: The case of fully autonomous vehicles,” Issues Sci. Technol., vol. XXXV, no. 3, pp. 57–61, Spring 2019.
  4. F. Adib, “Seeing with radio,” IEEE Spectr., vol. 56, no. 6, pp. 34–39, Jun. 2019.
  5. K. B. Belton, D. B. Audretsch, J. D. Graham, and J. A. Rupp, “Who will set the rules for smart factories,” Issues Sci. Technol., vol. XXXV, no. 3, pp. 70–76, Spring 2019.
  6. S. Parthasarathy, Patent Politics: Life Forms, Markets, and the Public Interest in the United States and Europe, Chicago, IL: Univ. of Chicago Press, 2017.
  7. J. Cassell, “Artificial intelligence for a social world,” Issues Sci. Technol., vol. XXXV, no. 4, pp. 29–36, Summer 2019.
  8. F. Lake, “AI in the life sciences: On the up, or over-hyped?” Biotechniques, vol. 66, no. 4/5, p. 204, Apr./May 2019.

Arthur T. Johnson (artjohns@umd.edu) is a professor emeritus in bioengineering, University of Maryland, College Park, MD, USA.