The Yin and the Yang of New Technologies

By Arthur T. Johnson

Imagine technology as a living being. Perched on its left shoulder is a miniature angel, representing the good uses and beneficial consequences of the technology. On its right shoulder is a small devil, the bad or unintended consequences of the technological application. Every technology that I know has these two potential results: one that we might call “good,” and the other we might deem “bad,” or, at least, “undesirable.”

Of course, when a technology is conceived and developed, its innovators look almost exclusively at its potentially good uses, and proclaim the need for the new technology to solve pressing societal problems [12]. This attitude is what motivates originators to concentrate on the technology in the first place.

However, once the technology has been loosed on the world, it, like Pandora’s box, exhibits both desirable and undesirable effects. Whether we consider new chemicals, mechanisms, or procedures, we have seen how they do improve life as long as they are used correctly. But, intended or not, other uses of the technology arise, and not all of these solve more problems than they cause.

Yin and yang is a Chinese philosophical concept that describes how obviously opposite or contrary forces may actually be complementary, interconnected, and interdependent in the natural world, and how they may give rise to each other as they interrelate to one another. Yin is the receptive and yang the active principle, seen in all forms of change and difference, such as the opposite ultimate effects of applications of new technologies.

There are myriad examples of the uses and misuses of new technologies. One of the most egregious of these is the misuse of opioid drugs, initially developed as an effective pain-relief mechanism that mimics the body’s own means of dealing with pain. Opioid drugs were originally seen as a panacea for the problem of intractable pain. The same drugs soon became a means for users to deaden their own internal demons, and a rash of overdose deaths has ensued.

Artificial intelligence (AI) holds a lot of promise for overcoming the shortcomings of human abilities. However, AI, especially machine learning, is not always transparent enough to be predictable, and so can produce unintended consequences. For example, one goal of some AI developers is to incorporate artificial emotional responses into computer and robotic systems. Who would want an angry robot? Even a joyful robot might act in ways that are unpredictably harmful to somebody.

Some AI pioneers are attempting to develop systems that determine the emotional states of computer users or of people whose images are caught on camera in different situations [15]. One system under development works indoors, with or without masks and in poor lighting, and works outdoors even when hats or sunglasses are worn. Being able to recognize the inner feelings, motivations, and attitudes of people has commercial advantages. If AI algorithms can reliably interpret emotions and behaviors, then they could have many applications in robotics, health care, vehicle design, and many other fields. However, others argue that such intrusive systems pose a serious threat to privacy that societies may not be prepared to deal with.

An IEEE Spectrum article by Adib [1] described a system that uses radio waves to detect the presence of people standing behind an opaque wall. The system could even detect the heart rate, breathing movements, and emotional state of the hidden person. It is not a large leap to imagine that further developments could lead to exact identification of the person standing there. Such a scenario would totally eliminate privacy for someone unwilling to be identified. Would it be ethical to identify a reluctant witness in a trial, a witness to a crime, or someone else whose personal safety could be at risk, when that person would normally be considered hidden behind a solid obstacle?

A good general discussion of some of the risk scenarios posed by AI systems and capabilities can be found in an article by Bajema [2]. She describes six kinds of unintended consequences that can turn AI from a beneficial technology into one to be feared: loss of identifiable reality, uncontrollable outcomes, assaults on privacy, exploitation of human weaknesses, bias in system designs, and fear of all AI technologies. Certainly, AI systems can have both upsides and downsides that users should be aware of.

Quadriplegia most likely isolates a person from any kind of technology with a mechanical interface. Such devices include computers with their keyboards and mice. Small silicon probes inserted into the brains of some of these patients have given them the ability to move robotic arms to grasp objects just by thinking about moving their limbs [19]. Typing also becomes possible when the probes measure the firing of dozens of neurons in the brain. Sounds great? Yes, the benefits of this technology for the paralyzed people involved are unquestioned. But the same technologies that allow someone to control external mechanisms can also be turned inside out, so that the probes deliver information to the brain instead of drawing it out. Mind reading and mind control, to say nothing of the loss of privacy of one’s thoughts, become possible with this same technology. The same AI algorithms and computer technologies that can offer a helping hand when needed can also run amok, as happened to Mickey Mouse in the Sorcerer’s Apprentice scene in Walt Disney’s movie Fantasia [11]. Mickey conjured up magic that gave him control of inanimate objects to carry water for him while he slept. He awoke later to find that the water carriers did not know when to stop, and the place was awash in a flood. Unless we are able to anticipate and control unintended consequences of our new technologies, we, like Mickey, may rue what we have unleashed.
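
To make the neural-probe idea above more concrete, the following is a deliberately toy Python sketch of the decoding step, assuming the common research approach of a linear decoder that maps per-neuron firing rates to cursor velocity. Every name and number here is a hypothetical stand-in for illustration, not the actual system described in the text.

    # Toy linear decoder: from per-neuron firing rates to 2D cursor motion.
    # (Real decoders are calibrated per patient and are far more sophisticated.)
    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons = 64  # "dozens of neurons" recorded by the implanted probe

    # In practice, decoder weights are fit during a calibration session in which
    # the patient imagines movements; random values stand in for them here.
    W = rng.normal(scale=0.05, size=(2, n_neurons))  # rows: (vx, vy)

    def decode_velocity(firing_rates_hz: np.ndarray) -> np.ndarray:
        """Map one time bin of firing rates (spikes/s) to a cursor velocity."""
        return W @ firing_rates_hz

    position = np.zeros(2)
    for _ in range(100):  # 100 time bins of (simulated) recorded activity
        rates = rng.poisson(lam=10.0, size=n_neurons).astype(float)
        position += 0.02 * decode_velocity(rates)  # integrate velocity
    print("decoded cursor position:", position)

The same mapping run in reverse, with stimulation patterns computed and delivered to the probes rather than signals read out from them, is what makes the "turned inside out" scenario above conceivable.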

Cell phones and other technologies can track the locations of individuals, whether or not those individuals are aware of the capability [13]. The Internet of Things (IoT) can also disclose the usage patterns and habits of people whose appliances are connected to the internet. All of these pose privacy issues: people’s movements can be tracked, and their personal affairs can be made known to third parties who may not have the best intentions in mind.

Technologies enabling social media promised to foster communication and bring people closer together. Social media have also been used by groups to coordinate their activities across physical distances. Those are the good effects. However, it has recently come to light that, too often, comparisons between postings can lead to mental anguish, especially among vulnerable teens.

Frances Haugen became a whistleblower, warning that the social media company Facebook maintained policies harmful to the mental health of teenagers and that its services were also contributing to human trafficking [7], [17]. Facebook, she contended, was aware of the harms to which it was contributing, but was unwilling or unable to counteract them. She claimed that the company was choosing profit over correction. Company Chief Executive Officer Mark Zuckerberg announced that the future of Facebook lay in developing a new virtual universe, which he called a Metaverse (which brings to mind the question: What is a Meta for?), that would fundamentally reshape how humans interact with technology. Haugen believed that the Metaverse would further isolate people from one another, rather than bring them together. Only time will tell.

The role of whistleblower is not an easy one to assume [8]. Whistleblowers must often tell on powerful government agencies or huge corporations. So, many unethical or harmful activities can continue when no one is prepared to take the personal risks required to disclose information that could cause discomfort, or worse, to a powerful body. The example of the Tuskegee study of untreated syphilis in African American men, conducted by the U.S. government, shows that this sort of behavior can go on even when sponsored by a presumably trustworthy source.

Social media are these days extremely influential in molding attitudes and divisions in our society. Shouting “fire” in a crowded theater is not only wrong but also dangerous, and some social media technologies can make the shouting seem much louder. The ridiculous nature of many social media entries was satirized in the cartoon “Speed Bump,” in which several ancient Greeks are sitting around talking, and one says: “I only hope one day there appears a device that allows all citizens to instantaneously share with the world every impulsive, irrational and mean-spirited thought that flits across their minds” [5].

Digital twinning is a technique that sets up in virtual space an exact copy of a real physical structure [4], with the idea that the digital copy can exactly match the conditions and movements in the actual structure. That way, changes to movements and conditions can be tested digitally rather than by moving or modifying actual hardware. Digital twins have been constructed for automobile assembly plants so that improvements in procedures can be tested quickly and relatively easily. Exact movements of workers in the actual plant are reflected in the digital twin, and that is where a privacy issue comes in as a possible downside of this promising technique.
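
As a concrete illustration of the concept, here is a minimal digital-twin sketch in Python. Everything in it, the station, the cycle times, the simple throughput formula, is a hypothetical stand-in rather than any actual plant system: the twin mirrors measurements streamed from a real station and lets a proposed change be tried virtually first.

    # Minimal digital-twin sketch: mirror a physical station's state, then
    # test a proposed change virtually instead of modifying real hardware.
    from dataclasses import dataclass, replace
    from typing import Optional

    @dataclass(frozen=True)
    class StationState:
        cycle_time_s: float  # seconds to complete one assembly step
        queue_length: int    # parts waiting at the station

    class DigitalTwin:
        def __init__(self) -> None:
            self.state = StationState(cycle_time_s=45.0, queue_length=0)

        def sync(self, observed: StationState) -> None:
            # Mirror the latest measurements from the real station.
            self.state = observed

        def simulate_throughput(self, hours: float,
                                cycle_time_s: Optional[float] = None) -> int:
            # Estimate parts produced, optionally under a proposed cycle time.
            trial = replace(self.state,
                            cycle_time_s=cycle_time_s or self.state.cycle_time_s)
            return int(hours * 3600 / trial.cycle_time_s)

    twin = DigitalTwin()
    twin.sync(StationState(cycle_time_s=50.0, queue_length=12))  # sensor feed
    print("as-is 8-h output:   ", twin.simulate_throughput(8))
    print("proposed 8-h output:", twin.simulate_throughput(8, cycle_time_s=42.0))

In a real plant, of course, the synced state can include the exact movements of individual workers, which is precisely where the privacy concern arises.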

Television is a technology that was envisioned to bring visual programming of consequence into people’s homes. And, for a while, it did just that. In its early years, television programming was limited to a few hours each day, carried on fewer than a handful of networks. One benefit of television at this point in its development was that most people with television sets tuned in to the same popular programs, and so contributed to the societal cohesiveness that existed at the time. Although there were complaints about the quality of television programming, there was always an automatic topic of conversation among people as long as nearly everyone had the same viewing experiences.

Soon, though, the addictive nature of television-watching became a societal concern. People sitting in front of their sets and watching drivel were considered a real problem for the public. Of special concern was the effect that watching television for hours at a time had on the development of young children. Television programs at the time were not the types to contribute to the advancement of children’s intellects.

Then along came Sesame Street [23] in 1969, which exploited the addictive nature of television-watching to deliver educational lessons, with the goal of helping to prepare children, especially those from low-income families, for school. Sesame Street continues to demonstrate that the benefits of a technology can be enhanced if efforts are made in this direction.

Of course, all of these technologies have been beneficial in many ways. However, I know of no technology without detrimental consequences, whether economic, societal, environmental, or related to sustainability or some other concern. Perhaps the untoward effects affect only a small portion of users, in which case the overall effects of the technology are vastly positive. Still, ameliorating the ill effects of a technology application should be a goal of developers and innovators.

Genetic mapping has had many useful applications. Knowing what genes are present and their effects on the organism containing them, known as functional genomics, has been helpful for predicting disease susceptibility, producing insulin and other compounds by microbes, and assisting people to learn of their ancestry. The last use has also, unfortunately, led to disclosures that have disrupted family ties in some cases.

The gene-editing technique known as CRISPR is one technology whose full implications are yet to be determined. There are both good and bad possible applications of CRISPR, and some in between, as the Chinese scientist He Jiankui demonstrated when he edited the germ line of twin sister embryos to make them immune to HIV [6]. There could have been some justification for him to do so, but he was far beyond the currently accepted limits of CRISPR application.

One area of new technology that seems to have received a reasonable amount of preintroduction consideration involves genetically-modified organisms. It took years of study and trials before any genetically-modified organism, including improved crops, could be released into the world. Of particular interest at present is the release into the wild of mosquitoes with genetic changes that cause them to become sterile [20]. The hope is that the particular species that harbor the malaria parasite will cease to exist, taking the malaria threat to humans with them. Genetic alterations can produce genes that defy the normal laws of heredity and are inherited by essentially all offspring in future generations. Such genes are called gene drives. If these genes also result in sterility, then the goal of species elimination can be realized. However, there may be more subtle environmental and ecological ramifications of releasing mosquitoes containing the gene drive, and these are being considered thoroughly, because, once the release is made, it cannot be undone.
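
To see why a gene drive “defies” the normal laws of heredity, consider a toy model, sketched in Python with illustrative numbers rather than data from any actual release program. A homing drive converts heterozygotes so that they transmit the drive allele with probability (1 + e)/2 instead of the Mendelian 1/2, where e is the homing efficiency; iterating that rule shows the drive sweeping toward 100% even from a 1% starting frequency, while an ordinary allele stays put.

    # Toy model of super-Mendelian "gene drive" spread (illustrative numbers).
    def next_gen_freq(q: float, homing_efficiency: float) -> float:
        """Drive-allele frequency after one generation of random mating.
        Heterozygotes transmit the drive with probability (1 + e) / 2."""
        e = homing_efficiency
        return q * q + q * (1.0 - q) * (1.0 + e)

    q_mendel = q_drive = 0.01  # drive initially present in 1% of alleles
    for gen in range(1, 13):
        q_mendel = next_gen_freq(q_mendel, 0.0)  # e = 0: ordinary inheritance
        q_drive = next_gen_freq(q_drive, 0.95)   # e = 0.95: efficient drive
        print(f"gen {gen:2d}: Mendelian {q_mendel:7.2%}   drive {q_drive:7.2%}")

The sketch ignores fitness costs and the sterility phenotype itself; its only point is the super-Mendelian spread that makes a release, as noted above, effectively impossible to undo.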

It is clear that the full effects of new technologies can go far beyond what their creators had envisioned for them. The best time to deal with adverse effects of a technology is at the conceptual stage when embodiments are still fluid. Comprehending the full consequences of technology can be a daunting task. As the good uses have expanded, so have the harmful ones. There may also be a conflict that arises between personal and societal issues.

Scientists are often portrayed as spending much time and effort in the laboratory to bring their developments to life and out in the world. But, what they don’t spend much time on is the broader implications of their work [18].

Elkins-Tanton [9] calls it the “hero model” of science and engineering research and invention. The recognized leading scholar in a given area of research is treated with respect and given resources not available to others in his or her group. These “heroes” are the ones given priority for leadership, funding, recognition, and control of research projects. They have a huge influence on the projects that are funded, the knowledge that is created, and how the technology should be adopted and regulated by society. Heroes develop into idols who often cannot see the full ramifications of their projects, and this limits the outlooks and perspectives of technologies developed by groups headed by a hero. Elkins-Tanton [9] goes on to say that, in order to incorporate considerations related to the bigger issues touched by research and development projects, all funded teams should be interdisciplinary in nature, and the questions to be answered by a team should be formulated by the whole team, with perspectives coming from every direction. Stofan [21] agrees, saying that the hero model produces a personality-based environment, discouraging collaboration, enhancing cutthroat competition for resources, and, in the extreme, leading to bullying and harassment.

Worse, for society at large, not enough attention is given to the wide range of possible uses and misuses of a newly-developed technology as long as the hero is the sole person guiding the development.

Research and development efforts in the USA have become decentralized in recent years [10], relying on funding from business, philanthropy, and academic endowments as well as from the more traditional federal, state, and local governments. Decentralization can lead to better ideas and solutions to problems faced by commercial markets. The role of government in this process should be to ensure that the questions being asked of researchers are the ones most important to society. This will require government to solicit input from many sources, including the concerns of the public.

If government oversight is what is needed to anticipate the broader implications of new technology applications, then there are a couple of prospects for this in the USA. The Office of Technology Assessment (OTA) was an office of the United States Congress that operated from 1974 to 1995 [22]. OTA’s purpose was to examine issues involving new or expanding technologies, to assess their impacts, to analyze alternative policies to avert crises, and to provide scientific expertise matching that of the executive branch. OTA was to give congressional members and committees objective and authoritative analysis of the complex scientific and technical issues of the late 20th century. The OTA was defunded and disbanded in a cost-cutting move in 1995. However, it has since been recognized that the explanatory and assessment functions of the OTA were desirable, even necessary, for congressional members to be able to deal with nascent technologies requiring government funding.

The Science, Technology Assessment, and Analytics Team of the U.S. Government Accountability Office has assumed the functions formerly performed by the defunct Office of Technology Assessment. As a member of that team, Wright [24] helped to answer how the government can recruit and retain the people needed to drive innovation on a national scale. The team’s charge is to inform lawmakers about the central issues facing the government concerning new innovations and to suggest what can be done to address them.

The National Science Foundation (NSF) has a program called “Broader Impacts,” meant to encompass the potential to benefit society and contribute to the achievement of specific, desired societal outcomes [16]. Proposals submitted for funding to the NSF are encouraged to include consideration and description of possible benefits of a research project either not directly related to, or not readily apparent from, the topic of research proposed. As NSF expands upon these general criteria, it “values the advancement of scientific knowledge and activities that contribute to the achievement of societally relevant outcomes. Such outcomes include, but are not limited to: full participation of women, persons with disabilities, and underrepresented minorities in science, technology, engineering, and mathematics (STEM); improved STEM education and educator development at any level; increased public scientific literacy and public engagement with science and technology; improved well-being of individuals in society; development of a diverse, globally competitive STEM workforce; increased partnerships between academia, industry, and others; improved national security; increased economic competitiveness of the U.S.; use of science and technology to inform public policy; and enhanced infrastructure for research and education.”

However, both of these government programs, and any others like them, deal specifically with the potentially beneficial outcomes of technological research and development activities, or the yang of technology. There are no programs that I know of that specifically consider the yin, the possibly negative outcomes, of new technology adoption. Matthews et al. [14] proposed principles of a governance structure that should be able to assure fair and beneficial applications of new technologies, although not necessarily to mitigate adverse effects.

Bioethicist R. Alta Charo was asked whether scientists are sufficiently trained to appreciate the ethical implications of the research [and development] they may pursue [3]. She replied that physicists and engineers became aware of the possible military applications of their fundamental research once the atomic bomb was developed and used in World War II. Biologists are beginning to have the same awakening. However, basic research is so far removed from possible applications, both good and bad, that it is hard for investigators to see the final results of their work. Discussions of societal control of their research seem unrealistic to them because applications of their results are so remote.

In some ways, she believes that teaching ethics to researchers, to give them some perspective on the eventual uses derived from their fundamental work, would not be as beneficial as teaching history, showing how the kind of work they are presently conducting in the lab could someday transform a whole society. Everything starts with becoming aware of the potential of what they are doing.

Wylie [25] advocates involving people without science degrees in scientific work. She argues that recognizing skills rather than credentials is a means to incorporate diverse backgrounds and life experiences in teams performing scientific research. Involving such people in scientific (and technology development) work would be a good means to bring more marginalized groups to an appreciation of what science is about and what it can do.

Because many of the unseen effects and consequences of a new technology are not directly related to that technology, and may not even be technological in nature, groups developing new technologies should include a multitude of disciplines at the earliest stages of development. It is not common to include in a development team social scientists, representatives of the public at large, and other nontechnical people to advise about possible future shortcomings of the envisioned technology. Most developers of technologies are not equipped to become seers and all-knowing forecasters. Yet, the inclusion of broadly trained and educated experts, ones who have wide ranges of experience and knowledge of the possible uses and misuses of the technology as conceived, can help to avoid, or at least anticipate, undesirable consequences and to deal with them before the technology has been finalized. That is one justification for the inclusion of general course credits in a college education program. It is also a reason to include respected and experienced members of the public on technology development teams.

Thus, there is a definite need for broadly-educated scientists and engineers who can become parts of teams working toward new technology development. There is also a need, of course, for those with deep expertise in some narrow specialty, depending on the nature of the proposed technology. These are probably, unfortunately for the generalists, the ones who would lead the teams and be given the bulk of the credit for their successes. However, it should be the responsibility of the team leader, the “hero,” not only to include generalists and unconventional representatives as members of the team, but to take full counsel with such members in order to anticipate, and perhaps be able to counteract, the possible ill effects of the applications of the technologies that they develop.

New technologies intended to solve important problems can have many possible benefits for their designated users. However, there are often at least two sides to every new technology. Society might be better off if the ill effects of technology use are anticipated and ameliorated in some way before the technology is introduced.

References

  1. F. Adib, “Seeing with radio,” IEEE Spectr., vol. 56, no. 6, pp. 34–39, Jun. 2019.
  2. N. Bajema, “AI’s real worst-case scenarios,” IEEE Spectr., vol. 59, no. 1, pp. 8–10, Jan. 2022.
  3. R. A. Charo, “Interview,” Issues Sci. Technol., vol. 37, no. 4, pp. 23–29, Summer 2021.
  4. A. R. Chow, “Here and there,” Time, vol. 199, nos. 1–2, pp. 51–53, Jan. 2022.
  5. D. Coverly, “Speed Bump,” Baltimore Sun, Nov. 22, 2021.
  6. M. Crowley-Wang, “Genetic engineering: Dr. He Jiankui’s CRISPR’d twins,” BxBioethics, 2021. Accessed: Jan. 4, 2022. [Online]. Available: https://bxbioethics.com/2021/08/18/genetic-engineering-dr-he-jiankuis-crisprd-twins/
  7. S. Dance, “Research by Facebook, OkCupid raises concern,” Baltimore Sun, vol. 177, no. 269, pp. A1–A19, Sep. 2014.
  8. M. A. Edwards, C. Yang, and S. Roy, “Who dares to speak up?” Amer. Sci., vol. 109, no. 4, pp. 238–242, Jul./Aug. 2021.
  9. L. Elkins-Tanton, “Time to say goodbye to our heroes?” Issues Sci. Technol., vol. 37, no. 4, pp. 34–40, Summer 2021.
  10. M. Flagg and A. Garg, “Science policy from the ground up,” Issues Sci. Technol., vol. 38, no. 1, pp. 51–55, Fall 2021.
  11. W. D. Heaven, “AI is reinventing what computers are,” MIT Technol. Rev., vol. 124, no. 6, pp. 78–79, Nov./Dec. 2021.
  12. A. T. Johnson, “The technology hype cycle,” IEEE Pulse, vol. 6, no. 2, p. 50, Mar./Apr. 2015. [Online]. Available: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7059311
  13. R. Mac and K. Hill, “Popular Apple AirTags used as a high-tech stalking tool?” Baltimore Sun, p. A7, Jan. 4, 2022.
  14. D. J. H. Matthews, R. Fabi, and A. C. Offodile, “Imagining governance for emerging technologies,” Issues Sci. Technol., vol. 38, no. 3, pp. 40–46, Spring 2022.
  15. J. McQuaid, “Spying on your emotions,” Sci. Amer., vol. 325, no. 6, pp. 40–47, Dec. 2021.
  16. National Science Foundation (NSF). (2022). Broader Impacts: Improving Society. Accessed: Jan. 7, 2022. [Online]. Available: https://www.nsf.gov/od/oia/special/broaderimpacts/
  17. B. Perrigo, “Change agent,” Time, vol. 198, nos. 21–22, pp. 38–44, Dec. 2021.
  18. A. Ramirez, “Shining a light on the impacts of our innovations,” Issues Sci. Technol., vol. 37, no. 3, pp. 22–24, Spring 2021.
  19. A. Regalado, “A computer mouse inside your head,” MIT Technol. Rev., vol. 124, no. 6, pp. 28–35, Nov./Dec. 2021.
  20. T. H. Saey, “A weapon against mosquitoes,” Sci. News, vol. 201, no. 10, pp. 20–25, Jun. 2022.
  21. E. R. Stofan, “A new model for research teams,” Issues Sci. Technol., vol. 38, no. 1, p. 10, Fall 2021.
  22. U.S. General Accounting Office. (1977). The Office of Technology Assessment. Accessed: Dec. 27, 2021. [Online]. Available: https://www.gao.gov/products/103962
  23. Wikipedia. (2021). Sesame Street. Accessed: Jan. 5, 2022. [Online]. Available: https://en.wikipedia.org/wiki/Sesame_Street
  24. C. Wright, “Hiring for the future of the science and technology enterprise,” Issues Sci. Technol., vol. 38, no. 1, pp. 17–19, Fall 2021.
  25. C. D. Wylie, “What fossil preparators can teach us about more inclusive science,” Issues Sci. Technol., vol. 38, no. 1, pp. 14–16, Fall 2021.