Artificial Intelligence Aided Ethics in Frontier Research

IEEE Pulse
Authors: Andrés Díaz Lantada and Mette Ebbesen

Emergent technologies are frequently demonized due to fear of the unknown. More often than not, the doubts and alarms are sparked by the technologies' own developers, in a secret wish to become the masters of such fears and thereby increase their control and influence over laymen. The story is as old as the use of fire by the sorcerers guiding the most ancient rituals. Now it seems to be the turn of artificial intelligence (AI), which is continuously being tainted with quasi-apocalyptic shadows, despite its remarkable potential for supporting highly desirable societal transformations.

In brief, AI deals with the design and training of special algorithms for complex problem solving that try to mimic human reasoning and learning [1]. These algorithms are almost always so complex, and involve such huge numbers of neurons, that understanding them is extremely challenging even for their developers. The algorithms may work and provide realistic or plausible predictions, but their limits and potential failures are often unknown, which is commonly referred to as the black-box problem of AI [2]. The problem affects the transparency of AI-based methods.

Important ethical concerns have been detected when applying AI to a wide set of problems. These malfunctions normally arise from wrongly developed algorithms, from inadequate or biased training, or from a lack of systematic validation when a new method is rushed to society in an urge for profit. To prevent them, and to minimize risks while maximizing benefits, we need, among other measures, training of researchers in ethical AI, user-centered development approaches, systematic application of ethical principles, and proper standards and regulations. Accordingly, to change the status quo and thus open up AI, research toward reliable AI and related initiatives pursue transparent, or at least more understandable, white-box algorithms as a more ethical alternative to classical black-box models [3].

In any case, together with some ethically concerning developments, it is also necessary to highlight the extremely important advances that AI is already enabling in a wide set of fields, from product design and materials modeling to autonomous transport and smart robots. Probably the sector that may benefit most rapidly and radically from the rise of AI is health care. Indeed, AI-based processing of medical images for diagnostic purposes, AI-supported monitoring of patients in rehabilitation, AI decision support tools in the clinic, smart medical implants and surgical robots with learning abilities, and personalized medical devices autonomously designed by AIs are bound to transform medical practice [4], [5]. However, to make a real long-term and sustainable impact, several aspects must still be considered more broadly and deeply.

Specifically, the European Parliament published a report in 2022 on “Artificial intelligence in health care. Applications, risks, and ethical and societal impacts” [6], which identifies seven categories of risks and challenges: 1) patient harm due to AI errors, 2) misuse of medical AI tools, 3) risk of bias in medical AI and perpetuation of inequalities, 4) lack of transparency, 5) privacy and security issues, 6) gaps in AI accountability, and 7) obstacles to implementation in real-world health care.

These specific ethical issues of applying AI in health care are related to ethical obligations of respect for privacy and autonomy, beneficence, nonmaleficence, and justice.

Undoubtedly, ethics-guided research will make AI and its applications more reliable, effective, efficient, equitable, and sustainable. At the same time, the relevant benefits that AI is demonstrating cannot be neglected, which makes us perceive the evolution of the field with optimism. Such high expectations inspire us to present and analyze the following dual issue: not only is ethical expertise needed for making the best of AI, but AI can also play a pivotal role in promoting ethical research and supporting researchers in ethical decision making. A possible path (or general strategy) toward such reliable AI-aided decision making is presented below.

Toward a co-created open-source compendium for supporting ethical decisions in research

Most sciences progress "on the shoulders of giants," following a constructive approach based on the understanding generated by our predecessors. Judges also rely on case law (iuris prudentia), the judicial decisions from previous cases, to dictate sentences according to the historically constructed understanding of the law. However, ethical dilemmas are frequently approached ex novo and evaluated by applying moral principles that are not as universal and stable as one would expect. This may lead to extremely varied and sometimes mutually conflicting solutions to these dilemmas, which proves confusing for scientific and technological researchers without deep ethical training who are trying to implement ethics-guided research methods. In this case, AI may be part of the solution, as further explained.

Project-based and problem-based learning methods and the analysis of paradigmatic case studies are progressively being incorporated into ethics education, especially in techno-anthropology, given the relevance of applied approaches [7]. Through case studies and paradigmatic examples, ethical decision making and the resolution of multistakeholder dilemmas become more direct, as the lessons from previous decisions in analogous problems are enlightening.

Arguably, systematically gathering and classifying paradigmatic resolutions of ethical dilemmas, and transforming such a collection into a sort of co-created and interactive "Wikipedia" of ethics, would prove transformative for ethics education, as well as for the training and support of researchers facing challenging ethical decisions in a wide set of fields, especially in frontier research. Worldwide collaboration grounded in widely accepted ethical principles [8] would be decisive for reaching global solutions to the different dilemmas and, hence, for establishing the desired ethical good practices in frontier research. The arranged compendium would be continuously updated, following the example of inspiring technological initiatives and wikis (openly sharing projects and research results) dedicated to materials science, health care product design, and software engineering, such as Materiability, UBORA, Library of Things, Thingiverse, or GitHub, and could synergize with AI methods in unprecedented ways described below.

Implementing an AI-based decision support system for ethical dilemmas

Once the "Wikipedia" of analyzed ethical dilemmas is implemented, ideally following the findability, accessibility, interoperability, and reusability (FAIR) data principles [9], it should be employed for training an AI-based decision support system for researchers. The training of the AI algorithm would require a careful selection of inputs and outputs, an adequate codification, useful keywords for enhanced findability and the search of relationships across fields, and training and validation with a relevant number of ethical dilemmas covering as vast a spectrum as possible, for which collaboration in the construction of the training-source ethical wiki would be essential.
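To make the idea of a FAIR-oriented, codified dilemma entry more concrete, the following is a minimal sketch in Python. Every field name, identifier, and weight is a hypothetical assumption for illustration, not part of any existing system:

```python
from dataclasses import dataclass

# Hypothetical sketch of one FAIR-oriented record of the proposed
# ethical-dilemma wiki; all field names and values are assumptions.
@dataclass
class DilemmaRecord:
    identifier: str      # findable: globally unique, persistent ID
    title: str
    keywords: list       # findable: controlled-vocabulary terms
    conflict: tuple      # the two conflicting basic obligations
    principles: dict     # applicable ethical principles -> relative weight
    resolution: str      # summary of the committee-validated resolution
    license: str = "CC-BY-4.0"  # reusable: explicit usage license

record = DilemmaRecord(
    identifier="ethwiki:0001",
    title="Sharing patient imaging data for AI training",
    keywords=["privacy", "medical imaging", "data sharing"],
    conflict=("respect for privacy", "beneficence"),
    principles={"respect for autonomy": 0.4,
                "beneficence": 0.35,
                "justice": 0.25},
    resolution="Anonymized sharing with opt-out consent, "
               "approved by the ethics committee.",
)

# The weighted selection of principles should form a normalized balance.
assert abs(sum(record.principles.values()) - 1.0) < 1e-9
```

Structured records of this kind would give the training pipeline stable inputs (codified conflicts and keywords) and outputs (weighted principles and validated resolutions).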

Supervision of the whole process would be required, for which a cohort of ethical experts trained in AI methods and acting as a management team should be established, at least until unsupervised learning could be implemented and validated. Inputs would include ethical dilemmas transformed into codified sequences, for which a genome of ethical questions should be developed. Outputs would not give a direct answer to a dilemma, but a weighted selection of applicable ethical principles according to the training. An autonomous search for related, already solved ethical dilemmas in the collection would be another output.

Regarding the codification, ethics may take inspiration from methodologies developed for creative problem solving like Altshuller’s “teoriya resheniya izobretatelskikh zadach (TRIZ)” [10], [11]. In TRIZ, engineering problems are formulated as contradictions between improving and worsening features during the redesign of products and processes. Even if the products and processes are very different, for similar contradictions the same inventive principles usually lead to innovative, effective and efficient solutions, and these inventive principles are universal. The inventive principles connect the improving and worsening features through a matrix, and the whole methodology is directly programmable.
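As a sketch of why the TRIZ matrix is "directly programmable": it is essentially a lookup from (improving, worsening) feature pairs to suggested inventive principles. The principle names below are real TRIZ principles, but the matrix entries are illustrative placeholders, not taken from the actual 39×39 matrix:

```python
# A few of Altshuller's 40 inventive principles, keyed by their numbers.
INVENTIVE_PRINCIPLES = {
    1: "Segmentation",
    2: "Taking out",
    15: "Dynamization",
    35: "Parameter changes",
}

# Illustrative stand-in for the TRIZ contradiction matrix: each
# (improving feature, worsening feature) pair maps to principle numbers.
# These entries are hypothetical examples, not the published matrix.
CONTRADICTION_MATRIX = {
    ("weight", "strength"): [1, 35],
    ("speed", "reliability"): [15, 2],
}

def suggest_principles(improving, worsening):
    """Return the inventive principles suggested for a contradiction."""
    ids = CONTRADICTION_MATRIX.get((improving, worsening), [])
    return [INVENTIVE_PRINCIPLES[i] for i in ids]

print(suggest_principles("weight", "strength"))
# ['Segmentation', 'Parameter changes']
```

The same table-lookup structure is what makes a codified analog for ethical dilemmas conceivable.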

In our view, most ethical dilemmas can also be expressed as contradictions, in other words, as conflicts among basic ethical obligations (for instance, privacy versus security, individual freedom versus public health, freedom of speech versus political correctness, private benefit versus equity, risks versus benefits…), which can be solved by adequately balancing universal or generally accepted ethical principles. Hence, as happens in TRIZ, the particular ethical dilemma would be transformed into a general codified problem, whose processing by the AI would provide general ethical principles, which should then be particularized to the concrete dilemma by specifying and balancing them. The AI would not be a substitute for ethicists but a remarkable working tool for promoting the applicability of ethics in frontier research. Researchers would be guided by the AI, but the final solution to the dilemma should still be validated by ethical committees, whose tasks would be facilitated by the AI management of ethical iuris prudentia.
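A hypothetical ethical analog of the TRIZ matrix could map each codified conflict between basic obligations to a weighted selection of Beauchamp and Childress' principles [8]. The conflicts and weights below are purely illustrative; in the proposed system they would emerge from training on the dilemma wiki and would be supervised by the ethicists' management team:

```python
# Illustrative "ethical contradiction matrix": codified conflicts between
# basic obligations map to weighted ethical principles. All weights here
# are invented for the sketch, not validated values.
ETHICAL_MATRIX = {
    ("privacy", "security"): {
        "respect for autonomy": 0.5, "nonmaleficence": 0.3, "justice": 0.2},
    ("individual freedom", "public health"): {
        "beneficence": 0.45, "respect for autonomy": 0.35, "justice": 0.2},
}

def applicable_principles(obligation_a, obligation_b):
    """Return principles sorted by weight for a codified conflict,
    or None if the conflict is unknown to the system."""
    weights = ETHICAL_MATRIX.get((obligation_a, obligation_b))
    if weights is None:
        # An unknown dilemma would be escalated to the human management team.
        return None
    return sorted(weights.items(), key=lambda kv: kv[1], reverse=True)

print(applicable_principles("privacy", "security")[0])
# ('respect for autonomy', 0.5)
```

Note that, as argued above, the output is a weighted selection of general principles, not a final answer: the specification and balancing for the concrete case remain with ethicists and researchers.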

To approach the succinctly described strategy, an ontology for ethical dilemmas may be needed, which is not a new concept. In fact, for the biomedical field and bioethical decisions, Koepsell et al. [12] proposed a biomedical ethics ontology more than a decade ago, which was praised for its excellent aims for an ideal world, but also criticized for its lack of practical viability and usability by DuBois [13]. Koepsell et al. [12] proposed a wiki-based approach for the implementation of the biomedical ethics ontology, which we believe could be applied to an even larger extent for reaching a global research ethics ontology. Among DuBois' [13] critical arguments is the fact that ethical concepts are social constructs lacking broadly accepted definitions across cultures. However, in the field of research ethics, some ethical principles are already becoming quite universal, as happens with American ethicists Beauchamp and Childress' [8] principles of respect for autonomy, beneficence, non-maleficence, and justice. Indeed, these principles have been found useful for ethics-guided research in the biomedical field, in synthetic biology [14], and in nanotechnology [15], to cite a few related areas, which are still very broad and diverse.

Besides, technological advances have changed the situation since the biomedical ethics ontology and the related discussion were presented. Indeed, new technologies make it feasible not only to implement wikis and co-creation environments, but also to support their expansion with the help of AI resources governed by an adequate management structure of ethicists with aligned principles. Recently, the development of "GenEth," a general ethical dilemma analyzer, demonstrated the feasibility of an automated approach to ethical problem solving [16]. Through a dialog with ethicists, the analyzer codifies ethical principles in any given domain. Based on inductive logic programming, the system reaches interesting results validated through an ethical Turing test, which is outstanding and inspiring.

Ongoing and forthcoming advances in AI may take these kinds of analyzers and algorithms a step further, especially if supervised or even autonomous learning strategies are implemented, enabling continuous updates and improvements through the AI-based processing of an open encyclopedia, thesaurus, or wiki of ethical dilemmas. Advances in natural language processing can not only support the automated formal codification of ethical dilemmas expressed in informal language by researchers and laymen, but also help to implement the global strategy.
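The simplest conceivable bridge from an informal dilemma description to a codified conflict is lexicon matching against obligation-related terms. A real system would rely on modern natural language processing models; the lexicon below is a hypothetical stand-in to illustrate the codification step:

```python
# Hypothetical lexicon linking surface terms to basic ethical obligations.
# A deployed system would replace this with trained NLP models.
LEXICON = {
    "privacy": {"privacy", "confidential", "anonymity", "personal data"},
    "security": {"security", "surveillance", "safety"},
    "public health": {"epidemic", "vaccination", "public health"},
}

def codify(text):
    """Return the set of basic obligations whose terms appear in the text."""
    lowered = text.lower()
    return {obligation
            for obligation, terms in LEXICON.items()
            if any(term in lowered for term in terms)}

question = "May we use surveillance of personal data during an epidemic?"
print(sorted(codify(question)))
# ['privacy', 'public health', 'security']
```

The obligations detected this way could then be paired into codified conflicts and fed to the trained instrument, with human validation of the codification.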

Artificial intelligence aided decision making in frontier research: capabilities and challenges

In our opinion, the proposed scheme, involving: 1) an open-source co-created library, 2) an ethical ontology, 3) a simplified codification of problems, for example through contradictions as in the TRIZ methodology, 4) an AI instrument trained with the wiki, and 5) a community of validators and supervisors, is robust, versatile, and practical, adheres to sound ethical principles, and can lead to AI-supported ethics in frontier research.

As anticipated, in the initial stages the trained system would have just a decision support role, as happens with current AI-based resources for supporting diagnosis and treatment planning in medical practice. The initial version of the AI-trained instrument would autonomously provide an indication of the general ethical principles applicable to concrete problems, but the specification and balancing of such principles are context-dependent. Hence, this cannot yet be done by an AI algorithm and, according to the context of the specific ethical dilemma, ethicists and researchers would still perform the specification and balancing.

In the future, algorithms may reach an even higher degree of autonomy, although, in our opinion, the supervision of the proposed relative weights and priorities among applicable ethical principles would always require the intervention of the community of ethicists and researchers managing the AI system. In addition, toward higher autonomy, incorporating natural language processing skills into the AI-aided analyzer of ethical dilemmas may reinforce its learning abilities. Through them, it may become feasible for the system to interpret new dilemmas and to inspect the web for additional paradigmatic examples, which would be semi-automatically incorporated into the compendium, as human supervision and validation would still be required.

Feasibility is demonstrated by the pioneering examples selected as references, which already provide solutions to important parts of the described scheme and system architecture. Interestingly, AI may find or even predict hidden correlations between dilemmas across frontier and emergent fields of research, as AI is well known for finding nonlinear relationships between problems, which would be of special relevance for frontier research. Challenging though it may seem, through international collaboration and educational action, the successful implementation of an AI-based decision support system for analyzing ethical dilemmas in frontier research is truly at hand.

References

  1. S. Das et al., “Applications of artificial intelligence in machine learning: Review and prospect,” Int. J. Comput. Appl., vol. 115, no. 9, pp. 31–41, Apr. 2015, doi: 10.5120/20182-2402.
  2. N. Savage, “Breaking into the black box of artificial intelligence,” Nature, Mar. 2022, doi: 10.1038/d41586-022-00858-1.
  3. A. Saranya and R. Subhashini, “A systematic review of explainable artificial intelligence models and applications: Recent developments and future trends,” Decis. Anal. J., vol. 7, Jun. 2023, Art. no. 100230, doi: 10.1016/j.dajour.2023.100230.
  4. M. Wagner et al., “Artificial intelligence for decision support in surgical oncology—A systematic review,” Artif. Intell. Surg., vol. 2, no. 3, pp. 159–172, 2022, doi: 10.20517/ais.2022.21.
  5. K. Denecke and C. R. Baudoin, “A review of artificial intelligence and robotics in transformed health ecosystems,” Frontiers Med., vol. 9, Jul. 2022, Art. no. 795957, doi: 10.3389/fmed.2022.795957.
  6. European Parliament, Artificial Intelligence in Health Care: Applications, Risks, and Ethical and Societal Impacts. Luxembourg: Publications Office, Directorate-General for Parliamentary Research Services, 2022. Accessed: Aug. 26, 2023. [Online]. Available: https://op.europa.eu/en/publication-detail/-/publication/958117aa-0c91-11ed-b11c-01aa75ed71a1/language-en
  7. T. Børsen, “Bridging engineering and humanities at techno-anthropology,” in Engineering, Social Sciences, and the Humanities: Have Their Conversations Come of Age? (Philosophy of Engineering and Technology), S. H. Christensen, A. Buch, E. Conlon, C. Didier, C. Mitcham, and M. Murphy, Eds. Cham, Switzerland: Springer, 2022, pp. 151–177. doi: 10.1007/978-3-031-11601-8_8.
  8. T. L. Beauchamp and J. F. Childress, Principles of Biomedical Ethics. New York, NY, USA: Oxford Univ. Press, 2019.
  9. M. D. Wilkinson et al., “The FAIR guiding principles for scientific data management and stewardship,” Sci. Data, vol. 3, no. 1, Mar. 2016, Art. no. 160018, doi: 10.1038/sdata.2016.18.
  10. I. M. Ilevbare, D. Probert, and R. Phaal, “A review of TRIZ, and its benefits and challenges in practice,” Technovation, vol. 33, no. 2, pp. 30–37, Feb. 2013, doi: 10.1016/j.technovation.2012.11.003.
  11. G. Altshuller, The Innovation Algorithm: TRIZ, Systematic Innovation and Technical Creativity. Worcester, MA, USA: Technical Innovation Center, 2021.
  12. D. Koepsell et al., “Creating a controlled vocabulary for the ethics of human research: Towards a biomedical ethics ontology,” J. Empirical Res. Hum. Res. Ethics, vol. 4, no. 1, pp. 43–58, Mar. 2009, doi: 10.1525/jer.2009.4.1.43.
  13. J. M. DuBois, “The biomedical ethics ontology proposal: Excellent aims, questionable methods,” J. Empirical Res. Hum. Res. Ethics, vol. 4, no. 1, pp. 59–62, Mar. 2009, doi: 10.1525/jer.2009.4.1.59.
  14. M. Ebbesen, S. Andersen, and F. S. Pedersen, “A conceptual framework for the ethics of synthetic biology,” Acad. Quart., vol. 12, pp. 203–223, Oct. 2015, doi: 10.5278/ojs.academicquarter.v0i12.2736.
  15. M. Ebbesen, S. Andersen, and F. Besenbacher, “Ethics in nanotechnology: Starting from scratch?” Bull. Sci., Technol. Soc., vol. 26, no. 6, pp. 451–462, Dec. 2006, doi: 10.1177/0270467606295003.
  16. M. Anderson and S. L. Anderson, “GenEth: A general ethical dilemma analyzer,” J. Behav. Robot., vol. 9, no. 1, pp. 337–357, Nov. 2018, doi: 10.1515/pjbr-2018-0024.