Consciousness for Artificial Intelligence?

IEEE Pulse
Author(s): Arthur T. Johnson

Can artificial intelligence (AI) systems ever achieve anything close to consciousness? There is presently intense speculation about whether they can [6].

In the F Minus comic for Thursday, 30 November 2023 [2], there is pictured a man sitting behind a computer screen; he says to another man standing close by, “There’s a problem, sir. The A.I. says it doesn’t feel inspired to write or make art anymore, and maybe it should have pursued engineering like its father.” If AI can achieve consciousness, is that what we are in for?

I have written before about ethical considerations related to AI [9]. The outlook for that essay was mostly from the perspective of AI used in machines and devices. But there are different kinds of AI with much different applications, and much different ethical and relational considerations.

A generative AI program, such as ChatGPT, uses huge numbers of data sources and machine learning to provide outputs that are derivations based on those original sources [4], [19]. Generative AI is artificial intelligence capable of generating text, images, or other media: it builds models of the patterns and structures in its training data and then generates new data with similar characteristics. The way that generative AI produces its outputs has sometimes been likened to prediction, much as Google predicts what will be entered while a search phrase is still being typed.
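The autocomplete analogy can be made concrete with a toy sketch. The snippet below is not how ChatGPT works internally; it is a deliberately minimal bigram frequency model (the corpus and function names are illustrative inventions) that "predicts" the next word purely from patterns in its training data, which is the essence of the analogy.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the massive training data a real model uses.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: for each word, how often each successor follows it.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word`, like autocomplete."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat", the most common successor of "the" here
```

Everything such a model emits is recombined from what it has already seen, which is exactly the point at issue in the paragraph above.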

I question the use of the word “prediction.” Undergraduate engineering students learn the essential difference between interpolation and extrapolation. The difference lies in the bounds of the data set used as a basis for predicting a value not already known. If the prediction lies within the limits of the data set, then the result is an interpolation of the data already known, and the prediction usually has a high level of confidence. If, on the other hand, the prediction is intended to be an extension of the data set, then the result is an extrapolation, and the predicted value is more or less questionable. The closer the predicted value is to the limits of the input data set, the more confidence one can have in its validity. I would call the extrapolation a prediction and the interpolation a derivation. True extrapolated predictions would require creative abilities not yet apparent in AI systems.
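The contrast in confidence can be seen numerically. Below is a minimal sketch using NumPy: a polynomial is fitted to samples of a known function on [0, 5], then queried once inside the data bounds (interpolation) and once beyond them (extrapolation). The function, sample points, and query values are arbitrary choices for illustration.

```python
import numpy as np

# Sample a known underlying function on [0, 5]; these play the role of the data set.
x_data = np.linspace(0.0, 5.0, 11)
true_f = np.sin
y_data = true_f(x_data)

# Fit a modest polynomial to the samples (degree 5 is an illustrative choice).
model = np.poly1d(np.polyfit(x_data, y_data, deg=5))

# Interpolation: a query inside the data bounds.
x_in = 2.5
err_in = abs(model(x_in) - true_f(x_in))

# Extrapolation: a query well beyond the data bounds.
x_out = 8.0
err_out = abs(model(x_out) - true_f(x_out))

print(f"interpolation error at x={x_in}: {err_in:.4f}")
print(f"extrapolation error at x={x_out}: {err_out:.4f}")
```

The interpolation error stays small, while the extrapolation error grows by orders of magnitude, mirroring the essay's point that values inside the data bounds are trustworthy derivations while extensions beyond them are questionable.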

But AI is now evolving rapidly, leading to AI with capabilities well beyond producing predictive outputs [4], [18]. Generative AI that presently mimics human intelligence will likely be aimed at exceeding human capabilities in order to help humans solve complex problems that they presently find difficult to fathom. This type of AI is called artificial general intelligence (AGI). When realized, AGI could become an autonomous system that surpasses human capabilities in many important ways.

It is this type of AGI that is the subject of speculation about AI consciousness. Suppose an AGI could be created with capabilities so advanced that the next step would be to incorporate sufficient elements of self-awareness to yield an independent, sentient, autonomous entity. What would its consciousness be like? And how would we be able to detect it? Or, from the perspective of the AGI itself, how would it communicate to us that it has this conscious quality [7]?

One real difficulty in answering this question is that a sufficient definition of the nature of consciousness has not yet been satisfactorily established. How the brain conjures conscious awareness from the electrical activity of billions of individual nerve cells remains one of the great unanswered questions of life. And what this consciousness actually means is not easy to define. Trying to extend whatever is known about consciousness in living beings to an AGI would be difficult, if not unreasonable, at this time.

Consciousness has several distinct dimensions that can be measured, albeit with some imprecision. Three of the most important ones are [12]:

  • wakefulness or physiological arousal;
  • awareness or the ability to have conscious mental experiences, including thoughts, feelings, and perceptions; and
  • sensory organization, or how different perceptions and more abstract concepts become woven together to create a seamless conscious experience.

These three dimensions are necessary to produce the overall state of human consciousness from moment to moment. When wide awake, for instance, we are in a state of high awareness, but as we drift off to sleep, both wakefulness and awareness subside [10]. Consciousness adds context to one’s existence; we become aware of the place we occupy in the world surrounding us.

Chittka and Wilson [3] posited that consciousness is an evolutionary trait that exists in all animals to at least some extent [8]. The implications of this hypothesis are that: 1) consciousness is preprogrammed to exist as an ability shared among many species, similarly to how somatic forms and functions are homologous among various species; 2) consciousness abets survival and reproduction, and so, like other evolutionary traits, is genetically determined with different permutations that make adaptation to different environmental factors a criterion for continued existence; and 3) there may be different qualities of conscious abilities depending on the hierarchical level of the animals in question. That doesn’t answer the essential question about the elemental nature of consciousness, but it does hint that consciousness in AGI might be a lot different from consciousness in animals, including humans.

An essential requirement of consciousness, one that usually slips by those who would characterize it, is sufficient sensory ability to define the distinction between the self and what is on the outside. Without the necessary sensors, awareness of external features is not possible. So, just as humans’ lack of vision in the ultraviolet spectral region prohibits our awareness of the world of ultraviolet activity, any quality external to an individual that cannot or is not sensed in some way just does not exist in the context defined by consciousness.

The issue of consciousness is of particular interest to anesthesiologists, because their function during surgery is to administer just enough anesthetic to suppress a patient’s consciousness without reaching dangerous dosage levels. Proper anesthetic dosages can vary significantly among patients.

Anesthesiologists have their own operational definition of consciousness based on phenomenological observations during anesthesia [13]. Complete functional correlates of consciousness have yet to be precisely identified, but rapidly evolving progress has yielded several hypotheses regarding the generation of consciousness. Experimental observations have enabled anesthesiologists to reversibly modulate different aspects of consciousness for improved patient management during general anesthesia. The desired outcome of this trove of knowledge would be the design of specific monitoring devices and approaches that reliably and reproducibly detect each of the possible states of consciousness during an anesthetic procedure. These include the total absence of mental content (unconsciousness) and internal awareness (sensation of self and internal thoughts), with or without conscious perception of the environment (connected or disconnected consciousness, respectively), or, as previously mentioned, context.

Using transcranial magnetic stimulation (TMS), Gosseries et al. [5] were able to determine some correlates between their measurements and various levels of consciousness in comatose and unresponsive patients. There are thus some abilities, not yet well developed, to distinguish between patients who are conscious but otherwise unable to respond and those who are not conscious at all. These same techniques are not likely to be able to detect consciousness in an inanimate AGI system. So, the question of how to detect consciousness in an AGI system remains unanswered.

Schneider [15] has proposed two new tests for consciousness in an AGI system that could prove satisfactory to a wide range of consciousness theorists holding divergent conceptual positions, rather than relying narrowly on the truth of any particular theory of consciousness. Both tests require physical connection to the AGI system. But some experts doubt that the tests would establish the existence of genuine consciousness in the AGI in question. Nonetheless, the proposed tests constitute progress, as they could perhaps find use in conjunction with other tests for AGI consciousness.

One reason that the question of AI consciousness can be of interest to biomedical engineers is because of the possibility of intimate human application. Biomimetic prostheses for humans can take many forms. There are continuing developments of artificial retinas [14], [17], artificial limbs, reanimation of paralyzed limbs [11], and sound processing to improve hearing. Ambitious projects are underway to interface directly with the central nervous system to ameliorate deafness, blindness, paralysis, epilepsy, and the tremors of Parkinson’s disease [16], [20]. Even the confusion and lost cognitive function of Alzheimer’s disease may be helped by bypassing functionally defective brain regions with microelectronic signal processing units [1].

The possibility of electronic implants replacing brain neural circuits takes us to the remote possibility that AGI implants could bring consciousness to unresponsive humans. The use of such implants could be of great benefit for severely brain-injured patients. However, it would have to be determined if such artificial consciousness would be appropriately attributed to the patient or to the implant. In other words, would such an implanted patient possess the personality of the patient or that of the AI implant?

So, the ethical considerations previously discussed for AI used in machines and devices [9] do not seem to apply to this possibility. Assuming that the attribute of consciousness could someday be added to an AGI device, and that the existence of such a state could be definitely established, the AI at the core of a consciousness implant carries different and expanded ethical considerations compared to AI used in self-driving automobiles.


  1. T. W. Berger et al., “Restoring lost cognitive function,” IEEE Eng. Med. Biol. Mag., vol. 24, no. 5, pp. 30–44, Sep. 2005.
  2. T. Carrillo, “F Minus.” Comic strip. Baltimore Sun, Nov. 30, 2023.
  3. L. Chittka and C. Wilson, “Expanding consciousness,” Amer. Sci., vol. 107, pp. 364–369, Nov./Dec. 2019.
  4. Gartner. (2023). Gartner Experts Answer the Top Generative AI Questions for Your Enterprise. Accessed: Nov. 22, 2023. [Online]. Available:
  5. O. Gosseries et al., “Assessing consciousness in coma and related states using transcranial magnetic stimulation combined with electroencephalography,” Annales Françaises d’Anesthésie et de Réanimation, vol. 33, no. 2, pp. 65–71, Feb. 2014, doi: 10.1016/j.annfar.2013.11.002.
  6. G. Huckins, “Machines like us,” MIT Technol. Rev., vol. 126, no. 6, pp. 30–37, Nov./Dec. 2023.
  7. A. T. Johnson, “What is life, really?” IBE E-LifeNews, Dec. 2011.
  8. A. T. Johnson, “Consciousness in animals,” IEEE Pulse, vol. 11, no. 2, pp. 29–30, Mar. 2020. [Online]. Available:
  9. A. T. Johnson, “Ethics in the era of artificial intelligence,” IEEE Pulse, vol. 11, no. 3, pp. 44–47, May/Jun. 2020. [Online]. Available:
  10. J. Kingsland. (Jan. 21, 2023). The Mystery of Human Consciousness. Accessed: Nov. 22, 2023. [Online]. Available:
  11. G. E. Loeb and R. Davoodi, “The functional reanimation of paralyzed limbs,” IEEE Eng. Med. Biol. Mag., vol. 24, no. 5, pp. 45–51, Sep./Oct. 2005.
  12. L. Melloni et al., “Making the hard problem of consciousness easier,” Science, vol. 372, no. 6545, pp. 911–912, May 2021.
  13. J. Montupil et al., “The nature of consciousness in anaesthesia,” BJA Open, vol. 26, no. 8, Sep. 2023, Art. no. 100224.
  14. D. C. Rodger and Y.-C. Tai, “Microelectronic packaging for retinal prostheses,” IEEE Eng. Med. Biol. Mag., vol. 24, no. 5, pp. 52–57, Sep./Oct. 2005.
  15. S. Schneider, Artificial You. Princeton, NJ, USA: Princeton Univ. Press, 2019.
  16. T. Stieglitz, M. Schuetter, and K. P. Koch, “Implantable biomedical microsystems for neural prostheses,” IEEE Eng. Med. Biol. Mag., vol. 24, no. 5, pp. 58–65, Sep./Oct. 2005.
  17. J. D. Weiland and M. S. Humayun, “A biomimetic retinal stimulating array,” IEEE Eng. Med. Biol. Mag., vol. 24, no. 5, pp. 14–21, Sep. 2005.
  18. “Artificial intelligence,” Wikipedia. (2023). Accessed: Nov. 22, 2023. [Online]. Available:
  19. “Generative artificial intelligence,” Wikipedia. (2023). Accessed: Nov. 22, 2023. [Online]. Available:
  20. K. D. Wise, “Silicon microsystems for neuroscience and neural prostheses,” IEEE Eng. Med. Biol. Mag., vol. 24, no. 5, pp. 22–29, Sep./Oct. 2005.