Can Emotion AI Keep You Healthier?

IEEE Pulse
Author(s): Mary Bates

Detecting emotive signals in specific contexts may provide insights into your health, but challenges persist in gathering accurate data

Even for humans, it can be challenging to recognize, interpret, and respond to emotions. Can artificial intelligence (AI) do any better? Technologies often referred to as “emotion AI” detect and analyze facial expressions, voice patterns, muscle activity, and other behavioral and physiological signals associated with emotions.

The field, also known as “affective computing,” dates back to at least 1995, when Rosalind Picard of the Massachusetts Institute of Technology (MIT) published a book of the same name. “I try to be very careful with my language,” says Picard, now head of the Affective Computing Research Group at MIT (Figure 1). “We are not detecting your feelings or your conscious experience. We are detecting signals that might be modulated by the underlying emotion.” 

In recent years, emotion AI has become a burgeoning multibillion-dollar industry, with promising applications in health care. However, concerns remain about the use and misuse of these technologies [1].

Emotions and health

Picard’s group at MIT has leveraged emotion AI in several different health contexts, including autism, migraine, and epilepsy. “We started by looking at the way that physiology is impacted by affect,” she says. “We have been consistent about measuring multiple signals, in a specific context, and stating the limitations within that context.” 


Figure 1. Rosalind Picard, head of the Affective Computing Research Group at MIT, is leveraging emotion AI in several different health contexts, including autism, migraine, and epilepsy. (Photo courtesy of Andy Ryan.)

Working with neurologists at Children’s Hospital Boston, Picard and colleagues discovered that their technology could reliably detect certain kinds of seizures in regions of the brain associated with emotion. Picard cofounded the biotech company Empatica to help bring this technology to patients, and in 2019 the U.S. Food and Drug Administration cleared the company’s seizure-sensing Embrace wristband for people ages six and up. The wristband is designed to detect motion and physiological patterns indicative of tonic-clonic seizures (also known as grand mal seizures) and alert caregivers [2].

Recently, Picard’s group has expanded their focus to include applications in mental health. Partnering with experts at Massachusetts General Hospital, they have developed machine learning algorithms to help diagnose and monitor symptoms among patients with depression. Picard and colleagues are currently running a study where participants wear Empatica E4 wristbands (to collect physiological data, like electrodermal activity) and download phone apps (to collect behavioral data, like texts and app usage). Their goal is to develop algorithms that can collect these data and identify meaningful patterns that may predict when an individual is struggling with depressive symptoms [3].
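
As a toy illustration of how such multimodal data might be fused, the sketch below combines wearable and phone-usage features into a single risk score. All feature names, weights, and thresholds here are invented for illustration; none come from the MIT/MGH study.

```python
import math

# Illustrative sketch only: a toy fusion of wearable and phone-usage
# features into one risk score. Feature names and weights are
# hypothetical, not taken from the MIT/MGH study.

def risk_score(features, weights, bias=0.0):
    """Weighted sum of multimodal features passed through a logistic."""
    z = bias + sum(weights[k] * features[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical daily feature vector for one participant:
day = {
    "eda_peaks_per_hour": 1.2,   # electrodermal activity (wristband)
    "sleep_hours": 5.5,          # from actigraphy
    "texts_sent": 3,             # phone-app behavioral data
    "screen_unlocks": 11,
}

# Hypothetical weights a trained model might assign (signs chosen so
# that less sleep and less social activity raise the score):
w = {
    "eda_peaks_per_hour": 0.4,
    "sleep_hours": -0.5,
    "texts_sent": -0.05,
    "screen_unlocks": -0.01,
}

score = risk_score(day, w, bias=1.0)
print(f"depressive-symptom risk score: {score:.2f}")  # value in (0, 1)
```

In a real system, the weights would be learned from labeled longitudinal data rather than hand-set, and the score would feed an alert or clinician dashboard rather than a print statement.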

The close links between emotions and health also inspire Temitayo Olugbade, an applied machine learning scientist at University College London. She is a researcher on a project focused on chronic pain, in which pain signals from the brain persist beyond the normal period of tissue healing. The condition affects around 25% of adults in the United States [4]. Olugbade says that for patients with chronic pain, movement can be not only challenging but may also evoke negative emotions, such as anxiety and fear, that disrupt engagement in valued activities. To treat chronic pain, patients often undergo physical rehabilitation with physiotherapists, who adapt their behavior and feedback during exercise sessions based on the emotions they perceive the patient to be experiencing during the movements.

However, chronic pain is a long-term condition and health care resources are limited. Olugbade and colleagues see potential for emotion AI to provide a bridge between physiotherapy in a clinical setting and the rest of patients’ lives. “The goal is to have this supportive technology in people’s homes or wherever they do physical activities, like at work or the park or the gym,” she says. 

Olugbade and colleagues are working on systems that can recognize patterns associated with worry or fear about movements based on wearable sensors that capture body movements and muscle activity. When this happens in the clinical setting, therapists can redirect the patient’s attention to their breathing, for example. Olugbade hopes their technology could help people with chronic pain recognize and address when physical activities cause them anxiety outside of the clinic. 
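
To make the idea concrete, here is a minimal, purely hypothetical sketch of the kind of rule such a system might learn: flag exercise windows where muscle activation is high but range of motion is low, a pattern sometimes associated with fear of movement. The feature names and thresholds are invented for illustration and are not taken from Olugbade's project.

```python
# Illustrative sketch only: flagging possible movement-related anxiety
# from wearable signals. Thresholds and feature names are hypothetical,
# not from the UCL chronic-pain project.

def flag_guarded_movement(samples, emg_threshold=0.6, range_threshold=20.0):
    """Flag windows with high muscle activation but reduced range of
    motion, a pattern sometimes associated with fear of movement."""
    flags = []
    for window in samples:
        high_tension = window["mean_emg"] > emg_threshold
        limited_motion = window["trunk_flexion_deg"] < range_threshold
        flags.append(high_tension and limited_motion)
    return flags

windows = [
    {"mean_emg": 0.3, "trunk_flexion_deg": 45.0},  # relaxed, full bend
    {"mean_emg": 0.8, "trunk_flexion_deg": 12.0},  # tense, guarded bend
]
print(flag_guarded_movement(windows))  # [False, True]
```

A deployed system would replace this hand-set rule with a model trained on many patients, but the core idea, pairing muscle activity with movement patterns to infer an affective state, is the same.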

Faulty facial expressions

One concern raised by certain emotion AI technologies is the use of facial movements or expressions as proxies for emotional states. Research suggests that facial expressions and emotions are related, but not perfectly. For instance, people do smile when happy and scowl when angry more than would be expected by chance. Yet how people express emotions varies substantially across individuals, cultures, and situations [5].

Still, a common assumption is that a person’s emotional state can be reliably determined from his or her facial movements alone, says Lisa Feldman Barrett, a psychologist and neuroscientist at Northeastern University (Figure 2). “I think there is a conflation of detecting movement and detecting the meaning of those movements,” she says.

Evidence is accumulating, Barrett says, that facial expressions of emotion are not universal and innate but highly variable and dependent on context. She cites a recent analysis that found people scowled, on average, about 35% of the time when they were angry. “That’s more than chance,” she says. “But it means that 65% of the time people are angry, they are doing something else with their faces. That’s meaningful.” Even more importantly, Barrett adds, people also scowl when they are not angry, for instance when they concentrate hard, hear a bad pun, or have gas.


Figure 2. Lisa Feldman Barrett is a psychologist and neuroscientist at Northeastern University. (Photo courtesy of Lisa Feldman Barrett, Studio Eleven.)

“I’m not against emotion AI in principle,” says Barrett. “I just want people to understand that existing technology is using assumptions that are not supported by the best available empirical evidence.”

Potential risks 

Scientists and engineers may avoid the pitfalls associated with facial expressions by designing emotion AI using multiple, scientifically supported signals of emotion and capturing as much context as they can. Still, there are risks. 

Individuals within and outside the emotion AI community have warned of the potential harms of these technologies, especially when it comes to the dangers of using them to monitor and evaluate people in schools, businesses, and other situations [6]. “There are real concerns about these technologies, and they are already happening in some parts of the world,” says Picard. “My biggest worry is that people whose main goal is to control others may use personal information to diminish people’s autonomy and freedoms and to manipulate them in various ways.”

Even when it is intended to help people, such as in health applications, emotion AI must address issues of racial and gender equity. These technologies are programmed by humans and so are subject to their conscious and unconscious biases. Olugbade points out the importance of designing these systems diligently so that they are appropriate for all potential users, and not just sensitive to the subset of the population used to train the AI.

Bias in emotion AI, whether intentional or not, can have serious repercussions. For instance, research has shown that facial recognition software labels Black faces as having more negative emotions than white faces, regardless of the person’s expression [6]. Developers need to be aware of the potential for bias and work actively to prevent it, says Olugbade. “I think there are opportunities to use emotion recognition technology, but we definitely need to take care in the way we develop them,” she says. “It is important to think about ethical risks and issues, follow well-developed standards based on rigorous research, and incorporate well-developed knowledge from psychological studies.”
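
One concrete way developers can check for this kind of bias is to compare a model's error rates across demographic groups. The sketch below does this on synthetic data; it illustrates the auditing idea only and is not a method from the cited research.

```python
# Illustrative sketch only: auditing an emotion classifier for group
# bias by comparing error rates across demographic groups. The records
# below are synthetic, not from any cited study.

from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Synthetic predictions: the model mislabels "neutral" as "angry"
# more often for group B than for group A.
records = [
    ("A", "neutral", "neutral"), ("A", "neutral", "neutral"),
    ("A", "angry", "angry"),     ("A", "neutral", "angry"),
    ("B", "neutral", "angry"),   ("B", "neutral", "angry"),
    ("B", "angry", "angry"),     ("B", "neutral", "neutral"),
]

rates = error_rate_by_group(records)
print(rates)  # group B's error rate is twice group A's
```

A large gap between groups, as in this toy example, would signal that the training data or labels need rebalancing before the system is deployed.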

The future of emotion AI

To ensure that emotion AI systems detect meaningful signals, interpret them correctly, and are free of bias, context is critical, according to Picard. “We really do need a lot more data, a lot more diversity in the data, and a lot more characterization of the differences that matter,” she says. 

In Barrett’s view, inferring a person’s emotions accurately requires capturing a whole suite of signals, and these ensembles may even be individually unique. “In addition to facial movements and body movements and vocalizations, you would have to measure what’s going on inside their body and maybe inside their brain,” she says. “You’d need to measure the external context, including the social context. And you’d have to do it in a temporally sensitive way to learn an individual’s vocabulary of expressions over time.”

Still, many believe that machine learning will be able to identify meaningful patterns that humans may not see. “AI is very good at figuring out which of one thousand variables, and all the different combinations of them, are reliable and repeatable at forecasting an event based on data that we give it,” says Picard. “We are not so good at this.”

Picard hopes that affective technologies will be used to prevent psychiatric illnesses and other health outcomes by helping clinicians and other experts see early warning signs. “For those who want to know, I think the technology holds an amazing future potential to bring that engineering rigor to objectively identifying patterns that are associated with the changes that happen before you get depressed, before you get the migraine, before you get the seizure, and before you have the meltdown, for a person with autism,” she says. “Then instead of the same one-size-fits-all advice for everybody, someone could look at your data and say for you and people whose patterns match yours, these behaviors are usually associated with more protective, healthy outcomes.” 

“I’m pretty confident that AI is better than people at doing that kind of data crunching. And that makes me hopeful.”

References

  1. A. Hagerty and A. Albert, “AI is increasingly being used to identify emotions-here’s what’s at stake,” The Conversation, Apr. 2021. [Online]. Available: https://theconversation.com/ai-is-increasingly-being-used-to-identify-emotions-heres-whats-at-stake-158809
  2. Seizure Alerting: Remote, Real-Time Alerts for Generalized Tonic-Clonic Seizures. Accessed: Nov. 18, 2022. [Online]. Available: https://www.empatica.com/doctors/seizure-alerts/
  3. A. Gold and D. Gross, “Deploying machine learning to improve mental health,” MIT News, Jan. 2022. [Online]. Available: https://news.mit.edu/2022/deploying-machine-learning-improve-mental-health-rosalind-picard-0126
  4. J. Dahlhamer et al., “Prevalence of chronic pain and high-impact chronic pain among adults—United States, 2016,” Morbidity Mortality Weekly Rep., vol. 67, pp. 1001–1006, Sep. 2018, doi: 10.15585/mmwr.mm6736a2.
  5. L. F. Barrett et al., “Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements,” Psychol. Sci. Public Interest, vol. 20, no. 1, pp. 1–68, Jul. 2019, doi: 10.1177/1529100619832930.
  6. K. Crawford, “Time to regulate AI that interprets human emotion,” Nature, vol. 592, p. 167, Apr. 2021, doi: 10.1038/d41586-021-00868-5.