Ensuring AI Is Helpful and Not Harmful in Health Care

By Jim Banks

Artificial intelligence will be a game-changer in the health care setting, but its use comes with caveats, and regulators must ensure it maximizes patient benefits and limits potential harm.

The presence of artificial intelligence (AI) is spreading fast through almost every industry, and health care is no exception. Data-based decision-making software is becoming pervasive in all facets of modern life, and the AI-enabled chatbot ChatGPT is having a seismic impact on the public perception of AI.

Algorithms to enable facial recognition or recommend a movie on Netflix are one thing, but an AI-enabled system that plays a role in health care decisions, treatment options and, ultimately, patient outcomes is something else entirely. One reason AI adoption in health care lags behind other sectors is the severity of the consequences should an error cause problems with diagnosis, treatment, drug administration, or an operation.

Nevertheless, the potential benefits are substantial. AI can analyze vast quantities of data at speed and recognize patterns in that data that could significantly reduce the time required to make vital clinical decisions. “AI can potentially deliver the most benefit in areas where access to care is limited, such as rural areas, locations where clinicians and other health care providers are in short supply, or in settings where technology can reach further than physical presence, such as disaster recovery situations,” remarks clinical researcher Tania M. Martin-Mercado, M.S., M.P.H. (Figure 1).


Figure 1. Dr. Tania M. Martin-Mercado, M.S., M.P.H., sees both the potential for AI in health care while noting that regulation is key in the effort to do no harm. (Photo courtesy of Dr. Martin-Mercado.)

“Populations that are vulnerable due to lower income levels, lack of insurance, or other socioeconomic inequities would benefit from AI solutions that reduce costs of care and facilitate or are integrated with community-based organizations and other health and human services programs,” she adds.

AI potential in patient care

In an age where data defines every aspect of treatment and care, the applications of AI in health care are almost endless. The potential value of its use cases is beyond question. Consider, for example, the growing market for AI-assisted robotic surgery, in which data from preoperative medical records can help to guide a surgeon’s actions, often leading to vastly reduced hospital stays for patients. In the United States, the Medicare program already allows for diabetic retinopathy screening using AI as part of a new policy that could improve early detection.

Imperial College London is investing heavily in AI-based research projects in health care. In its Department of Computing, a group is using AI to help capture and analyze medical images. The college has also developed BrainWear, a system for assessing the progress of brain tumors.

Dr. Aldo Faisal, who leads Imperial’s UKRI Centre for Doctoral Training in AI for Healthcare, has developed a system that uses bodily behavior to track subtle changes in the brain. Faisal is also developing an AI clinician: a reinforcement learning model to assist with the management of intravenous fluids and vasopressors in patients with sepsis in intensive care.
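Conceptually, a reinforcement learning clinician of this kind maps a discretized patient state to a recommended combination of dose levels and learns which combinations correlate with good outcomes. The sketch below is a toy tabular Q-learning illustration using assumed state and action encodings and synthetic transitions; it is not the published model.

```python
# Toy illustration of a reinforcement-learning "AI clinician" in the spirit
# of the sepsis work described above. The bin counts, reward design, and
# synthetic transition data are assumptions for this sketch.
import itertools
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 50                      # discretized patient states (e.g., clustered vitals/labs)
FLUID_BINS, VASO_BINS = 5, 5       # dose bins for IV fluids and vasopressors
ACTIONS = list(itertools.product(range(FLUID_BINS), range(VASO_BINS)))
GAMMA, ALPHA = 0.99, 0.1           # discount factor and learning rate

Q = np.zeros((N_STATES, len(ACTIONS)))

def synthetic_transition(state, action_idx):
    """Stand-in for a logged ICU record: returns next state, reward, done.
    A real system would replay retrospective (state, action, outcome) data."""
    next_state = int(rng.integers(N_STATES))
    # Sparse terminal reward: +1 for survival, -1 for death, 0 otherwise.
    reward = float(rng.choice([0.0, 1.0, -1.0], p=[0.98, 0.015, 0.005]))
    return next_state, reward, reward != 0.0

# Off-policy Q-learning over synthetic episodes.
for _ in range(5000):
    s = int(rng.integers(N_STATES))
    for _ in range(20):                        # cap episode length
        a = int(rng.integers(len(ACTIONS)))    # behavior policy: random exploration
        s2, r, done = synthetic_transition(s, a)
        target = r if done else r + GAMMA * Q[s2].max()
        Q[s, a] += ALPHA * (target - Q[s, a])
        if done:
            break
        s = s2

# The learned policy only *recommends*; a clinician makes the final call.
state = 7                                      # hypothetical current patient state
fluid_bin, vaso_bin = ACTIONS[int(Q[state].argmax())]
print(f"Recommended dose bins -> fluids: {fluid_bin}, vasopressors: {vaso_bin}")
```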

“In different ways, AI will help us address unmet needs for health care,” Faisal says. “Rich countries and less developed countries have a dearth of people who can deliver care, and AI can meet that demand. It won’t replace people but will augment workflows. It will do a lot of routine stuff technologically and leave the complex stuff to humans.”

The advent of AI use is already showing impacts in many settings. “We are seeing very rapid progression,” Faisal states. “We are leading the world’s first trial for a semi-autonomous treatment system in the ER. The system makes recommendations on what to prescribe and interacts with doctors.” Using this technology, “we have shown that we can reduce fatalities by many thousands per year,” he adds.

Faisal is describing digital therapeutics, which relies on a relationship between the AI and the doctors who ultimately decide on treatment. But this is where some tricky ethical questions arise. When an algorithm is part of the decision-making process, will doctors defer to it or ignore it, and where does liability lie if something goes wrong?

“Human beings are the largest legal and ethical challenge that must be raised when designing and deploying AI,” says Martin-Mercado. “Our biases can be automated just as easily as an intake form can be automated. Also, an act can be legal while being entirely unethical. When AI is used as a replacement for authenticity, it becomes problematic.”

Challenge for rulemakers

Protecting data privacy, minimizing the risk of bias, and reducing the use of surveillance are the primary goals of AI regulators, but global regulation on AI currently lacks a unified vision. Approaches vary country by country. Regulation is driven by the tech industry in the U.S., by the government in China, and by privacy issues and regulatory bodies in the European Union (EU).

Implicit bias is a key focus, and diverse teams are needed when creating AI health care solutions to ensure that the programs don’t carry the same biases as society and, therefore, exacerbate existing social problems.

“Lack of policy regarding AI can lead to racially motivated harm, along with other consequences of bias that result from automated tools,” notes Martin-Mercado. “Implicit bias is an enormous challenge and we have seen this bias manifest itself in clinical care tools that exist today, such as the estimated glomerular filtration rate (eGFR) and vaginal birth after cesarean (VBAC) calculators that are still in use.”

“These tools and many others like them improperly and inaccurately ‘correct’ for race, which is a social construct, not a biological one,” she adds. “Problems like this arise from outdated, stereotypical, and disproven ideas around race and sex. Although we now know better, many of these outdated tools are still in use.”
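The eGFR case makes the mechanism concrete. The 2009 CKD-EPI creatinine equation multiplies its result by a fixed coefficient when a patient is recorded as Black, while the 2021 refit removed the race term entirely. The sketch below encodes both published equations side by side; it is illustrative only, not for clinical use.

```python
# How a race "correction" gets baked into a clinical formula. Coefficients
# follow the published CKD-EPI creatinine equations (2009 and the race-free
# 2021 refit); this sketch is for illustration, not clinical decision-making.

def egfr_ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """2009 CKD-EPI: applies a 1.159 multiplier based solely on recorded race."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159   # the race coefficient the quote above criticizes
    return egfr

def egfr_ckd_epi_2021(scr_mg_dl: float, age: int, female: bool) -> float:
    """2021 CKD-EPI refit: same inputs, no race term."""
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    egfr = (142
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.200
            * 0.9938 ** age)
    if female:
        egfr *= 1.012
    return egfr

# Same patient, same labs: the 2009 formula reports roughly 16% higher kidney
# function if the patient is recorded as Black, which can delay referral.
print(egfr_ckd_epi_2009(1.2, 55, female=False, black=True))   # ~78
print(egfr_ckd_epi_2009(1.2, 55, female=False, black=False))  # ~68
print(egfr_ckd_epi_2021(1.2, 55, female=False))               # ~71
```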

A unified global vision for AI regulation might improve safety, privacy, transparency, and accuracy, according to analysts at research organization GlobalData, but divergent national approaches are likely to prevent one from emerging. In any case, big tech companies have the power to shape AI’s deployment and will likely lobby for a friendly regulatory environment.

“Regulation wants to get ahead of AI rather than playing catch-up, and the delay between regulation and changes in technology is a moving target that is hard to hit, but there has been a really good effort to future-proof regulation that does not inhibit beneficial development and limits harm,” says Brian Scarpelli, executive director of the Connected Health Initiative (CHI), which works to clarify outdated health regulations.

In the U.K., the Regulatory Horizons Council has worked diligently to examine the regulation of AI as a medical device, while the National Health Service (NHS) has forged ahead with pilot programs for AI use. The EU is creating what it hopes will be the world’s first broad standards for regulating or banning certain uses of AI in 2023.

The Office for Civil Rights (OCR), part of the U.S. Department of Health and Human Services (HHS), has a nondiscrimination proposal that addresses implicit bias in AI, citing a recent study of patients’ electronic health records (EHRs) that found Black patients had disproportionately higher odds of being described with one or more negative descriptors in the history and notes. “Implicit bias is a moving target, can be a natural instinct, and—I would like to believe—is largely unintentional,” says Martin-Mercado. “It is hard to regulate this. However, the industry can put measures and best practices in place to avoid or de-risk the likelihood of implicit bias.”

Nevertheless, regulating AI remains a complex and sensitive challenge, partly because the technology is advancing so fast, partly because opinions on AI differ wildly. “People are starting to see AI as having agency rather than just analyzing data, as in self-driving cars,” remarks Faisal. “At a recent World Economic Forum crisis meeting on generative AI, some people said we need no regulations—the market will decide, while others said we should treat it like nuclear weapons because it is so dangerous. Thinking about AI in a way equivalent to medical device regulation may be sensible.”

What is the right approach to regulation?

Clearly, there is a balance to be struck between overly restrictive regulation and a laissez-faire approach to AI.

“I can’t think of a tech or modality that has advanced as quickly as AI, but I worry about the calls for broad, sweeping mandates and prohibitions without some demonstration of harm,” says Scarpelli. “You don’t want to make systemic change based on an outlier or a hypothetical. But an anti-regulatory approach to AI in health care cannot be taken seriously.”

“In other contexts—policing, facial recognition, and more—there are well demonstrated instances of harm and we can’t pretend that those don’t exist,” he adds. “We are pushing on a number of different agencies to find a way to maintain as much tech neutrality as possible but set the right goals around preventing harm. That is easier said than done.”

Ensuring a diverse mix in terms of race, gender, and industry background on the teams making the rules would be one way to limit the risk of implicit bias. Making sure accountability is a part of the process would also be vital. “There should be consequences, preferably legal and financial, to using AI in a manner that intentionally causes harm,” says Martin-Mercado.

Some voices, including researchers at the Brookings Institution, are calling for a new federal agency to oversee advanced AI—an AI Control Council explicitly designed to address the AI control problem—but can such an agency, or existing regulators, look to a core set of values to define the regulatory framework?

Transparency, a recognition of potential bias, strict data privacy measures, a collaborative decision-making process, and the ability to adapt regulation as the technology evolves—these are the technical aspects. In terms of ethics, Scarpelli suggests that AI should: enhance access to health care; empower patients to manage their own health; strengthen the relationship patients have with their health care teams; and reduce administrative and cognitive burdens for both physicians and patients. The key point is that any regulation applies to the humans using AI, not to the AI itself.

“You can’t put ChatGPT in prison, and it doesn’t care if you turn it off,” says Faisal. “Any AI system is infinitely knowledgeable but also infinitely stupid, so AI must be a special team member working with a doctor.”

An AI system can quickly ingest expert decisions from very good doctors over many lifetimes, democratizing access to health care and promoting the best treatment ideas. Humans, however, must have the last word on treatment.