From Annoying to Appreciated


There’s a big difference between driving suggestions that come from a newly licensed, know-it-all teenager and those that come from a professional racecar driver who has spent years honing skills on the course. You want the first to keep quiet; you want the second to speak up.
The same can be said of clinical decision support systems (CDSSs). The idea is to assist clinicians in providing the best medical care in the most efficient manner. The first iterations of these tools included alert boxes that popped up on a clinician’s computer screen while he or she was providing care. These pop-ups were often described as something akin to advice from the teenager: annoying, unwelcome, and anything but helpful. Today’s newer tools are just beginning to edge toward the other end of the spectrum, where they may one day provide information that improves the clinical workflow and overall patient care.

Right and Easy

Jerome Osheroff, M.D., the former chief clinical informatics officer for Thomson Reuters Healthcare and now owner of TMIT Consulting LLC in Naples, Florida. (Photo courtesy of TMIT Consulting, LLC.)

“Clinical decision support is about making the right thing to do the easy thing to do. And it’s about putting into place a process that will systematically guide decisions and actions toward the most effective and appropriate ones,” says leading clinical decision support expert Jerome Osheroff, M.D., the former chief clinical informatics officer for Thomson Reuters Healthcare and now owner of TMIT Consulting LLC in Naples, Florida.
That’s the shorthand version of a definition Osheroff has used while working with the Office of the National Coordinator for Health Information Technology and other federal agencies and leading development of widely used clinical decision support guidebooks [1]. The full definition delineates clinical decision support as “a process for enhancing health-related decisions and actions with pertinent organized clinical knowledge and patient information to improve health and health care delivery.” He remarks, “I just didn’t pull this definition and implementation guidance out of my ear. We’ve been working with literally hundreds of folks for more than a decade synthesizing and disseminating best practices for improving outcomes with clinical decision support.”
Osheroff readily admits that the definition is broad, but it is broad on purpose. Some places around the world lack access to the high-tech equipment that permeates the medical facilities of developed nations, and some even struggle to get basic computers, but that doesn’t mean that they cannot engage in clinical decision support, he says. “The technology provides powerful formats and channels for delivering information, but even in those places around the world where there’s no electricity, you still have to have a process for enhancing health-related decisions and actions.” For instance, Health eVillages is a program that provides clinical decision support tools—through mobile apps, iPods, and additional devices—and other mobile health technologies to remote clinical environments around the world [2].
To steer the evolution of the field, Osheroff also conceived— and the U.S. Agency for Healthcare Research and Quality (AHRQ) and the Centers for Medicare & Medicaid Services adopted—the Clinical Decision Support (CDS) Five Rights. The CDS Five Rights is a framework that calls for clinical decision support to provide “all the right information to all the right people through all the right channels in all the right formats at all the right times to improve specific care processes and outcomes,” he says.
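To make the framework concrete, here is a minimal sketch of how a single CDS intervention might be described in terms of the Five Rights. The class and field names are illustrative assumptions, not part of any standard or of Osheroff’s published guidance; real implementations carry far richer metadata.

```python
from dataclasses import dataclass

@dataclass
class CDSIntervention:
    """Illustrative container for a CDS intervention, organized by the CDS Five Rights."""
    information: str   # the right information (evidence-based guidance)
    recipient: str     # the right person (physician, nurse, pharmacist, patient)
    channel: str       # the right channel (EHR, patient portal, mobile app)
    format: str        # the right format (order set, reference link, reminder)
    timing: str        # the right time in the workflow (order entry, chart review)

# Example: a vaccination reminder expressed against the framework (hypothetical content)
reminder = CDSIntervention(
    information="Patient is over 65 with no pneumococcal vaccination on record",
    recipient="primary care physician",
    channel="EHR",
    format="non-interruptive reminder in the health-maintenance panel",
    timing="while the encounter note is open",
)
```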
With a clearer mission for CDSSs, researchers and companies are working on several key technological capabilities they hope will ensure that new clinician decision support tools are not only useful but are also desired additions to the health care arena.

Too Much Information!

Blackford Middleton, M.D., chair of the AMIA, early-stage entrepreneur, and former professor of biomedical informatics at Vanderbilt University in Nashville, Tennessee. (Photo courtesy of Blackford Middleton.)

The need for clinical decision support is increasing in direct relation to the snowballing medical information overload.
“It was estimated in the early 1980s that there were a million facts that needed to be known to practice general internal medicine, and another million facts if you were a subspecialist in cardiology, nephrology, or some other area. Of course, what is happening now is that the number is increasing dramatically, given such things as the onset of genetic and genomic medicine and the complexity of an aging population with multiple comorbidities,” says Blackford Middleton, M.D., chair of the American Medical Informatics Association (AMIA), early-stage entrepreneur, and former professor of biomedical informatics at Vanderbilt University in Nashville, Tennessee.
He adds, “We can no longer expect the human brain to be able not only to store all the relevant pieces of information that may be applicable but also to process them in a way that reliably leads to the correct decision.”
The need to wade through all that data quickly has already led to an assortment of expert hospital systems, such as tools that automatically analyze electrocardiograms or arterial blood gas, or assist with drug dosing for cancer, neonatal, elderly, or other patients who are challenged to metabolize and eliminate drugs. “In all these settings, clinicians are already using expert systems not to supplant but to support their own interpretations,” Middleton says. Although they are relatively simple examples, he groups these expert systems under the realm of clinical decision support.
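As a flavor of how such a dosing aid works, the sketch below estimates renal function with the published Cockcroft–Gault equation and maps the result to a generic dose-interval suggestion. The threshold table and function names are hypothetical simplifications for illustration, not any hospital’s actual algorithm.

```python
def creatinine_clearance(age_years, weight_kg, serum_creatinine_mg_dl, is_female):
    """Estimate creatinine clearance (mL/min) with the Cockcroft-Gault equation."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if is_female else crcl

def suggest_dose_interval(crcl_ml_min):
    """Hypothetical renal dose-interval table; real systems use drug-specific references."""
    if crcl_ml_min >= 60:
        return "standard interval"
    if crcl_ml_min >= 30:
        return "extend dosing interval (moderate impairment)"
    return "extend interval further and consider dose reduction (severe impairment)"

crcl = creatinine_clearance(age_years=78, weight_kg=62, serum_creatinine_mg_dl=1.6, is_female=True)
print(f"Estimated CrCl: {crcl:.0f} mL/min -> {suggest_dose_interval(crcl)}")
```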

Edwin Lomotan, M.D., chief of clinical informatics for the AHRQ Health IT Division. (Photo courtesy of AHRQ.)

Nonetheless, most people envision pop-up comments when they think of CDSSs, and some of the most noted are those that provide alerts about drug–drug interactions. These alerts fill a widening gap, says Edwin Lomotan, M.D., chief of clinical informatics for the AHRQ’s Health IT Division. With thousands of medicines already available, the rapid-fire introduction of additional drugs, and the constant discovery of new interactions, clinicians simply cannot stay up to date on all things pharmaceutical. “So a CDSS may either be able to access a set of drug information or may set up recommendations and present them for clinicians’ consideration at the time they are prescribing their medicine,” he explains.
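A minimal sketch of the kind of check Lomotan describes might look like the following: a new order is compared against the patient’s current medication list using a small interaction table. The table and function here are illustrative assumptions; production systems query curated, regularly updated drug knowledge bases.

```python
# Hypothetical interaction table for illustration only.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "increased risk of muscle toxicity",
}

def check_new_prescription(new_drug, current_meds):
    """Return alert messages for known interactions between a new order and current medications."""
    alerts = []
    for med in current_meds:
        note = INTERACTIONS.get(frozenset({new_drug.lower(), med.lower()}))
        if note:
            alerts.append(f"{new_drug} + {med}: {note}")
    return alerts

print(check_new_prescription("Aspirin", ["Warfarin", "Metformin"]))
# ['Aspirin + Warfarin: increased bleeding risk']
```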
Clinical decision support tools also fit well with the trend toward personalized medicine. For instance, a CDSS might notify a physician about a potential allergic reaction that an individual patient may have to a certain medication, remind him or her of the need for a blood test for a patient with a particular disease, or recommend a screening for a patient who has a certain genetic susceptibility.

Eta Berner, Ed.D., professor of health informatics and director of the Center for Health Informatics for Patient Safety/Quality at the University of Alabama at Birmingham. (Photo courtesy of Harris Ponder.)

Other often-used clinical decision support tools include those that provide reminders for preventives, tests, and screenings, says Eta Berner, Ed.D., professor of health informatics and director of the Center for Health Informatics for Patient Safety/Quality at the University of Alabama at Birmingham. She prepared a major report on CDSSs for the AHRQ a few years back [3]. Others can potentially save costs by warning of duplicate test ordering or by recommending generic drugs instead of brand-name drugs; still others provide linked references so that a physician can click on a medication to learn about side effects and dosing.
CDSSs can also deliver relevant information that supports an optimal clinical decision, even if the clinician doesn’t necessarily recognize the need for that information, Middleton notes. “The challenge is that we as physicians certainly know what we know, but we don’t know what we don’t know. And what’s observed in clinical studies of physician information needs is that sometimes things are missed, and errors of omission or errors of commission can occur.”
In other words, CDSSs could not only catch errors made by fatigued, overly stressed, or time-crunched clinicians but could also point out alternatives that clinicians hadn’t considered because those options simply didn’t occur to them. “In my 25 years of practice, I always, of course, try to do the best thing, but I also know that careful studies reveal that doctors have unanswered questions about therapy or care or diagnosis in one out of every three patient care visits,” Middleton says. Certainly, it’s possible that the CDSS advice won’t apply in every case, he says, but it’s also quite probable that the advice will indeed improve care.
Although they aren’t designed for that purpose, such systems would be especially advantageous for clinicians who are just starting out in the field, Berner says. “Some of my work and that of other researchers has found that less experienced health care professionals may be more receptive to CDSSs (than those at a senior level), but the systems are generally designed across the board.” The systems assume that their users are experts, so much of what they provide is more like a nudge, she says.
While CDSSs will no doubt be advantageous in the long run, the research hasn’t borne them out quite yet, according to Berner. “CDSSs all have potential benefits, but I say potential because many of them have not been clearly demonstrated, or they’ve been demonstrated in certain settings but not in all settings.” For example, while research has shown that CDSSs can identify and send alerts about inappropriate or dangerous drug orders, it’s difficult to say if other safeguards would have similarly caught the drug order before it was administered. “The nurse might very well notice that a dangerous drug was ordered, for instance, so it would have been stopped without CDSS,” she says.
Likewise, good CDSS advice might also be ignored by a physician. “There are many things that can intervene between that initial decision and what happens to the patient. These all make it challenging to get the research data to show a clear impact on patient outcomes.” She adds, “It does not mean that consistently providing the good advice of a CDSS doesn’t do anything, but it is harder to study and show its impact on patients.”
Clear and uncontested research results to verify health care outcomes are always hard to come by, especially when the targets of study are newer technologies. As the information overload continues to balloon, however, most experts agree that clinical decision support tools will become more common and increasingly necessary in health care.

Getting Beyond Alerts

Although pop-up alerts are the best-known CDSSs, they are hardly an optimal use of clinical decision support, according to Osheroff. He equates an alerts-only CDSS to a guardrail on a highway: while it can keep you from driving off the road, it’s not a good method of steering. “Yet, that’s what people have been doing by defining clinical decision support as a pop-up alert that interrupts workflow and tells doctors that what they’re going to do is wrong,” he says. “That’s been hugely counterproductive.” Instead, clinical decision support tools should be efficient and effective guides, he emphasizes. That gets back to the CDS Five Rights, and getting pertinent and usable information to clinicians so they can provide the best possible care.
Electronic health records (EHRs) will continue to play a big role in CDSSs as a major source of data, including such things as a patient’s pharmaceutical prescriptions, lab test results, and family medical history. The more complete the EHR, the better. With that in mind, researchers are working on natural-language-processing software to extract relevant information from EHRs (see “Say What? Extracting Usable Data from the Written Word”), while other major artificial-intelligence projects, such as IBM’s Watson technologies, are under way to cull insights from EHRs to support medical decisions.
Say What? Extracting Usable Data from the Written Word
You visit your family physician’s office. The nurse sees you first and takes your blood pressure, temperature, and pulse; asks about medications you’re taking; and quickly hits a few keys on the laptop to enter the data into your EHR. That kind of data involves numbers or checkboxes and is simple for software programs to compile, track, and report.
But what about narrative comments? These are the summaries of your medical condition, typically written in a combination of everyday language and technical terminology, that your caregiver uses to portray details such as your symptoms, the progress of your condition, or your response to treatment. Many caregivers, from specialists and radiologists to lab technicians and physical therapists, may add narrative comments to your medical record. As yet, software programs are not good at deciphering narrative comments, according to Timothy Imler, M.D., a research scientist at the Regenstrief Institute Division of Biomedical Informatics in Indianapolis and assistant professor of medicine in the Division of Gastroenterology and Hepatology at Indiana University School of Medicine (see below). He is one of numerous researchers applying an analytical technique called natural language processing, which uses sophisticated software to turn descriptive written words into accessible data.

Timothy Imler is a researcher who is developing a new analytical technique to help turn descriptive written words into accessible data. (Photo courtesy of Timothy Imler.)

IEEE Pulse: What is the need for natural language processing of narrative comments in health care?
Timothy Imler: The majority of clinical encounters are entered into the EHR system as free-text documents, which include things like dictation and typed-in notes. It becomes very challenging to glean the information that’s included in these documents unless you have a qualified human physically reading them, and that doesn’t always take place.
IEEE Pulse: How are you and other researchers approaching natural language processing for use in EHRs?
Imler: Natural language processing has been around since the 1950s, so this is not a new technology. It’s the application of it that has become the bigger issue. And we’ve come at this from a couple of perspectives. One is from the research side: identifying things that would not have been identified in data searches previously.
As you may know, all billing records are based on a coding system called the International Classification of Diseases, and it’s the ninth revision, so they are known as ICD-9 codes. There are about 70,000 or so different codes that refer to different diseases and medicines. Most research that was done in the past simply pulled these codes and assumed that the underlying data were correct. But there’s a lot more nuance to it. For one thing, multiple diseases can fall under one code. In addition, there are certain diseases that have no existing codes.
IEEE Pulse: So some disease information, and even entire diseases, would be missed. How can natural language processing help?
Imler: Natural language processing can extract some of these data so that researchers can get access to information that previously went unidentified. An example is a skin disease called calciphylaxis. This is a rare skin disease that happens in patients on dialysis, and there isn’t a code that extracts it. So what the researchers were able to do was to use natural language processing to find occurrences of this disease in text documents, and through that, were able to build the largest set of patients with this disease that has ever been recorded. That, in turn, enables further research and insights into the disease itself.
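A highly simplified sketch of the idea Imler describes: because no billing code isolates calciphylaxis, a structured-code query cannot surface these patients, but a text search over clinical notes can. The sample notes and the pattern matching here are illustrative stand-ins; the actual research used a full NLP pipeline, not a regular expression.

```python
import re

# Free-text notes; no ICD-9 code isolates calciphylaxis, so a code-based
# query would never find these patients. (Sample notes are invented.)
notes = {
    "patient-001": "Dialysis-dependent ESRD. Painful necrotic skin lesions consistent with calciphylaxis.",
    "patient-002": "Routine follow-up, no skin findings.",
    "patient-003": "Biopsy supports calcific uremic arteriolopathy (calciphylaxis).",
}

# Match either name for the disease (simplified; real pipelines also handle
# negation, misspellings, and document structure).
pattern = re.compile(r"calciphylaxis|calcific uremic arteriolopathy", re.IGNORECASE)

cohort = [pid for pid, text in notes.items() if pattern.search(text)]
print(cohort)  # ['patient-001', 'patient-003']
```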
IEEE Pulse: You also mentioned a second use for natural language processing in health care. What is that?
Imler: That is on the quality-monitoring side, so that would mean making sure that natural language processing is pulling out the information from documents correctly for use in clinical decision support. That’s another big area and where my research is focused.
We looked at a colonoscopy for quality monitoring and were able to pick out something called the adenoma, which is a precancerous lesion found during the colonoscopy. Based on the presence of adenoma, guidelines dictate recommendations, including how long until your next colonoscopy. We looked at the ability for natural language processing to pull out adenomas, quantify those data, and use the guidelines, which is the clinical decision support aspect, to generate a recommendation. We then compared that to a manual review and recommendation by an expert [1]. We found (the combination of) natural language processing and clinical decision support to be very accurate. It basically replaced the need for a human to make the decision. Subsequently, we published a study showing that natural language processing could work across 13 sites throughout the country that utilize different methods for reporting colonoscopy findings [2].
The big thing for natural language processing is that it provides information, but it needs to provide information that’s quantifiable and actionable, and that’s where clinical decision support comes in.
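The sketch below illustrates the pipeline Imler outlines: pull adenoma findings out of a free-text report, count them, and map the count to a surveillance recommendation. The extraction step and the interval thresholds are loose simplifications assumed for illustration; they are not the rule set or NLP system used in the published studies.

```python
import re

def count_adenomas(report_text):
    """Rough stand-in for NLP extraction: count findings that mention an adenoma.
    (The published work uses a full NLP pipeline, not a regular expression.)"""
    findings = re.split(r"[.;\n]", report_text.lower())
    return sum(1 for f in findings if "adenoma" in f and "no adenoma" not in f)

def surveillance_recommendation(adenoma_count):
    """Simplified interval rules, loosely modeled on published surveillance guidance."""
    if adenoma_count == 0:
        return "repeat colonoscopy in 10 years"
    if adenoma_count <= 2:
        return "repeat colonoscopy in 5-10 years"
    return "repeat colonoscopy in 3 years"

report = ("Polyp at 30 cm: tubular adenoma. "
          "Polyp at 45 cm: tubular adenoma. "
          "Rectal polyp: hyperplastic, no adenoma.")
n = count_adenomas(report)
print(n, "->", surveillance_recommendation(n))  # 2 -> repeat colonoscopy in 5-10 years
```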
IEEE Pulse: Where do you see natural language processing being most beneficial?
Imler: One of the use cases for this is in the monitoring of drugs and potential side effects after the drug has come out of clinical trials. Some research has looked into why people discontinue medication. For this, natural language processing would be extremely helpful in digging through text documents to find cases where somebody developed a symptom, such as diarrhea, after he or she went on a particular medication.
In terms of clinical decision support, one of the greatest contributions of natural language processing is that it may suggest something or find something that a human may not have otherwise noted. And that was one of the interesting things that came out of our colonoscopy work. When we actually went back and reviewed where the discrepancies were between the human reviewer and the natural language processing clinical decision support, there were times where the natural language processing with clinical decision support was correct, whereas the human was not. That said, I don’t think natural language processing and clinical decision support are going to replace the need for the clinician. Instead, I look at it as a way to augment the knowledge base of the clinician. It’s more suggestions from the clinical decision support than an actionable response.
IEEE Pulse: What are some of the current limitations with natural language processing?
Imler: At this point, the biggest hurdle for natural language processing is to make sure that it works across different health systems. Part of that is how we say the same thing in different ways. I may use one term for a disease in my health care institution, but someone from another institution uses a different term. For instance, one institution may use calciphylaxis but another may call the disease calcific uremic arteriolopathy. The same is true with acronyms. An ophthalmologist may use an acronym to mean one thing, but a gastroenterologist may use the same acronym to mean something completely different. These are examples of what’s called linguistic variation.
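Handling linguistic variation usually comes down to normalizing local terms to a shared concept. A tiny sketch follows, assuming a hand-built synonym map; real systems typically normalize against standard vocabularies such as UMLS or SNOMED CT.

```python
# Hypothetical synonym map for illustration; real systems map terms to
# standard vocabularies (e.g., UMLS, SNOMED CT) rather than a hand-built dictionary.
CANONICAL_TERMS = {
    "calciphylaxis": "calciphylaxis",
    "calcific uremic arteriolopathy": "calciphylaxis",
}

def normalize(term):
    """Map a locally used term to a canonical concept name, if one is known."""
    return CANONICAL_TERMS.get(term.strip().lower(), term)

print(normalize("Calcific Uremic Arteriolopathy"))  # calciphylaxis
```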
IEEE Pulse: Are there other challenges for natural language processing, perhaps inferring meaning from a health care professional’s written description?
Imler: It can be difficult even for a human being to look at a report and know what the clinician meant by it. If we expect a computer system to be able to do this, then we’re set up to fail.
The other thing is that not everything is cut and dried in medicine. If you look at a pathology report, you might not have a yes or no result, but rather a “potential.” It’s very challenging for natural language processing to pull that nuance out.
IEEE Pulse: What’s next for natural language processing?
Imler: The hardest thing at this point is that we don’t have good information exchange across health care systems. A national health information exchange was first proposed by President George W. Bush in 2005, but we don’t have it yet. If that does become a reality, then natural language processing of health documents becomes a much more prudent aspect of it.
The big goal here is to use natural language processing to extract actionable data from free-text documents, and it really comes down to having a national sharing of data. I’m talking about a national resource where these different algorithms that we use for natural language processing can be utilized. Technologies such as natural language processing will become increasingly important as we go forward. We have an aging population here in the United States, and we also have an aging physician population that is nearing retirement. That means that we’re not only going to have older people who need more medical care, but we’re also going to have fewer providers. We’ll also have rising health care costs. Along with that, the amount of knowledge that we have on each patient has grown exponentially, and that will continue, so we will need to rely on technology to help us provide the best health care possible.
I really do look at the next ten to 15 years for natural language processing and machine learning, which is a similar cousin, to really play a prominent role in helping patients and providers as well as health care systems and payers.

References

  1. T. D. Imler, J. Morea, and T. F. Imperiale, “Clinical decision support with natural language processing facilitates determination of colonoscopy surveillance intervals,” Clin. Gastroenterol. Hepatol., vol. 12, no. 7, pp. 1130–1136, 2013.
  2. T. D. Imler, J. Morea, C. Kahi, J. Cardwell, C. S. Johnson, H. Xu, D. Ahnen, F. Antaki, C. Ashley, G. Baffy, I. Cho, J. Dominitz, J. Hou, M. Korsten, A. Nagar, K. Promrat, D. Robertson, S. Saini, A. Shergill, W. Smalley, and T. F. Imperiale, “Multi-center colonoscopy quality measurement utilizing natural language processing,” Am. J. Gastroenterol., vol. 110, no. 4, pp. 543–552, 2015.

A shared knowledge base is also vital when it comes to best practices for care. Middleton describes the sophisticated dosing algorithms for neonates, the elderly, hepatic-failure, and renal-failure patients that are in place at Brigham and Women’s Hospital in Boston. This knowledge base is embedded in the hospital’s CDSS, which is wonderful for the clinicians there, but it is not universally available, he says.
Those types of one-hospital systems may soon be a thing of the past. Middleton led a 30-organization project, called the Clinical Decision Support Consortium [4], to create a prototype of a “highly evolved, tested, and vetted care-management knowledge base that was made available to each and every doctor across the country.” The result was a proof-of-concept, cloud-based knowledge repository and clinical decision support service, which the consortium unveiled in 2010. This was far from an exhaustive knowledge base, but it did demonstrate that such a project was possible, he says, adding that the work helped invigorate commercial interests, which are now developing cloud-based CDSS services.
Such shared knowledge repositories and web-based CDSSs will also allow for the injection of genetics and genomic information. That vast amount of data and its uses in diagnostics, prevention, and treatment is an area ripe for clinical decision support, but that work is really just beginning, says Berner. A CDSS for pharmacogenomics, for instance, would provide clinicians with a patient’s susceptibility to drugs and allow doctors to prescribe medications on an individual basis. “As we learn more about a patient’s personalized makeup, it will become part of clinical care and part of decision support, too.”
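As a hypothetical illustration of what a pharmacogenomic rule might look like, the sketch below keys a prescribing note to a gene, a metabolizer phenotype, and a drug. The rule text and lookup structure are assumptions made here for illustration; real systems draw on curated guidelines such as those from the Clinical Pharmacogenetics Implementation Consortium (CPIC) and validated laboratory results.

```python
from typing import Optional

# Hypothetical rule table for illustration; real systems use curated
# pharmacogenomic guidelines (e.g., CPIC) rather than a hard-coded dictionary.
PGX_RULES = {
    ("CYP2C19", "poor metabolizer", "clopidogrel"):
        "Reduced activation of clopidogrel expected; consider an alternative antiplatelet agent.",
}

def pgx_advice(gene: str, phenotype: str, drug: str) -> Optional[str]:
    """Look up a genotype-guided prescribing note for the ordered drug, if one exists."""
    return PGX_RULES.get((gene, phenotype.lower(), drug.lower()))

print(pgx_advice("CYP2C19", "Poor Metabolizer", "Clopidogrel"))
```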
The pooling and sharing of relevant data is definitely a big issue and one that will not be solved tomorrow, Osheroff readily acknowledges. From a technology perspective, research has been continuing for many years about how to aggregate and present data, and many more years will be needed. “I think we’ve not even begun to scratch the surface of how to make all these massive amounts of data manageable. That’s going to require a huge amount of attention,” he explains.
The evolution of shared data will also promote the development and harmonization of health IT standards that support quality measurement and clinical decision support “so that everyone is specifying health decision support rules and specifications in the same way,” Lomotan says. “To get there, I think it’s a longer journey rather than a shorter one.”

Going with the Flow

For now, one of the most pressing issues with CDSSs is how to get the tools out there so they are seen as valued aids and not annoying backseat drivers. “I think the biggest limitation or challenge is integrating the CDSS into the clinical workflow well,” Lomotan says. “The last thing clinicians want is an alert that pops up and provides just too much information, is not relevant, or doesn’t actually help you solve the potential problem that it’s alerting you to. A lot of the early systems did that, and users were not happy about it.”
Berner agrees. Once the alerts were perceived as nuisances, clinicians stopped reading them and just clicked through them all so they could get on with their work. “Collectively, in the field, we still are struggling with the best ways to provide decision support so it doesn’t get ignored, which could cause the doctor to miss something serious,” she says.
The doctor is only one of the potential CDSS users, Lomotan notes. “As the field moves forward, we’re likely to see CDSSs at the team level, so there are lots of potential audiences for CDSSs. These include nurses, care coordinators, and quality-improvement specialists, and they may all have different information needs.” That means that workflow integration has to be considered for each.
New iterations of CDSSs will also bring in the patient as part of the team, says Middleton. Much as individuals now have become involved in decisions and planning for their own retirement funds, patients will have a larger stake in their own health care. “As a patient, I will be increasingly accessing my medical record online, perhaps contributing to it with data from wearable sensors and other quantified ‘selfie’ stuff, and using tools that might guide me on exercise, diet, or other things on the consumer side,” he explains.
That’s a good thing, because the scientific literature confirms that shared decision making and patient engagement improve outcomes. As homes become smarter, Osheroff envisions harnessing the applicable data to support decisions by the whole health care team, including the patient. “We’re talking about the patients and their clinicians making shared decisions based on data that are generated at home from smart devices like scales, glucometers, or blood pressure sensors, or, more futuristically, from devices for elderly patients that record when they’re flushing the toilet, when they’re opening the refrigerator, or when they’re turning on the lights,” he says. “I think the whole business of more robust data flow through the unending computerization of all sorts of different things is an important trend.”
Active patient participation with CDSSs is getting closer, Middleton says. Wearable sensors and various medical monitoring apps are readily available, but before they will truly make a mark, their collected data must be verified as true, accurate, representative, unbiased, and straightforward, he notes. “And we still need to determine the best ways for their engagement with the management or the maintenance of the clinical medical record.” Simultaneously, proper channels and formats will have to be developed to deliver the information in an accessible and practical way.
Again, it all comes down to the CDS Five Rights. Lomotan remarks, “The question for clinical decision support is: What information is needed, what is the right time and the right way to provide it, and who should receive it? That has always been the central question for clinical decision support.” He adds, “It’s a challenge, but we’re getting closer to answering it.”

References

  1. J. A. Osheroff, J. M. Teich, D. Levick, L. Saldana, F. T. Velasco, D. F. Sittig, K. M. Rogers, and R. A. Jenders. (2012). Improving Outcomes with Clinical Decision Support: An Implementer’s Guide, 2nd ed. [Online]. Chicago, IL: Healthcare Information and Management Systems Society (HIMSS).
  2. Health eVillages. [Online].
  3. E. S. Berner. (2009, June). Clinical decision support systems: state of the art. AHRQ Publication No. 09-0069-EF. [Online].
  4. Clinical Decision Support Consortium. [Online].