If you’ve been a patient or visited a loved one in the hospital, it is likely that you have experienced one or more of the following scenarios:
- Your child’s infusion-pump alarm keeps going off, but your hospital room door is closed and the nurse can’t hear it. You have to push the nurse call button to let the nurse know that there is (or may be) a problem with the pump. You wonder why the pump status and alarm information isn’t available at the nursing station or on a mobile patient-status communication device.
- Your partner’s pulse oximeter alarms, the nurse comes to the bedside, and you anxiously ask, “What’s wrong?” You’re told, “Don’t worry about that—it happens all the time.” Then you wonder why medical devices have alarms if they are ignored by the staff.
- You arrive in the clinic and are told that you need a full lab test panel and electrocardiogram (ECG). You protest that these tests were performed only two weeks ago by your primary care physician and were sent to the hospital in preparation for your admission. “Sorry,” you are told, “We can’t find your labs in our system.” What isn’t said is that the information was sent to the clinic by fax—not as machine-readable data, and the fax contents cannot be searched in the Electronic Health Record (EHR). Nor can your hospital’s EHR communicate with your primary care physician’s EHR.
- Your friend’s hip-repair surgery went well, but he was found dead in bed by a nurse the next morning. He was receiving morphine by a patient-controlled analgesia (PCA) intravenous pump and routine physiologic monitoring (blood pressure, ECG, and pulse oximetry). The hospital is trying to identify the root causes of the event but doesn’t have enough data. The data logs from some devices were automatically deleted when he was discharged from the room, and other devices—like the infusion pump—have the wrong clock time, making it very difficult to reconstruct the timeline of events. As an engineer, you may wonder why modern technologies can’t make the PCA system more “error resistant.” Could his death have been prevented had data from the physiological monitors been used by a “PCA safety” algorithm to stop his morphine infusion and summon help before the morphine doses accumulated and caused the respiratory arrest that led to his death? (See ASTM F2761, Annex B.)
- You are a physician caring for unstable patients located on several different floors and wards across a large hospital building. You wonder why you don’t have tools that allow you to focus your attention on the patient with the most urgent needs. Why can’t their vital signs, infusion pump settings, order status, and pain scores be displayed on your smartphone? Why are there no physician-customizable apps that provide early notification of problems, enabling modest early interventions that would prevent more serious problems from developing? For example, why can’t you receive a notification when the Remicade infusion has started, and an alarm if the blood pressure begins dropping, indicating a Remicade reaction? Or an alert that a blood transfusion has completed, indicating that it is an appropriate time to reassess the patient’s chest pain?
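The “PCA safety” algorithm imagined in the hip-surgery scenario is, at its core, a monitoring interlock: if pulse oximetry or respiratory rate suggest opioid-induced respiratory depression, pause the infusion and summon help before doses accumulate. A minimal sketch in Python follows; the thresholds, names, and data structures are hypothetical illustrations, not drawn from any clinical standard or product:

```python
from dataclasses import dataclass

# Hypothetical thresholds for illustration only; a real system would use
# clinician-configured, patient-specific limits and validated sensor data.
SPO2_MIN = 90        # percent oxygen saturation
RESP_RATE_MIN = 8    # breaths per minute

@dataclass
class Vitals:
    spo2: float       # pulse oximetry reading, %
    resp_rate: float  # respiratory rate, breaths/min

def pca_interlock(vitals: Vitals) -> dict:
    """Decide whether the PCA pump may continue infusing.

    Returns a decision with its reasons, so the pump can pause the
    morphine infusion and the system can summon help.
    """
    reasons = []
    if vitals.spo2 < SPO2_MIN:
        reasons.append(f"SpO2 {vitals.spo2}% below {SPO2_MIN}%")
    if vitals.resp_rate < RESP_RATE_MIN:
        reasons.append(f"resp rate {vitals.resp_rate}/min below {RESP_RATE_MIN}/min")
    return {"allow_infusion": not reasons, "alarm": bool(reasons), "reasons": reasons}

# Example: early signs of respiratory depression stop the infusion.
decision = pca_interlock(Vitals(spo2=86, resp_rate=6))
print(decision["allow_infusion"], decision["reasons"])
```

A real interlock would also have to handle sensor dropout, motion artifact, and alarm fatigue, which is exactly why it needs devices designed for interoperation rather than reverse-engineered interfaces.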
A Wicked Problem
It is self-evident that these and many other preventable adverse events and clinical workflow inefficiencies could be resolved by integrating data and functions from bedside medical devices and clinical information systems. Many products and one-off research prototypes have done just that. But many more good ideas for improving patient care have not been developed, adopted, or scaled for broader adoption due to the inability to cross the “interoperability chasm” (with apologies to Geoffrey A. Moore, author of Crossing the Chasm, for reinterpreting this term). The medical device and health information technology (HIT) ecosystem is replete with devices and interfaces that don’t act as “good citizens” within an ecosystem that should exist to facilitate innovative solutions to these and numerous other health care technology challenges.
We rely on interoperability in many other domains to obtain data, send information, remotely adjust device configuration (such as a networked printer), future-proof complex networked environments by replacing outdated components, and—perhaps the evolutionary pinnacle of interoperability—crowdsource Web and smartphone apps that can run on standardized platforms.
If interoperability is an essential foundational capability for health care transformation, why can it be described as a “wicked [hard] problem,” that is, “resistant to solutions; incomplete, contradictory, and changing requirements; and complex dependencies?”
A significant contributor to the interoperability chasm is the very definition of interoperability. A commonly referenced IEEE definition is “the ability of two or more systems or components to exchange information and to use the information that has been exchanged.” In addition, there are several variations of this definition from the Interoperable Delivery of European eGovernment Services, the National Alliance for Health Information Technology, and the Healthcare Information and Management Systems Society that are more specific to information and communication technology or HIT. Yet, these definitions are missing a critical ingredient that contributes to the ongoing confusion about how to cross the interoperability chasm: intent.
The definitions talk about the ability to exchange information, and, by that definition, although many devices and HIT systems exchange data, they may not be intended by their manufacturers to interoperate. The data-exchange capability is often accomplished by third-party “integration” software, which is meant to convert a noninteroperable proprietary interface into an “interoperable” one. However, if the manufacturer of a product does not intend for the device’s data to be consumed by apps, algorithms, or other devices, there can be no assurance that the data can be reliably (that is, safely) consumed by those other systems. Reverse engineering an interface and writing a device driver (a not uncommon practice) does not confer interoperability or safety.
Safely assembling medical device and HIT components to create a clinical system requires a systems engineering perspective. For example, what happens if the manufacturer of a weight scale decides that a new software release will change the exported weight units from kilograms to pounds, or will increase the precision to report 160.4 lb instead of 160 lb? If the scale was not designed with the intent that its data will be consumed by other systems to calculate drug dosages, a drug dosage calculator app may misinterpret the data and calculate an overdose of a chemotherapeutic agent. Or, if the electronic data interface of a medical device has not been designed with the intent of sending real-time ventilator data, the ventilator may shut down and reboot when the data are accessed.
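The weight-scale example shows why a data consumer cannot safely assume units or precision. A sketch of the defensive behavior a dosage calculator might adopt, assuming a hypothetical message format with explicit value and unit fields:

```python
KG_PER_LB = 0.45359237  # exact conversion factor, lb to kg

def weight_kg(message: dict) -> float:
    """Convert an exported weight reading to kilograms.

    Refuses to guess: a reading without an explicit, recognized unit is
    rejected, because a silent kg-to-lb change in a device software update
    would otherwise inflate a weight-based chemotherapy dose by ~2.2x.
    """
    value, unit = message.get("value"), message.get("unit")
    if unit == "kg":
        return float(value)
    if unit == "lb":
        return float(value) * KG_PER_LB
    raise ValueError(f"weight reading has no recognized unit: {unit!r}")

def dose_mg(weight_msg: dict, mg_per_kg: float) -> float:
    """Weight-based dose; fails loudly rather than miscalculating."""
    return weight_kg(weight_msg) * mg_per_kg

# A scale that starts exporting pounds after an update is handled safely:
print(round(dose_mg({"value": 160.4, "unit": "lb"}, 2.0), 1))
```

Requiring the producer to label units explicitly, and the consumer to reject unlabeled data, is a small instance of the design-for-consumption intent the article argues for.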
In another example, the ECRI Institute found that “most interfaces didn’t function as desired, especially in the area of alarms. Our biggest concern with many of the monitoring system-ventilator combinations we tested is the central station’s failure to clearly communicate one or more high-priority ventilator alarms. In one pairing, the central station failed to issue any alarm in response to some high-priority ventilator alarms. In several other pairings, alarms were issued, but the central station displayed nondescript warning messages that didn’t accurately convey the risk.” Cases such as these show that stable and reliable information exchange, beyond what the standard definition of interoperability requires, is of paramount importance if we hope to create a smarter, more user-friendly care environment.
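The ECRI finding describes a priority-mapping failure: high-priority ventilator alarms were dropped, or downgraded to nondescript messages, on their way to the central station. The desired pass-through behavior can be sketched as follows; the alarm structure and names are hypothetical, not from any monitoring product:

```python
from enum import Enum

class Priority(Enum):
    HIGH = 3
    MEDIUM = 2
    LOW = 1

def relay_alarm(source_alarm: dict) -> dict:
    """Relay a ventilator alarm to the central station without loss.

    The failure modes ECRI observed were (1) dropped high-priority
    alarms and (2) downgraded, nondescript messages; this pass-through
    preserves both the specific text and the original priority.
    """
    priority = Priority[source_alarm["priority"]]
    return {
        "display_text": source_alarm["text"],  # preserve the specific message
        "priority": priority,                  # never downgrade
        "audible": priority is Priority.HIGH,  # high-priority alarms must sound
    }

msg = relay_alarm({"priority": "HIGH", "text": "VENT DISCONNECT"})
```

The point is not that the mapping is hard to write, but that it must be an intended, verified behavior of both devices rather than an assumption made by integration software.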
Crossing the Interoperability Chasm to Get Connected for Patient Safety
So, what will it take to make the move to the next level of interoperability? Here are a few key suggestions:
- Data interoperability of a medical device must be a capability that was intended by the manufacturer—not an afterthought achieved by building integration software and then hoping that the data will be transferred accurately and that the device will continue to perform as the manufacturer intended. Therefore, we should update the general-purpose definitions of interoperability to clarify the importance of intent in achieving safe interoperability. For example: “interoperability is the ability of two or more systems or components intended by their manufacturers to exchange information.” Given the U.S. Food and Drug Administration’s leadership of a number of medical device interoperability-related activities over the last few years, its safety engineering expertise should be used to guide the community in assuring the delivery of safe interoperability.
- Achieving complete plug-and-play interoperability will take some time, but that doesn’t mean we can’t be making progress toward that important goal. The clinical user community must clarify expectations in the form of clinical scenarios or “good ideas for interoperability” that, if implemented, could improve safety and workflow as well as facilitate innovation. These “good ideas” can serve as design inputs for a system of standards and technology development and help ensure that interoperability solutions are clinically driven.
- Health care delivery organizations (HDOs) are starting to view the interoperability gap for what it really is: a barrier to innovations that could improve patient safety and health care affordability. Integrating noninteroperable devices consumes significant resources that should be used for health care, not integration. HDO expectations and requirements must be clearly articulated to help manufacturers develop products to meet these needs. A vehicle for some of these procurement requirements is the Medical Device Free Interoperability Requirements for the Enterprise (MD FIRE) procurement document, which is freely available online and has been signed by the Department of Veterans Affairs and members from Partners HealthCare, Kaiser Permanente, and Johns Hopkins Medicine. As described in MD FIRE, requiring complete disclosure of the interface specification (whether proprietary or standards-based) can be implemented now without delay.
- Open platforms, especially reference implementations of standards and architectures, are needed. These must be fully and freely available to the community of hospitals, manufacturers, standards developers, computer science and engineering students, app developers, regulators, and everyone else who is eager to work together to mature the health care technology ecosystem to enable the next generation of safe and intelligent medical device and HIT systems. A prototype federally funded open research platform called “OpenICE” is publicly available. OpenICE can support research with legacy devices, interoperable devices, and devices connected as a “Medical Internet of Things” (MIoT). The patient scenarios described at the beginning of the article, and the thousands more that the readers of this article and their families have observed, can be prevented if we work together to enable fully integrated clinical environments designed from the ground up to support “error resistance” and innovation in health care.
- (2011). “The role and future of HIT in an era of health care transformation,” in Conf. Proc. Role and Future of Healthcare Information Technology in an Era of Healthcare Transformation. [Online].
- J. M. Goldman. (2013). Medical device interoperability: A wicked problem. [Online]. See also http://en.wikipedia.org/wiki/wicked_problem
- (2007). Interoperability in health information systems. [Online].
- (2013). Definition of interoperability. [Online].
- (2010, Sept.). “Medication errors: Significance of accurate patient weights,” Pa. Patient Saf. Advis., 7(3), p. 112; Oral public comment submitted to PCAST (2010), p. 244. [Online].
- ECRI Health Devices. (2012, May). Interfacing monitoring systems with ventilators. [Online].
- (2009). FDA medical device interoperability. Public Workshop, Docket No. FDA-2009-N-0664. [Online].