Sepsis Watch is a rare example of an AI system in healthcare that actually works. The reason, claims an anthropologist who studied it, is that it was integrated into the hospital environment rather than merely deployed there.
Many AI systems are being developed for healthcare, but very few of them actually work. Madeleine Clare Elish, a senior research scientist focused on ethical AI, stresses that for a technology to be successful, it needs to be integrated into the existing systems and procedures rather than just deployed in a specific environment.
Innovation, she said at EmTech Digital, hosted by MIT Technology Review, needs to be “repaired” to fit the specific environment it is designed for and empower people to use the provided solution.
Researchers at the Duke Institute for Health Innovation developed and implemented a deep learning model into clinical care to provide an early warning system within Duke Health for patients at risk of sepsis. Madeleine Clare Elish was there with other anthropologists to observe the implementation of the Sepsis Watch.
How does Sepsis Watch work?
“Sepsis is widespread and deadly. It is a leading cause of death in hospitals in the US and around the world. And while sepsis is treatable, it is notoriously difficult to diagnose consistently and to treat quickly,” Elish explained.
The Sepsis Watch is a clinical decision support system. Its goal is to improve and support the diagnosis and care of sepsis. The results of the clinical trial will be reported in the coming months. However, according to Elish, it has already dramatically improved patient care. In her view, the Sepsis Watch is a great example of an ethical and effective integration of AI technology.
The Sepsis Watch pairs a model that predicts each patient’s risk of developing sepsis with new hospital protocols designed to raise the quality of sepsis treatment.
“When a patient arrives and is admitted to the emergency department, his electronic health record data is run through the Sepsis Watch system. If the module predicts that he is at a high risk of developing sepsis, his patient information is represented by a patient card on a Sepsis Watch iPad application. A nurse responsible for monitoring the iPad on which the Sepsis Watch runs regularly checks the app to review the patient cards. If a patient is predicted to be septic or at a high risk, this nurse calls the treating physician responsible for the patient’s care and conveys the risk category to the infectious disease (ID) physician on the telephone. If the physician agrees that the patient requires treatment for sepsis, the patient is further tracked on the iPad application by the nurse until the recommended treatment for sepsis is completed,” Elish explained.
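The workflow Elish describes can be sketched in code. The sketch below is purely illustrative: the function names, risk threshold, and card fields are assumptions for the sake of the example, not Duke Health’s actual implementation.

```python
# Illustrative sketch of the Sepsis Watch workflow: EHR data is scored by a
# risk model; high-risk patients get a card on the nurse-facing app; the nurse
# calls the physician, and confirmed patients are tracked until treatment ends.
# All names and the threshold are hypothetical assumptions.

HIGH_RISK_THRESHOLD = 0.6  # assumed cutoff for flagging a patient

def screen_patient(ehr_record, risk_model):
    """Run a patient's EHR data through the risk model. Return a patient
    card for the nurse's iPad app if the patient is flagged, else None."""
    score = risk_model(ehr_record)
    if score < HIGH_RISK_THRESHOLD:
        return None
    return {
        "patient_id": ehr_record["patient_id"],
        "risk_score": round(score, 2),
        # Next step in the protocol: nurse phones the treating physician.
        "status": "awaiting physician call",
    }

def physician_decision(card, agrees_sepsis_treatment):
    """Record the physician's call: if they agree the patient needs sepsis
    treatment, the nurse keeps tracking the card until treatment completes."""
    card["status"] = (
        "tracking until treatment completed"
        if agrees_sepsis_treatment
        else "dismissed"
    )
    return card
```

A toy risk model (here, any callable returning a score between 0 and 1) stands in for the deep learning module; the point is how the model’s output is woven into the nurses’ and physicians’ existing workflow, which is exactly the “repair work” Elish highlights below.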
According to her, the deep learning module tends to eclipse the other parts of the solution, leaving the people who make the innovation function overlooked and undervalued.
“This is a problem, a challenge for ethical AI because overlooking the role of humans misses what is actually going on,” she said.
Technology needs to be “repaired”
Elish reckons that it is useful to think about technology-driven innovation as disruptive. When studying the implementation of the Sepsis Watch with her colleagues, Elish saw a certain disruption on the ground.
“When I say disruptions, I do not mean physical ones but rather ones of social norms and expectations, power dynamics, and information flows. These gaps, breakdowns, and miscommunications needed to be addressed for the intervention to function effectively,” she said.
In other words, disruptive innovations always have to be “repaired” to fit the specific context they land in.
“The tools, outputs, and diagnostic practices of the emergency department had to be woven together, and this work of stitching together fell on the rapid response team nurses. (…) Repair work can take many forms, from emotional labor to expert justifications. It is a work required to make a technology effective in a specific context and to weave that technology into existing work practices, power dynamics, and cultural contexts,” Elish explained.
AI in theory alone doesn’t get us very far, she reckons; a human-centric approach to AI innovation is crucial to making the technology function properly.
“We need to be thinking about how AI will be integrated into social processes to make it actually work. Now is the time when we need to be thinking not just about what AI technologies could be doing to address existing problems but how and in what ways they will be integrated into existing social processes so that they address those problems.”
There are so many AI systems developed for healthcare, but so few of them actually work. Why is the Sepsis Watch considered to be an example of success?
“It was trained on local data, validated by local clinicians. It does not scale super well, and I think that is OK because there is so much systemic inequality and racism in our current healthcare institutions and systems. We do have to tread very carefully when we are designing systems, and we have to be locally context-specific and locally validated to ensure that we are not perpetuating health inequities,” Elish explained.