GPT-4 used to translate medical jargon into layman’s terms


A US healthcare firm says it has successfully trialed a bespoke version of GPT-4 that translates medical jargon into notes written in plain language that patients can easily understand.

NYU Langone Health began working with OpenAI’s GPT-4 generative model last year, developing a specialized version of the tool.

Researchers tested the adapted GPT-4 tool on 50 sets of patient discharge notes to see how well it could convert them into language patients could easily understand.
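NYU Langone’s specialized tool is not public, but for a rough sense of the approach, a plain-language translation step might look something like the sketch below. It assumes OpenAI’s standard chat completions API; the prompt wording and function name are illustrative, not the hospital’s actual configuration.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def simplify_discharge_note(note: str) -> str:
    """Ask GPT-4 to rewrite a discharge note in plain, sixth-grade language.
    (Hypothetical prompt; not NYU Langone's actual system instructions.)"""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the following hospital discharge note at a "
                    "sixth-grade reading level. Expand abbreviations, avoid "
                    "medical jargon, and keep all clinical instructions intact."
                ),
            },
            {"role": "user", "content": note},
        ],
    )
    return response.choices[0].message.content
```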

“Specifically, running discharge notes through generative AI dropped the reports from an eleventh-grade reading level on average to a sixth-grade level, the gold standard for patient education materials,” said NYU Langone.
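The announcement does not name the readability formula used, but grade-level claims of this kind are conventionally checked with metrics such as Flesch-Kincaid. A minimal sketch using the third-party textstat Python package (the sample notes below are invented for illustration):

```python
import textstat  # pip install textstat

original = "Pt c/o SOB on exertion; echo revealed LVEF 35%, started on GDMT."
simplified = (
    "You told us you get short of breath when you are active. A heart "
    "ultrasound showed your heart is pumping weaker than normal, so we "
    "started you on medicines that help the heart."
)

# Flesch-Kincaid estimates the US school grade needed to understand a text.
print(textstat.flesch_kincaid_grade(original))    # higher grade: harder to read
print(textstat.flesch_kincaid_grade(simplified))  # target is about sixth grade
```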

The study’s authors believe the simplified notes will help alleviate anxiety among patients, who are often confused by the technical language physicians use to summarize their conditions.

“Effective summaries are essential for patient safety during these transitions in care, but most are filled with technical language and abbreviations that are hard to understand and increase patient anxiety,” they said.

The team also scored the AI-translated discharge reports using the Patient Education Materials Assessment Tool (PEMAT), which generates a percentage score, based on 19 factors, reflecting how well patients can understand a piece of reading material.

“GPT-4 translation raised PEMAT understandability scores to 81 percent, up from 13 percent seen with the original doctor-written discharge reports from the medical record,” said NYU Langone.
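For context, PEMAT scoring is simple percentage arithmetic: each applicable item is rated Agree (1) or Disagree (0), and the score is the share of Agree ratings among the applicable items. A minimal sketch of that calculation, with hypothetical item wording:

```python
def pemat_score(ratings: dict[str, int | None]) -> float:
    """Percentage of applicable PEMAT items rated Agree
    (1 = agree, 0 = disagree, None = not applicable)."""
    applicable = [v for v in ratings.values() if v is not None]
    return 100 * sum(applicable) / len(applicable)

# Hypothetical ratings for a handful of the 19 items:
print(pemat_score({
    "uses common, everyday language": 1,
    "defines medical terms when used": 1,
    "breaks content into short sections": 0,
    "uses visual aids": None,  # not applicable to a text-only note
}))  # 2 of 3 applicable items agreed -> prints roughly 66.7
```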

NYU Langone says the tool “freed hundreds of its frontline clinicians to experiment with AI-based solutions to clinical problems using real patient data while adhering to federal standards that protect patient privacy.”

NYU Langone researchers also tested whether the GPT-4 tool worked well unsupervised. While those results were satisfactory, they found that the medical translator worked better when monitored by a healthcare professional.

“GPT-4 worked well alone with some gaps in accuracy and completeness, but did more than well enough to be highly effective when combined with physician oversight, the way it would be used in the real world,” said senior study author Dr Jonah Feldman of NYU Langone Health.

“One focus of the study was on how much work physicians must do to oversee the tool, and the answer is very little. Such tools could reduce patient anxiety even as they save providers hours each week in medical paperwork, a major source of burnout.”