
How good is that AI-penned radiology report?

Scientists design new way to score accuracy of AI-generated radiology reports

Date:
August 4, 2023
Source:
Harvard Medical School
Summary:
New study identifies concerning gaps between how human radiologists score the accuracy of AI-generated radiology reports and how automated systems score them. Researchers designed two novel scoring systems that outperform current automated systems that evaluate the accuracy of AI narrative reports. Reliable scoring systems that accurately gauge the performance of AI models are critical for ensuring that AI continues to improve and that clinicians can trust them.

AI tools that quickly and accurately create detailed narrative reports of a patient's CT scan or X-ray can greatly ease the workload of busy radiologists.

Instead of merely identifying the presence or absence of abnormalities on an image, these AI reports convey complex diagnostic information, detailed descriptions, nuanced findings, and appropriate degrees of uncertainty. In short, they mirror how human radiologists describe what they see on a scan.

Several AI models capable of generating detailed narrative reports have begun to appear on the scene. With them have come automated scoring systems that periodically assess these tools to help guide their development and improve their performance.

So how well do the current systems gauge an AI model's radiology performance?

The answer is good but not great, according to a new study by researchers at Harvard Medical School published Aug. 3 in the journal Patterns.

Ensuring that scoring systems are reliable is critical for AI tools to continue to improve and for clinicians to trust them, the researchers said, but the metrics tested in the study failed to reliably identify clinical errors in the AI reports, some of them significant. The finding, the researchers said, highlights an urgent need for improvement and the importance of designing high-fidelity scoring systems that faithfully and accurately monitor tool performance.

The team tested various automated scoring metrics on AI-generated narrative reports and asked six human radiologists to evaluate the same reports, then compared the two sets of assessments.

The analysis showed that compared with human radiologists, automated scoring systems fared worse in their ability to evaluate the AI-generated reports. They misinterpreted and, in some cases, overlooked clinical errors made by the AI tool.

"Accurately evaluating AI systems is the critical first step toward generating radiology reports that are clinically useful and trustworthy," said study senior author Pranav Rajpurkar, assistant professor of biomedical informatics in the Blavatnik Institute at HMS.

Improving the score

In an effort to design better scoring metrics, the team designed a new method (RadGraph F1) for evaluating the performance of AI tools that automatically generate radiology reports from medical images.
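In the paper, RadGraph F1 works by extracting clinical entities and their relations from report text with a trained model and measuring overlap between the reference and generated reports. As a minimal sketch of the overlap step only, assuming the entity sets have already been extracted (the entity names below are invented for illustration):

```python
def f1_overlap(reference_entities, generated_entities):
    """Harmonic mean of precision and recall over two entity sets.

    Illustrative only: the real RadGraph F1 metric extracts clinical
    entities and relations from report text with a trained model;
    here the extracted entity sets are assumed to be given.
    """
    ref = set(reference_entities)
    gen = set(generated_entities)
    if not ref and not gen:
        return 1.0  # both reports mention nothing: trivially identical
    matched = len(ref & gen)
    precision = matched / len(gen) if gen else 0.0
    recall = matched / len(ref) if ref else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: findings in a radiologist's report vs. an AI-generated one
ref = {"cardiomegaly", "pleural effusion", "no pneumothorax"}
gen = {"cardiomegaly", "pleural effusion", "atelectasis"}
score = f1_overlap(ref, gen)  # precision 2/3, recall 2/3 -> F1 = 2/3
```

Because it scores overlap of extracted clinical findings rather than surface wording, a metric of this shape can credit a report that uses different phrasing for the same finding, which word-matching metrics penalize.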

They also designed a composite evaluation tool (RadCliQ) that combines multiple metrics into a single score that better matches how a human radiologist would evaluate an AI model's performance.
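The idea of a composite metric can be sketched as a weighted combination of individual metric scores. This is not RadCliQ itself: in the study the combination is fit against radiologists' error annotations, and the metric names and weights below are invented purely for illustration.

```python
# Hypothetical composite metric: a linear combination of per-metric
# scores. In practice the weights would be fit against radiologist
# annotations; the names and weights here are invented.
WEIGHTS = {"bleu": 0.2, "bertscore": 0.3, "radgraph_f1": 0.5}

def composite_score(metric_values):
    """Combine per-metric scores (each in [0, 1]) into a single number."""
    return sum(WEIGHTS[name] * metric_values[name] for name in WEIGHTS)

scores = {"bleu": 0.41, "bertscore": 0.78, "radgraph_f1": 0.62}
overall = composite_score(scores)  # 0.2*0.41 + 0.3*0.78 + 0.5*0.62 = 0.626
```

The design point is that no single metric tracks radiologist judgment well on its own, so a learned combination can weight each one by how much it actually predicts the errors humans flag.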

Using these new scoring tools to evaluate several state-of-the-art AI models, the researchers found a notable gap between the models' actual score and the top possible score.

"Measuring progress is imperative for advancing AI in medicine to the next level," said co-first author Feiyang 'Kathy' Yu, a research associate in the Rajpurkar lab. "Our quantitative analysis moves us closer to AI that augments radiologists to provide better patient care."

Long term, the researchers' vision is to build generalist medical AI models that perform a range of complex tasks, including the ability to solve problems never before encountered. Such systems, Rajpurkar said, could fluently converse with radiologists and physicians about medical images to assist in diagnosis and treatment decisions.

The team also aims to develop AI assistants that can explain and contextualize imaging findings directly to patients using everyday plain language.

"By aligning better with radiologists, our new metrics will accelerate development of AI that integrates seamlessly into the clinical workflow to improve patient care," Rajpurkar said.


Story Source:

Materials provided by Harvard Medical School. Original written by Ekaterina Pesheva. Note: Content may be edited for style and length.


Journal Reference:

  1. Feiyang Yu, Mark Endo, Rayan Krishnan, Ian Pan, Andy Tsai, Eduardo Pontes Reis, Eduardo Kaiser Ururahy Nunes Fonseca, Henrique Min Ho Lee, Zahra Shakeri Hossein Abad, Andrew Y. Ng, Curtis P. Langlotz, Vasantha Kumar Venugopal, Pranav Rajpurkar. Evaluating progress in automatic chest X-ray radiology report generation. Patterns, 2023; 100802 DOI: 10.1016/j.patter.2023.100802

Cite This Page:

Harvard Medical School. "How good is that AI-penned radiology report?." ScienceDaily. ScienceDaily, 4 August 2023. <www.sciencedaily.com/releases/2023/08/230804123729.htm>.
