
Paper calls for patient-first regulation of AI in healthcare

Date:
January 31, 2024
Source:
University of California - San Diego
Summary:
A new paper describes how, despite widespread enthusiasm about artificial intelligence's potential to revolutionize healthcare and the fact that AI-powered tools are already used on millions of patients, no federal regulations require that these tools be evaluated for potential harm or benefit to patients.

Ever wonder if the latest and greatest artificial intelligence (AI) tool you read about in the morning paper is going to save your life? A new study published in JAMA, led by John W. Ayers, Ph.D., of the Qualcomm Institute within the University of California San Diego, finds that question can be difficult to answer, since AI products in healthcare do not universally undergo an externally evaluated approval process assessing how they might benefit patient outcomes before coming to market.

The research team evaluated the recent White House Executive Order that instructed the Department of Health and Human Services to develop new AI-specific regulatory strategies addressing equity, safety, privacy, and quality for AI in healthcare before April 27, 2024. However, team members were surprised to find the order did not once mention patient outcomes, the standard metric by which healthcare products are judged before they are allowed to enter the healthcare marketplace.

"The goal of medicine is to save lives," said Davey Smith, M.D., head of the Division of Infectious Disease and Global Public Health at UC San Diego School of Medicine, co-director of the university's Altman Clinical and Translational Research Institute, and study senior author. "AI tools should prove clinically significant improvements in patient outcomes before they are widely adopted."

According to the team, AI-powered early warning systems for sepsis, a potentially fatal acute illness among hospitalized patients that affects 1.7 million Americans each year, demonstrate the consequences of inadequate prioritization of patient outcomes in regulations. A third-party evaluation of the most widely adopted AI sepsis prediction model revealed that 67% of patients who developed sepsis were not identified by the system. Would hospital administrators have chosen this sepsis prediction system if trials assessing patient outcomes data had been mandated, the team wondered, considering the array of available early warning systems for sepsis?

"We are calling for a revision to the White House Executive Order that prioritizes patient outcomes when regulating AI products," added John W. Ayers, Ph.D., who is deputy director of informatics in Altman Clinical and Translational Research Institute in addition to his Qualcomm Institute affiliation. "Similar to pharmaceutical products, AI tools that impact patient care should be evaluated by federal agencies for how they improve patients' feeling, function, and survival."

The team points to its 2023 study in JAMA Internal Medicine on using AI-powered chatbots to respond to patient messages as an example of what patient outcome-centric regulations can achieve. "A study comparing standard care versus standard care enhanced by AI conversational agents found differences in downstream care utilization in some patient populations, such as heart failure patients," said Nimit Desai, B.S., a research affiliate at the Qualcomm Institute, a UC San Diego School of Medicine student, and a study coauthor. "But studies like this don't just happen unless regulators appropriately incentivize them. With a patient outcomes-centric approach, AI for patient messaging and all other clinical applications can truly enhance people's lives."

The team recognizes that its proposed regulatory strategy would be a significant lift for AI and healthcare industry partners and may not be necessary for every flavor of AI use case in healthcare. However, the researchers say, leaving patient outcomes-centric rules out of the White House Executive Order is a serious omission.


Story Source:

Materials provided by University of California - San Diego. Original written by Mika Ono. Note: Content may be edited for style and length.


Journal Reference:

  1. John W. Ayers, Nimit Desai, Davey M. Smith. Regulate Artificial Intelligence in Health Care by Prioritizing Patient Outcomes. JAMA, 2024; DOI: 10.1001/jama.2024.0549

Cite This Page:

University of California - San Diego. "Paper calls for patient-first regulation of AI in healthcare." ScienceDaily. ScienceDaily, 31 January 2024. <www.sciencedaily.com/releases/2024/01/240131144536.htm>.
University of California - San Diego. (2024, January 31). Paper calls for patient-first regulation of AI in healthcare. ScienceDaily. Retrieved April 27, 2024 from www.sciencedaily.com/releases/2024/01/240131144536.htm
University of California - San Diego. "Paper calls for patient-first regulation of AI in healthcare." ScienceDaily. www.sciencedaily.com/releases/2024/01/240131144536.htm (accessed April 27, 2024).
