
Public reporting measures fail to describe the true safety of hospitals

Study finds only one measure out of 21 to be valid

Date:
May 10, 2016
Source:
Johns Hopkins Medicine
Summary:
Common measures used by government agencies and public rankings to rate the safety of hospitals do not accurately capture the quality of care provided, new research suggests.

Common measures used by government agencies and public rankings to rate the safety of hospitals do not accurately capture the quality of care provided, new research from the Johns Hopkins Armstrong Institute for Patient Safety and Quality suggests.

The study, published in the journal Medical Care, found that only one of the 21 measures evaluated met the scientific criteria for being considered a true indicator of hospital safety. The measures are used by several public rating systems, including U.S. News and World Report's Best Hospitals, Leapfrog's Hospital Safety Score, and the Centers for Medicare and Medicaid Services' (CMS) Star Ratings. The Johns Hopkins researchers say their findings suggest further analysis of these measures is needed to ensure the information provided to patients about hospitals informs, rather than misleads, their decisions about where to seek care.

"These measures have the ability to misinform patients, misclassify hospitals, misapply financial data and cause unwarranted reputational harm to hospitals," says Bradford Winters, M.D., Ph.D., associate professor of anesthesiology and critical care medicine at Johns Hopkins and lead study author. "If the measures don't hold up to the latest science, then we need to re-evaluate whether we should be using them to compare hospitals."

Hospitals have reported their performance on quality-of-care measures publicly for years in an effort to meet the growing demand for transparency in health care. Many report performance using measures created by the Agency for Healthcare Research and Quality (AHRQ) and CMS more than 10 years ago. Known as patient safety indicators (PSIs) and hospital-acquired conditions (HACs), these measures rely on billing data entered by hospital administrators rather than clinical data obtained from patient medical records. The result can be wide variation in how medical errors are coded from one hospital to another.

"The variation in coding severely limits our ability to count safety events and draw conclusions about the quality of care between hospitals," says Peter Pronovost, M.D., Ph.D., another study author and director of the Johns Hopkins Armstrong Institute for Patient Safety and Quality. "Patients should have measures that reflect how well we care for patients, not how well we code that care."

The researchers analyzed 19 studies conducted between 1990 and 2015 that directly addressed the validity of the HAC and PSI measures, along with information from the CMS, AHRQ and Maryland Health Services Cost Review Commission websites. Errors listed in medical records were compared with the billing codes found in administrative databases. If the medical record and the administrative database agreed at least 80 percent of the time, the measure was considered a realistic portrayal of hospital performance.
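
That 80 percent agreement threshold can be pictured with a small, illustrative calculation. The sketch below is a hypothetical Python rendering of the idea, framed as the share of billing-flagged events that are confirmed in the medical record; the function name, example data and threshold handling are assumptions made for illustration, not code or data from the study.

def measure_is_valid(flagged_events, confirmed_events, threshold=0.80):
    """Return True when at least `threshold` of billing-flagged events
    are also documented in the medical record."""
    if not flagged_events:
        return False  # insufficient data: validity cannot be judged
    confirmed = sum(1 for event in flagged_events if event in confirmed_events)
    agreement = confirmed / len(flagged_events)
    return agreement >= threshold

# Hypothetical example: 50 events coded in billing data, 38 confirmed in charts.
billing_flagged = [f"case_{i}" for i in range(50)]
chart_confirmed = set(billing_flagged[:38])
print(measure_is_valid(billing_flagged, chart_confirmed))  # 76% agreement -> False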

Of the 21 measures developed by the AHRQ and CMS, 16 had insufficient data and could not be evaluated for their validity. Five measures contained enough information to be included in the analysis. Only one, PSI 15, which captures accidental punctures or lacerations sustained during surgery, met the researchers' criteria to be considered valid.

"Patients and payers deserve valid measures of the quality and safety of care," says Pronovost, who is also Johns Hopkins Medicine's senior vice president for patient safety and quality. "Despite their broad use in pay for performance and public reporting, these measures no longer represent the gold standard for quality, and their continued use should be reconsidered."

The researchers say they hope their work will lead to reform and encourage public rating systems to use measures based on clinical rather than billing data.

Pronovost recently outlined additional fixes that could be implemented by the rating community in a commentary published in the April 2016 issue of JAMA. Designating a separate reporting entity to establish standards for data collection and making funds available for systems engineering research were listed as possible starting points by Pronovost and his co-author, Ashish Jha from Harvard.


Story Source:

Materials provided by Johns Hopkins Medicine. Note: Content may be edited for style and length.


Journal Reference:

  1. Bradford D. Winters, Aamir Bharmal, Renee F. Wilson, Allen Zhang, Lilly Engineer, Deidre Defoe, Eric B. Bass, Sydney Dy, Peter J. Pronovost. Validity of the Agency for Health Care Research and Quality Patient Safety Indicators and the Centers for Medicare and Medicaid Hospital-acquired Conditions. Medical Care, 2016; 1 DOI: 10.1097/MLR.0000000000000550

Cite This Page:

Johns Hopkins Medicine. "Public reporting measures fail to describe the true safety of hospitals." ScienceDaily. ScienceDaily, 10 May 2016. <www.sciencedaily.com/releases/2016/05/160510143706.htm>.
Johns Hopkins Medicine. (2016, May 10). Public reporting measures fail to describe the true safety of hospitals. ScienceDaily. Retrieved December 21, 2024 from www.sciencedaily.com/releases/2016/05/160510143706.htm
Johns Hopkins Medicine. "Public reporting measures fail to describe the true safety of hospitals." ScienceDaily. www.sciencedaily.com/releases/2016/05/160510143706.htm (accessed December 21, 2024).
