
Peer review system for awarding NIH grants is flawed, analysis suggests

Funding mechanism no better than random for choosing projects that will produce most-cited science, analysis suggests

Date:
February 16, 2016
Source:
Johns Hopkins University Bloomberg School of Public Health
Summary:
The mechanism used by the National Institutes of Health (NIH) to allocate government research funds to scientists whose grants receive its top scores works essentially no better than distributing those dollars at random, new research suggests.


The findings suggest that the expensive and time-consuming peer-review process is not necessarily funding the best science, and that awarding grants by lottery could produce equally good, if not better, results. A report on the research, published online Feb. 16 in the journal eLife, was written by Ferric Fang, MD, of the University of Washington; Anthony Bowen, MS, of the Albert Einstein College of Medicine; and Arturo Casadevall, MD, PhD, of the Johns Hopkins Bloomberg School of Public Health.

"The NIH claims that they are funding the best grants by the best scientists. While these data would argue that the NIH is funding a lot of very good science, they are also leaving a lot of very good science on the table," says Casadevall, Professor and Chair of the W. Harry Feinstone Department of Molecular Microbiology and Immunology at the Bloomberg School. "The government can't afford to fund every good grant proposal, but the problems with the current system make it worse than awarding grants through a lottery."

Notes Fang, a professor of laboratory medicine and microbiology at the University of Washington: "We are not criticizing the peer reviewers. We are simply showing that there are limits to the ability of peer review to predict future productivity based on grant applications. This suggests that some of the resources and effort spent on ranking applications might be better spent elsewhere. While the average productivity of grants with better scores was somewhat higher, the differences were extremely small, raising questions as to whether the effort is worthwhile."

NIH rejects the majority of research grant proposals it receives. To decide which proposals to fund, NIH relies on expert panels whose members score each application. Funding decisions are made on the basis of these scores and the amount of available funds. In recent years, the NIH has only funded those proposals ranked around the top 10 percent. The annual research budget for the NIH was $30.1 billion in 2015.

For their study, the researchers reanalyzed data on the 102,740 research project grants funded by the NIH from 1980 through 2008. The data set had been assembled by researchers who published a paper in the journal Science in 2015; their analysis suggested that peer review did in fact work -- that the highest-ranked research projects funded by the NIH earned the most citations. Those researchers measured the success of a research grant by counting the papers that resulted from the funded work and then tracking how many times those papers were cited in later research papers.

The original researchers looked at all of the grants the NIH funded in those years, and in many of those years a significantly larger number of grants were funded than is typical today. The percentage of grant applications funded in recent years has been at historic lows because of cutbacks resulting from the budget sequestration that took effect in 2013.

For the new study, Casadevall and his colleagues looked only at the top 20 percent of grants awarded and found very little difference between the top-ranked projects and those ranked around the 20th percentile in terms of which would go on to become the most-cited research. What the peer review process can do, they determined, is discriminate between very good science and very bad science -- that is, between proposals in the top 20 percent and those below the 50th percentile.
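A rough sketch (in Python) of this kind of comparison, assuming hypothetical grant records with peer-review percentiles and citation counts (not the authors' actual data or code): grants are grouped into percentile bins and the average citation count per bin is compared.

    from statistics import mean

    # Hypothetical records: (peer-review percentile, citations earned by resulting papers).
    # Following NIH convention, a lower percentile means a better score.
    grants = [(1.2, 340), (4.8, 95), (9.5, 210), (14.0, 400), (19.7, 60),
              (2.3, 120), (7.1, 510), (12.6, 88), (16.4, 150), (18.9, 275)]

    def mean_citations_by_bin(grants, bin_width=5):
        """Group grants into percentile bins and report mean citations per bin."""
        bins = {}
        for percentile, citations in grants:
            bins.setdefault(int(percentile // bin_width) * bin_width, []).append(citations)
        return {b: mean(c) for b, c in sorted(bins.items())}

    print(mean_citations_by_bin(grants))
    # If peer-review scores strongly predicted productivity, mean citations would
    # fall steadily from bin to bin; the study found only small differences
    # across the top 20 percent.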

Peer review isn't cheap. The annual budget of the NIH Center for Scientific Review is $110 million. Individual NIH institutes and centers also spend a lot on peer review. That money could go toward more grants, the researchers say. The costs are not only financial. Writing and reviewing grants is extremely time consuming and diverts the efforts of scientists away from doing science itself.

The process also allows for substantial subjectivity. The objection of a single member of the committee can effectively kill a grant proposal, whether that objection is legitimate or not.

"When people's opinions count a lot, we may be doing worse than choosing at random," Casadevall says. "A negative word at the table can often swing the debate. And this is how we allocate research funding in this country."

To address this, the authors suggest that the top proposals first be identified by peer review and that those proposals then be entered into a lottery, with grants awarded at random from that pool. Lotteries were used as part of the military draft during the Vietnam War and are used today to fill magnet schools that have many qualified applicants and to award permanent residency. College student housing and low-income housing are often allocated by lottery. Casadevall says New Zealand has started using a lottery to award scientific grants.
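As a rough illustration of the two-stage approach the authors describe (a sketch only; the proposal names, scores, and parameters below are hypothetical, not part of the study), the Python snippet shortlists proposals by peer-review score and then awards a fixed number of grants at random from that shortlist.

    import random

    def award_grants(proposals, shortlist_fraction=0.2, n_awards=10, seed=None):
        """Two-stage allocation: peer review builds a shortlist, then a lottery picks winners.
        `proposals` is a list of (name, peer_review_score) pairs; here a higher
        score is assumed to be better (a hypothetical scale)."""
        rng = random.Random(seed)

        # Stage 1: peer review ranks proposals; keep the top fraction.
        ranked = sorted(proposals, key=lambda p: p[1], reverse=True)
        shortlist = ranked[: max(n_awards, int(len(ranked) * shortlist_fraction))]

        # Stage 2: award grants uniformly at random from the shortlist.
        return rng.sample(shortlist, k=min(n_awards, len(shortlist)))

    # Example with made-up data: 100 proposals with random review scores.
    proposals = [("proposal_%d" % i, random.uniform(0, 100)) for i in range(100)]
    for name, score in award_grants(proposals, n_awards=5, seed=42):
        print("%s: score %.1f" % (name, score))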

Adds Casadevall: "We're hoping people will look at this data and say, 'Can we do better? Can we create a fairer system that gives society the best science it can afford?'"


Story Source:

Materials provided by Johns Hopkins University Bloomberg School of Public Health. Note: Content may be edited for style and length.


Journal Reference:

  1. Ferric C. Fang, Anthony Bowen and Arturo Casadevall. NIH Peer Review Percentile Scores are Poorly Predictive of Grant Productivity. eLife, 2016

