Association between biomarkers and disease often overstated, researcher finds
- Date: June 1, 2011
- Source: Stanford University Medical Center
- Summary: More than two dozen widely cited studies linking genes or other "biomarkers" to specific diseases vastly overstate the association, according to new research from an expert in scientific study design. As a result, clinicians may be making decisions for their patients based on inaccurate conclusions not supported by other, larger studies.
More than two dozen widely cited studies linking genes or other "biomarkers" to specific diseases vastly overstate the association, according to new research from an expert in scientific study design at the Stanford University School of Medicine. As a result, clinicians may be making decisions for their patients based on inaccurate conclusions not supported by other, larger studies.
The widely cited studies include one linking the BRCA1 mutation with colon cancer, another linking levels of C-reactive protein in the blood with cardiovascular disease and a third linking homocysteine levels with vascular disease.
The exaggeration is likely the result of statistical vagaries coupled with human nature and the competitive nature of scientific publication, said John Ioannidis, MD, DSc, chief of the Stanford Prevention Research Center, in a paper to be published in the June 1 issue of the Journal of the American Medical Association. "No research finding has no uncertainty; there are always fluctuations," he said. "This is not fraud or poor study design, it's just statistical expectation. Some results will be stronger, some will be weaker. But scientific journals and researchers like to publish big associations."
Once published, the perception of a strong link between a marker and a disease often persists -- in part because of the scientific practice of referencing, or citing, previous supporting research in each new study. As landmark studies are repeatedly cited, their results become accepted as incontrovertible even in the face of larger, subsequent studies that report less-spectacular or even statistically negligible associations.
For this paper, Ioannidis and colleague Orestis Panagiotou, MD, from the University of Ioannina School of Medicine in Greece, analyzed 35 widely cited studies. They found that fewer than half of the biomarkers in these studies had statistically significant associations with disease risk in larger follow-up studies. Indeed, in only one of every five of the selected studies did the biomarker increase a patient's relative risk for a condition by more than 1.37. (Relative risk is calculated by dividing the proportion of people with the marker who develop the condition under study by the proportion of people without the marker who develop it. A relative risk of 1 means there is no difference between the two groups; a relative risk of 2 means the condition is twice as common among people with the marker as among those without it. The median relative risk reported by the 35 highly cited studies was 2.5.)
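To make the arithmetic concrete, here is a minimal sketch of the relative-risk calculation in Python; the cohort counts are invented for illustration and do not come from the JAMA analysis.

```python
# Minimal sketch of a relative-risk calculation.
# The counts below are hypothetical, not from the JAMA paper.

def relative_risk(marker_cases, marker_total, no_marker_cases, no_marker_total):
    """Risk in the marker-positive group divided by risk in the marker-negative group."""
    risk_with_marker = marker_cases / marker_total
    risk_without_marker = no_marker_cases / no_marker_total
    return risk_with_marker / risk_without_marker

# Hypothetical cohort: 50 of 1,000 marker-positive patients develop the disease,
# versus 20 of 1,000 marker-negative patients.
rr = relative_risk(50, 1000, 20, 1000)
print(f"Relative risk: {rr:.1f}")  # 2.5 -- the median value the 35 studies reported
```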
Much of Ioannidis' own work involves strengthening the way that research is planned, carried out and reported, and he was called "one of the world's foremost experts on the credibility of medical research" in a profile published last year in The Atlantic magazine. Ioannidis, the C.F. Rehnborg Professor in Disease Prevention at Stanford, outlined some of the problems he observed in a 2005 essay in PLoS Medicine titled, "Why most published research findings are false." The essay remains the most-downloaded article in the history of the Public Library of Science, according to the journal's media relations office.
In the current study, Ioannidis analyzed 35 of the most highly cited studies published between 1991 and 2006 in 10 well-regarded biomedical journals. Each of the studies had been referenced by at least 400 subsequent papers; some had citations numbering in the thousands. The studies analyzed the relationships between biomarkers such as the presence of specific genes or infections, levels of blood proteins and other markers with the likelihood of developing conditions such as cancer and heart disease.
"We found that a large majority of these highly cited papers suggested substantially stronger effects than that found in the largest study of the same markers and outcomes," said Ioannidis. He noted that studies with greater numbers of patients or studies called meta-analyses, which compile the results of several independent studies, are more likely to be accurate than smaller pilot studies. To use the example of flipping a coin, you might not be surprised to come up with two, three or even four heads in a row, but over the course of hundreds of flips you will approach a ratio of 50:50.
In addition to statistical aberrations, there is also the potential for superimposed bias, Ioannidis said. "Researchers tend to play with their data sets, and to analyze them in creative ways. We're certainly not pointing out any one investigator with this study; it's just the societal norm of science to operate in that fashion. But we need to follow the scientific method through to the end and demand replication and verification of results before accepting them as fact."
One way to do so could be to implement a system of ongoing review and reassessment for each proposed association between biomarkers and disease, Ioannidis said. For example, the results of each new study assessing the interaction between a specific marker and disease could feed into an ongoing analysis of the strength of the proposed link. Over time, the true strength of the association should become apparent -- just as repeatedly flipping a coin will eventually yield the correct heads-to-tails ratio. Researchers in the field of genomics are already becoming more aware of the potential for bias and the need for large-scale studies and consortiums of researchers to replicate results, he added.
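The article does not specify how such an ongoing analysis would be computed. One common choice, assumed here purely for illustration, is cumulative fixed-effect meta-analysis with inverse-variance weighting: each new study's effect estimate is pooled with all earlier ones, so precise, large studies gradually dominate. The study estimates below are invented.

```python
# Hedged sketch of a cumulative fixed-effect meta-analysis with
# inverse-variance weighting. The (log relative risk, standard error)
# pairs below are made-up, ordered by publication date; the first,
# small study reports the largest effect.
import math

studies = [(math.log(2.5), 0.40), (math.log(1.6), 0.25),
           (math.log(1.3), 0.15), (math.log(1.2), 0.08)]

weight_sum = 0.0
weighted_effect_sum = 0.0
for i, (log_rr, se) in enumerate(studies, start=1):
    w = 1.0 / se**2            # precise (large) studies get more weight
    weight_sum += w
    weighted_effect_sum += w * log_rr
    pooled_rr = math.exp(weighted_effect_sum / weight_sum)
    print(f"after study {i}: pooled RR = {pooled_rr:.2f}")
# The pooled estimate drifts from the early, inflated 2.5 toward the
# more modest effect reported by the later, larger studies.
```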
The findings hold true for negative results as well. For example, one highly cited paper in the New England Journal of Medicine concluded that infection with penicillin-resistant bacteria did not increase a patient's chance of dying from pneumococcal pneumonia -- a conclusion that did not make intuitive sense to many clinicians. However, subsequent studies indicated that infection with the resistant bacteria does increase the risk of death by about 50 percent.
"We have to learn to trust the bigger picture," said Ioannidis. "And it's better to demand this proof upfront rather than waiting for it to happen on a case-by-case basis. It is vitally important to validate original published findings with subsequent large-scale evidence to make progress in the field of biomarkers and risk association."
Story Source:
Materials provided by Stanford University Medical Center. Original written by Krista Conger. Note: Content may be edited for style and length.
Journal References:
- John P. A. Ioannidis, Orestis A. Panagiotou. Comparison of Effect Sizes Associated With Biomarkers Reported in Highly Cited Individual Articles and in Subsequent Meta-analyses. JAMA, 2011; 305 (21): 2200-2210. DOI: 10.1001/jama.2011.713
- Patrick M. M. Bossuyt. The Thin Line Between Hope and Hype in Biomarker Research. JAMA, 2011; 305 (21): 2229-2230. DOI: 10.1001/jama.2011.729
- John P. A. Ioannidis. Why Most Published Research Findings Are False. PLoS Medicine, 2005; 2 (8): e124. DOI: 10.1371/journal.pmed.0020124