
Methods proposed to improve how observational studies are conducted

Date:
August 25, 2011
Source:
National Institute of Statistical Sciences
Summary:
Statisticians point out that medical and other observational studies often produce results that are later shown to be incorrect. They now suggest ways to fix the system.

S. Stanley Young, assistant director for bioinformatics at the National Institute of Statistical Sciences (NISS), and Alan Karr, director of NISS, have published a non-technical article in the September issue of Significance magazine pointing out that medical and other observational studies often produce results that are later shown to be incorrect and, invoking a quality-control perspective, suggesting ways to fix the system.

Their central point is that the current system of publication in peer-reviewed journals relies on post-production inspection to ensure quality, a practice that has disappeared from modern industry in favor of controlling the process instead: quality control is now process control, not product control. They cite W. Edwards Deming, considered by many the most innovative thinker on quality, who argued not only for process control but also that the problem lies with the managers (here, funders and journals) rather than with the workers (individual researchers, who respond rationally to the current set of incentives).

Young and Karr describe their own and others' studies of the extent to which observational studies fail to replicate. Published claims such as "coffee causes pancreatic cancer" or "women eating breakfast cereal are more likely to have boy babies" have been refuted by subsequent studies and analyses. When such studies reach the popular media and influence individual consumers, the burden falls not just on science but also on society. And even if there were no impact on the public, scarce research resources, both money and personnel, have been squandered.

The paper describes several technical difficulties with observational studies, among them multiple testing (if enough questions are asked, some will yield false positive answers), bias (systematic error) and multiple modeling (searching among mathematical models until one is found that "fits the data"). Publication bias is another issue: papers reporting positive scientific results (for example, an association between Type A personalities and heart attacks) are more likely to be published than those reporting negative results, even though the latter may be as important scientifically.
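To make the multiple-testing point concrete, here is a minimal simulation sketch in Python (ours, not from the paper; the group sizes and the 0.05 threshold are illustrative assumptions). It asks 100 questions of data in which nothing at all is going on, yet roughly five come back "significant":

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_questions = 100   # hypotheses tested, all truly null
n_per_group = 50    # subjects per group (illustrative)
alpha = 0.05        # conventional significance threshold

false_positives = 0
for _ in range(n_questions):
    # Both groups are drawn from the SAME distribution,
    # so any apparent "effect" is pure noise.
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    _, p_value = stats.ttest_ind(a, b)
    if p_value < alpha:
        false_positives += 1

print(f"{false_positives} of {n_questions} null questions came out 'significant'")
# Expect roughly alpha * n_questions = 5 spurious findings.
```

Standard corrections such as Bonferroni (testing each question at alpha divided by the number of questions) shrink the expected count of spurious findings accordingly, which is why unadjusted searches over many questions are so prone to false positives.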

Young and Karr recommend that when a study is submitted for publication, its data be split into two sets: a modeling data set and a holdout data set. Journals would accept or reject papers based on the analysis of the modeling data set alone, without knowing the results of applying the same methods to the holdout set. The journal would then publish an addendum to the accepted paper giving the results of the analysis of the holdout set.
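As a rough sketch of how that workflow might look in code (our illustration under stated assumptions, not the authors' implementation; the dataset and the analyze function are hypothetical stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observational dataset: 200 records with a numeric
# outcome (column 0) and a binary exposure flag (column 1).
n = 200
data = np.column_stack([rng.normal(size=n), rng.integers(0, 2, size=n)])

# Split once, up front, before any modeling is done.
idx = rng.permutation(n)
modeling = data[idx[: n // 2]]   # analysts (and referees) see only this half
holdout = data[idx[n // 2 :]]    # sealed until the paper is accepted

def analyze(subset):
    """Stand-in for whatever analysis the submitted paper performs:
    here, the difference in mean outcome between exposed and unexposed."""
    exposed = subset[subset[:, 1] == 1, 0]
    unexposed = subset[subset[:, 1] == 0, 0]
    return exposed.mean() - unexposed.mean()

claim = analyze(modeling)   # basis for the accept/reject decision
check = analyze(holdout)    # published afterwards as the addendum

print(f"modeling-set effect: {claim:.3f}, holdout-set effect: {check:.3f}")
```

The key design point is that the split happens once, before any modeling, so results of searching among models on the first half cannot leak into the confirmation on the second.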

Significance magazine is published by the Royal Statistical Society of the UK and the American Statistical Association.


Story Source:

Materials provided by National Institute of Statistical Sciences. Note: Content may be edited for style and length.


Cite This Page:

National Institute of Statistical Sciences. "Methods proposed to improve how observational studies are conducted." ScienceDaily, 25 August 2011. <www.sciencedaily.com/releases/2011/08/110825123832.htm>.
