
Some Cancer Trials May Have Incorrectly Reported Success: Review Finds Flaws In Study Design And Analysis

Date:
March 26, 2008
Source:
Ohio State University
Summary:
A new study reviewing 75 group-randomized cancer trials over a five-year stretch shows that fewer than half of those studies used appropriate statistical methods to analyze the results. The review suggests that some trials may have reported that interventions to prevent disease or reduce cancer risks were effective when in fact they might not have been.

More than a third of the trials contained statistical analyses that the reviewers considered inappropriate to assess the effects of an intervention being studied. And 88 percent of those studies reported statistically significant intervention effects that, because of analysis flaws, could be misleading to scientists and policymakers, the review authors say.

“We cannot say any specific studies are wrong. We can say that the analysis used in many of the papers suggests that some of them probably were overstating the significance of their findings,” said David Murray, lead author of the review study and professor and chair of epidemiology in the College of Public Health at Ohio State University.

“If researchers use the wrong methods, and claim an approach was effective, other people will start using that approach. And if it really wasn’t effective, then they’re wasting time, money and resources and going down a path that they shouldn’t be going down.”

Murray and colleagues call for investigators to collaborate with statisticians familiar with group-randomized study methods and for funding agencies and journal editors to ensure that such studies show evidence of proper design planning and data analysis.

In group-randomized trials, researchers randomly assign identifiable groups to specific conditions and observe outcomes for members of those groups to assess the effects of an intervention under study.

These trials are used to investigate interventions that operate at a group level, manipulate the social or physical environment, or cannot be delivered to individuals in the same way a pill or surgical procedure can. For example, a group-randomized trial might study the use of mass media to promote cancer screenings and then assess how many screenings result among groups that receive different kinds of messages.
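
To make concrete how the unit of assignment differs from the unit of observation, here is a minimal sketch in Python. The clinic names and the two-arm split are hypothetical, chosen only to illustrate the design, not taken from any trial in the review.

```python
import random

# Minimal sketch of group (cluster) randomization: whole clinics,
# not individual patients, are randomly assigned to study arms.
# The clinic names below are hypothetical.
clinics = ["Clinic A", "Clinic B", "Clinic C", "Clinic D",
           "Clinic E", "Clinic F", "Clinic G", "Clinic H"]

random.seed(42)          # fixed seed so the example is reproducible
random.shuffle(clinics)

# Split the shuffled clinics evenly between the two conditions.
half = len(clinics) // 2
intervention_arm = clinics[:half]
control_arm = clinics[half:]

print("Intervention arm:", intervention_arm)
print("Control arm:     ", control_arm)

# Every patient at an intervention clinic is exposed to the
# intervention; outcomes are still measured on individual patients.
```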

In analyzing the outcomes of such trials, researchers should take into account any similarities among group members or any common influences affecting members of the same group, Murray said. But the review found that, too often, this common ground among group members was not factored into the final statistical analysis.

What can result is a Type I error: the analysis finds a difference between outcomes in groups that doesn’t really exist.

“In science, generally, we allow for being wrong 5 percent of the time. If you use the wrong analysis methods with this kind of study, you might be wrong half the time. We’re not going to advance science if we’re wrong half the time,” said Murray, also a member of the Cancer Control Program in Ohio State’s Comprehensive Cancer Center.
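
A small simulation makes Murray’s point concrete. The sketch below uses illustrative parameter values that are not drawn from the review. It generates trials in which the intervention has no effect at all, then analyzes each trial two ways: a naive t-test that treats every patient as independent, and a simple group-level analysis that compares clinic means. Because patients within a clinic share a common influence, the naive test rejects far more often than the nominal 5 percent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_clinics = 10      # clinics per arm (the randomized groups)
n_patients = 50     # patients observed per clinic
between_sd = 1 / 3  # clinic-level variation (intraclass correlation ~0.10)
within_sd = 1.0     # patient-level variation within a clinic
n_sims = 2000       # simulated trials, each with no true effect

def simulate_arm():
    """One arm: each clinic gets its own random shift, so patients
    within a clinic are correlated with one another."""
    clinic_effects = rng.normal(0.0, between_sd, n_clinics)
    noise = rng.normal(0.0, within_sd, (n_clinics, n_patients))
    return clinic_effects[:, None] + noise

naive_hits = 0
group_hits = 0
for _ in range(n_sims):
    arm_a, arm_b = simulate_arm(), simulate_arm()

    # Inappropriate analysis: pool all patients and ignore clustering.
    _, p_naive = stats.ttest_ind(arm_a.ravel(), arm_b.ravel())
    naive_hits += p_naive < 0.05

    # A simple appropriate analysis: clinics, not patients, are the
    # units of analysis, so compare clinic-level means.
    _, p_group = stats.ttest_ind(arm_a.mean(axis=1), arm_b.mean(axis=1))
    group_hits += p_group < 0.05

# With no true effect, both rates should sit near 0.05; the naive
# patient-level test lands far above that.
print(f"False-positive rate, patient-level t-test: {naive_hits / n_sims:.2f}")
print(f"False-positive rate, clinic-level t-test:  {group_hits / n_sims:.2f}")
```

Comparing clinic-level means is just the simplest valid alternative; in practice, researchers more often fit mixed-effects models that explicitly estimate the group-level variance.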

The review identified 75 articles, published in 41 journals, that reported intervention results from group-randomized trials related to cancer or cancer risk factors between 2002 and 2006. Thirty-four of the articles, or 45 percent, reported using appropriate methods to analyze the results. Twenty-six articles, or 35 percent, reported using only inappropriate methods in the statistical analysis. Six articles, or 8 percent, used a combination of appropriate and inappropriate methods, and nine articles, or 12 percent, contained insufficient information to judge whether the analytic methods were appropriate.

“Am I surprised by these findings? No, because we have done reviews in other areas and have seen similar patterns,” Murray said. “It’s not worse in cancer than anywhere else, but it’s also not better. What we’re trying to do is simply raise the awareness of the research community that you need to attend to these special problems that we have with this kind of design.”

The use of inappropriate analysis methods is not considered willful or in any way designed to skew results of a trial, Murray noted.

“I’ve seen creative reasons people give in their papers for using the methods they use, but I’ve never seen anybody say it was done to get a more significant effect. But that’s what can happen if you use the wrong methods and that’s the danger,” he said. “What we want to know from a trial is what really happened. If an intervention doesn’t work, we need to know that, too, so we can try something else.”

The review also is not an indictment of the study design. Murray is a proponent of such trials and was the first U.S. expert to author a textbook on the subject (Design and Analysis of Group-Randomized Trials, Oxford University Press, 1998).

He also is a co-investigator on three group-randomized trials in progress at Ohio State. Two trials use specific clinics as the assigned groups. One is analyzing the effectiveness of having specially trained guides help cancer patients negotiate the health-care system. The second is investigating the effectiveness of aggressive physician promotion of colorectal cancer screening for patients with cancer risk factors. A third trial will use Appalachian counties as groups to compare the effectiveness of a media campaign to promote colorectal cancer screenings.

“We’re not trying to discourage people from using this design. It remains the best design available if you have an intervention that can’t be studied at the individual level,” Murray said.

The review appears online in the Journal of the National Cancer Institute and was supported by grants from the National Cancer Institute and the American Cancer Society.

Murray conducted the review with Sherri Pals of the Centers for Disease Control and Prevention; Jonathan Blitstein of RTI International in Research Triangle Park, N.C.; Catherine Alfano of the Division of Health Behavior and Health Promotion in Ohio State’s College of Public Health; and Jennifer Lehman of Ohio State’s Department of Family Medicine.


Story Source:

Materials provided by Ohio State University. Note: Content may be edited for style and length.


Cite This Page:

Ohio State University. "Some Cancer Trials May Have Incorrectly Reported Success: Review Finds Flaws In Study Design And Analysis." ScienceDaily, 26 March 2008. <www.sciencedaily.com/releases/2008/03/080325163756.htm>.
