Clinical Trial Sleight-of-Hand

[Image: a rabbit in a hat and a magic wand against a white background. © ljupco | 123rf.com]

In 2005 a researcher named John Ioannidis published a seminal paper on bias in medical research, “Why Most Published Research Findings Are False.” When Julia Belluz interviewed Ioannidis for Vox ten years later, she reported that as much as 30% of the most influential medical research papers turn out to be wrong or exaggerated. She said an estimated $200 billion, the equivalent of 85% of the global spending on research, is wasted on poorly designed and redundant studies. Ioannidis indicated that preclinical research on drug targets has received a lot of attention since then. “There are papers showing that, if you look at a large number of these studies, only about 10 to 25 percent of them could be reproduced by other investigators.”

Ioannidis noted that even with randomized controlled trials, there is empirical evidence indicating only a modest percentage can be replicated. Among those trials that are published, only about half of the initially specified outcomes are actually reported. In the published trials, 50% or more of the results are inappropriately interpreted, or given a spin that favors the sponsor of the research. “If you multiply these levels of loss or distortion, even for randomized trials, it’s only a modest fraction of the evidence that is going to be credible.”
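To see why those multiplied losses matter, here is a quick back-of-the-envelope calculation. The three rates below are illustrative assumptions loosely echoing the figures quoted above, not Ioannidis’s exact numbers.

```python
# Illustrative version of Ioannidis's "multiply the losses" argument.
# The rates below are assumptions for the sake of the arithmetic,
# loosely echoing the figures quoted in the text.

replicable     = 0.50  # fraction of trials whose results replicate (assumed)
fully_reported = 0.50  # fraction of initially specified outcomes reported
not_spun       = 0.50  # fraction of results interpreted without favorable spin

credible = replicable * fully_reported * not_spun
print(f"Fraction of trial evidence surviving every filter: {credible:.3f}")
# -> 0.125, i.e. only about one-eighth of the evidence remains credible
```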

One of the changes that Ioannidis’s 2005 paper seemed to produce was the introduction of mandatory clinical trial registration guidelines by the International Committee of Medical Journal Editors (ICMJE). Member journals were supposed to require prospective registration of trials before patient enrollment as a condition of publication. The purpose is that registering a clinical trial ahead of time publicly documents the methodology that should be followed during the trial. If the published report of the trial afterwards differs from its clinical trial registration, you have evidence that the researchers massaged or spun their data when it failed to meet the originally proposed outcome measures. In other words, when they didn’t “win,” they abandoned the rules they had committed to before the trial began.

Julia Rucklidge and two others looked at whether five psychiatric journals (American Journal of Psychiatry, Archives of General Psychiatry/JAMA Psychiatry, Biological Psychiatry, Journal of the American Academy of Child and Adolescent Psychiatry, and the Journal of Clinical Psychiatry) were actually following the guidelines they said they would follow. They found that less than 15% of psychiatry trials were prospectively registered with no changes in their primary outcome measures (POMs). Most trials were either not prospectively registered, or had their POMs, timeframes, or participant numbers changed sometime after registration.

In an article for Mad in America, Rucklidge said they submitted their research for review and publication in various journals, including two of the five they investigated. Six medical or psychiatric journals refused to publish Rucklidge et al.’s findings before PLoS One, a peer-reviewed open-access journal, accepted and published them. She said that while the researchers in their study could have changed their outcome measures or failed to preregister their trials for benign reasons, “History suggests that when left unchecked, researchers have been known to change their data.”

For example, an initial clinical trial for an antidepressant could be projected to last 24 weeks. The 24-week time frame would be part of the primary outcome measure: will the antidepressant be more effective than a placebo after 24 weeks? After gathering all the data, the researchers find that the antidepressant was not more effective than placebo at 24 weeks. But let’s say it was more effective than placebo at 18 weeks. What gets reported are the results at 18 weeks; the original 24-week timeframe may disappear altogether when the results are published.
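This kind of silent switch is exactly what comparing a paper against its registration can catch. Here is a minimal sketch of such a comparison; the records, field names, and the 24-week versus 18-week values are entirely hypothetical, invented to mirror the example above.

```python
# A minimal sketch of flagging outcome switching by comparing a trial's
# registered record against its published record. All fields and values
# are hypothetical, mirroring the 24-week vs. 18-week example above.

registered = {"trial_id": "EXAMPLE-001",
              "primary_outcome": "depression score vs. placebo",
              "timeframe_weeks": 24}

published = {"trial_id": "EXAMPLE-001",
             "primary_outcome": "depression score vs. placebo",
             "timeframe_weeks": 18}

discrepancies = [field for field in registered
                 if registered[field] != published.get(field)]

if discrepancies:
    print(f"Possible outcome switching in {registered['trial_id']}: "
          f"{', '.join(discrepancies)} changed after registration")
# -> flags 'timeframe_weeks' (24 -> 18), the kind of silent change a
#    reader of the published paper alone would never see
```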

People glorify their positive results and minimize or neglect reporting on negative results. . . . At worst, our findings mean that the trials published over the last decade cannot be fully trusted. And given that health decisions and funding are based on these published findings, we should be very concerned.

Looking ahead, Rucklidge had several suggestions for improving the situation with clinical trials.

1) Member journals of the ICMJE should have a dedicated person checking trial registries; trials should simply not be published if they haven’t been prospectively registered as determined by the ICMJE, or the journals should state clearly and transparently why studies were published without adhering to ICMJE guidelines (a sketch of such a registry check follows this list).
2) If authors do change POMs or participant numbers, or retrospectively register their trials, the reasons should be clearly outlined in the methods section of the publication.
3) To further improve transparency, authors could upload the full clinical trial protocol, including all amendments, to the registry website and provide the raw data from the clinical trial in a format accessible to the research community.
4) Greater effort needs to be made to ensure authors are aware of the importance of prospectively registering trials, by improving guidelines for submission and when applying for ethical approval.
5) Finally, reviewers should not make decisions about the acceptability of a study for publication based on whether the findings are positive or negative, as this may implicitly encourage authors to be selective in reporting results.
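The first suggestion lends itself to a simple automated check: prospective registration just means the registration date precedes the start of enrollment. The sketch below uses made-up record fields and dates; in practice the dates would come from a registry such as ClinicalTrials.gov.

```python
# A sketch of the registry check in suggestion 1: was the trial
# registered before patient enrollment began? Field names and dates
# are hypothetical.
from datetime import date

def prospectively_registered(registered_on: date, enrollment_start: date) -> bool:
    """ICMJE-style test: registration must precede patient enrollment."""
    return registered_on <= enrollment_start

trial = {"id": "EXAMPLE-002",
         "registered_on": date(2014, 6, 1),
         "enrollment_start": date(2013, 11, 15)}  # enrolled months before registering

if not prospectively_registered(trial["registered_on"], trial["enrollment_start"]):
    print(f"{trial['id']}: retrospectively registered; per suggestion 1, "
          "do not publish without a transparent explanation")
```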

Rucklidge also mentioned another study by Mathieu, Chan and Ravaud that looked at whether clinical trial registrations were actually examined by peer reviewers. The Mathieu et al. survey found that only one-third of peer reviewers looked at registered trial information and then reported any discrepancies to journal editors. “When discrepancies were identified, most respondents (88.8%) mentioned them in their review comments, and 19.8% advised editors not to accept the manuscript.” The respondents who did not look at the trial registry information said the main reason they failed to do so was the difficulty or inconvenience of accessing the registry record.

One improvement suggested by Mathieu, Chan and Ravaud was for journals to give peer reviewers the clinical trial registration number and a direct Web link to the registry record, or to provide the registered information along with the manuscript to be reviewed.
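That suggestion is trivial to implement in software. As an illustration, the snippet below builds a reviewer-facing link from a trial’s NCT number. The URL pattern is ClinicalTrials.gov’s current public record format as I understand it, and the identifier shown is a made-up example.

```python
# Build a direct registry link for reviewers from a trial's NCT number.
# NCT identifiers are "NCT" followed by eight digits; the one used below
# is a made-up example.
import re

def registry_link(nct_id: str) -> str:
    if not re.fullmatch(r"NCT\d{8}", nct_id):
        raise ValueError(f"Not a valid ClinicalTrials.gov identifier: {nct_id}")
    return f"https://clinicaltrials.gov/study/{nct_id}"

print(registry_link("NCT01234567"))
# -> https://clinicaltrials.gov/study/NCT01234567
```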

The actions of researchers who fail to accurately and completely register their clinical trials, alter POMs, change participant numbers, or make other adjustments to their research methodology and analysis without clearly noting the changes are akin to the sleight-of-hand practiced by illusionists. And sometimes the effect is radical enough to make an ineffective drug seem to be a new miracle cure.

