Doubling Down on STAR*D Outcomes


In January of 2006, the NIMH announced the results of the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study, the largest and longest study ever conducted to evaluate depression treatment. Its purpose was to determine the effectiveness of different treatments for people with Major Depressive Disorder (MDD) who did not respond to initial treatment with an antidepressant. The startling STAR*D results reported that almost 70 percent of those who did not withdraw from the study became symptom-free. “For the first time, doctors and people with depression now have extensive data on antidepressant treatments from a federally funded, large-scale, long-term study directly comparing treatment strategies.” However, the true remission rate turned out to be 35%, around half of what was reported.

In August of 2023, Ed Pigott and other researchers published a reanalysis of the STAR*D patient-level data set in BMJ Open, adhering to the original research protocol. They discovered the STAR*D investigators did not use the protocol-stipulated Hamilton Rating Scale for Depression (HRSD), but instead used a non-blinded, clinician-administered assessment, the Quick Inventory of Depressive Symptomatology (QIDS-C). They also included 99 patients who scored as remitted on the HRSD at the outset of the study, as well as 125 who scored as remitted when initiating their next-level treatment. “This inflated their report of outcomes.”

Unfortunately, the STAR*D investigators’ assertion of a 67% cumulative remission rate had already become accepted clinical wisdom. The NIMH’s director at the time, Thomas Insel, and an editorial in the American Journal of Psychiatry both claimed STAR*D participants achieved a 70% remission rate. “Our reanalysis found that in step 1, STAR*D’s remission and extent of improvement rates were substantially less than those reported in other open-label antidepressant comparator trials and then grew worse in steps 2-4.” The remission rate in step 1 was 25.5%; by step 4, it was only 10.4%.

Robert Whitaker further reported that Pigott and others discovered only 3% of the participants who entered the trial remitted and then stayed well through to its one-year end. One of the STAR*D investigators thought Pigott’s analysis was “reasonable and not incompatible with what we had reported.” That was 13 years ago, and there still has not been a public acknowledgement that these protocol violations were a form of scientific misconduct.

Yet, there has been no public acknowledgement by the American Psychiatric Association (APA) of this scientific misconduct. There has been no call by the APA—or academic psychiatrists in the United States—to retract the studies that reported the inflated remission rates. There has been no censure of the STAR*D investigators for their scientific misconduct. Instead, they have, for the most part, retained their status as leaders in the field.

Thus, given the documented record of scientific misconduct, in the largest and most important trial of antidepressants ever conducted, there is only one conclusion to draw: In American psychiatry, scientific misconduct is an accepted practice.

Whitaker said this presented a challenge to American citizens. If the American Psychiatric Association would not police its own research, it was up to the public to demand the STAR*D paper be withdrawn from the American Journal of Psychiatry. “As STAR*D was designed to guide clinical care, it is of great public health importance that this be done.”

He persuasively argued that there had been an intent to deceive. He said once Pigott and colleagues identified the deviations from the STAR*D protocol (which they did initially in 2010), “the STAR*D investigators’ ‘intent to deceive’ was evident.” After Pigott made the protocol and other key documents available in 2011 on two blogs for the Mad in America website, the scientific community could see the deception.

Their recent RIAT publication [in August of 2023] makes it possible to put together a precise numerical accounting of how the STAR*D investigators’ research misconduct, which unfolded step by step as they published three articles in 2006, served to inflate the reported remission rate. This MIA Report lays out that chronology of deceit. Indeed, readers might think of this MIA Report as a presentation to a jury. Does the evidence show that STAR*D’s summary finding of a 67% cumulative remission rate was a fabrication, with this research misconduct born from a desire to preserve societal belief in the effectiveness of antidepressants?

In Psychiatry Under the Influence, Whitaker and his coauthor Lisa Cosgrove wrote about how the STAR*D trial was an example of institutional corruption. They said there were two economies of influence driving that corruption: psychiatry’s guild interests and the extensive financial ties the STAR*D investigators had with the pharmaceutical industry. They said:

Although this was a NIMH-funded trial, industry influence was indirectly present during the trial. Rush and at least seven other STAR*D investigators had financial ties to Forest Laboratories, the manufacturer of Celexa. The investigators’ collective disclosure statement revealed hundreds of ties to pharmaceutical companies, with many investigators reporting that they had served as both consultants and speakers. Yet, given that this was a NIMH-funded trial, STAR*D couldn’t be blamed on the drug companies, and it could be argued that the “corruption” seen here far outstripped anything seen in a commercial trial of the SSRI antidepressants. (p. 129)

Whitaker said the American Psychiatric Association is best understood as a trade association that promotes the financial and professional interests of its members. The APA has long touted antidepressants as an effective and safe treatment. He thought if the STAR*D results had been accurately reported, they would have derailed society’s belief in the safety and efficacy of antidepressants. The STAR*D investigators were, in a business sense, protecting one of their primary “products.” And they were safeguarding the public image of their profession.

This research misconduct has done extraordinary harm to the American public, and, it can be argued, to the global public. As this was the study designed to assess outcomes in real-world patients and guide future clinical care, if the outcomes had been honestly reported, consistent with accepted scientific standards, the public would have had reason to question the effectiveness of antidepressants and thus, at the very least, been cautious about their use. But the fraud created a soundbite—a 67% remission rate in real-world patients—that provided reason for the public to believe in their effectiveness, and a soundbite for media to trot out when new questions were raised about this class of drugs.

This, of course, is fraud that violates informed consent principles in medicine. The NIMH and the STAR*D investigators, with their promotion of a false remission rate, were committing an act that, if a doctor knowingly misled his or her patient in this way, would constitute medical battery.

This cataloging of harm done extends to those who prescribe antidepressants. Primary care physicians, psychiatrists, and others in the mental health field who want to do right by their patients have been misled by this fraud about the drugs’ effectiveness in real-world patients.

The harm also extends to psychiatry’s reputation with the public. The STAR*D scandal, as it becomes more widely known, fuels the public criticism of psychiatry that the field so resents.

Believing this to be a matter of great importance to public health, Mad in America put up a petition on change.org urging the American Journal of Psychiatry to retract the November 2006 article on the STAR*D results. Their hope is that the petition will circulate widely on social media and produce a public call for retraction that grows too loud for the American Journal of Psychiatry to ignore. Whitaker hoped the publication of the August 2023 article by Pigott and others, linked above, in BMJ Open would lead the American Journal of Psychiatry to retract a paper that told a fabricated story about the outcome of the STAR*D study.

On December 1, 2023, the American Journal of Psychiatry published a letter from John Rush and four other STAR*D researchers, “The STAR*D Data Remain Strong: Reply to Pigott et al.” The researchers claimed the analytic approach of Pigott et al. had significant methodological flaws, and they stood by their STAR*D results and methodology. They further said the effectiveness trials of their study were designed “to be more inclusive and representative of the real world than efficacy trials,” and that Pigott et al. failed to recognize this rationale for including the 941 patients who were in the original analyses but were eliminated from the reanalysis.

The rationale for removing these participants from the longitudinal analysis appears to reflect a studious misunderstanding of the aims of the Rush et al. paper, with the resulting large difference in remission rates most likely the result of exclusion by Pigott et al. of hundreds of patients with low symptom scores at the time of study exit.

Robert Whitaker responded to the letter in “After MIA Calls for Retraction of STAR*D Article, Study Authors Double Down.” He said the STAR*D investigators had inflated the “cumulative remission rate” in four principal ways: first, by including ineligible patients in their tally of remitted patients; second, by switching outcome measures; third, by categorizing early dropouts as non-evaluable patients; and fourth, by calculating a “theoretical” remission rate.
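To make the arithmetic behind these mechanisms concrete, here is a minimal sketch in Python using purely hypothetical counts. They are not the actual STAR*D figures, only numbers chosen to echo the reported contrast between roughly 35% and roughly 70%. The sketch illustrates three of the four practices Whitaker describes: counting ineligible patients as remissions, switching to a more lenient outcome measure, and adding “theoretical” remissions for dropouts.

```python
# Illustrative sketch only: every count below is hypothetical, chosen simply to
# echo the contrast between a roughly 35% protocol-faithful remission rate and
# a roughly 70% reported rate. These are NOT the actual STAR*D numbers.

def rate(numerator: int, denominator: int) -> float:
    """Remission rate as a percentage."""
    return 100.0 * numerator / denominator

# Hypothetical trial of 1,000 enrolled patients.
total_enrolled = 1000
ineligible_at_baseline = 100          # already scored as remitted before treatment
eligible = total_enrolled - ineligible_at_baseline   # 900 protocol-eligible patients

dropouts = 300                        # eligible patients who left the study early
remitted_protocol_measure = 315       # remissions on the protocol-specified scale

# Protocol-faithful accounting: only eligible patients, only the protocol
# measure, dropouts counted as non-remitters.
honest_rate = rate(remitted_protocol_measure, eligible)

# Inflation mechanism 1: count the ineligible, already-remitted patients as
# treatment remissions.
# Inflation mechanism 2: switch to a more lenient outcome measure, which
# reclassifies additional completers as remitted.
extra_from_switched_measure = 135
# Inflation mechanism 3: assume half of the dropouts would have remitted had
# they stayed, and add these "theoretical" remissions to the tally.
theoretical_remissions = dropouts // 2   # 150

inflated_numerator = (remitted_protocol_measure
                      + ineligible_at_baseline
                      + extra_from_switched_measure
                      + theoretical_remissions)
inflated_rate = rate(inflated_numerator, total_enrolled)

print(f"Protocol-faithful remission rate: {honest_rate:.1f}%")      # 35.0%
print(f"Inflated cumulative remission rate: {inflated_rate:.1f}%")  # 70.0%
```

With these invented inputs, the same underlying data yield either a 35% or a 70% “remission rate” depending solely on who is counted in the numerator and which measure is used, which is the arithmetic point at the heart of the dispute.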

By the end of their letter, they again affirmed the 67% cumulative remission rate. Whitaker thought they had “doubled-down on the fraud they committed in their 2006 summary report of STAR*D outcomes.”

Now that the STAR*D authors have “defended” their work, all the public really needs to know is this: The STAR*D investigators, by including 931 patients who weren’t eligible for the study in their final tally of cumulative remissions, greatly inflated that bottom-line outcome. That is research fraud, and in their letter to the editor, rather than admit that these patients weren’t eligible for the study, they instead falsely accused Pigott and colleagues of “creating” their own “post-hoc” criteria to remove those with “large improvements” in symptom scores from their re-analysis.

Whitaker said the STAR*D scandal evolved into a litmus test for psychiatry. Would they acknowledge the research misconduct and inform the public of how the STAR*D study had been compromised? Was it okay to deceive the public in this way? “And now, with this letter to the editor, we know the answer to that litmus test.”

