01/16/24

Doubling Down on STAR*D Outcomes


In January of 2006, the NIMH announced the results of the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study, the largest and longest study ever done to evaluate depression treatment. Its purpose was to determine the effectiveness of different treatments for people with Major Depressive Disorder (MDD) who did not respond to initial treatment with an antidepressant. The startling STAR*D results reported that almost 70 percent of those who did not withdraw from the study became symptom-free. “For the first time, doctors and people with depression now have extensive data on antidepressant treatments from a federally funded, large-scale, long-term study directly comparing treatment strategies.” However, the true remission rate turned out to be 35%, around half of what was reported.

In August of 2023 Ed Pigott and other researchers published a reanalysis of the patient-level STAR*D data set in the British Medical Journal (BMJ), keeping their analysis to the original research protocol. They discovered the STAR*D investigators did not use the protocol-stipulated HRSD (Hamilton Rating Scale for Depression), but instead used a non-blinded, clinician-administered assessment, the Quick Inventory of Depressive Symptomatology (QIDS-C). The STAR*D investigators also included 99 patients who scored as remitted on the HRSD at the outset of the study, as well as 125 who scored as remitted when initiating their next-level treatment. “This inflated their report of outcomes.”

Unfortunately, the STAR*D investigators’ assertion of a 67% cumulative remission rate had already become accepted clinical wisdom. The NIMH’s director at the time, Thomas Insel, and an editorial in the American Journal of Psychiatry both claimed STAR*D participants achieved a 70% remission rate. “Our reanalysis found that in step 1, STAR*D’s remission and extent of improvement rates were substantially less than those reported in other open-label antidepressant comparator trials and then grew worse in steps 2-4.” The remission rate in step 1 was 25.5%; by step 4, it was only 10.4%.

Robert Whitaker further reported that Pigott and others discovered only 3% of the participants who entered the trial remitted and then stayed well through to the trial’s end one year later. One of the STAR*D investigators thought Pigott’s analysis was “reasonable and not incompatible with what we had reported.” That was 13 years ago, and as yet there has been no public acknowledgement that these protocol violations were a form of scientific misconduct.

Yet, there has been no public acknowledgement by the American Psychiatric Association (APA) of this scientific misconduct. There has been no call by the APA—or academic psychiatrists in the United States—to retract the studies that reported the inflated remission rates. There has been no censure of the STAR*D investigators for their scientific misconduct. Instead, they have, for the most part, retained their status as leaders in the field.

Thus, given the documented record of scientific misconduct in the largest and most important trial of antidepressants ever conducted, there is only one conclusion to draw: In American psychiatry, scientific misconduct is an accepted practice.

Whitaker said this presented a challenge to American citizens. If the American Psychiatric Association would not police its own research, it was up to the public to demand the STAR*D paper be withdrawn from the American Journal of Psychiatry. “As STAR*D was designed to guide clinical care, it is of great public health importance that this be done.”

He persuasively argued that there had been an intent to deceive. He said once Pigott and colleagues identified the deviations from the STAR*D protocol (which they did initially in 2010), “the STAR*D investigators’ ‘intent to deceive’ was evident.” After Pigott made the protocol and other key documents available in 2011 on two blogs for the Mad in America website, the scientific community could see the deception.

Their recent RIAT publication [in August of 2023] makes it possible to put together a precise numerical accounting of how the STAR*D investigators’ research misconduct, which unfolded step by step as they published three articles in 2006, served to inflate the reported remission rate. This MIA Report lays out that chronology of deceit. Indeed, readers might think of this MIA Report as a presentation to a jury. Does the evidence show that the STAR*D’s summary finding of a 67% cumulative remission rate was a fabrication, with this research misconduct born from a desire to preserve societal belief in the effectiveness of antidepressants?

In Psychiatry Under the Influence, Whitaker and his coauthor Lisa Cosgrove wrote about how the STAR*D trial was an example of institutional corruption. They identified two economies of influence driving that corruption: psychiatry’s guild interests and the extensive financial ties the STAR*D investigators had with the pharmaceutical industry. They said:

Although this was a NIMH-funded trial, industry influence was indirectly present during the trial. Rush and at least seven other STAR*D investigators had financial ties to Forest Laboratories, the manufacturer of Celexa. The investigators’ collective disclosure statement revealed hundreds of ties to pharmaceutical companies, with many investigators reporting that they had served as both consultants and speakers. Yet, given that this was a NIMH-funded trial, STAR*D couldn’t be blamed on the drug companies, and it could be argued that the “corruption” seen here far outstripped anything seen in a commercial trial of the SSRI antidepressants. (p. 129)

Whitaker said the American Psychiatric Association is best understood as a trade association that promotes the financial and professional interests of its members. The APA has long touted antidepressants as an effective and safe treatment. He thought if the STAR*D results had been accurately reported, they would have derailed society’s belief in the safety and efficacy of antidepressants. The STAR*D investigators were, in a business sense, protecting one of their primary “products.” And they were safeguarding the public image of their profession.

This research misconduct has done extraordinary harm to the American public, and, it can be argued, to the global public. As this was the study designed to assess outcomes in real-world patients and guide future clinical care, if the outcomes had been honestly reported, consistent with accepted scientific standards, the public would have had reason to question the effectiveness of antidepressants and thus, at the very least, been cautious about their use. But the fraud created a soundbite—a 67% remission rate in real-world patients—that provided reason for the public to believe in their effectiveness, and a soundbite for media to trot out when new questions were raised about this class of drugs.

This, of course, is fraud that violates informed consent principles in medicine. The NIMH and the STAR*D investigators, with their promotion of a false remission rate, were committing an act that, if a doctor knowingly misled his or her patient in this way, would constitute medical battery.

This cataloging of harm extends to those who prescribe antidepressants. Primary care physicians, psychiatrists, and others in the mental health field who want to do right by their patients have been misled by this fraud about the drugs’ effectiveness in real-world patients.

The harm also extends to psychiatry’s reputation with the public. The STAR*D scandal, as it becomes more widely known, fuels the public criticism of psychiatry that the field so resents.

Believing this to be a matter of great importance to public health, Mad in America put up a petition on change.org urging the American Journal of Psychiatry to retract the November 2006 article on the STAR*D results. Their hope is that the petition will circulate widely on social media and produce a public call for retraction that will grow too loud for the American Journal of Psychiatry to ignore. Whitaker hoped the publication of the August 2023 article by Pigott and others, linked above, in the prestigious British Medical Journal would lead the American Journal of Psychiatry to retract a paper that told a fabricated story about the outcome of the STAR*D study.

On December 1, 2023 the American Journal of Psychiatry published a letter from John Rush and four other STAR*D researchers, “The STAR*D Data Remain Strong: Reply to Pigott et al.” The researchers claimed the analytic approach of Pigott et al. had significant methodological flaws and stood by their STAR*D results and methodology. They further said the effectiveness trials of their study were designed “to be more inclusive and representative of the real world than efficacy trials,” and that Pigott et al. failed to recognize this rationale for including in the original analyses the 941 patients that the reanalysis eliminated.

The rationale for removing these participants from the longitudinal analysis appears to reflect a studious misunderstanding of the aims of the Rush et al. paper, with the resulting large difference in remission rates most likely the result of exclusion by Pigott et al. of hundreds of patients with low symptom scores at the time of study exit.

Robert Whitaker responded to the letter in “After MIA Calls for Retraction of STAR*D Article, Study Authors Double Down.” He said the STAR*D investigators had inflated the “cumulative remission rate” in four principal ways: first, by including ineligible patients in their tally of remitted patients; second, by switching outcome measures; third, by categorizing early dropouts as non-evaluable patients; and fourth, by calculating a “theoretical” remission rate.

By the end of their letter, they again affirmed the 67% cumulative remission rate. Whitaker thought they had “doubled down on the fraud they committed in their 2006 summary report of STAR*D outcomes.”

Now that the STAR*D authors have “defended” their work, all the public really needs to know is this: The STAR*D investigators, by including 931 patients who weren’t eligible for the study in their final tally of cumulative remissions, greatly inflated that bottom-line outcome. That is research fraud, and in their letter to the editor, rather than admit that these patients weren’t eligible for the study, they instead falsely accused Pigott and colleagues of “creating” their own “post-hoc” criteria to remove those with “large improvements” in symptom scores from their re-analysis.

Whitaker said the STAR*D scandal evolved into a litmus test for psychiatry. Would the field acknowledge the research misconduct and inform the public of how the STAR*D study had been compromised? Or was it okay to deceive the public in this way? “And now, with this letter to the editor, we know the answer to that litmus test.”

08/13/19

Following the Leader with Antidepressants


In February of 2018 the international debate on antidepressants was renewed when James Davies, a co-founder of the Council for Evidence-Based Psychiatry (CEP), and his coauthors published a letter in the Times on the benefits and harms of antidepressants. This was in response to a study by Cipriani et al that found all 21 antidepressants reviewed to be more effective than placebo. Carmine Pariante of the Royal College of Psychiatrists (RCP) said: “This meta-analysis finally puts to bed the controversy on anti-depressants, clearly showing that these drugs do work in lifting mood and helping most people with depression.” In response, CEP said that statement was “irresponsible and unsubstantiated, as the study actually supports what has been known for a long time,” namely that the differences between placebo and antidepressant are so minor that they are clinically insignificant. It created a media and professional firestorm that has yet to burn out, and even led to some strategic retreats by organizations like the RCP that originally hailed the results.

CEP noted that the individuals in the referenced studies were not in truly blinded clinical trials. “Most people on antidepressants experience some noticeable physical or mental alterations, and as a consequence realise they are on the active drug.” This then boosts the placebo effect, adding further questions about the so-called effectiveness of antidepressants. Irving Kirsch has published several studies demonstrating the significance of the placebo effect with antidepressants. For more on the Cipriani et al study, see “The Lancet Story on Antidepressants,” Part 1 and Part 2. For more on Irving Kirsch and the placebo effect, see “Dirty Little Secret.”

Additionally, the trials only addressed short-term use of antidepressants (8 weeks), not the long-term use which is more typical. “Around 50% of patients have been taking antidepressants for more than two years, and the study tells us nothing about their effects over the long term. In fact, there is no evidence that long-term use has any benefits, and in real-world trials (STAR-D study) outcomes are very poor.” STAR*D was the largest, longest and most expensive study of antidepressants ever conducted.

James Davies and John Read (also a member of CEP) published a systematic review in the journal Addictive Behaviors that showed antidepressant withdrawal was “more widespread, severe and long-lasting than indicated by current guidelines.” The review indicated that an average of 56% of patients who stop or reduce their antidepressants experience withdrawal symptoms, a significant proportion of whom experienced them for more than two weeks. “It is not uncommon for patients to experience symptoms for several weeks, months, or longer.” One study said 40% of patients experience symptoms for at least six weeks; another indicated that 25% experience symptoms for at least three months. Davies said the new review indicated what patients have known for years, “That withdrawal from antidepressants often causes severe, debilitating symptoms which can last for weeks, months or longer.”

Davies and Read noted in their paper that the higher incidence and longer duration of antidepressant withdrawal added credence to concerns that doctors were misdiagnosing antidepressant withdrawal as treatment failure. “Re-emergent symptoms of depression and anxiety are a regular feature of antidepressant withdrawal itself.” They pointed out that the RCP’s own survey, “Coming Off Antidepressants,” found that the withdrawal reaction was rated severe by most people, and approximately 25% of users reported experiencing anxiety for at least three months after stopping their antidepressant.

The President of the Royal College of Psychiatrists, Wendy Burn, published a letter in the Times that said “We know that in the vast majority of patients, any unpleasant symptoms experienced on discontinuing antidepressants have resolved within two weeks of stopping treatment.” CEP challenged the Royal College of Psychiatrists and its president, stating they believed the statement was not evidence-based and misled the public. Further, they pointed out that within 48 hours of the misleading statement in the Times, the RCP removed “Coming Off Antidepressants” from its website. They suggested one interpretation of that action was that the RCP was attempting to keep the public from seeing evidence that contradicted what its president claimed in the Times.

This was not just a dispute between CEP and the RCP over interpreting Cipriani et al. August of 2018 brought a one-two punch that broadened the debate over antidepressant ineffectiveness. The British Journal of Psychiatry published an editorial by Gordon Parker, the founder of the Black Dog Institute, “The benefits of antidepressants: news or fake news?”, which said antidepressant trials were disconnected from the real world of clinical practice. Psychological Medicine published a study by de Vries et al that analyzed the cumulative effect of publication biases on the apparent efficacy of antidepressants for the treatment of depression.

Asking if antidepressants are effective treatment for major depression is asking the wrong question. The problem, according to Gordon Parker, is that ‘major depression’ is a “domain diagnosis” for a variety of depressive illnesses. “Basically, the target diagnosis of major depression captures multiple types of depressions—some biological, some psychological, some social—and not all would be expected to respond to medication.” In other words, you lose the evidence for their effectiveness with biological causes by combining them with social and psychological ones. “For patients with depression, if you narrow down to those who have a biologically-based depressive sub-type, the antidepressants are distinctly effective.”

De Vries et al looked at the cumulative impact of biases upon two effective treatments for depression: antidepressants and psychotherapy. They identified four major biases: study publication bias, outcome reporting bias, spin, and citation bias. Study publication bias involves not publishing an entire study. Outcome reporting bias refers to not publishing negative outcomes or switching the status of primary and secondary outcomes. “Both biases pose an important threat to the validity of meta-analyses.”

Spin uses reporting strategies that distort the interpretation of results and mislead readers. Authors conclude the treatment is effective despite non-significant results on the primary outcome. For example, by focusing on statistical significance instead of clinical significance, researchers have claimed efficacy for several SSRIs. Another spin technique: instead of concluding a treatment was no more effective than placebo, researchers point out that it was well tolerated and effective in a subpopulation of the original study, say patients who had not received prior therapy. Finally, with citation bias, studies with positive results receive more citations than negative ones. This gives positive results greater visibility and makes negative findings harder to discover. De Vries et al concluded:

The problem of study publication bias is well-known. Our examination of antidepressant trials, however, shows the pernicious cumulative effect of additional reporting and citation biases, which together eliminated most negative results from the anti-depressant literature and left the few published negative results difficult to discover. These biases are unlikely to be unique to anti-depressant trials. We have already shown that similar processes, though more difficult to assess, occur within the psychotherapy literature, and it seems likely that the effect of these biases accumulates whenever they are present. Consequently, researchers and clinicians across medical fields must be aware of the potential for bias to distort apparent treatment efficacy, which poses a threat to the practice of evidence-based medicine.

In October of 2018 a reanalysis of the STAR*D study supported the claim of antidepressant ineffectiveness. The STAR*D study, published in 2004, attempted to mimic real-world patients, recruiting from routine outpatient treatment centers. Additionally, it did not exclude patients with comorbid diagnoses, as is typically done in clinical trials. STAR*D was funded by the NIMH at a cost of $35 million and took six years to complete. The reanalysis was done by Irving Kirsch and others. The improvement found in the reanalysis was roughly half of that seen in standard comparative drug trials. In her review of the Kirsch-led reanalysis for Mad in America, Joanna Moncrieff said STAR*D suggested that “in real life situations (which the STAR-D mimicked better than other trials) people taking antidepressants do not do very well.”

For the vast majority of people, depression naturally remits. “It is difficult to believe that people treated with antidepressants do any better than people who are offered no treatment at all.” Moncrieff speculated this may be the reason why the results of the main outcome of the STAR*D study took so long to be published. For more on the STAR*D study, see “Antidepressant Fall from Grace, Part 2.”

Then in May of 2019, the Royal College of Psychiatrists changed its position on antidepressant withdrawal. It issued a revised policy statement updating its guidance to doctors. James Davies of CEP said the changes were welcome and, if acted upon, “will help reduce the harm that is being caused to huge numbers of patients through overprescribing, inadequate doctor training and often disastrous withdrawal management.” The College called for the following changes:

  • There should be greater recognition of the potential for severe and long-lasting withdrawal symptoms on and after stopping antidepressants in NICE guidelines and patient information
  • NICE should develop clear evidence-based and pharmacologically-informed recommendations to help guide gradual withdrawal from antidepressant use
  • The use of antidepressants should always be underpinned by a discussion with the patient about the potential level of benefits and harms, including withdrawal
  • Discontinuation of antidepressants should involve the dosage being tapered, which may occur over several months, and at a reduction rate that is tolerable for the patient
  • Monitoring is needed to distinguish the features of antidepressant withdrawal from emerging symptoms
  • Adequate support services should be commissioned for people affected by severe and prolonged antidepressant withdrawal, modelled on existing best practice
  • There should be routine monitoring on when and why patients are prescribed antidepressants
  • Training for doctors should be provided on appropriate withdrawal management
  • Research is needed into the benefits and harms of long-term antidepressant use

These changes by the RCP with regard to antidepressants are needed in the US as well. Antidepressant withdrawal is a real concern for some individuals. Routine monitoring of when and why patients are prescribed antidepressants is needed. Support services are needed for individuals who experience severe and prolonged withdrawal. There is a need to inform patients when prescribing antidepressants of the potential benefits as well as the potential harms—including withdrawal.

Research into the potential benefits and harms of long-term antidepressant use is needed. Discontinuation of antidepressants should be done slowly, taking its cue from how well the patient is tolerating the taper. Both the patient and doctor should carefully monitor the tapering process and strive to distinguish between symptoms of antidepressant withdrawal and emerging symptoms of the underlying depressive disorder. Doctors need to be trained in appropriate tapering and withdrawal management of antidepressants.

Drawing on the above discussion, we can add the need for greater awareness of the multiple types of depressions—some biological, some psychological, some social—and the need to freely acknowledge that antidepressants won’t work for everyone. Edward Shorter makes a compelling case for distinguishing between depression and melancholia in How Everyone Became Depressed. In the pursuit of developing the evidence base for the use of antidepressants and best practice guidelines, we need to systematically eliminate the impact of bias on the publication of research results with antidepressants. Admittedly this is a problem that extends beyond just antidepressant research, see “Clinical Trial Sleight-of-Hand,” “The Reproducibility Problem” and “Reproducibility in Science” for more information.

British psychiatrists have taken the first step towards correcting errors in how they use antidepressants. Hopefully they will persist in seeing that the recommended changes are implemented. American psychiatrists and physicians need to do the same. They need to follow the lead of the RCP.

01/08/19

Antidepressant Fall From Grace, Part 2


In 1995 Irving Kirsch and Guy Sapirstein set out to assess the placebo effect in the treatment of depression. Like most people, Kirsch used to think that antidepressants worked—the active ingredient in the antidepressant helped people “cope with their psychological condition.”  They weren’t surprised to find a strong placebo effect in treating depression; that was their hypothesis and the reason to do the study. What did surprise them was how small the drug effect was—the difference between the response to the drug and the response to the placebo. “The placebo effect was twice as large as the drug effect.”

Along with Thomas Moore and others, Kirsch then did an analysis of data submitted to the FDA for approval of the six most widely prescribed antidepressants approved between 1987 and 1999: fluoxetine (Prozac), paroxetine (Paxil), sertraline (Zoloft), venlafaxine (Effexor), nefazodone (Serzone) and citalopram (Celexa). The researchers found that 80% of the response to medication was duplicated in placebo control groups. The mean difference between drug and placebo was clinically negligible. You can read more about this study in Prevention & Treatment, “The Emperor’s New Drugs.”
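To make the arithmetic concrete, here is a minimal sketch of what that 80% figure implies. It is an illustration, not the authors’ own calculation: only the 80% proportion comes from the FDA analysis, while the 10-point mean improvement is a hypothetical number chosen purely to make the numbers tangible.

```python
# Minimal sketch of the drug-effect arithmetic. Only the 80% proportion
# comes from the Kirsch/Moore FDA analysis; the 10-point mean improvement
# is a hypothetical figure used purely for illustration.
drug_response = 10.0    # hypothetical mean symptom improvement on drug
placebo_share = 0.80    # share of that response duplicated on placebo

placebo_response = drug_response * placebo_share  # 8.0 points
drug_effect = drug_response - placebo_response    # 2.0 points

print(f"Placebo response: {placebo_response:.1f} points")
print(f"Drug effect (drug minus placebo): {drug_effect:.1f} points")
# On these numbers, only one-fifth of the improvement a patient feels on
# the medication is attributable to the drug itself.
```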

When they published their findings, Kirsch said he was pleasantly surprised by the consensus that emerged. “Some commentators argued that our analysis had actually overestimated the real effect of antidepressants.” One group of researchers said the minimal difference between antidepressant treatment and controls was a “dirty little secret” that had been known all along. “The companies that produce the drugs knew it, and so did the regulatory agencies that approve them for marketing. But most of the doctors who prescribe these medications did not know it, let alone their patients.”

According to Irving Kirsch, pharmaceutical companies have used several devices to present their products as better than they actually are. First, they will withhold negative studies from publication. While publication bias affects all areas of research, it is acutely problematic with drug trials. “Most of the clinical trials evaluating new medications are sponsored financially by the companies that produce and stand to profit from them.”

The companies own the data that come out of the trials they sponsor, and they can choose how to present them to the public — or withhold them and not present them to the public at all. With widely prescribed medications, billions of dollars are at stake.

Positive studies may be published multiple times, a practice known as “salami slicing.” Often this is done in ways that make it difficult for reviewers to recognize the studies were done on the same data. The authors may be different. References to the previous publication of the data are often missing. Sometimes there are minor differences in the data used between one publication and another. Sometimes positive data is cherry-picked from a clinical trial and published, giving the impression that the drug was more effective than it really was. For more information on this issue, see: The Emperor’s New Drugs: Exploding the Antidepressant Myth by Irving Kirsch.

Published in 2004, the STAR*D study (Sequenced Treatment Alternatives to Relieve Depression) was a multisite, multistep clinical trial of outpatients with nonpsychotic major depression. It was designed to be more representative of the real-world use of antidepressants than typical clinical trials, and to show the effectiveness of antidepressants in the best of circumstances. STAR*D was funded by the NIMH at a cost of $35 million and took six years to complete. It was hailed as the “largest antidepressant effectiveness trial ever conducted.” Robert Whitaker described it as follows:

The STAR*D trial was designed to test whether a multistep, flexible use of medications could produce remission in a high percentage of depressed outpatients. Those who didn’t get better with three months of initial treatment with an SSRI (citalopram) then entered a second stage of treatment, in which they were either put on a different antidepressant or given a second drug to augment an antidepressant. Those who failed to remit in step two could go on to a step three, and so on; in total, there were four treatment steps.

According to the NIMH, in level 1, about one-third of participants became symptom-free. In level 2, about 25% of participants became symptom-free. So about half of the participants in the STAR*D study became symptom-free after two treatment levels. “Over the course of all four treatment levels, almost 70 percent of those who did not withdraw from the study became symptom-free.” However, there was a progressive dropout rate: 21% withdrew after level 1; 30% after level 2; and 42% after level 3.

An overall analysis of the STAR*D results indicates that patients with difficult-to-treat depression can get well after trying several treatment strategies, but the odds of beating the depression diminish with every additional treatment strategy needed. In addition, those who become symptom-free have a better chance of remaining well than those who experience only symptom improvement. And those who need to undergo several treatment steps before they become symptom-free are more likely to relapse during the follow-up period. Those who required more treatment levels tended to have more severe depressive symptoms and more co-existing psychiatric and general medical problems at the beginning of the study than those who became well after just one treatment level.

The message communicated to doctors and the public was that STAR*D showed that antidepressants enabled 67% of depressed patients to recover. Robert Whitaker said an article in The New Yorker commented this “effectiveness rate” was “far better than the rate achieved by a placebo.” But this “cumulative” remission rate of 67% was in fact a theoretical rate that assumed those who dropped out of the study would have the same remission rates as those who remained. “They [also] included remission numbers for patients who weren’t depressed enough at baseline to meet study criteria, and thus weren’t eligible for analysis.” Irving Kirsch said the STAR*D symptom remission was temporary for most: “Approximately 93 percent of the patients who recovered relapsed or dropped out of the trial within a year.”
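To see how much that single assumption can inflate a bottom-line number, here is a minimal back-of-the-envelope sketch in Python. It is not the investigators’ actual calculation: the level 1 and 2 remission rates and the dropout percentages come from the figures above, while the level 3 and 4 rates are assumptions for illustration, so the outputs demonstrate the mechanism rather than reproduce the published 67% or the corrected 35%.

```python
# Minimal sketch of how a "theoretical" cumulative remission rate inflates
# outcomes. The level 1 and 2 remission rates and the dropout rates come
# from the figures quoted above; the level 3 and 4 rates are hypothetical,
# so this illustrates the mechanism rather than the published numbers.
step_remission = [0.33, 0.25, 0.14, 0.13]  # last two values assumed
dropout_after = [0.21, 0.30, 0.42, 0.0]    # share of non-remitters withdrawing

def cumulative_remission(n_start, remission, dropout):
    """Tally remitters step by step; dropouts contribute no remissions."""
    remaining, remitted = float(n_start), 0.0
    for r, d in zip(remission, dropout):
        step_remitters = remaining * r
        remitted += step_remitters
        remaining = (remaining - step_remitters) * (1 - d)
    return remitted / n_start

observed = cumulative_remission(1000, step_remission, dropout_after)
# The "theoretical" rate pretends every non-remitter stayed in the trial
# and kept remitting at the same per-step rates, i.e., zero dropout.
theoretical = cumulative_remission(1000, step_remission, [0.0] * 4)

print(f"Observed cumulative remission: {observed:.0%}")       # ~52%
print(f"Theoretical cumulative remission: {theoretical:.0%}") # ~62%
# On these illustrative inputs, that one assumption adds roughly ten
# points. STAR*D's reported 67% also relied on switched rating scales and
# ineligible patients, which is how it diverged even further from the
# corrected 35%.
```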

Recently, Kirsch and others acquired the STAR*D raw data through the NIMH and reanalyzed the HRSD (Hamilton Rating Scale for Depression) results. The HRSD was identified by the original protocol as the primary outcome measure for STAR*D. “Yet the outcome that was presented in almost all the study papers was the QIDS (Quick Inventory of Depressive Symptomatology), a measure made up especially for the STAR-D study, with no prior or subsequent credentials.” The QIDS was devised as a way of tracking symptoms during the course of treatment, not as an outcome measure. And the original study protocol stated it should not be used as an outcome measure.

Analysis of the HRSD data in STAR*D showed improvement that failed to reach the threshold required for a minimal improvement. “It is also below average placebo improvement in placebo-controlled trials of antidepressants.” The STAR*D results were about “half the magnitude of those obtained in standard comparative drug trials.” Commenting on STAR*D in his book, The Emperor’s New Drugs, Irving Kirsch said:

This is a rather bleak picture of the effects of antidepressant treatment. In the best of circumstances—which is what the trial was designed to evaluate—only one out of three depressed patients showed a lasting recovery from depression, and since there was no evaluation of what the recovery rate might have been with placebo treatment, there was no way of knowing whether their recovery was actually due to the medication they had been given.

In her review of the Kirsch reanalysis of the STAR*D study, Joanna Moncrieff said STAR*D suggests that in real life situations, people who take antidepressants do not do very well. “In fact, given that for the vast majority of people depression is a naturally remitting condition, it is difficult to believe that people treated with antidepressants do any better than people who are offered no treatment at all.” She thought this might be the reason the results of the main outcome measure (the HRSD) remained unpublished for so long, and also an explanation for the substitution of the QIDS as an outcome measure in the original STAR*D analysis:

Whether this was deliberate on the part of the original STAR-D authors or not, it was certainly not made explicit. There should surely be uproar about the withholding of information about one of the world’s most widely prescribed class of drugs. We must be grateful to Kirsch and his co-authors for finally putting this data in the public domain.

According to data gathered by the CDC, 10.7% of all U.S. adults in 2011-2014 reported using an antidepressant in the past 30 days. This is 5.9 times the reported usage for 1988-1994. Demographically, the percentage of U.S. adults who used antidepressants increased with age, and the percentages of women using antidepressants were consistently higher than those of men for all age groups. Yet their effectiveness in treating depression has been shown to be little better than a placebo. And given that antidepressants have a multitude of adverse effects—even the SSRIs—in most cases no medication may be better than an antidepressant.

See “Dirty Little Secret” and “Do No Harm with Antidepressants” on this website for more information on the antidepressant research of Irving Kirsch. See “The Lancet Story on Antidepressants,” Part 1 and Part 2 for more on the ongoing debate over the effectiveness of antidepressants. See “Antidepressant Fall From Grace, Part 1” for a brief history of antidepressants.