
Antidepressant Fall From Grace, Part 2


In 1995 Irving Kirsch and Guy Sapirstein set out to assess the placebo effect in the treatment of depression. Like most people, Kirsch used to think that antidepressants worked—the active ingredient in the antidepressant helped people “cope with their psychological condition.”  They weren’t surprised to find a strong placebo effect in treating depression; that was their hypothesis and the reason to do the study. What did surprise them was how small the drug effect was—the difference between the response to the drug and the response to the placebo. “The placebo effect was twice as large as the drug effect.”

Along with Thomas Moore and others, Kirsch then did an analysis of data submitted to the FDA for approval of the six most widely prescribed antidepressants approved between 1987 and 1999: fluoxetine (Prozac), paroxetine (Paxil), sertraline (Zoloft), venlafaxine (Effexor), nefazodone (Serzone) and citalopram (Celexa). The researchers found that 80% of the response to medication was duplicated in placebo control groups. The mean difference between drug and placebo was clinically negligible. You can read more about this study in Prevention & Treatment, “The Emperor’s New Drugs.”

When they published their findings, Kirsch said he was pleasantly surprised by the consensus about them. “Some commentators argued that our analysis had actually overestimated the real effect of antidepressants.” One group of researchers said the minimal difference between antidepressant treatment and controls was a “dirty little secret” that had been known all along. “The companies that produce the drugs knew it, and so did the regulatory agencies that approve them for marketing. But most of the doctors who prescribe these medications did not know it, let alone their patients.”

According to Irving Kirsch, pharmaceutical companies have used several devices to present their products as better than they actually are. First, they withhold negative studies from publication. While publication bias affects all areas of research, it is acutely problematic with drug trials. “Most of the clinical trials evaluating new medications are sponsored financially by the companies that produce and stand to profit from them.”

The companies own the data that come out of the trials they sponsor, and they can choose how to present them to the public — or withhold them and not present them to the public at all. With widely prescribed medications, billions of dollars are at stake.

Positive studies may be published multiple times, a practice known as “salami slicing.” Often this is done in ways that make it difficult for reviewers to recognize the studies were done on the same data. The authors may be different. References to the previous publication of the data are often missing. Sometimes there are minor differences in the data used between one publication and another. Sometimes positive data are cherry-picked from a clinical trial and published, giving the impression that the drug was more effective than it really was. For more information on this issue, see The Emperor’s New Drugs: Exploding the Antidepressant Myth by Irving Kirsch.

Published in 2004, the STAR*D study (Sequenced Treatment Alternatives to Relieve Depression) was a multisite, multistep clinical trial of outpatients with nonpsychotic major depression. It was designed to be more representative of the real-world use of antidepressants than typical clinical trials, and to show the effectiveness of antidepressants in the best of circumstances. STAR*D was funded by the NIMH at a cost of $35 million and took six years to complete. It was hailed as the “largest antidepressant effectiveness trial ever conducted.” Robert Whitaker described it as follows:

The STAR*D trial was designed to test whether a multistep, flexible use of medications could produce remission in a high percentage of depressed outpatients. Those who didn’t get better with three months of initial treatment with an SSRI (citalopram) then entered a second stage of treatment, in which they were either put on a different antidepressant or given a second drug to augment an antidepressant. Those who failed to remit in step two could go on to a step three, and so on; in total, there were four treatment steps.

According to the NIMH, in level 1 about one-third of participants became symptom-free. In level 2, about 25% of participants became symptom-free. So about half of the participants in the STAR*D study became symptom-free after two treatment levels. “Over the course of all four treatment levels, almost 70 percent of those who did not withdraw from the study became symptom-free.” However, there was a progressive dropout rate: 21% withdrew after level 1; 30% after level 2; and 42% after level 3.
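
To see how the level-by-level figures roll up into these cumulative numbers, the rough calculation below (a minimal sketch in Python, using the approximate rates cited above, about one-third remitting at level 1 and about 25% at level 2, as assumptions rather than exact trial data) shows why two treatment levels yield roughly half of participants symptom-free.

# Rough illustration of how per-level remission rates compound into a
# cumulative rate. The rates are the approximate figures cited above,
# used here as assumptions for illustration, not exact STAR*D data.

level_rates = [1 / 3, 0.25]  # share of those entering each level who remit

still_depressed = 1.0        # fraction of the original sample not yet in remission
cumulative_remitted = 0.0

for rate in level_rates:
    cumulative_remitted += still_depressed * rate  # newly remitted at this level
    still_depressed *= 1 - rate                    # carried forward to the next level

print(f"Cumulative remission after two levels: {cumulative_remitted:.0%}")  # roughly 50%

Extending this same compounding across all four levels, and treating those who dropped out as if they would have remitted at the same rate as those who stayed, is how a theoretical cumulative figure like the 67% discussed below can be reached even though far fewer patients were actually observed to remit and stay well.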

An overall analysis of the STAR*D results indicates that patients with difficult-to-treat depression can get well after trying several treatment strategies, but the odds of beating the depression diminish with every additional treatment strategy needed. In addition, those who become symptom-free have a better chance of remaining well than those who experience only symptom improvement. And those who need to undergo several treatment steps before they become symptom-free are more likely to relapse during the follow-up period. Those who required more treatment levels tended to have more severe depressive symptoms and more co-existing psychiatric and general medical problems at the beginning of the study than those who became well after just one treatment level.

The message communicated to doctors and the public was that STAR*D showed that antidepressants enabled 67% of depressed patients to recover. Robert Whitaker said an article in The New Yorker commented this “effectiveness rate” was “far better than the rate achieved by a placebo.” But this “cumulative” remission rate of 67% was in fact a theoretical rate that assumed those who dropped out of the study would have the same remission rates as those who remained. “They [also] included remission numbers for patients who weren’t depressed enough at baseline to meet study criteria, and thus weren’t eligible for analysis.” Irving Kirsch said the STAR*D symptom remission was temporary for most: “Approximately 93 percent of the patients who recovered relapsed or dropped out of the trial within a year.”

Recently, Kirsch and others acquired the STAR*D raw data through the NIMH and reanalyzed the HRSD (Hamilton Rating Scale for Depression) results. The HRSD was identified in the original protocol as the primary outcome measure for STAR*D. “Yet the outcome that was presented in almost all the study papers was the QIDS (Quick Inventory of Depressive Symptomatology), a measure made up especially for the STAR-D study, with no prior or subsequent credentials.” The QIDS was devised as a way of tracking symptoms during the course of treatment, not as an outcome measure, and the original study protocol stated it should not be used as one.

In the reanalysis, the improvement shown by the HRSD data in STAR*D failed to reach the threshold required for a minimal improvement. “It is also below average placebo improvement in placebo-controlled trials of antidepressants.” The STAR*D results were about “half the magnitude of those obtained in standard comparative drug trials.” Commenting on STAR*D in his book, The Emperor’s New Drugs, Irving Kirsch said:

This is a rather bleak picture of the effects of antidepressant treatment. In the best of circumstances—which is what the trial was designed to evaluate—only one out of three depressed patients showed a lasting recovery from depression, and since there was no evaluation of what the recovery rate might have been with placebo treatment, there was no way of knowing whether their recovery was actually due to the medication they had been given.

In her review of the Kirsch reanalysis of the STAR*D study, Joanna Moncrieff said STAR*D suggests that in real life situations, people who take antidepressants do not do very well. “In fact, given that for the vast majority of people depression is a naturally remitting condition, it is difficult to believe that people treated with antidepressants do any better than people who are offered no treatment at all.” She thought this might be the reason the results of the main outcome measure (the HRSD) remained unpublished for so long—and also an explanation for the substitution of the QIDS as an outcome measure. In the original STAR*D analysis:

Whether this was deliberate on the part of the original STAR-D authors or not, it was certainly not made explicit. There should surely be uproar about the withholding of information about one of the world’s most widely prescribed class of drugs. We must be grateful to Kirsch and his co-authors for finally putting this data in the public domain.

According to data gathered by the CDC, 10.7% of all U.S. adults in 2011-2014 reported using an antidepressant in the past 30 days, 5.9 times the reported usage for 1988-1994. Demographically, the percentage of U.S. adults who used antidepressants increased with age, and the percentages of women using antidepressants were consistently higher than those of men in all age groups. Yet their effectiveness in treating depression has been shown to be little better than that of a placebo. And given that they have a multitude of adverse effects, even the SSRIs, in most cases no medication may be better than an antidepressant.

See “Dirty Little Secret” and “Do No Harm with Antidepressants” on this website for more information on the antidepressant research of Irving Kirsch. See “The Lancet Story on Antidepressants,” Part 1 and Part 2 for more on the ongoing debate over the effectiveness of antidepressants. See “Antidepressant Fall From Grace, Part 1” for a brief history of antidepressants.

