The Lancet Story on Antidepressants, Part 2


While introducing his review on The Mental Elf of the Lancet study by Cipriani et al., “Comparative efficacy and acceptability of 21 antidepressant drugs,” Andre Tomlin commented that it had been a rough few months in which “anti-antidepressant voices” really hit the mainstream. Neuroskeptic thought the study was a nice piece of work, but that it contained very little new information; he also thought the media hype over it was “frankly bananas.” In Part 1 of this article, I looked at the more positive responses to the Cipriani et al. study. Here we will look at the rest of the story from the “anti-antidepressant voices.”

Turn to Part 1 if you want to hear what The Mental Elf and Neuroskeptic had to say about the Cipriani et al. study first. Here we’ll look at the thoughts of Peter Gøtzsche, Joanna Moncrieff and the Council for Evidence-Based Psychiatry.

Tomlin seems to question Gøtzsche’s ‘evidence’ that antidepressants actually kill people who take them. But turn to “In the Dark About Antidepressants” or “Psychiatry Needs a Revolution” for more on Gøtzsche’s evidence of the harm from antidepressants before you dismiss his claims. Remember that Peter Gøtzsche is a careful medical researcher and the Director of the Nordic Cochrane Centre. Along with 80 others, he helped start the Cochrane Collaboration in 1993, which is “a global independent network of researchers, professionals, patients, carers, and people interested in health.”

In “Rewarding the Companies that Cheated the Most in Antidepressant Trials,” Dr. Gøtzsche’s opening comment was: “It is well known that we cannot trust the data the drug companies publish, and it seems that, in psychiatric drug trials, the manipulations with the data are particularly pronounced.” He described, with supporting citations, how half the deaths and suicides that occur in randomised drug trials are not published. When independent researchers have the opportunity to analyze trial data themselves, “the results are often markedly different to those the companies have published.” He then said:

Fraud and selective reporting are of course not limited to the most serious outcomes but also affect other trial outcomes. Several of the authors of a 2018 network meta-analysis in the Lancet are well aware that published trial reports of depression pills cannot be trusted. I therefore do not understand why they are authors on this paper.

He noted that most of the data analyzed by the Cipriani et al. study came from published trial reports, “which we know are seriously unreliable for depression trials.” Gøtzsche pointed out that one of the coauthors of the study had previously coauthored a study showing that “the effect of depression pills was 32% larger in published trials than in all trials in FDA’s possession.” In his opinion, the meta-analysis in the Cipriani et al. study had no clinical value and was “so complicated that it is impossible to know what all this leads to. But we do know that statistical maneuvers cannot make unreliable trials reliable.”

In addition to the doubtful effect of antidepressants noted in the study (see Part 1), Gøtzsche thought ranking the drugs according to their effect and acceptability was a futile exercise. “My thought was that the authors had rewarded those companies that had cheated the most with their trials.” He said it was highly unlikely that some depression pills were both more effective and better tolerated than others.

One doesn’t need to be a clinical pharmacologist to know that this seems too good to be true. Drugs that are more effective than others (which is often a matter of giving them in higher, non-equipotent doses), will usually also be more poorly tolerated.

The reality is that despite serious flaws in depression drug trials, “the average effect is considerably below what is clinically relevant.” That was demonstrated in the Cipriani et al. study and has been shown in several other studies. Examples of the serious flaws noted by Gøtzsche included: “[a] lack of blinding because of the conspicuous adverse effects of the pills, cold turkey in the placebo group because people were already on depression pills before they were randomised, industry-funding, selective reporting and data massage.” He concluded that the balance of benefits to harms of depression pills meant that placebo was better than the drug.

Joanna Moncrieff was appalled at the almost universally uncritical coverage given to the Cipriani et al. study. In her article, “Challenging the new hype about antidepressants,” she noted that John Geddes, one of the study’s coauthors, said only one in six people with depression receive effective treatment, and that he wanted to make it six out of six. By her calculations, if 9% of the UK population is already taking antidepressants, “and they only represent 1 in 6 of those who need them, then 54% of the population should be taking them. I make that another 27 million people!” Dr. Moncrieff went on to note once again that, despite the hype, there was nothing groundbreaking in this latest meta-analysis. “It simply repeats the errors of previous analyses.”
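Her back-of-the-envelope arithmetic is easy to reproduce. The short sketch below is my own illustration, not part of her article, and it assumes a UK population of roughly 60 million, since that is the figure her “27 million” result implies; she does not state the number she used.

```python
# A minimal sketch of Moncrieff's back-of-the-envelope arithmetic.
# Assumption: a UK population of about 60 million (her article does not
# state the figure, but it is what the "27 million" result implies).

uk_population = 60_000_000   # assumed population figure
already_taking = 0.09        # 9% of the UK already on antidepressants
treated_fraction = 1 / 6     # Geddes: only 1 in 6 receive effective treatment

# If the 9% on antidepressants are only 1 in 6 of those who "need" them,
# the implied share of the population that should be taking them is:
implied_share = already_taking / treated_fraction   # 0.54, i.e. 54%

# Extra people who would have to start taking antidepressants:
additional_people = (implied_share - already_taking) * uk_population

print(f"Implied share of population: {implied_share:.0%}")          # 54%
print(f"Additional people: {additional_people / 1e6:.0f} million")  # ~27 million
```

As she explained: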

The analysis consists of comparing ‘response’ rates between people on antidepressants and those on placebo. But ‘response’ is an artificial category that has been arbitrarily constructed out of the data actually collected, which consists of scores on depression rating scales, like the commonly used Hamilton Rating Scale for Depression (HRSD). Analysing categories inflates differences (3). When the actual scores are compared, differences are trivial, amounting to around 2 points on the HRSD, which has a maximum score of 54. These differences are unlikely to be clinically relevant, as I have explained before. Research comparing HRSD scores with scores on a global rating of improvement suggests that such a difference would not even be noticed, and you would need a difference of at least 8 points to register ‘mild improvement’. [See her article for the noted citations and a link to her previous discussion on the HRSD]

Participants in a clinical trial can often deduce whether they are in the experimental group by recognizing the characteristic side effects of antidepressants, “(e.g. nausea, dry mouth, drowsiness and emotional blunting) irrespective of whether or not they treat depression.” If that happens, these participants may experience an amplified placebo effect from knowing they are taking an active drug rather than an inactive placebo. “This may explain why antidepressants that cause the most noticeable alterations, such as amitriptyline, appeared to be the most effective in the recent analysis.”

She also pointed to ‘real world’ studies showing the long-term outcomes of people treated with antidepressants. “The proportion of people who stick to recommended treatment, recover and don’t relapse within a year is staggeringly low (108 out of the 3110 people who enrolled in the STAR-D study and satisfied the inclusion criteria).” Several studies have found that the outcomes for people treated with antidepressants “are worse than the outcomes of people with depression who are not treated with antidepressants.” Moncrieff said that calling for increased use of antidepressants, as Geddes did, will not address the problem of depression and will only “increase the harms these drugs produce.”

As the debate around the [media] coverage highlighted, many people feel they have been helped by antidepressants, and some are happy to consider themselves as having some sort of brain disease that antidepressants put right. These ideas can be reassuring. If people have had access to balanced information and decided this view suits them, then that is fine. But in order for people to make up their own minds about the value or otherwise of antidepressants and the understanding of depression that comes in their wake, they need to be aware that the story the doctor might have told them about the chemical imbalance in their brain and the pills that put it right, is not backed up by science [see her article for a link to this topic], and that the evidence these pills are more effective than dummy tablets is pretty slim.

The Council for Evidence-Based Psychiatry also pointed out that “the new research proves nothing new.” Further, they noted that the Royal College of Psychiatrists (RCP) represented the Cipriani et al. study as “finally putting to bed the controversy on anti-depressants.”

This statement is irresponsible and unsubstantiated, as the study actually supports what has been known for a long time, that various drugs can, unsurprisingly, have an impact on our mood, thoughts and motivation, but also differences between placebo and antidepressants are so minor that they are clinically insignificant, hardly registering at all in a person’s actual experience.

Then on February 24th, the President of the Royal College of Psychiatrists and the Chair of its Psychopharmacology Committee stated in a letter to The Times of London that “[in] the vast majority of patients, any unpleasant symptoms experienced on discontinuing antidepressants have resolved within two weeks of stopping treatment.” This led to a “Formal Complaint to the UK Royal College of Psychiatrists,” in which Professor John Read and others wrote to the RCP disputing that claim. The formal complaint stated:

To mislead the public on this issue has grave consequences. People may be misled by the false statement into thinking that it is easy to withdraw and may therefore try to do so too quickly or without support from the prescriber, other professionals or loved ones. Other people, when weighing up the pros and cons of starting antidepressants may make their decision based partly on this wrong information. Of secondary concern is the fact that such irresponsible statements bring the College, the profession of Psychiatry (to which some of us belong), and – vicariously – all mental health professionals, into disrepute.

The complaint cited several research papers documenting how withdrawal effects from antidepressants “often last far longer than two weeks.” The cited research included a study done by the Royal College of Psychiatrists itself, “which found that withdrawal symptoms were experienced by the majority (63%), generally lasted for up to 6 weeks and that a quarter reported anxiety lasting more than 12 weeks.” Within 48 hours of the misleading statement in The Times, the survey results were removed from the RCP website, as was a leaflet by the RCP on antidepressant withdrawal. You can listen to a podcast interview with Professor John Read here. There is a link to the RCP leaflet and The Times article there as well.

Stay tuned; this controversy isn’t over yet. In conclusion, to paraphrase Paul Harvey, “Now you know the rest of the Lancet story on antidepressants.”