04/20/18

The Lancet Story on Antidepressants, Part 2

© thelightwriter | 123rf.com

While introducing his review on The Mental Elf of the Lancet study by Cipriani et al., “Comparative efficacy and acceptability of 21 antidepressant drugs,” Andre Tomlin commented that it had been a rough few months in which “anti-antidepressant voices” really hit the mainstream. Neuroskeptic thought the study was a nice piece of work, but one with very little new information. He also thought the media hype over it was “frankly bananas.” In Part 1 of this article, I looked at the more positive responses to the Cipriani et al. study. Here we will look at the rest of the story from the “anti-antidepressant voices.”

Turn to Part 1 if you want to hear what The Mental Elf and Neuroskeptic had to say about the Cipriani et al. study first. Here we’ll look at the thoughts of Peter Gøtzsche, Joanna Moncrieff and the Council for Evidence-Based Psychiatry.

Tomlin seems to question Gøtzsche’s ‘evidence’ that antidepressants actually kill some of the people who take them. But turn to “In the Dark About Antidepressants” or “Psychiatry Needs a Revolution” for more on Gøtzsche’s ‘evidence’ of the harm from antidepressants before you dismiss his claims. Remember that Peter Gøtzsche is a careful medical researcher and the Director of the Nordic Cochrane Center. Along with 80 others, he helped start the Cochrane Collaboration in 1993, which is “a global independent network of researchers, professionals, patients, carers, and people interested in health.”

In “Rewarding the Companies that Cheated the Most in Antidepressant Trials,” Dr. Gøtzsche’s opening comment was: “It is well known that we cannot trust the data the drug companies publish, and it seems that, in psychiatric drug trials, the manipulations with the data are particularly pronounced.” He described, with supporting citations, how half the deaths and suicides that occur in randomised drug trials are not published. When independent researchers have the opportunity to analyze trial data themselves, “the results are often markedly different to those the companies have published.” He then said:

Fraud and selective reporting are of course not limited to the most serious outcomes but also affect other trial outcomes. Several of the authors of a 2018 network meta-analysis in the Lancet are well aware that published trial reports of depression pills cannot be trusted. I therefore do not understand why they are authors on this paper.

He noted how most of the data analyzed in the Cipriani et al. study came from published trial reports, “which we know are seriously unreliable for depression trials.” Gøtzsche pointed out that one of the study’s coauthors had previously coauthored research showing that “the effect of depression pills was 32% larger in published trials than in all trials in FDA’s possession.” In his opinion, the statistical analysis in the Cipriani et al. study had no clinical value and was “so complicated that it is impossible to know what all this leads to. But we do know that statistical maneuvers cannot make unreliable trials reliable.”

In addition to the doubtful effect of antidepressants noted in the study (see Part 1), Gøtzsche thought ranking the drugs according to their effect and acceptability was a futile exercise. “My thought was that the authors had rewarded those companies that had cheated the most with their trials.” He said it was highly unlikely that some depression pills were both more effective and better tolerated than others.

One doesn’t need to be a clinical pharmacologist to know that this seems too good to be true. Drugs that are more effective than others (which is often a matter of giving them in higher, non-equipotent doses), will usually also be more poorly tolerated.

The reality is that despite serious flaws in depression drug trials, “the average effect is considerably below what is clinically relevant.” That was demonstrated in the Cipriani et al. study and has been shown in several other studies. Examples of the serious flaws noted by Gøtzsche included: “[a] lack of blinding because of the conspicuous adverse effects of the pills, cold turkey in the placebo group because people were already on depression pills before they were randomised, industry-funding, selective reporting and data massage.” He concluded that the balance of benefits and harms for depression pills meant placebo was better than the drug.

Joanna Moncrieff was appalled at the almost universally uncritical coverage given to the Cipriani et al. study. In her article, “Challenging the new hype about antidepressants,” she noted that John Geddes, one of the study’s coauthors, said only one in six people with depression receive effective treatment; and he wanted to make that six out of six. By her calculations, if 9% of the UK population is already taking antidepressants, “and they only represent 1 in 6 of those who need them, then 54% of the population should be taking them. I make that another 27 million people!” Dr. Moncrieff went on to note once again that, despite the hype, there was nothing groundbreaking in this latest meta-analysis. “It simply repeats the errors of previous analyses.”
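Her arithmetic is easy to reproduce. Here is a minimal sketch; the UK population figure of roughly 60 million is my assumption for illustration and is not taken from her article.

```python
# Reconstruction of Moncrieff's back-of-envelope arithmetic (illustrative only).
# Assumption: a UK population of roughly 60 million; her article does not state
# which population figure she used.
uk_population = 60_000_000

already_taking = 0.09         # 9% of the population already on antidepressants
fraction_of_need_met = 1 / 6  # Geddes: only 1 in 6 who need treatment receive it

should_be_taking = already_taking / fraction_of_need_met  # 0.54, i.e. 54%
additional = (should_be_taking - already_taking) * uk_population

print(f"{should_be_taking:.0%} of the population 'should' be on antidepressants")
print(f"roughly {additional:,.0f} additional people")  # about 27,000,000
```

Turning back to the meta-analysis itself, her article explains why the headline ‘response’ findings overstate the drugs’ benefits: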

The analysis consists of comparing ‘response’ rates between people on antidepressants and those on placebo. But ‘response’ is an artificial category that has been arbitrarily constructed out of the data actually collected, which consists of scores on depression rating scales, like the commonly used Hamilton rating Scale for Depression (HRSD). Analysing categories inflates differences (3). When the actual scores are compared, differences are trivial, amounting to around 2 points on the HRSD, which has a maximum score of 54. These differences are unlikely to be clinically relevant, as I have explained before. Research comparing HRSD scores with scores on a global rating of improvement suggest that such a difference would not even be noticed, and you would need a difference of at least 8 points to register ‘mild improvement’. [See her article for the noted citations and a link to her previous discussion on the HRSD]
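Her point that analysing categories inflates differences can be illustrated with a small simulation. The sketch below is mine, not hers, and every number in it (baseline score, improvements, spread) is hypothetical, chosen only to echo the roughly 2-point drug–placebo difference she describes; it is not data from any trial.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000              # large hypothetical groups so the averages are stable

baseline = 24            # hypothetical mean HRSD score at study entry (scale max is 54)
sd = 8                   # hypothetical spread of individual improvements

# Hypothetical improvements: the drug group beats placebo by about 2 HRSD points.
drug_change = rng.normal(10, sd, n)
placebo_change = rng.normal(8, sd, n)

# Continuous outcome: difference in mean improvement (about 2 points).
mean_diff = drug_change.mean() - placebo_change.mean()

# Dichotomized outcome: 'response' means at least a 50% reduction from baseline.
threshold = 0.5 * baseline
response_drug = (drug_change >= threshold).mean()        # about 40%
response_placebo = (placebo_change >= threshold).mean()  # about 31%

print(f"mean difference: {mean_diff:.1f} points on a 54-point scale")
print(f"response rates: drug {response_drug:.0%} vs placebo {response_placebo:.0%}")
print(f"relative 'response' advantage: {response_drug / response_placebo:.2f}x")
```

On these invented numbers, the same 2-point average difference comes out as the drug group being roughly 30% more likely to ‘respond,’ which sounds considerably more impressive than it is.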

Participants in a clinical trial can often deduce whether or not they are in the experimental group by recognizing the characteristic side effects of antidepressants “(e.g. nausea, dry mouth, drowsiness and emotional blunting) irrespective of whether or not they treat depression.” If that happens, these participants may experience an amplified placebo effect from knowing they are taking an active drug rather than an inactive placebo. “This may explain why antidepressants that cause the most noticeable alterations, such as amitriptyline, appeared to be the most effective in the recent analysis.”

She also pointed to ‘real world’ studies showing the long-term outcomes of people treated with antidepressants. “The proportion of people who stick to recommended treatment, recover and don’t relapse within a year is staggeringly low (108 out of the 3110 people who enrolled in the STAR-D study and satisfied the inclusion criteria).” Several studies have found that the outcomes for people treated with antidepressants “are worse than the outcomes of people with depression who are not treated with antidepressants.” Moncrieff said that calling for increased use of antidepressants, as Geddes did, will not address the problem of depression and will only “increase the harms these drugs produce.”

As the debate around the [media] coverage highlighted, many people feel they have been helped by antidepressants, and some are happy to consider themselves as having some sort of brain disease that antidepressants put right. These ideas can be reassuring. If people have had access to balanced information and decided this view suits them, then that is fine. But in order for people to make up their own minds about the value or otherwise of antidepressants and the understanding of depression that comes in their wake, they need to be aware that the story the doctor might have told them about the chemical imbalance in their brain and the pills that put it right, is not backed up by science [see her article for a link to this topic], and that the evidence these pills are more effective than dummy tablets is pretty slim.

The Council for Evidence-Based Psychiatry also pointed out that “the new research proves nothing new.” Further, they noted that the Royal College of Psychiatrists (RCP) represented the Cipriani et al. study as “finally putting to bed the controversy on anti-depressants.”

This statement is irresponsible and unsubstantiated, as the study actually supports what has been known for a long time, that various drugs can, unsurprisingly, have an impact on our mood, thoughts and motivation, but also differences between placebo and antidepressants are so minor that they are clinically insignificant, hardly registering at all in a person’s actual experience.

Then on February 24th, the President of the Royal College of Psychiatrists and the Chair of its Psychopharmacology Committee stated in a letter to The Times that “[in] the vast majority of patients, any unpleasant symptoms experienced on discontinuing antidepressants have resolved within two weeks of stopping treatment.” This led to a “Formal Complaint to the UK Royal College of Psychiatrists” when Professor John Read and others wrote to the RCP disputing that claim. The formal complaint stated:

To mislead the public on this issue has grave consequences. People may be misled by the false statement into thinking that it is easy to withdraw and may therefore try to do so too quickly or without support from the prescriber, other professionals or loved ones. Other people, when weighing up the pros and cons of starting antidepressants may make their decision based partly on this wrong information. Of secondary concern is the fact that such irresponsible statements bring the College, the profession of Psychiatry (to which some of us belong), and – vicariously – all mental health professionals, into disrepute.

The complaint cited several research papers documenting how withdrawal effects from antidepressants “often last far longer than two weeks.” The cited research included a survey done by the Royal College of Psychiatrists (RCP) itself, “which found that withdrawal symptoms were experienced by the majority (63%), generally lasted for up to 6 weeks and that a quarter reported anxiety lasting more than 12 weeks.” Within 48 hours of the misleading statement in The Times, the survey results were removed from the RCP website, as was a leaflet by the RCP on antidepressant withdrawal. You can listen to a podcast interview with Professor John Read here. There is a link to the RCP leaflet and The Times article there as well.

Stay tuned; this controversy isn’t over yet. In conclusion, to paraphrase Paul Harvey, “Now you know the rest of the Lancet story on antidepressants.”

04/10/18

The Lancet Story on Antidepressants, Part 1

© thelightwriter | 123rf.com

The Lancet recently published a new paper reporting on a large meta-analysis of antidepressant studies done by Cipriani et al., “Comparative efficacy and acceptability of 21 antidepressant drugs.” All 21 antidepressants reviewed in the study were found to be more effective than placebo. Various news agencies referred to it as “a groundbreaking study,” or as confirming “that antidepressants are effective for major depressive disorder (MDD),” and “New study: It’s not quackery—antidepressants work. Period.” But the excitement and conclusions noted here seem to have been overdone and a bit premature.

Let’s start with the articles quoted in the first paragraph. The author of an article for The Guardian thought the “groundbreaking” Lancet study showed antidepressants were effective; and “we should get on with taking and prescribing them.” The upshot for him was that the millions of people taking antidepressants (including him) “can continue to do so without feeling guilt, shame or doubt about the course of treatment.” Doctors should feel no compunction about prescribing these drugs. “It’s official: antidepressants work.”

An article for bigthink, “New study: It’s not quackery—antidepressants work. Period,” also thought the Cipriani et al. study was helping to put some of the debate about the effectiveness of antidepressants to bed. Again the reported result was that all antidepressants performed better than placebos. The bigthink author related that in order for a drug to be considered “effective, it had to reduce depression symptoms by at least 50 percent,” which would be an astounding discovery for even one antidepressant, let alone all 21. But that was not quite how the Cipriani et al. study authors defined drug efficacy for their study. The authors said efficacy was the “response rate measured by the total number of patients who had a reduction of ≥50% of the total score on a standardised observer-rating scale for depression,” not a 50% or greater reduction in depressive symptoms. Cipriani was then quoted as saying: “We were open to any result. This is why we can say this is the final answer to the controversy.”
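To make the distinction concrete: the study’s efficacy measure is a response rate, the share of patients whose total scale score fell by at least half, not an average 50% reduction in symptoms across everyone treated. A minimal sketch of that definition follows; the function name and the example scores are hypothetical, mine rather than the study’s.

```python
def is_responder(baseline_score: float, endpoint_score: float) -> bool:
    """Response as defined in the trials: a >=50% reduction in the total scale score."""
    return (baseline_score - endpoint_score) >= 0.5 * baseline_score

# Hypothetical patients: (score at entry, score after 8 weeks) on a depression scale.
patients = [(24, 10), (30, 22), (20, 9), (26, 20)]

response_rate = sum(is_responder(b, e) for b, e in patients) / len(patients)
print(f"response rate: {response_rate:.0%}")  # 50% of these hypothetical patients respond
```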

The opening sentence of an article on the Medscape website, “Confirmed: Antidepressants Work for Major Depression,” said: “A large meta-analysis confirms that antidepressants are effective for major depressive disorder (MDD).” Here we find the correct description of efficacy in the study: “Results showed that each studied antidepressant was significantly more efficacious, defined as yielding a reduction of at least 50% in the total score of a standardized scale for depression, than placebo after 8 weeks.” Two additional quotations of Cipriani from a press release about the study are given, suggesting that while antidepressants can be an effective tool, they shouldn’t necessarily be the first line of treatment. “Medications should always be considered alongside other options, such as psychological therapies, where these are available.”

Reflecting on these three articles, I thought the Guardian and bigthink articles weren’t as careful as they could have been in their rhetoric about the results of the Cipriani et al. study. Although the Medscape article was more nuanced, it also seemed to lead to the same conclusions as the Guardian article, namely: “The demonstration of the extent of antidepressant superiority over placebo reassures patients and health-care professionals of the efficacy of [this] treatment despite high placebo response rates.” But is this conclusion by the Medscape article accurate? In the discussion section of the Cipriani et al. study, the authors said: “We found that all antidepressants included in the meta-analysis were more efficacious than placebo in adults with major depressive disorder and the summary effect sizes were mostly modest.”  Further on was the following:

It should also be noted that some of the adverse effects of antidepressants occur over a prolonged period, meaning that positive results need to be taken with great caution, because the trials in this network meta-analysis were of short duration. The current report summarises evidence of differences between antidepressants when prescribed as an initial treatment. Given the modest effect sizes, non-response to antidepressants will occur. 

It does not seem the study conclusively found that antidepressants work for major depression. The authors even said that in some individuals antidepressants won’t be effective. Now look at the following two assessments of the Cipriani et al. study from an individual (Neuroskeptic) and an organization (The Mental Elf) that I have found to be fair, nuanced and helpful in their reviews of research into psychiatric and medication-related issues.

The Mental Elf article does have a positive title, “Antidepressants can help adults with major depression,” and an overall positive assessment, but there were some clear limitations noted as well. First, drawing on the study’s results, it reported that the most effective antidepressants studied were: agomelatine (Valdoxan, Melitor, Thymanax), amitriptyline (Elavil), escitalopram (Lexapro), mirtazapine (Remeron), paroxetine (Paxil), venlafaxine (Effexor) and vortioxetine (Brintellix). And it noted the least effective ones studied were: fluoxetine (Prozac), fluvoxamine (Luvox), reboxetine (Edronax) and trazodone (many different brand names). The most tolerable antidepressants were: agomelatine, citalopram (Celexa), escitalopram, fluoxetine, sertraline (Zoloft) and vortioxetine. And the least tolerable were: amitriptyline, clomipramine (Anafranil), duloxetine (Cymbalta), fluvoxamine (Luvox or Faverin), reboxetine (Edronax and others), trazodone and venlafaxine.

The included data covered only a short time period: 8 weeks of treatment. So the results may not apply to longer-term antidepressant use. “And some antidepressant side effects occur over a prolonged period, so positive results should be interpreted with caution.” Another concern the author noted was that seventy-eight percent of the trials included in the study were funded by pharmaceutical companies. While industry funding was not associated with substantial differences in response or dropout rates, non-industry-funded trials were few, and many trials did not report or disclose their funding.

In addition, 73% of the included trials were rated as having a moderate risk of bias, with 9% rated as having a high risk of bias and only 18% a low risk of bias. Significantly, the review pointed out that the study did not address specific adverse events, withdrawal symptoms, or antidepressants used in combination with other non-drug treatments—information most patients would have found useful. Nevertheless, the Mental Elf reviewer thought the study struck a nice balance between presenting “strong evidence that antidepressants work for adult depression” and “accepting the limitations and potential biases” in the study.

Neuroskeptic, who wrote “About that New Antidepressant Study,” thought that while it was a nice piece of work, it offered very little new information and had a number of limitations. He thought the media reaction to the paper was “frankly bananas.” He put the effectiveness ratings into perspective by pointing out that the “mostly moderate” effect size was .30 on the Standardized Mean Difference (SMD) measure, where .2 was ‘small’ and .5 was ‘medium.’ “The thing is, ‘effective but only modestly’ has been the established view on antidepressants for at least 10 years.” He then cited a previous meta-analysis that found the overall effect size to be almost identical: .31! He then turned to the findings of Irving Kirsch’s research with antidepressants, saying:

Cipriani et al.’s estimate of the benefit of antidepressants is also very similar to the estimate found in the notorious Kirsch et al. (2008) “antidepressants don’t work” paper! Almost exactly a decade ago, Irving Kirsch et al. found the effect of antidepressants over placebo to be SMD=0.32, a finding which was, inaccurately, greeted by headlines such as “Anti-depressants ‘no better than dummy pills’.” The very same newspapers are now heralding Cipriani et al. as the savior of antidepressants for finding a smaller effect…

The media hype about the Cipriani et al. study has been “frankly bananas.” More balanced reviews by Neuroskeptic and The Mental Elf thought it was “a nice piece of work” that struck “a nice balance” between the evidence that antidepressants work for adults with depression and acceptance of “the limitations and potential biases” in the data. The hype claims clear effectiveness from a measure that only shows modest effectiveness over the short term of 8 weeks. Ironically, the trumpeted effect size of Cipriani et al. is actually smaller than the .32 that Irving Kirsch reported. Kirsch pointed out that the SMD criterion suggested by NICE (National Institute for Health and Care Excellence) was .50, and Kirsch et al. said: “Thus, the mean change exhibited in trials provides a poor description of results.”
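For readers unfamiliar with the effect-size measure these numbers refer to, the standardized mean difference is simply the difference in group means divided by a pooled standard deviation. The sketch below uses hypothetical numbers chosen only to land near the .30 figure discussed above; it is not data from any trial.

```python
import math

def standardized_mean_difference(mean_drug, mean_placebo, sd_drug, sd_placebo, n_drug, n_placebo):
    """Cohen's d style SMD: difference in means over the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n_drug - 1) * sd_drug ** 2 + (n_placebo - 1) * sd_placebo ** 2)
        / (n_drug + n_placebo - 2)
    )
    return (mean_drug - mean_placebo) / pooled_sd

# Hypothetical example: a 2.4-point advantage in mean improvement on a depression
# scale, with a standard deviation of 8 in both arms, works out to an SMD of 0.30 --
# above the conventional 'small' benchmark of 0.2, but well short of the 0.5 'medium'
# benchmark that NICE reportedly used as its criterion.
smd = standardized_mean_difference(10.4, 8.0, 8.0, 8.0, 200, 200)
print(f"SMD = {smd:.2f}")  # 0.30
```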

Be sure to read Part 2 of “The Lancet Story on Antidepressants” to see what anti-antidepressant voices have to say about the Cipriani et al. study. For more information on the antidepressant research by Irving Kirsch, see: “Dirty Little Secret” and “Do No Harm with Antidepressants.”