01/8/19

Antidepressant Fall From Grace, Part 2

© hikrcn | 123rf.com

In 1995 Irving Kirsch and Guy Sapirstein set out to assess the placebo effect in the treatment of depression. Like most people, Kirsch used to think that antidepressants worked—that the active ingredient helped people “cope with their psychological condition.” He and Sapirstein weren’t surprised to find a strong placebo effect in treating depression; that was their hypothesis and the reason for the study. What did surprise them was how small the drug effect was—the difference between the response to the drug and the response to the placebo. “The placebo effect was twice as large as the drug effect.”

Along with Thomas Moore and others, Kirsch then analyzed the data submitted to the FDA for approval of the six most widely prescribed antidepressants approved between 1987 and 1999: fluoxetine (Prozac), paroxetine (Paxil), sertraline (Zoloft), venlafaxine (Effexor), nefazodone (Serzone) and citalopram (Celexa). The researchers found that 80% of the response to medication was duplicated in placebo control groups. The mean difference between drug and placebo was clinically negligible. You can read more about this study in Prevention & Treatment, “The Emperor’s New Drugs.”

When they published their findings, Kirsch said he was pleasantly surprised by the consensus about them. “Some commentators argued that our analysis had actually overestimated the real effect of antidepressants.” One group of researchers said the minimal difference between antidepressant treatment and controls was a “dirty little secret” that had been known all along. “The companies that produce the drugs knew it, and so did the regulatory agencies that approve them for marketing. But most of the doctors who prescribe these medications did not know it, let alone their patients.”

According to Irving Kirsch, pharmaceutical companies have used several devices to present their products as better than they actually are. First, they withhold negative studies from publication. While publication bias affects all areas of research, it is acutely problematic with drug trials. “Most of the clinical trials evaluating new medications are sponsored financially by the companies that produce and stand to profit from them.”

The companies own the data that come out of the trials they sponsor, and they can choose how to present them to the public — or withhold them and not present them to the public at all. With widely prescribed medications, billions of dollars are at stake.

Positive studies may be published multiple times, a practice known as “salami slicing.” Often this is done in ways that make it difficult for reviewers to recognize that the studies were done on the same data. The authors may be different. References to the previous publication of the data are often missing. Sometimes there are minor differences in the data used between one publication and another. Sometimes positive data is cherry-picked from a clinical trial and published, making the drug seem more effective than it really was. For more information on this issue, see: The Emperor’s New Drugs: Exploding the Antidepressant Myth by Irving Kirsch.

Published in 2004, the STAR*D study (Sequenced Treatment Alternatives to Relieve Depression) was a multisite, multistep clinical trial of outpatients with nonpsychotic major depression. It was designed to be more representative of the real-world use of antidepressants than typical clinical trials, and to show the effectiveness of antidepressants in the best of circumstances. STAR*D was funded by the NIMH at a cost of $35 million and took six years to complete. It was hailed as the “largest antidepressant effectiveness trial ever conducted.” Robert Whitaker described it as follows:

The STAR*D trial was designed to test whether a multistep, flexible use of medications could produce remission in a high percentage of depressed outpatients. Those who didn’t get better with three months of initial treatment with an SSRI (citalopram) then entered a second stage of treatment, in which they were either put on a different antidepressant or given a second drug to augment an antidepressant. Those who failed to remit in step two could go on to a step three, and so on; in total, there were four treatment steps.

According to the NIMH, about one-third of participants became symptom-free in level 1. In level 2, about 25% of the remaining participants became symptom-free. So about half of the participants in the STAR*D study became symptom-free after two treatment levels. “Over the course of all four treatment levels, almost 70 percent of those who did not withdraw from the study became symptom-free.” However, there was a progressive dropout rate: 21% withdrew after level 1, 30% after level 2, and 42% after level 3.

An overall analysis of the STAR*D results indicates that patients with difficult-to-treat depression can get well after trying several treatment strategies, but the odds of beating the depression diminish with every additional treatment strategy needed. In addition, those who become symptom-free have a better chance of remaining well than those who experience only symptom improvement. And those who need to undergo several treatment steps before they become symptom-free are more likely to relapse during the follow-up period. Those who required more treatment levels tended to have more severe depressive symptoms and more co-existing psychiatric and general medical problems at the beginning of the study than those who became well after just one treatment level.

The message communicated to doctors and the public was that STAR*D showed that antidepressants enabled 67% of depressed patients to recover. Robert Whitaker said an article in The New Yorker commented this “effectiveness rate” was “far better than the rate achieved by a placebo.” But this “cumulative” remission rate of 67% was in fact a theoretical rate that assumed those who dropped out of the study would have the same remission rates as those who remained. “They [also] included remission numbers for patients who weren’t depressed enough at baseline to meet study criteria, and thus weren’t eligible for analysis.” Irving Kirsch said the STAR*D symptom remission was temporary for most: “Approximately 93 percent of the patients who recovered relapsed or dropped out of the trial within a year.”
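
To see where such a cumulative figure comes from, here is a minimal sketch of the arithmetic in Python, using only the level 1 and level 2 rates quoted above. The crucial, and questionable, assumption baked into the published 67% is the one noted here: that everyone stays in treatment, and that dropouts would have remitted at the same rate as those who remained.

# Minimal sketch of the arithmetic behind a “cumulative” remission rate.
# The level 1 and level 2 rates are the rounded figures quoted above; the
# assumption that no one drops out is what makes the result theoretical.
level_1 = 1 / 3    # share of patients remitting at level 1
level_2 = 0.25     # share of the remaining patients remitting at level 2

after_two_levels = level_1 + (1 - level_1) * level_2
print(f"cumulative remission after two levels ≈ {after_two_levels:.0%}")  # ~50%

# Carrying the same calculation through levels 3 and 4, using the study's own
# per-level rates, is what produces the widely cited ~67% theoretical figure.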

Recently, Kirsch and others acquired the STAR*D raw data from the NIMH and reanalyzed the HRSD (Hamilton Rating Scale for Depression) results. The HRSD was identified in the original protocol as the primary outcome measure for STAR*D. “Yet the outcome that was presented in almost all the study papers was the QIDS (Quick Inventory of Depressive Symptomatology), a measure made up especially for the STAR-D study, with no prior or subsequent credentials.” The QIDS was devised as a way of tracking symptoms during the course of treatment, NOT as an outcome measure. And the original study protocol stated it should not be used as an outcome measure.

In the reanalysis, the improvement on the HRSD failed to reach the threshold required for a minimal improvement. “It is also below average placebo improvement in placebo-controlled trials of antidepressants.” The STAR*D results were about “half the magnitude of those obtained in standard comparative drug trials.” Commenting on STAR*D in his book, The Emperor’s New Drugs, Irving Kirsch said:

This is a rather bleak picture of the effects of antidepressant treatment. In the best of circumstances—which is what the trial was designed to evaluate—only one out of three depressed patients showed a lasting recovery from depression, and since there was no evaluation of what the recovery rate might have been with placebo treatment, there was no way of knowing whether their recovery was actually due to the medication they had been given.

In her review of the Kirsch reanalysis of the STAR*D study, Joanna Moncrieff said STAR*D suggests that in real-life situations, people who take antidepressants do not do very well. “In fact, given that for the vast majority of people depression is a naturally remitting condition, it is difficult to believe that people treated with antidepressants do any better than people who are offered no treatment at all.” She thought this might be the reason the results of the main outcome measure (the HRSD) remained unpublished for so long—and also an explanation for the substitution of the QIDS as an outcome measure in the original STAR*D analysis:

Whether this was deliberate on the part of the original STAR-D authors or not, it was certainly not made explicit. There should surely be uproar about the withholding of information about one of the world’s most widely prescribed class of drugs. We must be grateful to Kirsch and his co-authors for finally putting this data in the public domain.

According to data gathered by the CDC, 10.7% of all U.S. adults in 2011-2014 reported using an antidepressant in the past 30 days. This is 5.9 times the reported usage for 1988-1994. Demographically, the percentage of U.S. adults who used antidepressants increased with age. The percentage of women using antidepressants was also consistently higher than that of men in every age group. Yet the effectiveness of antidepressants in treating depression has been shown to be little better than that of placebo. And given that they have a multitude of adverse effects—even the SSRIs—in most cases, no medication may be better than an antidepressant.

See “Dirty Little Secret” and “Do No Harm with Antidepressants” on this website for more information on the antidepressant research of Irving Kirsch. See “The Lancet Story on Antidepressants,” Part 1 and Part 2 for more on the ongoing debate over the effectiveness of antidepressants. See “Antidepressant Fall From Grace, Part 1” for a brief history of antidepressants.

10/21/15

Dirty Little Secret

© ia_64 | stockfresh.com

In his book The Emperor’s New Drugs, Irving Kirsch quoted Steven Hollon’s remark that it was a “dirty little secret” that there was only a small difference between the experimental and control groups for the patients who participated in the randomized clinical trials (RCTs) used to approve SSRIs. Be sure to get this: the pharmaceutical companies that produced the drugs AND the regulatory agencies that approved them knew there was essentially no difference between the effects of the drug and the placebo. Yet the drugs were approved for use with humans. “Many have long been unimpressed by the magnitude of the differences observed between treatments and controls, what some of our colleagues refer to as the ‘dirty little secret’ in the pharmaceutical literature.”

Kirsch was originally interested in studying the placebo effect, and not the antidepressant drug effect. “How is it, I wondered, that the belief that one has taken a medication can produce some of the effects of that medication?” He was not surprised to find a substantial placebo effect of the medications on depression. But he was surprised to see how small the drug effect was. “Seventy-five percent of the improvement in the drug group also occurred when people were given dummy pills with no active ingredient in them.”

You can read an article by Kirsch describing this research in “Antidepressants and the Placebo Effect.”

He replicated the findings in another study published in 2002, using the data submitted to the FDA by the pharmaceutical companies in the process of obtaining approval for six new-generation antidepressants. There were some advantages to using the FDA data set. First, it included both the published and unpublished clinical trials conducted by the pharmaceutical companies. What was particularly important here was that: “The results of the unpublished trials were known only to the drug companies and the FDA, and most of them failed to find a significant benefit of drug over placebo.”

A second advantage was that the FDA trials all used the same primary measure of depression—the Hamilton depression scale (HAM-D). The third advantage was that the FDA data was the same data used for the approval of the medications. So if there had been anything wrong with the trials, one would think, the medications would not have been approved.

In the data sent to us by the FDA, only 43% of the trials showed a statistically significant benefit of drug over placebo. The remaining 57% were failed or negative trials. . . . The results of our analysis indicated that the placebo response was 82% of the response to these antidepressants.

One explanation for Kirsch’s results could be that the 2002 replication contained both the published and the unpublished clinical trials. Including the failed and negative trials would have diluted the positive results seen in the published record. So the placebo response, as a proportion of the drug response, was greater in this replication than in the original study because the unpublished trials were included. Nevertheless, the majority of the trials failed to show positive results. Remember that the pharmaceutical companies themselves conducted these studies, and that they were the trials done in the process of gaining approval for their medications.
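
As a side note on what that percentage means: the “placebo response was 82% of the response to these antidepressants” figure is simply the ratio of mean improvement scores in the two groups. Here is a minimal sketch, with hypothetical improvement scores chosen only to reproduce that ratio:

# Illustrative only: the HAM-D improvement scores below are assumptions,
# chosen so that the ratio matches the ~82% reported in the FDA analysis.
drug_improvement = 10.0    # assumed mean improvement in the drug groups
placebo_improvement = 8.2  # assumed mean improvement in the placebo groups

ratio = placebo_improvement / drug_improvement
gap = drug_improvement - placebo_improvement
print(f"placebo response ≈ {ratio:.0%} of the drug response")   # ~82%
print(f"drug-placebo difference ≈ {gap:.1f} HAM-D points")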

Getting a drug approved by the FDA requires the submission of two studies showing the new drug is better than a placebo. It doesn’t matter if it takes ten studies to get those two; only the two positive ones count for approval. And the requirement is only that the difference over placebo be statistically significant. Kirsch’s analysis found just a 1.8-point difference on the HAM-D scale between drug and placebo—a difference that may be statistically significant, but is not clinically significant. The National Institute for Health and Clinical Excellence (NICE) has set the criterion for a clinically significant difference between drug and placebo at three points or more on the HAM-D scale.
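
The gap between “statistically significant” and “clinically significant” is easy to see with a rough calculation. In the sketch below, only the 1.8-point difference and the 3-point NICE criterion come from the text; the within-group standard deviation of 8 HAM-D points and the 400 patients per arm are hypothetical numbers chosen for illustration.

import math

# Hedged illustration: a 1.8-point drug-placebo difference tested with an
# assumed SD of 8 points and an assumed 400 patients per arm.
diff = 1.8   # mean drug-placebo difference on the HAM-D (from Kirsch's analysis)
sd = 8.0     # assumed within-group standard deviation (hypothetical)
n = 400      # assumed number of patients per arm (hypothetical)

se = sd * math.sqrt(2 / n)                            # standard error of the difference
z = diff / se                                         # large-sample z statistic
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))  # two-sided p-value

print(f"z = {z:.2f}, p = {p:.4f}")                    # p well below 0.05: statistically significant
print(f"meets the NICE 3-point criterion? {diff >= 3.0}")  # False: not clinically significant

With a big enough sample, almost any difference clears the statistical bar; the NICE criterion asks whether the difference is large enough to matter.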

A criticism of Kirsch’s 2002 study was that the results were based on clinical trials conducted on subjects who were not very depressed. So Kirsch et al. (2008) reanalyzed the data in: “Initial Severity and Antidepressant Benefits.” They found that “the overall effect of new-generation antidepressant medications is below recommended criteria for clinical significance.” Only for the most extremely depressed patients was there evidence for clinical significance, according to the HAM-D scale. Yet they also concluded this difference was “due to a decrease in the response to placebo rather than an increase in the response to medication.”

So the question becomes, what do all these drugs have in common that gives them a slight, but statistically significant effect on depression over placebo? The answer is that they all produce side effects.

Clinical trials are supposed to be double-blind studies, meaning that neither the patient nor the doctor knows whether the patient has been given the active drug or the placebo. Yet in one study, 80% of patients guessed correctly whether they were on the drug or the placebo, and 87% of doctors also guessed correctly. So most patients and most doctors could break the blind by guessing according to the presence or absence of side effects from the medications. Additionally, “89% of the patients in the drug group correctly ‘guessed’ that they had been given the real antidepressant, a result that is very unlikely to be due to chance.”
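
To put a number on “very unlikely to be due to chance”: if the drug-group patients were guessing at random, the chance of 89% or more of them guessing “drug” is astronomically small. The sketch below assumes, purely for illustration, a drug group of 100 patients; only the 89% figure comes from the text.

from math import comb

# Binomial tail probability: chance of at least k correct guesses out of n
# if each guess is a coin flip (p = 0.5). n = 100 is a hypothetical group size.
n, k, p = 100, 89, 0.5
p_value = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(f"P(at least {k}/{n} correct by chance) = {p_value:.1e}")  # vanishingly small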

So clinical trials are not really double-blind studies if most patients can guess whether they have been given the real drug or the placebo. This ability to “break blind” has been known in the research literature since 1986, when Rabkin et al. published their study, “How Blind is Blind,” in the September issue of Psychiatry Research. Yet drug trials continue to use inert placebos.

But what would happen if an active placebo were used in clinical trials? Active placebos have been used with antidepressants in other studies. See “Active Placebos Versus Antidepressants for Depression.”  Moncrieff et al. reported that: “differences between antidepressants and active placebos were small.” Kirsch noted that in the nine clinical trials discussed by Moncrieff et al. where an active placebo (atropine) was used, there was only a significant difference in two of the studies.

In the vast majority (78 percent) of the clinical trials in which active placebos were used, no significant differences were found between the drug and the placebo. So comparisons with inactive placebos are much more likely to show drug-placebo differences than comparisons with active placebos. This suggests that at least part of the difference that has been found between antidepressant and placebo may be due to the experience of more side effects on the active drug than on the placebo.

It’s good this dirty little secret is becoming more widely known. But unfortunately the horse has already left the barn. Too bad it wasn’t getting press fifteen years ago before the SSRIs started going off-patent. The pharmaceutical companies have already gouged the public with their SSRI profits and their drugs have gone generic.

Eli Lilly’s Prozac went off patent in 2001. GlaxoSmithKline’s Paxil has been off-patent since 2003. Forest Labs’ Celexa patent expired in 2003. Pfizer’s Zoloft patent expired in 2006. Wyeth’s Effexor (now marketed by Pfizer) went off-patent in 2006. Wellbutrin, developed by Burroughs Wellcome and later acquired by GlaxoSmithKline, lost its patent in 2006. Lexapro was developed by Forest Laboratories in conjunction with Lundbeck and they won two patent extensions. But it lost exclusivity in 2012.

09/2/15

Do No Harm with Antidepressants

© Jrabelo | Dreamstime.com

In April of 2006, I first read Irving Kirsch’s 2002 article, “The Emperor’s New Drugs.” In that article, Kirsch described how 80% of the response to antidepressant medications was duplicated in placebo control groups. His analysis was of the very same clinical data submitted to the FDA between 1987 and 1999 for the approval of six widely prescribed antidepressants. The allusion to Hans Christian Andersen’s tale, “The Emperor’s New Clothes,” was fitting. For my understanding of how antidepressants work, Kirsch played the role of the little boy in Andersen’s tale. He pointed to antidepressants and said: “But they have little or no therapeutic effect at all!”

Since 2006 I’ve become familiar with the work of several individuals questioning the received wisdom about psychotropic medications, including Joanna Moncrieff. Her book, The Myth of the Chemical Cure, provided its own “aha!” moment in the development of my thinking on the clinical use of psychiatric medications. Searching this website, Faith Seeking Understanding, for their names will pull up other articles where I have referenced them.

Not too long ago, I saw a link to a new article by Joanna Moncrieff and Irving Kirsch, “Empirically derived criteria cast doubt on the clinical significance of antidepressant-placebo differences.” I’ve read previous articles written by Moncrieff and Kirsch, “Efficacy of antidepressants in adults” and “Clinical trials and the response rate illusion,” but I still looked forward to reading their latest. For me, it hammered the final nail into the coffin of antidepressant effectiveness.

In “The Emperor’s New Drugs,” Kirsch found that the drug/placebo difference was less than 2 points on the Hamilton-D (HAM-D) scale, a scale often used in studies assessing the effects of antidepressants. Even then, Kirsch et al. were saying that “the clinical significance of these differences is questionable.” The spin put on his conclusions was that this only held true for individuals with mild cases of depression; moderate to severe depression should still have antidepressants as a first-line treatment.

However, in “Efficacy of antidepressants in adults,” Moncrieff and Kirsch pointed out that the studies included in “The Emperor’s New Drugs” were mainly of patients suffering from severe to very severe depression. They cited additional studies questioning the efficacy of antidepressants and concluded: “Recent meta-analyses show selective serotonin reuptake inhibitors have no clinically meaningful advantage over placebo;” and that “Claims that antidepressants are more effective in more severe conditions have little evidence to support them.”

In their most recent article, Moncrieff and Kirsch tackled the issue of how antidepressants have been shown to be statistically superior to placebo. This statistical significance has been true since the time of Kirsch’s work on “The Emperor’s New Drugs,” where the authors said that: “Although mean differences were small, most of them favored the active drug, and overall, the difference was statistically significant.” Moncrieff and Kirsch commented that a three-point difference on the HAM-D scale could not be detected by clinicians. Clinically relevant drug-placebo differences would have to be 7 points or greater on the HAM-D scale. “Currently, drug effects associated with antidepressants fall far short of these criteria.”

These conclusions were built upon the work of German psychiatrist Stefan Leucht and his colleagues. You can read a less technical discussion of the importance of this research in Dr. Moncrieff’s blog, here. She said that a reduction of 2 points on the 52-point HAM-D scale, while statistically significant, seemed to be an insignificant amount. “Leucht et al. provide some empirical evidence to support that hunch.”

Given that there was little if any difference in clinically relevant effects between one treatment and another, Moncrieff and Kirsch suggested that patients and healthcare providers should be aware that all treatments, including placebo, produce some positive effect on symptom scales, “while none outperforms a pill placebo to a meaningful degree.”

The small differences detected between antidepressants and placebo may represent drug-induced mental alterations (such as sedation or emotional blunting) or amplified placebo effects rather than specific ‘antidepressant’ effects. At a minimum, therefore, it is important to ascertain whether differences correlate with clinically detectable and meaningful levels of improvement.

So where does this discussion lead us? Treating depressive symptoms with antidepressants should not be a first option. “Given the choice, most depressed patients prefer psychotherapy over medication.” Moncrieff and Kirsch suggest that decisions about treatment should take patient preference, safety and cost into account. With regard to safety, antidepressants should be the last of the treatment alternatives considered.

Their article referenced a study by Andrews et al., “Primum non nocere” (first, do no harm), which noted a series of harmful effects from SSRIs. Serotonin has wide-reaching effects on adaptive processes throughout the body, so disrupting it could have many adverse health effects. Andrews et al. described how antidepressants affect the proper functioning of homeostatic mechanisms in the body. Long-term use of SSRIs is associated with a loss of their symptom-reducing effectiveness, which suggests that the brain is pushing back against the effects of the drugs and trying to regain the homeostasis that existed before antidepressant use began. “Because of the complex role that serotonin plays in shaping the brain, antidepressants could have complex effects on neuronal functioning.”

Additional negative side effects included attention problems, impaired driving performance, falls and bone fractures in the elderly, and gastrointestinal problems such as diarrhea, constipation and irritable bowel syndrome. SSRIs may increase the risk of abnormal bleeding. They may be related to an increased risk of cardiovascular events. There is concern that SSRIs can affect neonatal development; one study suggested SSRI use during pregnancy, especially in the first trimester, led to an increased risk of Autism Spectrum Disorders. Andrews et al. summarized their findings here:

We have reviewed a great deal of evidence of the effects of antidepressants on serotonergic processes throughout the body. Some of the effects are widely known, but they have been largely ignored in debates about the utility of antidepressants. Indeed, it is widely believed that antidepressant medications are both safe and effective; however, this belief was formed in the absence of adequate scientific verification. The weight of current evidence suggests that, in general, antidepressants are neither safe nor effective; they appear to do more harm than good.

03/18/15

Modern Alchemy with Antidepressants

A study published in the open-access journal PLOS ONE by Sugarman et al. once again replicated previous studies showing that there was very little clinical difference between an antidepressant and placebo. In a way this is old news. One of the study’s authors, Irving Kirsch, previously reported these findings. You can read more on this antidepressant research here and here. I’ve also looked at a 60 Minutes broadcast that interviewed him in “Thor’s Psychiatric Hammer: Antidepressants.” Kirsch has also published a book on the topic: The Emperor’s New Drugs: Exploding the Antidepressant Myth. But here is the significance of the Sugarman et al. study: it was the first evaluation to use “a complete database of published and unpublished trials sponsored by the drug’s manufacturer.”

In 2004, GlaxoSmithKline (GSK) was required as part of a lawsuit settlement to post online the results of all clinical trials involving its drugs. The 2004 lawsuit arose because the company had withheld data on the ineffectiveness and potential danger of Paxil (paroxetine) when given to adolescents and children. But it doesn’t seem GSK learned its lesson. In 2012 the company agreed to plead guilty to criminal charges and pay $3 billion in fines for promoting its antidepressants Paxil and Wellbutrin for unapproved uses and failing to report safety data about Avandia. So Sugarman et al. were able to use the data GSK made available to do the research reported here.

The current analysis is the first evaluation of the efficacy of an SSRI medication in the treatment of multiple anxiety disorders, and the first to utilize a complete database of published and unpublished trials sponsored by the drug’s manufacturer. Our results indicated that paroxetine presented a modest benefit over placebo in the treatment of anxiety and depression, with mean change score differences of 2.3 and 2.5 points on the HRSA [Hamilton Rating Scale for Anxiety] and HRSD [Hamilton Rating Scale for Depression], respectively.

The study’s results found that individuals receiving placebo reported 79% of the magnitude of change reported by individuals receiving paroxetine. This was consistent with the previously reported magnitude of 76% for placebo compared to paroxetine. Replicating this earlier finding, namely a placebo response greater than 75% of the drug response, suggested that “the magnitude of the placebo effect is especially large in the treatment of anxiety and depression.” Given the similarities between paroxetine and other SSRIs, it is possible that similar magnitudes of placebo effects will be found with them, though further research is required to support this proposition. Nevertheless, “the current analysis indicates that the published literature represents an overestimate of the true efficacy of paroxetine in the treatment of anxiety.”
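
As a back-of-envelope check, the two reported quantities (the 79% ratio and the 2.5-point HRSD gap) jointly imply how large the underlying changes were. The derivation below is mine, not the authors’; only the two inputs come from the study.

# Back-of-envelope derivation: if the placebo change is 79% of the drug change
# and the drug-placebo gap on the HRSD is 2.5 points, the implied magnitudes
# follow algebraically. Only the 0.79 and 2.5 figures come from the study.
ratio = 0.79   # placebo change as a fraction of the paroxetine change
gap = 2.5      # reported drug-placebo difference on the HRSD (points)

paroxetine_change = gap / (1 - ratio)       # ≈ 11.9 points of improvement
placebo_change = ratio * paroxetine_change  # ≈ 9.4 points of improvement

print(f"implied paroxetine change ≈ {paroxetine_change:.1f} HRSD points")
print(f"implied placebo change ≈ {placebo_change:.1f} HRSD points")

In other words, most of the measured improvement on paroxetine also shows up on the dummy pill.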

The glass-half-full reporting of the differences between drug and placebo has emphasized that statistically significant differences were found. The problem is that those differences were so small their clinical significance was questionable. According to the criteria of NICE, the National Institute for Health and Clinical Excellence, “the mean difference between paroxetine and placebo in the current analyses fell short of clinical significance for the treatment of both anxiety and depression.” Sugarman et al. reviewed these concerns and concluded that changes of three points or less on the HRSD did not correspond to a clinically detectable change and appeared to be “of marginal clinical significance.”

So paroxetine has only a slight benefit over placebo in treating symptoms of anxiety, and the study supports previous work indicating it has just a modest benefit over placebo when treating depression. Given the known side effects of the standard medications used to treat anxiety and depression, their use as a first-line treatment for these problems seems questionable. “The obvious alternative for the treatment of both anxiety and depression is psychotherapy intervention.” But direct comparisons have not generally shown a significant difference between depression treatment modalities (medication or psychotherapy). Similarly inconclusive findings were noted for anxiety treatment.

Allen Frances said there were two differences between medieval alchemy and the pharmaceutical industry today. First is the well-oiled, massively financed, worldwide, and devastatingly effective marketing machine. Second is the requirement for a DSM diagnosis.

A significant portion of the $12 billion spent each year on antidepressants in the United States rewards the drug companies for promoting the overly widespread use of what to many patients are no more than highly advertised, oversold, and very expensive placebos prescribed for a fake diagnosis. (Allen Frances, Saving Normal)

In 2010, a study was published in a Scandinavian psychiatric journal with the provocative title “Antidepressant Medication Prevents Suicide in Depression.” From a study of 18,922 suicides in Sweden between 1992 and 2003, it concluded “that a substantial number of depressed individuals were saved from suicide by postdischarge treatment with antidepressant medication.” Sixteen months after publication, it was formally retracted by the authors for “… unintentional errors in the analysis of the data.”

Psychologist Phillip Hickey reported that after a five-month legal battle, he was able to get access to the correct data. The original study had reported that, among the completed suicides of people treated for depression in psychiatric care during the five years before their deaths, 164 (15.2%) had antidepressants in their blood at the time of suicide. The corrected data indicated that the number was actually 603 (56%). The “unintentional error” was huge—an increase of 439 people (268%).
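
For what it’s worth, the size of that “unintentional error” can be checked directly from the two reported counts:

# Quick check of the reported correction; both counts come from the text.
original = 164    # suicides with antidepressants in their blood, as first reported
corrected = 603   # the figure after the corrected data were released

increase = corrected - original
print(increase, f"{increase / original:.0%}")  # 439 more people, a ~268% increase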

And yet, the study’s author said that no conclusion could be drawn from the study “regarding antidepressants’ effects on suicide risk in any direction.” In other words, you couldn’t conclude that antidepressants either prevented or facilitated suicide. Hickey reported that at the time the original article was written, its author had financial ties to Lundbeck, Eli Lilly and GSK (GlaxoSmithKline).

In another study, published in The British Journal of Psychiatry, a team of UCLA researchers randomized 88 participants into double-blind groups receiving 8 weeks of treatment (placebo or medication) plus supportive care, along with a separate group receiving supportive care alone. Expectations of medication effectiveness, general treatment effectiveness and therapeutic alliance were also measured. The groups receiving medication or placebo plus supportive care were not significantly different. However, both had significantly better outcomes than the supportive-care-alone group. Expectations of medication effectiveness were predictive of the placebo response only. Therapeutic alliance predicted participants’ response to both medication and placebo.

The lead author of the study, Andrew Leuchter, said that the results indicated that if you think a pill is going to work, it probably will. He noted that belief in the effectiveness of the medication was not related to the likelihood of benefitting from it. “Our study indicates that belief in ‘the power of the pill’ uniquely drives the placebo response, while medications are likely to work regardless of patients’ belief in their effectiveness.” He speculated that factors like direct-to-consumer advertising could be shaping people’s attitudes about medication. “It may not be an accident that placebo response rates have soared at the same time the pharmaceutical companies are spending $10 billion a year on consumer advertising.”

Leuchter seems to be saying that the drug response was independent of expectations of medication effectiveness, while the placebo response was driven by the prior expectations of the participants, as influenced by factors like direct-to-consumer advertising. If true, this would seem to challenge, to a certain extent, the results noted above and in Kirsch’s previous research. Replication is needed before Leuchter’s conclusions are accepted. It should be pointed out that paroxetine (Paxil) was approved by the FDA in May of 1996, while direct-to-consumer advertising of medications did not begin until 1997; so such advertising would not have had an effect on the paroxetine data reported above. I would also feel more comfortable with Leuchter’s interpretations of his data if he didn’t have as extensive an association with the pharmaceutical industry. See the “Declaration of interest” in the linked abstract from The British Journal of Psychiatry.