09/15/20

If TD Walks Like a Duck …

© jean cliclac | stockfresh.com

On April 10, 2020, Psychiatric Times published “Advances in Tardive Dyskinesia: A Review of Recent Literature.” The article was written as a supplement to Psychiatric Times and provided helpful information on the recent literature on tardive dyskinesia (TD): its signs and symptoms, risk factors and epidemiology, treatment and screening. It also contained concise summaries of several studies of TD. Overall, it appears to be an accessible article with a wealth of information on TD. But like a pitch from a pharmaceutical sales rep, attached to the end of the article were the medication guide for Ingrezza (valbenazine), one of two FDA-approved medications to treat TD, and a full-page color ad by Neurocrine Biosciences for Ingrezza.

The attachment of the advertisement and medication guide for Ingrezza left me wondering whether “Advances in Tardive Dyskinesia” was truly a review of the recent literature or just an advertisement. I don’t know if the author knew his article would have these attachments, but they left me wondering how objective the article was. It was formatted like a journal article, but as it was not published in a peer-reviewed journal, it did not have to go through that process. So, I question the representativeness of the “concise summaries of selected articles” reviewed in the article and whether there is another perspective on TD and its treatment.

Peter Breggin, MD, has a “TD Resources Center” for prescribers, scientists, professionals, patients and families that presents an opposing view of TD. He considers TD to be an iatrogenic disorder, resulting largely from the use of antipsychotic drugs. Breggin said: “TD is caused by all drugs that block the function of dopamine neurons in the brain. This includes all antipsychotic drugs in common use as well as a few drugs used for other purposes.” In what follows, I will compare selections from “Advances in Tardive Dyskinesia” to information available on Dr. Breggin’s website.

“Advances in Tardive Dyskinesia” said the causes of TD were unknown, but likely to be complex and multifactorial. It suggested there is evidence that multiple genetic risk factors interact with nongenetic factors, contributing to TD risk. It quoted the DSM-5 definition of TD, which said it developed in association with the use of a neuroleptic (antipsychotic) medication. Note that TD is narrowly conceived as related to the use of antipsychotics. This then necessitates the existence of something called spontaneous dyskinesia, an abnormal movement in antipsychotic-naïve patients that is “indistinguishable from TD.” A review of antipsychotic-naïve studies found that the prevalence of spontaneous dyskinesia ranged from 4% to 40% and increased with age.

TD was said to be highly prevalent in patients treated with antipsychotics, though the rate was lower, but not negligible, in patients treated with second-generation antipsychotics compared to those treated with first-generation antipsychotics (21% versus 30%). The cumulative duration of antipsychotic exposure was said to be an important consideration when estimating these rates; TD can be diagnosed after as little as 3 months of cumulative antipsychotic treatment. TD can emerge more rapidly in some patients, especially the elderly.

Alternatively, Peter Breggin said TD is “a group of involuntary movement disorders caused by drug-induced damage to the brain.” As noted above, he conceives of TD as caused by all drugs that block the function of dopamine in the brain. Included are all antipsychotics “as well as a few drugs used for other purposes.” He said TD begins to appear within 3-6 months of exposure to antipsychotics, but cases have occurred after only one or two doses.

The risk of developing TD, according to Breggin, was high for all age groups, including young adults: 5% to 8% per year of younger adults treated with antipsychotics. The rates are cumulative, so that by three years of use, 15% to 24% will be afflicted. “Rates escalate in the age group 40-55 years old, and among those over 55 are staggering, in the range of 25%-30% per year.” One article by Joanne Wojcieszek, “Drug-Induced Movement Disorders,” said the risk of developing TD increases with advancing age: “In patients older than age 45 years, the cumulative incidence of TD after neuroleptic exposure is 26%, 52%, and 60% after 1, 2, and 3 years respectively.” There is more detailed information in his Scientific Literature section. Regarding the newer, second-generation antipsychotics, he thought they had similar rates to first-generation antipsychotics. Breggin said:

Drug companies have made false or misleading claims that newer antipsychotic drugs or so-called atypicals are less likely to cause TD than the older ones. Recent research [more information in his Scientific Literature section], much of it from a large government study called CATIE, have dispelled this misinformation. Considering how huge the TD rates are, a small variation among drugs would be inconsequential. All the antipsychotic drugs with the possible exception of the deadly Clozaril, cause TD at tragically high rates. Since all these drugs are potent dopamine blockers, there should have been no doubt from the beginning that they would frequently cause TD.

Psychiatric Times linked several related articles on tardive dyskinesia, including “Not All That Writhes Is Tardive Dyskinesia,” “Tardive Dyskinesia Facts and Figures,” and “A Practical Guide to Tardive Dyskinesia.” The information in these articles generally acknowledges TD as being associated with long-term exposure to dopamine receptor antagonists (although “Not All That Writhes Is Tardive Dyskinesia” softened that association by making the TD-spontaneous dyskinesia distinction), which include both first- and second-generation antipsychotics. In “Tardive Dyskinesia Facts and Figures,” Lee Robert said a survey of patients taking antipsychotics found that 58% were not aware that antipsychotics can cause TD. An estimated 500,000 persons in the US have TD; 60% to 70% of the cases are mild, and 3% are severe. “Persistent and irreversible tardive dyskinesia is most likely to develop in older persons. . . Tardive dyskinesia is not rare and anyone exposed to treatment with antipsychotics is at risk.”

GoodTherapy gave the following information on “Typical and Atypical Antipsychotic Agents.” The website noted that antipsychotic medications, also known as neuroleptics or major tranquilizers, have both a short-term sedative effect and the long-term effect of reducing the chances of psychotic episodes. There are two categories of antipsychotics: typical, or first-generation, antipsychotics and atypical, or second-generation, antipsychotics. First-generation antipsychotics were said to have a high risk of side effects, some of which are quite severe. “In response to the serious side effects of many typical antipsychotics, drug manufacturers developed another category referred to as atypical antipsychotics.”

Both generations of antipsychotics tend to block receptors in the dopamine pathways (systems of dopaminergic receptors) of the central nervous system. “These pathways affect thinking, cognitive behavior, learning, sexual and pleasure feelings, and the coordination of voluntary movement. Extra firing (production of this neurotransmitter) of dopamine in these pathways produces many of the symptoms of schizophrenia.” With the discovery of clozapine, the first atypical antipsychotic, it was noted that this category of drugs was less likely to produce extrapyramidal side effects (tremors, paranoia, anxiety, dystonia) at clinically effective doses than some other antipsychotics.

Wikipedia said that as experience with atypical antipsychotics has grown, several studies have questioned the wisdom of broadly categorizing antipsychotic drugs as atypical/second-generation and typical/first-generation: “The time has come to abandon the terms first-generation and second-generation antipsychotics, as they do not merit this distinction.”

Although atypical antipsychotics are thought to be safer than typical antipsychotics, they still have severe side effects, including tardive dyskinesia (a serious movement disorder), neuroleptic malignant syndrome, and increased risks of stroke, sudden cardiac death, blood clots and diabetes. Significant weight gain may occur.

In another publication, “Tardive Dyskinesia,” Vasan and Padhy also said TD was caused by long-term exposure to first- and second-generation antipsychotics, some antidepressants (like fluoxetine), lithium and certain antihistamines. They said there was some evidence that the long-term use of anticholinergic medications may increase the risk of TD. See “The Not-So-Golden Years” for more on anticholinergics. Vasan and Padhy said TD was seen in patients with chronic exposure to dopamine D2 receptor blockade and rarely in patients who have been exposed to antipsychotics for less than three to six months. “A diagnosis of antipsychotic-induced tardive dyskinesia is made after the symptoms have persisted for at least one month and required exposure to neuroleptics for at least three months.”

They cautioned against the chronic use of first-generation antipsychotics, saying they should be avoided whenever possible. “Primary prevention of tardive dyskinesia includes using the lowest effective dose of antipsychotic agent for the shortest period possible.” The authors noted the FDA’s approval of Ingrezza (valbenazine) to treat TD, saying that early data indicated it was safe and effective in abolishing TDs. “However, the study was conducted by many physicians who also received some type of compensation from the pharmaceutical companies- so one has to take this data with a grain of salt until more long-term data are available.”

So, where does all of this leave us with regard to antipsychotics and tardive dyskinesia? Contrary to what “Advances in Tardive Dyskinesia” said, the cause of TD is known. According to Dr. Peter Breggin and others, TD is caused by drugs that block the function of dopamine receptors in the brain. The risk of developing TD is present for all ages, although older adults are at higher risk of suffering this adverse side effect. And the risk of TD increases with the continued use of antipsychotics, reaching perhaps as high as 26%, 52%, and 60% after 1, 2, and 3 years of cumulative use, respectively.
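For readers who want to see the arithmetic behind figures like these, here is a rough, illustrative sketch in Python. It assumes, purely for illustration, a constant per-year rate such as the 5% to 8% Breggin cites for younger adults; the 26%, 52% and 60% figures for older patients are taken directly from the sources quoted above and are not derived here.

def cumulative_additive(annual_rate, years):
    # Simple additive estimate: risk accumulates linearly, capped at 100%
    return min(annual_rate * years, 1.0)

def cumulative_compounding(annual_rate, years):
    # Compounding estimate: chance of developing TD in at least one of the years
    return 1 - (1 - annual_rate) ** years

for rate in (0.05, 0.08):
    for years in (1, 2, 3):
        print(f"{rate:.0%}/year over {years} yr: "
              f"additive {cumulative_additive(rate, years):.0%}, "
              f"compounding {cumulative_compounding(rate, years):.0%}")

At a 5% to 8% annual rate, either model lands near the 15% to 24% three-year figure cited earlier; the simple additive and compounding estimates differ only slightly at rates this size.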

Narrowly defining TD (as the DSM does) leads to the confounding invention of spontaneous dyskinesia, which is said to be indistinguishable from TD. The logic here seems circular: if TD is defined as caused by antipsychotics (which seems to mean primarily first-generation antipsychotics), then dyskinesia independent of antipsychotics can’t be TD, because TD is only found with antipsychotics.

The so-called first-generation and second-generation antipsychotics put the individual at roughly equal risk of developing TD and should be used at the lowest possible dose for the shortest time—if at all. Dr. Breggin said: “Since all these drugs are potent dopamine blockers, there should have been no doubt from the beginning that they would frequently cause TD.” Disregarding this drug action common to all antipsychotics perpetuates what seems to be an unnecessary and inaccurate distinction between first-generation and second-generation antipsychotics. So, if it walks like a duck, and quacks like a duck, isn’t it a duck?

See “Downward Spiral of Antipsychotics” for more on the concerns with antipsychotics.

01/1/19

Antidepressant Fall From Grace, Part 1

© hikrcn | 123rf.com

The so-called antidepressants are not a single class of drugs, nor are they used only to treat depression. There has also been a long-running debate over their adverse effects and treatment effectiveness. After Prozac was approved as the first SSRI (selective serotonin reuptake inhibitor) in 1987, the SSRI class of antidepressants became a kind of patent medicine for treating various mood-related conditions, and was even used for character or personality enhancement. Yet evidence has accumulated over the past twenty years questioning whether SSRIs are more effective than placebo. Are antidepressants effective treatments for depression, and are they worth the risk?

Currently, the main classes of antidepressants are SSRIs such as Prozac (fluoxetine), Zoloft (sertraline) and Celexa (citalopram); SNRIs (serotonin-norepinephrine reuptake inhibitors) such as Effexor (venlafaxine), Cymbalta (duloxetine) and Pristiq (desvenlafaxine); and NDRIs (norepinephrine-dopamine reuptake inhibitors) such as Wellbutrin or Zyban (bupropion). Methylphenidate (as Ritalin, Concerta and others) is also chemically an NDRI, but is used primarily as a medication for ADHD and will not be included in the following discussion. Older classes of antidepressants include tricyclic antidepressants (TCAs), monoamine oxidase inhibitors (MAOIs) and tetracyclic antidepressants (TeCAs). Antidepressants are used to treat major depression, anxiety, obsessive-compulsive disorder (OCD), attention-deficit hyperactivity disorder (ADHD), eating disorders, chronic and neuropathic pain, bed-wetting, fibromyalgia, symptoms of menopause, smoking cessation and other conditions.

The earliest and probably most widely accepted scientific theory of antidepressant action is the monoamine hypothesis (which can be traced back to the 1950s), which states that depression is due to an imbalance (most often a deficiency) of the monoamine neurotransmitters (namely serotonin, norepinephrine and dopamine). It was originally proposed based on the observation that certain hydrazine anti-tuberculosis agents produce antidepressant effects, which was later linked to their inhibitory effects on monoamine oxidase, the enzyme that catalyzes the breakdown of the monoamine neurotransmitters. All currently marketed antidepressants have the monoamine hypothesis as their theoretical basis.

In 1952 psychiatrists Max Lurie and Harry Salzer coined the term “antidepressant” to describe the action of isoniazid, a medication originally developed as a treatment for tuberculosis. Selikoff and Robitzek experimented with another anti-tuberculosis drug, iproniazid, which had a greater psychostimulant effect, but also greater toxicity. Serious adverse effects, including liver inflammation, led to its recall as an antidepressant in 1961. These drugs are MAOIs.

A tricyclic antidepressant, Tofranil (imipramine), was also first used to treat depression in the 1950s. Another TCA, Elavil (amitriptyline), was approved in 1961. Dozens of additional TCAs were developed over time. Similar to TCAs, tetracyclic antidepressants (TeCAs) like Remeron (mirtazapine) were introduced in the 1970s. But there were problems with TCAs, including a higher risk of serious cardiovascular side effects. They also become toxic at relatively low doses, making them a suicide risk by overdose—not an attractive adverse effect for an antidepressant.

While SSRIs are biochemically similar to TCAs and show no real differences in therapeutic effectiveness, they affect only the reuptake of serotonin, not the reuptake of dopamine and norepinephrine. SSRIs also become toxic only at higher doses than TCAs and carry a lower risk of serious cardiovascular side effects. So an argument was made for SSRIs having fewer and milder side effects than TCAs. Initially persuasive, this claim has become less credible over time.

The SSRI antidepressant craze began with the introduction of Prozac in 1987. Zoloft (sertraline) came to market in 1992, Luvox (fluvoxamine) in 1994, Paxil (paroxetine) in 1996, Celexa (citalopram) in 1998, and Lexapro (escitalopram) in 2002. All the above SSRIs are now off patent and available as generics. Yet antidepressants remain among the three most commonly used classes of prescription medications in the U.S. According to data from the National Center for Health Statistics, 12.7% of persons over the age of 12 reported taking an antidepressant in the previous month. Antidepressant use is highest among females in two age groups: 40 to 59 (21.2%) and 60 and over (24.4%). The same trend was seen with males aged 40 to 59 (11.6%) and 60 and over (12.6%). See the linked CDC article for more information on antidepressant use among Americans.

Prozac use swept over the U.S. like a pharmaceutical wave after it was approved. It even became a drug that people took for “cosmetic psychopharmacology,” according to psychiatrist Peter Kramer, the author of the best-selling book Listening to Prozac. Kramer said: “If I am right, we are entering an era in which medication can be used to enhance the functioning of the normal mind. The complexities of that era await us.”

The complexities of antidepressant use included, from the early days, evidence of violence and suicide. Toxic Psychiatry, by another psychiatrist, Peter Breggin, was published in 1991. Breggin documented reports of suicidal behavior with Prozac in both the popular press and the professional literature. “Suicidal Behavior Tied to Drug” was published on February 7, 1991 in The New York Times. The article said two cases of suicidal behavior and fantasies (with no prior history) were reported in The New England Journal of Medicine that same day. Eli Lilly (the manufacturer of Prozac) was facing more than 50 lawsuits at the time, but denied that there was any scientific merit to the claim that Prozac could prompt suicidal or violent acts.

Dr. Breggin also predicted the rise of what is now called “treatment resistant depression” with SSRIs. He said: “If Prozac can indeed alleviate depression by making more serotonin available in the brain, then with time it may produce incurable depression by making the brain relatively unresponsive to any amount of serotonin.” In 2004 the FDA finally required black box warnings on the newer antidepressants, warning of an increased risk of suicidal thoughts and behavior in children and adolescents. Despite the age qualification, the danger for adults is also present.

In an article, Breggin described “How FDA Avoided Finding Adult Antidepressant Suicidality.” Quoting the FDA report of the 2006 hearings, he noted where the FDA permitted the drug companies to search their own data for “various suicide-related text strings.” Because of the large number of subjects in the adult analysis, the FDA did not—repeat, DID NOT—oversee or otherwise verify the process. “This is in contrast to the pediatric suicidality analysis in which the FDA was actively involved in the adjudication.” He added that the FDA did not require a uniform method of analysis by each drug company and an independent evaluator as required with the pediatric sample.

Peter Gøtzsche, a Danish physician and medical researcher who co-founded the Cochrane Collaboration, wrote an article describing how “Antidepressants Increase the Risk of Suicide and Violence at All Ages.” He said that while drug companies warn that antidepressants can increase the risk of suicide in children and adolescents, it is more difficult to know what that risk is for adults. This is because there has been repeated underreporting and even fraud in reporting suicides, suicide attempts and suicidal thoughts in placebo-controlled antidepressant trials. He added that the FDA has contributed to the problem by downplaying the concerns, choosing to trust the drug companies and suppressing important information.

Gøtzsche drew attention to a meta-analysis of placebo-controlled trials from 2006 where the FDA reported five suicides in 52,960 patients (one per 10,000). See Table 9 of the 2006 report. However, the individual responsible for the FDA’s 2006 meta-analysis had published a paper five years earlier using FDA data in which he reported 22 suicides in 22,062 patients (10 per 10,000). Additionally, Gøtzsche found there were four times as many suicides on antidepressants as on placebo in a 2001 study.
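As a quick check of the per-10,000 arithmetic, here is a minimal sketch in Python using only the figures quoted above; nothing beyond the 5/52,960 and 22/22,062 counts is assumed.

suicides_2006, patients_2006 = 5, 52960     # FDA 2006 meta-analysis, Table 9
suicides_2001, patients_2001 = 22, 22062    # earlier paper by the same analyst, using FDA data

rate_2006 = suicides_2006 / patients_2006 * 10000   # roughly 0.9 per 10,000
rate_2001 = suicides_2001 / patients_2001 * 10000   # roughly 10.0 per 10,000

print(f"2006 FDA report: {rate_2006:.1f} suicides per 10,000 patients")
print(f"2001 paper:      {rate_2001:.1f} suicides per 10,000 patients")

The roughly tenfold gap between those two rates is the discrepancy Gøtzsche highlights.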

Additional adverse side effects from antidepressant use include: weight gain and metabolic disturbances; sexual dysfunction; bleeding; sleep disturbances; emotional blunting; agitation and activation; discontinuation syndrome (withdrawal); violence; and others. New research published in the journal Psychotherapy and Psychosomatics concluded that SNRIs should be added to the list of drugs that induce withdrawal symptoms upon discontinuation. Even a gradual withdrawal did not prevent the onset of “withdrawal phenomena” with SNRIs.

The results of this systematic review indicate that withdrawal symptoms may occur after discontinuation of any type of SNRI (venlafaxine, desvenlafaxine, duloxetine, milnacipran, or levomilnacipran). However, the prevalence of withdrawal symptoms was variable and appeared to be higher after discontinuation of venlafaxine.

See a literature review of long-term use of newer generation antidepressants (i.e., SSRIs, SNRIs and others) by Carvalho et al. You can also look at “In the Dark About Antidepressants,” “Antidepressant Misuse Disorder” and “Listening to Antidepressants” on this website for more information on antidepressants and their adverse effects. For more information on the association of antidepressants and violence, see Medication Madness by Peter Breggin and “Violence and the Brain” or “Iatrogenic Gun Violence” on this website.

While not everyone will experience these adverse events, they are present for many individuals who have used or are using antidepressants. But if your depression is debilitating, are antidepressants effective enough to be worth risking their potential adverse effects? In Part 2 of “Antidepressant Fall From Grace” we will look at the debate over the efficacy of antidepressants.

05/22/18

To Shock or Not to Shock

© Viacheslav Nikolainenko | 123rf.com

To shock or not to shock; that is the question. Is electroconvulsive therapy (ECT) an effective and safe treatment for severe cases of mood disorders such as depression? Or is it something that “permanently impairs memory and causes other long term signs of mental dysfunction such as difficulties with concentration and new learning?” It continues to be one of the most controversial treatments used in medicine because “precisely why electroshock works is a mystery.” And it’s also because it has a history of being used as a form of torture and euthanasia by at least one Nazi doctor.

In October of 1939, shortly after the invention of ECT by two Italian researchers, Adolf Hitler signed a decree that authorized German doctors to euthanize any psychiatric patient who was deemed incurable. “Between 1939 and 1941, tens of thousands of patients were killed at psychiatric hospitals.” While the program officially ended in 1941, the practice continued until the defeat of Germany in 1945, particularly with Dr. Emil Gelny. After just three months of clinical training, Gelny was granted a specialist qualification in psychiatry in 1943 and placed in charge of two psychiatric hospitals in Austria. At first, he used lethal doses of drugs like morphine and barbiturates to kill ‘incurable’ patients, but when the drugs became scarce, he modified existing ECT machines.

After the initial shock rendered the patient unconscious, he added four extra electrodes and attached them to the person’s wrists and ankles to deliver the lethal shocks. In “Mass killing under the guise of ECT,” the authors wrote: “Besides its easy availability and cost-effectiveness, a further important factor was that ECT could be camouflaged as a medical procedure to reduce patients’ suspicion, at a time when many correctly feared that drugs were used to kill them.” The author of  “How Electroconvulsive Therapy Became a Nazi Weapon” said it was not clear how many individuals Gelny murdered with ECT. The combined death toll at the two hospitals was 4,800, but most patients likely died from drug overdoses or malnutrition. For more information on the Nazi use of psychiatry in euthanasia, see “Psychiatry’s role in the holocaust.”

Supporters of ECT like Dr. Jeffrey Lieberman say that modern ECT technologies allow for individualized treatment for each patient, so that the minimum amount of electricity needed to induce a seizure is used. Allen Frances, the chair of the DSM-IV task force, said on Twitter that if he had severe depression, “ECT would definitely be my 1st choice.” The use of anesthetics combined with muscle relaxants and oxygenation “render ECT an extremely safe procedure,” according to Dr. Lieberman. He thought it was extremely ironic that the inventors of ECT failed to even be nominated for a Nobel Prize, “despite the fact that their invention was the only early somatic treatment to become a therapeutic mainstay of psychiatry.” He noted where the APA, NIH and FDA all approve the use of ECT “as a safe and effective treatment for patients with severe cases of depression, mania, or schizophrenia, and for patients who cannot take or do not respond to medications.”

On the other side of the ECT debate is Dr. Peter Breggin, who has been personally and professionally fighting against the use of ECT for over thirty years. You can review a wealth of information, including over 150 scientific studies by him and others, on his website: ECT Resources Center. One of his ‘key articles,’ “The FDA should test the safety of ECT machines,” was written in 2010 to inform the FDA about the damaging effects of ECT as it was considering a reclassification of ECT treatment as safe for depressed patients. Breggin’s opening comment was: “Since its inception in the late 1930s, electroconvulsive therapy (ECT) has never been subjected to testing for Food and Drug Administration (FDA) approval in regard to safety and effectiveness.”

He said ECT is acknowledged, even among staunch advocates of the procedure, as the most controversial treatment in psychiatry. The Consensus Development Conference on ECT, conducted by NIH, was cited as affirming that assertion. “Given this extraordinary degree of controversy, there can be no justification for not subjecting ECT to the same scrutiny that is given to devices and treatments that are far less controversial, including at the least new animal studies.” Since it causes an acute delirium, there is no scientific doubt that ECT harms the brain and mental function.

ECT produces sufficient trauma to the brain to cause a severe grand mal convulsion. All ECT treatments result in a period of coma lasting several minutes or more, sometimes including a flat line EEG. In routine application, the patient awakens in a delirium that is virtually indistinguishable from any other closed head injury. Typical symptoms include severe headache, memory dysfunction, disorientation, confusion, lack of judgment and unstable mood. The treatment always results in apathy, and sometimes in euphoria, which are typical reactions to traumatic brain injury. Consent forms routinely warn patients not to make decisions during or shortly after the completion of any series of ECT treatments.

It is acknowledged in neurology that repeated head injuries that produce concussive symptoms are likely to cause persistent harm. ECT treatments are far more traumatic than most concussions, and include prolonged coma after each treatment, sometimes accompanied by EEG flat lining, and severe delirium after a few treatments or less. The number of traumatic ECT treatments usually far exceeds the number of concussions that produce lasting harm.

In the rational practice of regulatory affairs, the fact that a treatment causes such initial trauma would in itself require it to be withdrawn from the market; it certainly requires a thorough examination of ECT by the FDA, starting with animal studies.

He referred to large animal studies that demonstrated generalized brain damage from ECT. He pointed to a 2007 study confirming that ECT produces lasting memory dysfunction and more generalized persistent cognitive deficits. “Except in regard to a psychiatric treatment, substantial evidence on this scale for persistent damage would lead to an inquiry into withdrawing a treatment from the market.” And despite several decades of effort, ECT advocates have not been able to demonstrate any lasting improvement.

The Consensus Development Conference on ECT found that controlled clinical trials failed to demonstrate any positive effect beyond four weeks. Thus the risk/benefit ratio is very poor. This four-week period corresponds to the period of the acute delirium, when the ECT effects of emotional blunting and/or euphoria are mistaken for clinical improvement. Typically, the patient stops voicing complaints and may display an artificially elevated mood.

The result of the hearings was that an FDA advisory panel recommended in 2011 that ECT devices be designated as high risk for all patients: “FDA panel advises more testing of ‘shock-therapy’ devices.” Note the contradiction to what Dr. Lieberman claimed.

FDA staffers who reviewed hundreds of studies reported that, as a group, the studies were poorly designed and had too few patients to allow firm conclusions to be drawn. “Many failed to follow patients long enough to discover the duration of ill effects.” The majority of the 18-member committee said not enough was known about ECT and more research was needed into the usefulness and hazards of the ECT devices.

That ruling led to an ongoing controversy, with the FDA tabling the issue until 2015, when it drafted a ‘proposed order’ that would reclassify ECT as safe and effective and only moderately risky for adults with severe depression who haven’t responded to medication or other therapies. However, it would also impose new requirements, like requiring physicians to warn patients that the side effects of ECT can include confusion and memory loss, and that its long-term safety is not proven. They would also have to monitor patients’ memory and cognitive skills before and during treatment. “And the FDA would also classify ECT as high risk for psychiatric conditions other than depression and for children and adolescents.”

STAT News said psychiatrists are concerned that classifying ECT as a high risk procedure for psychiatric conditions other than depression, and for children and adolescents “could prompt insurers to stop covering and doctors to stop recommending ECT for younger patients” and those with other psychiatric conditions, like schizophrenia, bipolar mania and catatonia. It would require ECT manufacturers to conduct clinical trials for these indications. “It’s widely expected they will decline to do so because of the cost.” Doctors could still provide ECT “off label,” but insurance companies could refuse to pay. “And physicians may worry about the potential for malpractice lawsuits if anything goes wrong.”

Stop and think for a minute about these last statements. ECT device manufacturers are expected to decline to do expensive clinical trials to confirm that their devices are safe and effective for patients who are children or adolescents, or who suffer from psychiatric conditions other than severe depression. If studies were provided to the FDA in 2010 to support the use of ECT to treat these populations, they were poorly designed and had too few patients to allow conclusions to be drawn. So doctors have been using ECT devices to treat these populations without reliable clinical trial evidence. And apparently the doctors who administer ECT want to continue doing so without worrying about “the potential for malpractice lawsuits if anything goes wrong.”

The American Psychiatric Association and the consumer group NAMI (the National Alliance on Mental Illness) think the FDA should classify ECT as moderately risky for all conditions for which it is now commonly used. The FDA received 2,040 comments on its draft rule during the public comment period, which closed in March of 2016. “The agency has not given a timetable for issuing a final rule.”

While we await the FDA decision on its draft guidance for ECT devices, consider these comments from Peter Breggin’s “Introductory Information About ECT”:

After one, two or three ECTs, the trauma causes typical symptoms of severe head trauma or injury including headache, nausea, memory loss, disorientation, confusion, impaired judgment, loss of personality, and emotional instability. These harmful effects worsen and some become permanent as routine treatment progresses.

ECT works by damaging the brain. The initial trauma can cause an artificial euphoria which ECT doctors mistakenly call an improvement. After several routine ECTs, the damaged person becomes increasingly apathetic, indifferent, unable to feel genuine emotions, and even robotic. Memory loss and confusion worsen. This helpless individual becomes unable to voice distress or complaints, and becomes docile and manageable. ECT doctors mistakenly call this an improvement but it indicates severe and disabling brain injury.

Abundant evidence indicates that ECT should be banned. Because ECT destroys the ability to protest, all ECT quickly becomes involuntary and thus inherently abusive and a human rights violation. Therefore, when ECT has already been started, concerned relatives or others should immediately intervene to stop it, if necessary with an attorney.

Meanwhile, two Pennsylvania state representatives are not willing to wait and see what the FDA recommends. They introduced a bill to prohibit the use of ECT on individuals age 16 and under. The co-sponsors thought it deplorable that ECT was done to children who have no say in whether or not to agree to the treatment. One of them said he thought it was a form of child abuse. According to the Pennsylvania Department of Human Services, 13 children under the age of 5 were given ECT in 2014. Three adolescents between 13 and 17 were electroshocked that year as well. “Children should not be forced to undergo a treatment that can have a lasting impact on their physical and mental well-being.”

Also see: “The Frankenstein Monster of ECT,” “Is ECT Brain Disabling?” and “The Appalling Silence on ECT” on this website.

03/9/18

Psychiatry Needs a Revolution

© Anna_Om | stockfresh.com

Peter Gøtzsche wrote a January 2018 editorial in the British Medical Journal, where he elaborated on why he thinks “Psychiatry is a disaster area in healthcare that we need to focus on.” In his editorial, Gøtzsche said the prevailing paradigm in psychiatry was to say psychiatric drugs have specific effects against specific disorders, and that their actions do more good than harm. However, he asserted that as a consequence of its liberal use of psychiatric drugs, psychiatry actually does more harm than good. Gøtzsche and other so-called “antipsychiatry” critics are often dismissed by psychiatry. But there was a study that surveyed the attitudes of medical teaching faculty towards psychiatry and psychiatrists, and the results had more in common with the antipsychiatrists than you might think.

Stuart, Sartorius and Linamaa published “Images of Psychiatry and Psychiatrists” in the open access journal Acta Psychiatrica Scandinavica. They surveyed 1,057 teaching medical faculty members from 15 academic teaching centers in the United Kingdom, Europe and Asia. The overwhelming majority of respondents held negative views towards psychiatry as a discipline, psychiatrists and psychiatric patients. Some of their findings were startling: 90% thought psychiatrists were not good role models for medical students; 84% thought psychiatric patients should be treated only within specialized facilities.

When the survey asked about the perception of psychiatry as a profession, 8.9% thought psychiatry was unscientific; 7.7% thought it was not evidence-based; and 8.0% thought psychiatry was not a genuine, valid branch of medicine. Regarding perceptions of psychiatric treatment, 25.0% thought psychiatrists had too much power over their patients; 22.6% thought treatments were not as effective as in other branches of medicine; and 20.4% thought most who receive treatment do not find it helpful. Then 28.6% said they would not encourage a bright student to enter psychiatry; and 75.4% said many students at their medical school were not interested in pursuing psychiatry as a career.

Results highlight the extent to which non-psychiatrist medical faculty hold negative opinions of psychiatry as a discipline, psychiatric treatments, psychiatrists as role models for medical students, psychiatry as a career choice, psychiatric patients, and psychiatric training. The most outstanding findings were that psychiatrists were not considered to be good role models for medical students, and psychiatric patients were considered to be emotionally draining and unsuitable to be treated outside of specialized facilities or in general hospitals.

“In Search of an Evidence-Based Role for Psychiatry,” by Read, Runciman and Dillon, noted this was not the only study indicating negative views of psychiatry among other medical professionals. They cited a study by Curtis-Barton and Eagles that found medical students were discouraged from choosing psychiatry as a career, either a lot or a little, by a perceived lack of an evidence base (51%) and by the scientific basis of psychiatry (53%). Only 4-7% of UK medical students saw it as a ‘probable/definite’ career, partly because of its poor evidence base. Commenting further on “Images of Psychiatry and Psychiatrists,” Read, Runciman and Dillon said:

Even more revealing than the survey findings was psychiatry’s response to it. The researchers themselves, including a former President of the World Psychiatric Association, wondered whether their colleagues’ opinions are ‘well founded in facts’ or ‘may reflect stigmatizing views toward psychiatry and psychiatrists’. Their own answer to that question becomes abundantly clear when, instead of proposing efforts to address the problems identified by the medical community, such as having little scientific basis, they recommend only ‘enhancing the perception of psychiatrists’ so as to ‘improve the perception of psychiatry as a career.’

The responses to the survey, all written by psychiatrists, dismissed each concern “and blamed everyone but their own profession, including their supposedly ignorant, prejudiced medical colleagues and the biased media.” Read, Runciman and Dillon then described problems with how mental health issues are conceptualized, what causes them and how to treat them. “Despite all this, biological psychiatry is trying to expand the reach of what others consider to be an unscientific, reductionistic, simplistic and pessimistic ‘medical model’.” A truly evidence-based psychiatry would recommend psychiatric medications as a last resort (and for a short time period). The adverse effects of medications should be fully disclosed and “no medical treatment should be forced on anyone against their will.”

Read, Runciman and Dillon said there were three core research areas that psychiatry should be demonstrating progress in, if it is a legitimate scientific, medical discipline. They are: conceptualization, causation and treatment of the disorders.

With regard to the conceptualization of psychiatric disorders, “psychiatry’s primary contribution is an ever expanding list of labels.” Many do not reach even minimal scientific reliability levels and calling them ‘diagnoses’ is often a misnomer. Significantly, the NIMH announced when the DSM-5 was about to be published that it was abandoning the DSM diagnostic approach to classifying mental health problems for its research to develop scientifically robust ‘research domains.’ See “Patients Deserve Better Than the DSM” for more information on this.

“In terms of causation, psychiatry has focused predominantly on chemical imbalances, brain abnormalities and genetics.” But it has repeatedly failed to produce findings of any substance in support of that premise. Genetics has an important role, if the research is done on constructs that actually exist. There is also “the role of epigenetic processes whereby genes are activated and deactivated by the environment.”

Research suggests that the safety and efficacy of psychiatric drugs have been grossly exaggerated. Documentation in support of this claim is overwhelming. See the websites for Mad in America, Peter Breggin, and David Healy and RxISK for starters. You can also search this website or start with “In the Dark About Antidepressants” and “Blind Spots With Antipsychotics.”

Peter Gøtzsche similarly noted concerns with the “liberal use of psychiatric drugs.” He identified four concerns with the prevailing paradigm in psychiatry and gave supporting evidence for each.

  • First, the effects of the drugs are not specific. “They impair higher brain functions and cause similar effects in patients, healthy people and animals.” For instance, not only does serotonin (SSRI antidepressants influence serotonin levels) seem to have a role in maintaining mood balance, it can affect social behavior, appetite and digestion, sleep, memory, and sexual desire and function.
  • Second, the research in support of the paradigm that psychiatric drugs have specific effects against specific disorders is flawed.
  • Third, the widespread use of psychiatric drugs has been harmful for patients. In every country where the relationship has been examined, an increased use of psychiatric medications has accompanied an increase in the number of chronically ill people and the number of people on disability pensions.
  • Fourth, all attempts to use brain scans to show that psychiatric disorders cause brain damage have failed. “This research area is intensely flawed and very often, the researchers have not even considered the possibility that any brain changes they observe could have been caused by the psychiatric drugs their patients have taken for years.” Yet such drug-induced brain changes have been shown repeatedly in many reliable studies, especially for neuroleptic drugs.

Peter Gøtzsche said the prevailing paradigm in psychiatry, that its drugs have specific effects against specific disorders, is unsustainable when the research in support of it is critically appraised. He said psychiatry needed a revolution; reforms were not enough. “We need to focus on psychotherapy and to hardly use any psychiatric drugs at all.” Dr. Gøtzsche is a medical researcher and the Director of the Nordic Cochrane Center. Along with 80 others, he helped start the Cochrane Collaboration in 1993, which is “a global independent network of researchers, professionals, patients, carers, and people interested in health.” The work of the Cochrane Collaboration is recognized as an international gold standard for high quality, trusted information.

01/5/18

In the Dark About Antidepressants

© tab62 | stockfresh.com

In 2011, antidepressants were the third most commonly prescribed medication class in the U.S. Mojtabai and Olfson noted in their 2011 article for the journal Health Affairs that much of the growth in the use of antidepressants was driven by a “substantial increase in antidepressant prescriptions by nonpsychiatric providers without an accompanying psychiatric diagnosis.” They added how the growing use of antidepressants in primary care raised questions “about the appropriateness of their use.” Despite this concern, antidepressant prescriptions continued to rise. By 2016, they were the second most prescribed class of medications, according to data from IMS Health.

A CDC Data Brief from August of 2017 reported on the National Health and Nutrition Examination Survey. The Data Brief provided the most recent estimates of antidepressant use in the U.S. for noninstitutionalized individuals over the age of 12. As indicated above, there was clear evidence of increased antidepressant use from 1999 to 2014. 12.7% of persons 12 and over (one out of eight) reported using antidepressant medication in the past month. “One-fourth of persons who took antidepressant medication had done so for 10 years or more.”

Women were twice as likely as men to take antidepressants. And use increased with age, from 3.4% among persons aged 12-19 to 19.1% among persons 60 and over. See the following figures from the CDC Data Brief. The first figure notes the increased use of antidepressants among persons aged 12 and over between 1999 and 2014. You can see where women were twice as likely to take antidepressants as men.

Figure 1

The second figure shows the percent of individuals aged 12 and over who took antidepressant medication in the past month between 2011 and 2014. Note how the percentages increase by age groups for both men and women, with the highest percentages of past month use for adults 60 and over for both men and women.

Figure 2

The third figure shows the length of antidepressant use among persons aged 12 and over. Note that while 27.2% reported using them 10 years or more, 68% reported using antidepressants for 2 years or more. “Long-term antidepressant use was common.” Over the fifteen-year time frame of the data, antidepressant use increased 65%.

Figure 3

The widespread use of antidepressants documented above is troubling when additional information about antidepressants is considered. A February 2017 meta-analysis by Jakobsen et al., published in the journal BMC Psychiatry, found that all 131 randomised placebo-controlled trials “had a high risk of bias.” There was a statistically significant decrease of depressive symptoms as measured by the Hamilton Depression Rating Scale (HDRS), but the effect was below the predefined threshold for clinical significance of 3 HDRS points. Other studies have indicated that differences of less than 3 points on the HDRS are not clinically observable. See “Antidepressant Scapegoat” for more information on the HDRS. Jakobsen et al. concluded:

SSRIs might have statistically significant effects on depressive symptoms, but all trials were at high risk of bias and the clinical significance seems questionable. SSRIs significantly increase the risk of both serious and non-serious adverse events. The potential small beneficial effects seem to be outweighed by harmful effects.

In his review of the Jakobsen et al. study for Mad in America, Peter Simons noted where these results add to a growing body of literature “questioning the efficacy of antidepressant medications.” He pointed to additional studies noting the minimal or nonexistent benefit in patients with mild or moderate depression, the adverse effects of antidepressant medications, and the potential for antidepressant treatment to worsen outcomes. He concluded:

Even in the best-case scenario, the evidence suggests that improvements in depression due to SSRI use are not detectable in the real world. Given the high risk of biased study design, publication bias, and concerns about the validity of the rating scales, the evidence suggests that the effects of SSRIs are even more limited. According to this growing body of research, antidepressant medications may be no better than sugar pills—and they have far more dangerous side effects.

Peter Gøtzsche, a Danish physician and medical researcher who co-founded the Cochrane Collaboration, wrote an article describing how “Antidepressants Increase the Risk of Suicide and Violence at All Ages.” He said that while drug companies warn that antidepressants can increase the risk of suicide in children and adolescents, it is more difficult to know what that risk is for adults. That is because there has been repeated underreporting and even fraud in reporting suicides, suicide attempts and suicidal thoughts in placebo-controlled trials. He added that the FDA has contributed to the problem by downplaying the concerns, choosing to trust the drug companies and suppressing important information.

He pointed to a meta-analysis of placebo-controlled trials from 2006 where the FDA reported five suicides in 52,960 patients (one per 10,000). See Table 9 of the 2006 report. Yet the individual responsible for the FDA’s meta-analysis had published a paper five years earlier using FDA data in which he reported 22 suicides in 22,062 patients (10 per 10,000). Additionally, Gøtzsche found there were four times as many suicides on antidepressants as on placebo in the 2001 study.

In “Precursors to Suicidality and Violence in Antidepressants,” Gøtzsche co-authored a systematic review of placebo-controlled trials in healthy adults. The study showed that “antidepressants double the occurrence of events that can lead to suicide and violence.” Maund et al. (where he was again a co-author) demonstrated that the risk of suicide and violence was 4 to 5 times greater in women with stress incontinence who were treated with duloxetine (Cymbalta).

Although the drug industry, our drug regulators and leading psychiatrists have done what they could to obscure these facts, it can no longer be doubted that antidepressants are dangerous and can cause suicide and homicide at any age. Antidepressants have many other important harms and their clinical benefit is doubtful. Therefore, my conclusion is that they shouldn’t be used at all. It is particularly absurd to use drugs for depression that increase the risk of suicide when we know that psychotherapy decreases the risk of suicide. . . . We should do our utmost to avoid putting people on antidepressant drugs and to help those who are already on them to stop by slowly tapering them off under close supervision. People with depression should get psychotherapy and psychosocial support, not drugs.

Peter Breggin described “How FDA Avoided Finding Adult Antidepressant Suicidality.” Quoting the FDA report of the 2006 hearings, he noted where the FDA permitted the drug companies to search their own data for “various suicide-related text strings.” Because of the large number of subjects in the adult analysis, the FDA did not—repeat, DID NOT—oversee or otherwise verify the process. “This is in contrast to the pediatric suicidality analysis in which the FDA was actively involved in the adjudication.” Breggin added that the FDA did not require a uniform method of analysis by each drug company and an independent evaluator as required with the pediatric sample.

Vera Sharav, “a fierce critic of medical establishment” and the founder and president of the Alliance for Human Research Protection (AHRP), testified at the 2006 hearing. She reminded the Advisory Committee that the FDA was repeating a mistake it had made in the past, when the agency withheld evidence of suicides from the Advisory Committee. German documents and the FDA’s own safety review showed an increased risk of suicides with Prozac. “Confirmatory evidence from Pfizer and Glaxo were withheld from the Committee.” Agency officials “obscured the scientific evidence with assurances.”

What the FDA presented to you is a reassuring interpretation of selected data by the very officials who have dodged the issue for 15 years claiming it is the condition, not the drugs. What the FDA did not show you is evidence to support that SSRI safety for any age group or any indication. They are all at risk. They failed to provide you a complete SSRI data analysis. They failed to provide you peer-reviewed critical analyses by independent scientists who have been proven right. FDA was wrong then; it is wrong now. Don’t collaborate in this. [But they eventually did]

Breggin commented that the FDA controlled and monitored the original pediatric studies because the drug companies did not do so on their own and failed to find a risk of antidepressant-induced suicidality in any age group. “Why would the FDA assume these same self-serving drug companies, left on their own again, would spontaneously begin for the first time to conduct honest studies on the capacity of their products to cause adult suicidality?”

In a linked document of two memos written by an Eli Lilly employee in 1990, Dr. Breggin noted where the individual questioned the wisdom of recommendations from the Lilly Drug Epidemiology Unit to “change the identification of events as they are reported by the physicians.” The person went on to say: “I do not think I could explain to the RSA, to a judge, to a reporter or even to my family why we would do this especially to the sensitive issue of suicide and suicide ideation. At least not with the explanations that have been given to our staff so far.” Those suggestions included listing an overdose in a suicide attempt as an overdose, even though (here he seems to be quoting from a policy or procedural statement) “when tracking suicides, we always look at all overdose and suicide attempts.” Eli Lilly brought the first SSRI, Prozac, to market in the late 1980s.

Next time you hear someone say that the FDA studies only showed increased suicidality in children and young adults as opposed to adults, remember that the adult studies, unlike the pediatric studies, were not controlled, monitored or validated by the FDA. This is one more example of the extremes the FDA will go to in order to protect drug companies and their often lethal products.

The problems with antidepressants, most of which are SSRIs—selective serotonin reuptake inhibitors—were at least partially known as Prozac and its cousins were being developed and brought to market in the early 1990s. As the above discussion indicated, there seems to have been a disregard of the potential for multiple negative side effects from their use, up to and including the various forms of suicidality. The sleight-of-hand done by the drug companies, and apparently the FDA, means that many individuals are in the dark about the adverse side effects stemming from their SSRI medications.

11/3/17

Downward Spiral of Antipsychotics

© konradbak | stockfresh.com

Antipsychotic or neuroleptic drugs became big business for pharmaceutical companies about nine years ago, reaching sales of $14.6 billion in 2008. This made them the best-selling therapeutic class of medications in the U.S. that year. Not only are they used to treat psychosis and bipolar disorder, several have been approved as add-on medications to treat depression. Additionally, they are used off-label with children and the elderly for behavioral control. And they are the primary cause of a serious neurological disorder called tardive dyskinesia.

Tardive dyskinesia (TD) is a group of involuntary movement disorders caused by drug-induced (iatrogenic) neurological damage, primarily from antipsychotic drugs and a few others that block the function of dopamine in the brain. “TD can vary from a disfiguring grimace to a totally disabling array of spasms and often bizarre movements of any part of the body.” If TD is not identified early on and the drugs stopped, “these disorders nearly always become permanent.” Both patients and their doctors often fail to recognize or diagnose its symptoms.

The above short introduction was gleaned from Dr. Peter Breggin’s “Antipsychotic Drugs and Tardive Dyskinesia (TD) Resources Center.” If you can spare a few minutes, watch a ten-minute TD video edited by Dr. Breggin from existing videos available on the Internet. The video contains a series of individuals with the three general types of tardive dyskinesia, whose ages range from pre-adolescence to seniors. These types are: tardive dystonia, a series of involuntary movements that include painful muscle contractions or spasms; tardive akathisia, psychomotor agitation; and classic tardive dyskinesia, the rapid, jerky movements or uncontrollable head bobbing or movements of the arms, hands, feet, fingers or toes. The following is a reproduction of a table from Psychiatric Drug Withdrawal by Peter Breggin more fully describing the symptoms of tardive dyskinesia.

Symptoms of Tardive Dyskinesia

Tardive Dyskinesia (Classic)

Rapid, irregular (choreiform), or slow and serpentine (athetoid) movements; often bizarre looking; involving any voluntary muscle, including:

Face, eyelids and eye muscles, jaw (chewing movements, tongue biting), mouth, lips, or tongue (protruding, trembling, curling, cupping)

Head (nodding), neck (twisting, turning), shoulders (shrugging), back, torso (rocking movements), or abdomen

Arms and legs (may move slowly or jerk out of control)

Ankles, feet and toes; wrists, hands, and fingers (sometimes producing flexion, extension or rotation)

Breathing (diaphragm and ribs; grunting), swallowing (choking), and speaking (dysphonia)

Balance, posture, and gait (sometimes worse when slow, often spastic)

Tardive Dystonia

Often painful sustained contractions (spasms) of any voluntary muscle group; potentially causing muscular hypertrophy, arthritis, and fixed joints; frequently involving the following:

Neck (torticollis, retrocollis) and shoulders

Face (sustained grimacing and tongue protrusion)

Mouth and jaw (sustained opening or clamping shut)

Arms and hands; legs and feet (spastic flexion or extension)

Torso (twisting and thrusting movements; flexion of spine)

Eyelids (blepharospasm)

Gait (spastic; mincing)

Tardive Akathisia

Potentially agonizing inner agitation or tension, usually (but not always) compelling the patient to move, commonly manifested as the following:

Restless leg movements (when awake)

Foot stamping

Marching in place, pacing

Jitteriness

Clasping hands or arms

Inability to sit still

TD can impair any muscle functions that are partially or wholly under voluntary control such as the face, eye muscles, tongue, neck, back, abdomen, extremities, diaphragm and respiration, swallowing, and vocal cords. Coordination and posture can be afflicted. TD can cause tremors, tics, and paresthesias (e.g., burning sensations, numbness). TD can also afflict the autonomic nervous system, especially impairing gastrointestinal functioning.

TD sufferers are often exhausted by these unrelenting body movements, even when the movements are limited to one area of the body, such as the jaw, neck or feet. They become socially withdrawn and isolated, humiliated by the stigma of their uncontrollable movement disorder. One muscle group, such as the tongue or fingers, or various muscle groups can be afflicted. TD symptoms can also change over time from one muscle group to another. This can occur over a period of minutes, days, weeks or years. “Especially in the dystonic forms, the pain can be very severe, and the physical stress can cause serious orthopedic problems.” Antipsychotic drugs are the primary cause of this neurological disorder.

Antipsychotic drugs are also called neuroleptics. Prescribers often promote them in a misleading fashion as antidepressants, mood stabilizers, bipolar drugs, sleeping pills, and behavior control drugs in children. Recent ones include Abilify (aripiprazole), Geodon (ziprasidone), Invega (paliperidone), Latuda (lurasidone), Rexulti (brexpiprazole), Risperdal (risperidone), Saphris (asenapine), Seroquel (quetiapine), and Zyprexa (olanzapine). Older antipsychotics include Haldol (haloperidol) and Thorazine (chlorpromazine). [See a more complete listing of antipsychotics here]

Drug companies have made claims that the newer, so-called atypical antipsychotic drugs are less likely to cause TD, but those claims are false or misleading. A 2008 study by Chouinard found a high prevalence of DIMD (drug-induced movement disorders): nearly 50% of patients had a definite DIMD. The authors also said that DIMD persisted with atypical antipsychotics. “It is crucial to acknowledge that there is a persistence of DIMD with atypical antipsychotics, which are not recognized and confounded with psychiatric symptoms.” Caroff, Miller and Rosenheck reported in “Extrapyramidal Side Effects” that there were no significant differences in the incidence rates of TD and other DIMDs between atypical antipsychotics and an older antipsychotic, perphenazine (Trilafon).

According to Dr. Breggin, TD occurs at a rate of around 5 to 8 percent per year in adults up to the age of 40, with an accumulating risk of 20% to 24% after four years. The rates increase rapidly after age forty. “For a 40-year-old patient, the risk is 18% at 2 years [9% per year] and 30% at 4 years.” In patients over 45, “the cumulative incidence of TD after neuroleptic exposure is 26%, 52%, and 60% after 1, 2, and 3 years, respectively.”
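
As a rough sanity check on how an annual risk of that size stacks up over time, the sketch below compounds a constant yearly incidence over several years. This is my own back-of-the-envelope arithmetic under a simplifying constant-risk assumption, not the method used in the studies Breggin cites.

```python
# Rough sanity check: compound a constant annual TD risk over several years.
# Assumes a fixed annual incidence and independence across years, which is a
# simplification; it is not how the cited studies derived their figures.

def cumulative_risk(annual_risk: float, years: int) -> float:
    """Probability of developing TD within `years`, given a constant annual risk."""
    return 1 - (1 - annual_risk) ** years

for annual in (0.05, 0.08, 0.09):
    row = ", ".join(f"{y} yr: {cumulative_risk(annual, y):.0%}" for y in (1, 2, 3, 4))
    print(f"annual risk {annual:.0%} -> {row}")
```

A 9% annual risk, for example, compounds to roughly 17% after two years and 31% after four, close to the 18% and 30% figures quoted for a 40-year-old patient.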

Recently the FDA approved two medications to treat TD. Let this sink in: a serious neurological disease, which is primarily caused by a class of psychiatric medications, is being treated with another medication. Neurology Advisor interviewed two medical experts on the current state of treatment for TD. One of them said:

The main issue in the “treatment” of TD is prevention. It is important to use offending agents only when the potential benefit is higher than this risk. Once it appears, the first step is to determine whether symptoms are bothersome or disabling to warrant treatment. If possible, either discontinue the offending agent, although this might not improve symptoms permanently, and they will probably worsen initially, or change to a similar agent less likely to cause TD. In terms of prescription medications for TD, I have observed significant variability in treatment approaches depending on physician choice. Over the last few years, I personally found tetrabenazine to be the most efficient agent.

Tetrabenazine (Xenazine) was approved to treat the movement disorder associated with Huntington’s chorea, but has been used off-label to treat TD. It is now off patent and available as a generic. On patent, it sold for $152,000 per year; the generic version is $96,000 per year. However, two new drugs have been approved specifically to treat TD: Austedo (deutetrabenazine) and Ingrezza (valbenazine). Both are priced at around $60,000 per year. All three medications carry “black box” warnings from the FDA because of their risk for depression and suicide. You can read more about these medications here, here, here and here.

In addition to TD, antipsychotics have been linked to adverse cardiovascular events, brain shrinkage, dopamine supersensitivity, weight gain, diabetes, a shortened life span and more. The risk-benefit ratio may support short-term use of antipsychotics, but the long-term benefits of their use are questioned by many credible sources, and there have been follow-up studies that support that concern. See this blog post by Thomas Insel, the former director of the National Institute of Mental Health (NIMH).

Using medications with serious side effects to treat TD, itself a serious side effect of antipsychotics, seems like stepping onto a downward spiral. As the expert quoted above suggested, the best treatment for tardive dyskinesia is prevention: avoid antipsychotics if at all possible. For more information on concerns and adverse effects with antipsychotics, see: “Blind Spots with Antipsychotics” Part 1 and Part 2, “Antipsychotic Big Bang”, “Wolves in Sheep’s Clothing” and “Hollow Man Syndrome.”

09/26/17

Demolishing ADHD Diagnosis

© Lightsource | stockfresh.com

The Harvard psychologist, Jerome Kagan, sees ADHD as more of an invented condition than a serious illness. Further, he thinks it was invented for “money-making reasons” by the pharmaceutical industry and pro-ADHD researchers. He believes the drastic increase in the number of children diagnosed with ADHD has more to do with “fuzzy diagnostic practices” and relabeling. Fifty years ago, a 7-year-old child who was bored and disruptive in class was seen as “lazy.” Today he is seen as suffering from ADHD.

Every child who’s not doing well in school is sent to see a pediatrician, and the pediatrician says: “It’s ADHD; here’s Ritalin.” In fact, 90 percent of these 5.4 million kids don’t have an abnormal dopamine metabolism. The problem is, if a drug is available to doctors, they’ll make the corresponding diagnosis.

In his interview with Spiegel Online, Kagan went on to say that the inflated diagnosis of ADHD and other so-called childhood mental health disorders means more money for the pharmaceutical industry, psychiatrists and the people doing research. “We’re up against an enormously powerful alliance: pharmaceutical companies that are making billions, and a profession that is self-interested.” As he said, he’s not the only psychologist who is saying this.

Parenting expert and family psychologist John Rosemond agrees with Kagan. In 2009 he co-authored The Diseasing of America’s Children, in which he and his co-author argued that ADHD and other childhood behavior disorders “were inventions of the psychological-psychiatric-pharmaceutical industry.” They went further than Kagan in saying that ADHD does not exist; that it is a fiction. In his April 9, 2017 article, “ADHD Simply Does Not Exist,” Rosemond referred to Kagan’s declaration on ADHD, noting that he and other psychologists studied Kagan’s books and research papers on children and child development when they were in graduate school. In The Diseasing of America’s Children, Rosemond said:

Science depends on verifiable, objective evidence and experimental results that can be replicated by other scientists. Where ADHD is concerned, neither verifiable, objective evidence nor replicable experimental results exist to support the claims of the ADHD establishment.

Rosemond and his co-author, Bose Ravenel, believe that childhood behavior disorders like ADHD are manifestations of “dysfunctions of discipline and lifestyle” endemic to modern family culture. Once these problems are identified, they can be easily corrected. And once corrected, the errant behavior “usually recovers to a state of normalcy within a relatively short period of time.” They believe children do not need a psychologist when they misbehave, they need discipline—“firm, calm and loving discipline.”

In Debunking ADHD, educational psychologist Michael Corrigan said ADHD is a negative label that does not exist. “Not unlike the many wonderful stories about unicorns, fairies, and leprechauns, the diagnosis of ADHD is a brilliant work of fiction.” He noted that many of the common childhood behaviors (or supposed symptoms) associated with ADHD are also used to identify giftedness in children. When these behaviors are harnessed and focused, they can help children become “incredibly creative, insightful, and successful individuals in adulthood.” If children don’t learn to harness the power of the behaviors ADHD and giftedness have in common, “such behaviors when displayed might seem annoying and immature.” He said:

My biggest reason for writing this book is my desire to show you that the practice of medicating children for acting like children in the name of ADHD is, in two words, wrong and dangerous. Despite the grandiose claims of the mega-pharmaceutical companies selling ADHD drugs to concerned parents, prescribing pills to young children trying to learn how to become young adults is just a quick fix void of any long-term benefits.

Corrigan described eating lunch with a group of children who had just taken their ADHD medication at school. They were now supposedly “good to go” (sufficiently medicated) for an afternoon of learning. It was the longest lunch period he had ever experienced. “Comparing the kids at my table to others in the cafeteria, and slowly watching these playful, creative, energetic, and funny children go from kids being kids to near expressionless robot-like entities, made me sick to my stomach.”

The total number of children on ADHD medication “skyrocketed” from 1.5 million in 1995 to 3.5 million in 2011. “Sales of prescription stimulants quintupled from 2005 to 2015.” The rising rate of ADHD diagnosis has been described as “an unreal epidemic” and a “national disaster of dangerous proportions” by well-known professionals like Allen Frances and Keith Conners. Frances was the chair of the DSM-IV. Conners, now an emeritus professor of medical psychology at Duke University, “spent much of his career in legitimizing the diagnosis of ADHD.”

Allen Frances was one of four authors of an article in the International Journal of Qualitative Studies on Health and Well-Being, “ADHD: A Critical Update for Educational Professionals.” When the DSM-IV was published in 1994, the prevalence of ADHD was estimated to be 3%. Since then, parent-reported ADHD diagnosis increased to 7.8% in 2003; 9.5% in 2007; and to 11% in 2011. Nearly one in five high school boys had been diagnosed with ADHD and around 13.3% of 11-year-old boys were medicated for ADHD.

Teachers and other school personnel are often the first to suggest a child might be “ADHD.” Research suggested teachers felt insecure about dealing with behavioral problems and hesitated to accept responsibility for students with special needs. Frances and his coauthors described six scientifically grounded issues that educational professionals should be aware of when they are confronted with inattention and hyperactivity in the classroom.

First, relative age matters. Several studies have shown that “relative age is a significant determinant of ADHD diagnosis and treatment.” The youngest children in a classroom are twice as likely to be diagnosed with ADHD and receive medication. The authors suggest teachers take a child’s relative age into account when judging the child’s behavior. “Seeing ADHD as the cause of inattention and hyperactivity is in fact a logical fallacy as it is circular.”

Second, there is no single cause of ADHD. “There are no measurable biological markers or objective tests to establish the presence or absence of ADHD (or any other given DSM syndrome).” ADHD is a description of behavior and is based on “criteria that are sensitive to subjectivity and cognitive biases.” Multiple factors have been associated with ADHD, without necessarily implying causality. Those factors include: divorce, poverty, parenting styles, lone parenthood, sexual abuse, lack of sleep, artificial food additives, mobile phone use and growing up in areas with low solar intensity. “All these factors and more may play a role when a particular child exhibits impairing hyperactive and inattentive behaviours, and there is no conclusive cause of ADHD.”

Third, most children exhibiting “ADHD behavior” have normal-looking brains. Studies that do show small differences in terms of brain anatomy do not apply to all children diagnosed with ADHD. Individual differences refer to slower anatomical development. “They do not reveal any innate defect as is illustrated by the fact that many people with an unusual anatomy or physiology do not experience ADHD related problems.” Also, the test subjects in many brain-related studies are rigorously screened and don’t represent all individuals diagnosed with ADHD.

The samples do not comprise an accurate representation of their respective populations, meaning an average child with a diagnosis of ADHD and an average “normal” child. This problem is particularly urgent since the DSM 5 has lowered the age of onset criterion, as well as the impairment criterion compared to the previous version, the DSM-IV. Alongside the lowered threshold, the potential to generalize earlier research findings has lowered as well.

Fourth, the claims of ADHD being inherited may be overestimated. The claims vary widely and are subject to debate because of methodological issues in calculating the heritability coefficient in twin, familial and adoption studies. There is significant difficulty separating genetic influences from environmental ones, such as poverty, parenting styles and divorce, in these studies. “In genetic association studies that really analyse genetic material and that are more powerful when separating the influence of genetics from other etiologic sources, associated genes show only very small effects.” When combined, they explain less than 10% of the variance.

This means they occur only slightly more often in diagnosed individuals than in controls, and they do not explain nor predict ADHD behaviours. For educational professionals, this is important to consider as an ADHD label might give a false sense of security with regard to the alleged (genetic) cause of a child’s behaviour and the preferred cure (medication).

Fifth, medication does not benefit most children in the long run. Follow-up studies of the long-term effects of the MTA (Multimodal Treatment of Attention Deficit Hyperactivity Disorder) study showed a convergence of outcomes over time between medicated and non-medicated children. Other studies report either no long-term benefits or even worse outcomes. “While medication may help a small group of children in the long run, most will not benefit from long-term pharmaceutical treatment.”

The sixth and final issue that educational professionals should be aware of when confronted with inattention and hyperactivity in the classroom is the reality that a diagnosis can be harmful to children. A CDC MMWR Report indicated only 13.8% of diagnosed children had severe ADHD, with 86.2% having mild (46.7%) or moderate (39.5%) ADHD. The authors pointed out that a DSM diagnosis opened the door for additional reimbursement to the school for treatment and school services, perhaps promoting a search for pathology in relatively mild cases. “The question is whether in these mild cases the merits of a confirmed diagnosis—such as acknowledgement of problems and access to help—outweigh possible demerits.” Some known disadvantages of a diagnosis are: lower teacher and parent expectations that turn into self-fulfilling prophecies, prejudice and stigmatization of diagnosed children, a more passive attitude towards problems, and difficulties getting life and disability insurance later in life, among others.

The Allen Frances article linked above was the most accepting of ADHD as a legitimate “neuro-developmental disorder.” Yet it cautioned there was no single cause for ADHD, medications to “treat” ADHD did not have long-term benefits, and there was a problem with its over-diagnosis. Jerome Kagan thought 90% of children were wrongly diagnosed with ADHD because of “fuzzy diagnostic practices” and relabeling. Michael Corrigan and John Rosemond questioned the validity of ADHD as a neuro-developmental disorder. Corrigan said it pathologized normal childhood behavior, and that medicating these children was wrong and dangerous. It’s time to demolish the ADHD treatment empire.

Additional articles on ADHD can be found on this website here: “National ADHD Epidemic,” “Misleading Info on ADHD,” “Tip of the ADHD Iceberg,” and “Is ADHD Simply a Case of the Fidgets?” You can also read a longer paper: “ADHD: An Imbalance of Fire Over Water or a Case of the Fidgets?”

08/1/17

Repeating Past Mistakes

© kbuntu | 123rf.com

At 4:45 a.m. on September 1, 1939, 1.5 million German troops invaded Poland. Two days later Britain and France declared war on Germany, and World War II had begun. This “blitzkrieg” strategy became a blueprint for how Hitler intended to wage war. One of the generally unknown keys to the success of the German Wehrmacht was its use of a methamphetamine called Pervitin. The troops were on cloud nine about Pervitin, as were their commanders.

Reports from the front lines on the drug included the following glowing testimonies:

Everyone fresh and cheerful, excellent discipline. Slight euphoria and increased thirst for action. Mental encouragement, very stimulated. No accidents. Long-lasting effect. The feeling of hunger subsides. One particularly beneficial aspect is the appearance of a vigorous urge to work. The effect is so clear that it cannot be based on imagination.

Not surprisingly, addiction became a problem. In April and May of 1940 alone, the Nazis shipped 35 million units of Pervitin and similar medications to its troops. Troops at the front sent letters home begging for more Pervitin. “Everybody, from generals and their staffs to infantry captains and their troops, became dependent on methamphetamine.” A lieutenant colonel leading a Panzer division wrote the following in a report:

Pervitin was delivered officially before the start of the operation and distributed to the officers all the way down to the company commander for their own use and to be passed on to the troops below them with the clear instruction that it was to be used to keep them awake in the imminent operation. There was a clear order that the Panzer troop had to use Pervitin.

“Speed,” or amphetamine, is in ADHD medications like Adderall (amphetamine/dextroamphetamine) and Vyvanse (lisdexamfetamine). Methylphenidate (Concerta, Ritalin, Daytrana) is their close chemical relative. By the way, don’t be fooled by the creative spelling done by Shire for Vyvanse: “lisdexamfetamine” instead of “lisdexamphetamine.” Writing for The Guardian, Alexander Zaitchik noted how Shire’s phonetic sleight-of-hand with Vyvanse, along with its aggressive marketing, contributed to its success in getting the FDA to approve Vyvanse to treat “Binge Eating Disorder.”

The company’s neo-phoneticism is intended to put more distance between its new golden goose and the deep clinical literature on speed addiction, not to mention last century’s disastrous social experiment with widespread daily speed use, encouraged by doctors, to temper appetites and control anxiety.

What follows is a history of amphetamines gleaned primarily from two sources: a paper on Amphetamines from the Center for Substance Abuse Research (CESAR) of the University of Maryland and a 2008 article by Nicolas Rasmussen for the American Journal of Public Health, “America’s First Amphetamine Epidemic 1929-1971.”

Amphetamine was first synthesized by a German chemist in 1887, but its stimulant effects weren’t noticed until the early 1930s, when it was rediscovered by accident. The chemist was trying to make ephedrine, a decongestant and appetite suppressant. Branded as Benzedrine, amphetamine was marketed as an inhaler for nasal congestion by the pharmaceutical company, Smith, Kline & French starting in 1933. It didn’t take some people long to figure out how to use Benzedrine for its euphoric effect. They cracked the container open and swallowed the Benzedrine-coated paper strip or steeped it in coffee.

Its use grew rapidly as medical professionals recommended amphetamine for alcohol hangover, depression, narcolepsy, weight-loss, hyperactivity in children and morning sickness in pregnant women. “The use of amphetamine grew rapidly because it was inexpensive, readily available, had long lasting effects, and because medical professionals purported that amphetamine did not pose an addiction risk.” During World War II, amphetamines or methamphetamine (a derivative of amphetamine) were used by both Allied and Axis troops to increase their alertness and endurance, as well as to improve their mood.

By 1945, over 500,000 civilians were using amphetamine psychiatrically or for weight loss. Between 1945 and 1960, commercial competition drove amphetamine use higher. After a patent expired in 1949, the FDA estimated the production of amphetamine and methamphetamine rose almost 400% by 1952. By 1962, production of amphetamines was approaching 43 standard 10-mg doses per person. That compares to the roughly 65 doses per person per year in the present decade that social critics of our culture point to as evidence of the overuse of psychotropic medications.

The adverse effects of amphetamine were becoming more evident by 1960. Amphetamine psychosis had been known since the 1930s, but was initially attributed to the drug unmasking latent schizophrenia. This claim is eerily similar to current interpretations of antidepressant activation unmasking latent bipolar disorder, rather than being seen as an adverse side effect of antidepressant medication. There were also concerns that amphetamines were addictive. But this didn’t stop individuals like President John F. Kennedy from using regular injections of vitamins, hormones and 15 mg of methamphetamine to help maintain his image of youthful vigor.

Large quantities of amphetamines were dispensed in the 1960s directly by diet doctors and weight loss clinics. Calculations of amphetamine use and misuse in 1970 estimated that at least 9.7 million Americans had used the drugs in the past year. Of those 9.7 million users, 3.8 million did so for nonmedical reasons and 2.1 million abused the drugs. Rasmussen said this first amphetamine epidemic was iatrogenic, “created by the pharmaceutical industry and (mostly) well-meaning prescribers.” The current problem with the misuse of amphetamines has reached the peak of the original epidemic, namely about 3.8 million past-year nonmedical amphetamine users, an estimated 320,000 of whom are addicted.

Parallel to this trend has been the surge in the legal supply of amphetamine-type ADHD medications such as Ritalin, Adderall and Vyvanse. American doctors, unlike those in other countries, have found it hard to resist prescribing these drugs. According to DEA production data, medical consumption of these drugs has quintupled since 1995. By 2005 it had reached 2.6 billion 10-mg amphetamine base units, exceeding the 2.5 billion base units consumed medically in 1969. The following graph, taken from Rasmussen’s article, illustrates this increase. The data is based upon DEA production quotas and expressed as common dosage units of 10-mg amphetamine and 30-mg methylphenidate.

Rasmussen downplayed a causal connection between childhood stimulant treatment for ADHD and later nonmedical amphetamine consumption, but others don’t (see more on this below). However, he did think the wide distribution of ADHD stimulants, noted in the above graph, created a hazard. He cited data from a study indicating that 600,000 people had used stimulants other than methamphetamine nonmedically in the past month. So, “legally manufactured attention deficit medications like Adderall and Ritalin appear to be supplying frequent, and not just casual, misusers.”

An analysis of stimulant abuse in recent national household drug surveys found that half of the 3.2 million reporting past-year nonmedical use of stimulants in the U.S. only used psychiatric stimulants. And 750,000 of those reported they had never used anything but attention deficit pharmaceuticals in their entire lives. “On this evidence alone, one can fairly describe the high production and prescription rates of these medications as a public health menace of great significance.”

Another problem is the widespread acceptance of prescription amphetamines as a legal and relatively harmless drug that can be given to small children. Rasmussen said it is difficult to make a convincing case that the same drug is harmful when used nonmedically. He therefore concluded that any attempt to deal harshly with methamphetamine users today in the name of epidemic control, without touching medical stimulant production and prescription, was practically impossible and hypocritical.

There is some evidence of a connection between childhood stimulant treatment and later abuse or use of stimulants. See “ADHD: An Imbalance of Fire Over Water or a Case of the Fidgets?” on this website for a discussion of the association of addiction and ADHD medications as well as other adverse effects.

Nadine Lambert did a longitudinal study of ADHD children and normal controls. Her participants were followed through their childhood and adolescence and then evaluated three times as young adults. “ADHD was also significantly associated with amphetamine dependence.” However, being diagnosed with ADHD did not increase the odds of lifetime use of stimulants. She found that treatment with stimulants increased the odds of lifetime use of amphetamine and cocaine/amphetamines.

Commenting on Lambert’s findings in Brain Disabling Treatments in Psychiatry, Peter Breggin said:

It is not ADHD but the treatment for ADHD that puts children at risk for future drug abuse. This conclusion is entirely consistent with the fact that animals and humans cross addict to Ritalin, amphetamine and cocaine and that exposure to Ritalin in young animals causes permanent changes in the brain.

Hitler and his generals wanted victory at any cost, and Pervitin (methamphetamine) was part of that solution. German pilots called it “pilot’s chocolate”; soldiers on the front called it “Panzerschokolade,” or “tank chocolate.” But towards the end of WW II, Vice Admiral Hellmuth asked German pharmacologists to develop a miracle drug. They had a wonder drug with Pervitin, but now they needed a miracle drug. So Gerhard Orzechowski synthesized D-IX. It was supposed to keep soldiers ready for battle even when they were asked “to continue beyond what was considered normal.” It contained 5 mg of cocaine, 3 mg of Pervitin and 5 mg of morphine. It seems it was a good thing the war ended before they could distribute it widely to their troops.

We have a lesson to learn from the German Wehrmacht’s failure to make a better, smarter, stronger soldier through chemicals. The American war on drugs needs to recognize its greatest casualties are now coming from within—as with ADHD medications. And I think we need to reflect on the words of George Santayana in The Life of Reason: “Those who cannot remember the past are condemned to repeat it.”

06/30/17

Rooting for the Underdog

© konstantynov | 123rf.com

In July of 2010, a 57-year-old partner at the law firm Reed Smith jumped in front of a subway train in Chicago and died. His widow, Wendy Dolin, sued the pharmaceutical companies GlaxoSmithKline (which originally manufactured Paxil) and Mylan (which manufactured paroxetine, the generic version that Stewart Dolin took), charging that the drug caused akathisia, which led to his suicide. GSK attorneys dismissed as “junk science” the testimony of the plaintiff’s expert witness, who argued for a link between the drug and suicide. The jury seems to have disagreed: on April 20, 2017, it awarded the plaintiff a $3 million verdict.

Mylan was released from the suit by the trial judge, who ruled it had no control over the drug’s label. GSK continues to maintain it wasn’t responsible since it hadn’t manufactured the drug taken by Dolin. A Chicago Tribune article quoted Wendy Dolin as saying the ruling was “a great day for consumers.” The trial was not just about the money for her; it was about raising awareness of a health issue. But this isn’t the end. “Officials from the pharmaceutical company said the verdict was disappointing and that they plan to appeal.”

Writing for Mad in America, attorney and activist Jim Gottstein noted the legal significance of the case, as it established GSK did not inform the FDA or doctors that Paxil could cause people to commit suicide—a conclusion GSK continues to deny. A second legal hurdle overcome by the ruling is a Catch-22 dilemma, since SSRIs like Paxil are now usually prescribed as generics: “The generic drug manufacturer [Mylan] isn’t liable because it was prohibited from giving any additional information and the original manufacturer [GSK] isn’t liable because it didn’t sell the drug.” You can read Jim Gottstein’s article for an explanation of how these legal hurdles were overcome.

Bob Fiddaman interviewed Wendy Dolin after the verdict and she described some disturbing tactics used by GSK attorneys. She said depositions that should have been a few hours long became eight hours, “in an attempt to wear people down.” She said GSK asked the same question over and over again, hoping to confuse or manipulate people. She alleged they also called her friends, trying to get them to say something negative about her relationship with her deceased husband.

As a therapist, as a mother and a compassionate human being, I am aware there was no purpose to have done such. I have talked to therapists, physicians and pharmaceutical lawyers and all agree there was nothing gained by this other than to show me that GSK would stop at nothing to intimidate me.

During the trial it came to light that 22 individuals had died in Paxil clinical trials, 20 by suicide; two other deaths were suspected to be suicide. “All 22 victims were taking Paxil at the time, and 80% of these patients were over the age of 30.” GSK tried to argue their “illness” caused their deaths and not Paxil. Wendy Dolin said the lawsuit showed that “akathisia is a real, legitimate adverse drug reaction.” The public needs to be aware of its signs and symptoms.

Wendy said she knew even before they went to trial, that GSK would appeal the ruling if they lost. She thought there was a GSK lawyer in the courtroom during the trial gathering information for the appeal process. She said it had been suggested this case could go all the way to the Supreme Court, because GSK is afraid of the legal ramifications of a guilty verdict. The process could take 5-7 years. She said: “Clearly this case has never been about money. For me, it has always been about awareness, highlighting akathisia and ultimately changing the black box warning to include all ages.”

Writing for STAT News, Ed Silverman suggested the new head of the FDA, Scott Gottlieb, should require a stronger warning label for Paxil. “For the past decade, Paxil’s label has not carried any information indicating the drug poses a statistically significant risk of suicidal behavior for anyone over 25.” Yet there is scientific evidence of such a risk. See Table 16 in the “Exhibit 40” document linked in his article (I assume it’s from the Dolin trial). Silverman said: “For public health reasons, the FDA should pursue a warning.” A former FDA commissioner was quoted as saying it was hard for him to understand why the warning of increased suicidal risk was not in the label.

But suicidality is not just a risk with Paxil (paroxetine). A meta-analysis by Peter Gotzsche of the Cochrane Collaboration concluded that antidepressants doubled the risk of suicidality and aggression in children and adolescents. Gotzsche and his team of researchers reviewed the clinical study reports for duloxetine (Cymbalta), fluoxetine (Prozac and Sarafem), paroxetine (Paxil), sertraline (Zoloft), and venlafaxine (Effexor). Harms could not be estimated accurately because the quality of the clinical study reports varied drastically, limiting the team’s ability to detect them. The true risk for serious harms was uncertain, they said, as the low incidence of these events and the poor design and reporting of the trials made it difficult to get accurate estimates.

A main limitation of our review was that the quality of the clinical study reports differed vastly and ranged from summary reports to full reports with appendices, which limited our ability to detect the harms. Our study also showed that the standard risk of bias assessment tool was insufficient when harms from antidepressants were being assessed in clinical study reports. Most of the trials excluded patients with suicidal risk and so our numbers of suicidality might be underestimates compared with what we would expect in clinical practice.

In April of 2016, the CDC released data indicating the suicide rate in the U.S. increased by 24% from 1999 to 2014. Overall, the age-adjusted suicide rate increased from 10.5 per 100,000 in 1999 to 13.0 per 100,000 in 2014. The rates increased for both males and females and for all ages from 10 to 74. The age-adjusted rate for males (20.7 per 100,000 population) was over three times that for females (5.8 per 100,000). Males preferred firearms as a method (55.4%), while poisoning was the most frequent method for females (34.1%). However, this was a lower percentage for both sexes than in 1999. See the following figure from the CDC Report noting suicide deaths by method and sex for 1999 and 2014.
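
The CDC’s headline figure is simply the relative change in the age-adjusted rate. The quick check below is my own arithmetic on the two rates quoted above, not part of the CDC report.

```python
# Quick check of the reported 24% increase, using the two age-adjusted rates.
rate_1999 = 10.5  # suicides per 100,000, age-adjusted
rate_2014 = 13.0
percent_increase = (rate_2014 - rate_1999) / rate_1999 * 100
print(f"{percent_increase:.1f}% increase")  # prints 23.8%, which rounds to the reported 24%
```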

This reverses a trend from 1985 to 2000, when the U.S. suicide rate was dropping. See the following chart taken from an NPR report on the same data. The president-elect of the American Psychiatric Association (APA), Maria Oquendo, said she thought the late-1980s drop was probably due to the fact that new antidepressants (SSRIs) were more effective and had fewer side effects.

Justin Karter noted how Oquendo and Christine Moutier (of the American Foundation for Suicide Prevention) both saw the addition of black box warnings about the potential for suicide in teenagers and young adults as contributing to the rise in suicide rates. Moutier was more direct, stating the progress in depression treatment in the 80s and the 90s “was undone in recent years because of concerns that antidepressants could increase suicide risk.” Oquendo thought the increase of suicide deaths in younger populations was potentially due to the understandable reluctance of physicians to prescribe antidepressants to these individuals, “even when they’re aware the individual is suffering from depression.” She added that research showed the benefits outweigh the risks of prescribing antidepressants to children and adolescents.

But Karter indicated this suggestion, that the warning labels led to a decreased number of antidepressant prescriptions for teenagers and young adults, was inaccurate. Although several media outlets reported the increase in the suicide rate, they didn’t report the corresponding increase in the number of Americans taking antidepressants, a rate that has nearly doubled.

A report published in the British Medical Journal in June of 2014 indicated the black box warnings on SSRIs had a paradoxical effect, with an increase in suicide attempts among youths. Mad in America cited 12 critics of the study and noted its multiple flaws. Its unwarranted conclusion, which could encourage increased prescribing of antidepressants to teenagers and young adults, had the potential to do considerable harm. Mad in America concluded that it should never have been published. Among the problems with the study were the following:

The researchers’ stated conclusion, which was that a decrease in antidepressant prescribing in youth following the black box warning led to an increase in suicide attempts, isn’t supported by their own data:

(1) There was not a significant decrease in SSRI prescriptions to teenagers and young adults following the black box warning.

(2) Psychotropic drug poisonings are not a good proxy for suicide attempts.

(3) The coding category used actually captures poisonings due to the use of psychiatric drugs, as opposed to their non-use.

(4) Finally, there was no significant increase in the number of poisonings.

Additionally, Kantor et al., in “Trends in Prescription Drug Use Among Adults in the US,” reported that data from the National Health and Nutrition Examination Survey (NHANES) indicated the use of antidepressants increased from 6.8% to 13% between 1999 and 2012. Yet, as Justin Karter reported, “The American Psychiatric Association guidelines continue to suggest medications as the preferred treatment for moderate to severe depression.”

If you’re still not convinced, take some time to read through a series of scientific articles submitted by Peter Breggin in his affidavit for another Paxil-related suicide trial. The topics covered included exhibits of Paxil causing suicidal behavior as well as SSRIs and SSRI withdrawal causing suicidality. There is another section on Dr. Breggin’s website that is an “Antidepressant Drug Resource & Information Center” with even more relevant articles.

Given the above discussion on antidepressants, the recent court ruling in Illinois awarding $3 million to Wendy Dolin has the potential to lead to an unknown number of future lawsuits, if it is upheld on appeal. This could end up costing the pharmaceutical companies that brought now off-patent SSRIs and SNRIs to market untold millions, and possibly billions, of dollars in further awards. So you can bet that GlaxoSmithKline has plenty of pharma companies (and their legal representatives) rooting for GSK to overturn the ruling in the Dolin case. Me, I’m rooting for the underdog here: the 13% of Americans who are taking antidepressant medications without clearly knowing the potential those drugs have to make their depression and its consequences worse.

01/13/17

Iatrogenic Gun Violence

© StephanieFrey | stockfresh.com

Whenever I read about horrific violence like the incident in the Fort Lauderdale airport, I wonder what role psychiatric medications played. I wonder if the violent behavior was iatrogenic—was it caused by psychiatric medications? This question will sound counterintuitive to many people. Surely the reverse is true: psychiatric medication and proper diagnosis should have prevented it. Let’s see whether that holds up.

Esteban Santiago was deployed to Iraq from April 2010 to February 2011 with the 130th Engineer Battalion, the 1013th Engineer Company of the Puerto Rico National Guard. After flying from Alaska to Fort Lauderdale, Florida, he retrieved his baggage, which contained a semi-automatic handgun. Santiago had followed proper protocol, checking the weapon with TSA. He went into the men’s bathroom, loaded his weapon and opened fire in Terminal 2 of the airport, killing five people and wounding six others. A witness said he was just randomly shooting people, with no rhyme or reason to it.

Family members reported that he was a changed man when he returned from Iraq. His aunt said his mind was not right. At times he seemed normal, but other times he seemed lost. In Iraq, his unit cleared roads of improvised explosive devices and maintained bridges. Two people in his unit died while he was in Iraq. His aunt said: “He talked about all the destruction and the killing of children. He had visions all the time.” He had changed.

His brother Bryan confirmed that recently Esteban was hallucinating, but said he was receiving psychological treatment. Bryan said he believes the shooting rampage resulted from mental issues that surfaced after the Iraq tour. When Esteban’s tour ended, he was hospitalized for mental problems. Upon his release, he went to Puerto Rico where his father was ill and eventually died. While in Puerto Rico, he received mental health therapy. Esteban eventually moved to Alaska, where he joined the Alaska National Guard in November 2014. He was discharged in August of 2016.

Over the course of 2016, Santiago was repeatedly reported to Anchorage police for physical disturbances. In January of 2016 he was arrested and charged with assault and criminal mischief after an argument with his girlfriend. He allegedly yelled at her while she was in the bathroom and broke down the bathroom door. She told investigators that he tried to strangle her and struck her on the side of the head.

Santiago pleaded no contest to criminal mischief and assault charges. Under a deferred prosecution agreement, his charges would have been dismissed if he complied with the conditions. He was due back in court on March 28th, 2017 to assess his progress.

While living in Alaska, Esteban continued to receive psychological treatment, according to his brother. Although his girlfriend alerted the family to the situation in Alaska, Bryan said he did not know what mental health problem Esteban was being treated for; they never spoke about it by phone.

His son was born in September of 2016. In November of 2016, Esteban walked into an FBI office in Anchorage to report that his mind was being controlled by a U.S. intelligence agency. He told officials he had a firearm in his car, along with his newborn son. Santiago was checked into a mental health facility; his firearm was logged as evidence for safekeeping. The infant’s mother came for their child. FBI special agent Marlin Ritzman said:

During the interview, Mr. Santiago appeared agitated, incoherent and made disjointed statements. Although he stated he did not wish to harm anyone, as a result of his erratic behavior our agents contacted local authorities, who took custody of Mr. Santiago and transported him to the local medical facility for evaluation.

After conducting database reviews, interagency checks and interviews with his family members, the FBI closed its assessment of Santiago. Agents found no ties to terrorism during their investigation. A CNN senior law enforcement analyst and former FBI assistant director said Santiago hadn’t been adjudicated a felon and he hadn’t been adjudicated as mentally ill. So they couldn’t keep his weapon. The Walther 9-millimeter pistol was returned to him in the beginning of December. Authorities told CNN it was the pistol he used in the shooting incident in Fort Lauderdale.

Esteban was typically described as a calm young man who was never violent. Recently he had begun selling his possessions, including his car, and friends and associates noticed more erratic behavior. He bought a one-way ticket to Fort Lauderdale and packed his pistol and two magazines. His checked bag with the pistol was his only luggage. He flew from Anchorage to Minneapolis to Fort Lauderdale. He retrieved his bag from the baggage claim area and went into a men’s room stall to load his pistol.

He shot the first people he saw, going up and down the carousels of the baggage claim, shooting through luggage to get at people who were hiding. He thinks he fired 15 bullets, aiming at his victims’ heads. A witness said Esteban showed no remorse. He didn’t say anything. “No emotion, no nothing. About as straight-faced as you get.” Afterwards, he just lay face down, spread-eagled, waiting for the deputies to come and get him.

The above report was pieced together from information contained in the following reports by The New York Times here, NJ.com here, CNN here, and NPR here.

There was no explicit mention of Santiago’s repeated involvement in “psychological treatment” involving psychiatric medications, but it is highly probable he was taking psychiatric medication of some sort. The lack of any mention of his being prescribed medication may simply be due to confidentiality regulations. Or this silence could be due to the chicken-and-egg argument often applied to incidents involving violence and individuals with known psychiatric problems: their mental illness, not the drugs used to treat it, caused their horrific behavior.

Several psychiatrists have voiced concerns with psychiatry, its overreliance on medication, and its denial of serious adverse effects from medication, like violence and suicide. Joanna Moncrieff said she is sad her profession has not taken the harms drug treatments can do more seriously. She said it has a long history of ignoring the adverse effects of drugs, or attributing them to the underlying disease—of blaming the patient instead of the drug. “Too many psychiatrists have just accepted that drug treatments are good, and have not wanted to contemplate that actually these treatments could be harmful.”

First and foremost, she said, psychiatry needs to adopt a drug-centered model for understanding its drug treatments and what they do to people. Psychiatrists need more information, knowledge and training on what the drugs do—what effects they produce in people; “how they change the way that people think and feel and what sort of impact those changes have on people’s lives.” Watch two brief videos of her expressing her concerns here. You can read more about her “drug-centered model” here on this website: “A Drug is a Drug is a Drug.”

Peter Breggin has raised concerns with the association of violence and antidepressants since the early days of Prozac. In his 1991 book, Toxic Psychiatry, Dr. Breggin related newspaper and scientific reports pointing to an association between Prozac and “compulsive, self-destructive and murderous activities.” He said then he was personally familiar with several cases of compulsive suicidal or violent feelings that developed after taking Prozac. Over the years, his familiarity grew.

In “Psychiatry Has No Answer to Gun Massacres,” Breggin described how the Columbine High School shooter, Eric Harris had a “therapeutic” level of Luvox (fluvoxamine) in his body at the time of the murders.  He had a dose increase in his medication 2 ½ months before the assault and showed signs of drug toxicity five weeks before the event. James Holmes, the Aurora Colorado theater shooter, was in psychiatric treatment with the medical director of student health services, who was considered an expert on campus violence. She was concerned enough about Holmes to report him to the campus police and the campus threat assessment team a few weeks before the assault. When the assessment team suggested putting him on a 72-hour involuntary hold, she rejected the idea. “When Holmes quit school, the school washed its hands of all responsibility for him.”

In a 2010 journal article, “Antidepressant-Induced Suicide, Violence, and Mania: Risks for Military Personnel,” Dr. Breggin related how the adverse effects described in the 2009 edition of the Physicians’ Desk Reference for Zoloft (sertraline) resembled the most frequent psychiatric disorder associated with combat—PTSD—with its hyperalert, overstimulated symptoms. He said identical or nearly identical warnings can be found in all antidepressant labels. “All these potentially dangerous symptoms are also commonly seen in PTSD in military personnel, posing the risk of worsening this common military disorder.”

Looking at the revised 2016 medication guide for Zoloft, we see that nothing much has changed with regard to adverse effect warnings. It said Zoloft and other antidepressant medications could increase suicidal thoughts or actions. Symptoms needing immediate attention included: acting aggressive or violent; feeling agitated, restless, angry or irritable; an increase in activity or talking more than what is normal; acting on dangerous impulses; trouble sleeping; new or worse anxiety or panic attacks; and other unusual changes in behavior or mood.

A condition known as “serotonin syndrome” has symptoms such as agitation, hallucinations, coma and other changes in mental status. Symptoms of potential manic episodes included: greatly increased energy, racing thoughts, unusually grand ideas, severe trouble sleeping, reckless behavior, excessive happiness, and talking more or faster.

Dr. Breggin concluded his article with the following cautions and recommendations. He said there was a strong possibility the increased suicide rates among active-duty soldiers were in part caused or made worse by the widespread prescription of antidepressant medication. Alone, they can cause a stimulant-like series of adverse effects. “These symptoms of activation can combine adversely with similar PTSD symptoms found so commonly in soldiers during and after combat.” He recommended the military study the relationship between psychiatric drug treatment and suicide as well as random or personal violence. He also suggested that antidepressants should be avoided in the treatment of military personnel.

Another emerging concern about an association between antidepressants and violence comes from the research of Yolande Lucire. She suggested that mutations in CYP450-encoding genes contributed to problems metabolizing psychiatric drugs, and thus were contributing factors in three cases of antidepressant-induced, akathisia-related homicide. The cytochrome P450 family of enzymes is responsible for metabolizing most of the drugs used in psychiatry. You can read her article here. You can also find another article, “Psych Drugs and Violence,” on this website. Within that article you will find a link to another article by Lucire on antidepressant-induced akathisia-related homicide and the CYP450 genes.

Hasn’t there been enough evidence associating suicide and violence with psychiatric medications, especially antidepressants, for open dialogue and more comprehensive scientific research into this public health issue? How many more Columbines, Auroras and Fort Lauderdales need to happen before we begin to address the association of psychiatric drugs and violence?