09/09/15

The Quest for Psychiatric Dragons, Part 1

© dvarg | 123rf.com

In her book Opening Skinner’s Box, author Lauren Slater related a conversation she had with Robert Spitzer, one of the most important psychiatrists of the twentieth century. She told him of the personal struggles of David Rosenhan, an individual to whom Spitzer was historically linked. Slater told Spitzer that Rosenhan’s wife had died of cancer, his daughter had died in a car crash, and he was paralyzed from a disease that doctors couldn’t diagnose. She reported that Spitzer’s response was: “That’s what you get for conducting such an inquiry.”

There are questions about the truth of what Slater reported here. Spitzer himself said he didn’t remember saying it, and that if he did, he meant it jokingly. However, Slater’s observation that “Rosenhan’s study is still hated in the field [of psychiatry] after forty years” is very true. In his recently published book Shrinks, Jeffrey Lieberman, a former president of the American Psychiatric Association, described Rosenhan at the time of his infamous study as “a little-known Stanford-trained lawyer who had recently obtained a psychology degree but lacked any clinical experience.” He thought the 1973 Rosenhan study had fueled an “activist movement that sought to eliminate psychiatry entirely.” See “A Censored Story of Psychiatry” Parts 1 and 2 for more on Rosenhan’s study and Lieberman’s portrayal of it.

In 1973, the American Psychiatric Association (APA) was in crisis. Gay activists had protested at the annual APA meetings between 1970 and 1972, seeking to have the APA remove homosexuality as a mental disorder from the DSM. Robert Spitzer, the architect of the diagnostic revolution that was later codified in the DSM-III, related in an interview that he was at a symposium on the treatment of homosexuality in 1972 that was disrupted by a group of gay activists. He recalled that, in effect, the activists were saying they wanted the meeting to stop because “You’re pathologizing us!” The media attention given to these protests created a very public embarrassment for psychiatry. Kirk et al., in Mad Science, commented:

An entire group of people labeled as mentally ill by the American Psychiatric Association was disputing its psychiatric diagnosis. At the core of their challenge was a simple, easy-to-understand question: why was homosexuality a mental illness?

Spitzer approached one of the protesters after the symposium was cancelled, and their conversation led to a meeting between some of the activists and an APA committee Spitzer was a member of, the APA Task Force on Nomenclature and Statistics. Spitzer recalled that the gist of the meeting was “the idea that the only way gays could overcome civil rights discrimination was if psychiatry would acknowledge that homosexuality was not a mental illness.” After the meeting with the Nomenclature and Statistics Task Force, Spitzer proposed that the APA organize a symposium at its annual meeting in May of 1973. He continued to be active on this issue within the APA, and was responsible for the position statement he formulated on June 7, 1973, which the APA Board of Trustees approved in December 1973, removing homosexuality as a diagnosis from the DSM.

Concurrent with this issue was the fallout from the publication of David Rosenhan’s article in the January 1973 issue of Science, “Being Sane in Insane Places.” Kirk et al. noted that the study was intriguing, easy to understand and had striking results. So it received a lot of media attention. The study reinforced the view that psychiatric judgments were inadequate, and even laughable. “Once again, the target of the joke was the scientific pretence of psychiatric diagnosis: Psychiatrists could not distinguish the sane from the insane.”

Jeffrey Lieberman said that an emergency meeting of the Board of Trustees was called in February of 1973 “to consider how to address the crisis and counter the rampant criticism.” Lieberman related how the APA Board of Trustees realized the best way to deflect the “tidal wave of reproof” was to make a fundamental change in how mental illness was conceptualized and diagnosed. They agreed that the most compelling means would be to transform the DSM. By the end of the emergency meeting, the trustees had authorized the creation of the third edition of the DSM.

Lieberman said Robert Spitzer wanted to be in charge of the revision process as soon as he heard it had been approved. Spitzer recalled, “I spoke to the medical director at the APA and told him I would love to head this thing.” In part because of the way he handled the quandary over homosexuality, Spitzer was appointed to chair the DSM-III Task Force in 1974. But he had already positioned himself as an expert on psychiatric diagnosis.

I think it is fair to say that Spitzer had been aiming toward this appointment for almost seven years. His association with the DSM began in 1966, when he agreed to take notes for the DSM-II committee. Then Spitzer et al. introduced the use of the kappa statistic into the literature on psychiatric diagnosis in their 1967 study, “Quantification of Agreement in Psychiatric Diagnosis.” In The Selling of DSM, Stuart Kirk and Herb Kutchins commented that the introduction of kappa appeared to provide a way to unify the comparison of reliability studies while at the same time eliminating the statistical problem of chance agreement. Joseph Fleiss, who would later co-author the seminal 1974 reliability study with Spitzer, was one of the authors of the 1967 paper.
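To make concrete what correcting for chance agreement means, here is a minimal sketch of Cohen’s kappa in Python. The raters, diagnoses and numbers below are hypothetical illustrations, not data from the 1967 study; the point is only that kappa rescales raw agreement against what two raters would be expected to agree on by chance.

```python
# Minimal sketch of Cohen's kappa, the chance-corrected agreement statistic.
# The diagnostic labels and ratings below are invented for illustration.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed agreement: proportion of cases where the two raters concur.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected concordance if each rater assigned diagnoses
    # independently at their own marginal rates.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(rater_a) | set(rater_b))
    # Kappa: 0 means no better than chance, 1 means perfect agreement.
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical clinicians diagnosing the same ten patients.
rater_a = ["schizophrenia", "depression", "depression", "anxiety", "depression",
           "schizophrenia", "anxiety", "depression", "anxiety", "depression"]
rater_b = ["schizophrenia", "depression", "anxiety", "anxiety", "depression",
           "depression", "anxiety", "depression", "anxiety", "anxiety"]
print(round(cohens_kappa(rater_a, rater_b), 2))  # raw agreement is 0.70; kappa is about 0.52
```

In this toy example the raters agree on 70 percent of the patients, but because much of that agreement would be expected by chance from their diagnostic habits alone, kappa comes out near 0.52. That gap is the chance-agreement problem the 1967 paper sought to address.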

Before the Rosenhan study in 1973, Spitzer and others had already published several articles related to revising psychiatric diagnosis in the Archives of General Psychiatry: “Immediately Available Record of Mental Status Exam” (July 1965); “Mental Status Schedule” (April 1967); “Quantification of Agreement in Psychiatric Diagnosis: A New Approach” (July 1967); “DIAGNO: A Computer Program for Psychiatric Diagnosis Utilizing the Differential Diagnostic Procedure” (June 1968); “The Psychiatric Status Schedule” (July 1970); and “Quantification of Agreement in Multiple Psychiatric Diagnosis” (February 1972). And these were just those published in the Archives.

In 1971 Spitzer was introduced to a group of psychiatric researchers from Washington University in St. Louis. They were working to develop diagnostic criteria for specific mental disorders. Spitzer was in heaven. Lieberman reported Spitzer said: “It was like I had finally awoken from a spell. Finally, a rational way to approach diagnosis other than the nebulous psychoanalytical definitions in the DSM-II.” According to Whitaker and Cosgrove in Psychiatry Under the Influence, more than half of the members Spitzer appointed to the DSM-III Task Force had an existing or past affiliation with Washington University.

Feighner et al., the group of researchers at Washington University in St. Louis, published “Diagnostic Criteria for Use in Psychiatric Research” in 1972. They proposed specific diagnostic criteria for 14 psychiatric disorders, along with the validating evidence for those criteria. Kirk and Kutchins said their work became known as the Feighner criteria, after its senior author. This study became a classic in the psychiatric literature, and has been cited over 4,000 times since its publication.

In 1978, Spitzer and others would use the Feighner criteria to produce the “Research Diagnostic Criteria” (RDC), another significant step in the formation of the DSM-III. Kirk and Kutchins said: “These two articles … and the work on which they were based are among the most influential developments in psychiatry” since the late 1960s. An important fact about both the Feighner criteria and Spitzer’s RDC was that they were initially developed only for use in research. “Neither article proposed that the elaborate diagnostic systems be adopted by clinical psychiatrists.” That came later. But you can see the path Spitzer had been walking since 1967. He wanted to radically change psychiatric diagnosis and had been methodically moving in that direction. And then the Rosenhan study, “Being Sane in Insane Places,” was published in the journal Science.

01/12/15

Can Addicts Stop Using Without Help?

Image by kikkerdirk

Maia Szalavitz wrote on Substance.com that she stopped shooting coke and heroin when she was 23. “I quit at around the age when, according to large epidemiological studies, most people who have diagnosable addiction problems do so —without treatment.” Although she personally got treatment help, her article was about people who stop without treatment or assistance from self-help or 12-Step programs. It was provocatively titled: “Most People with Addiction Simply Grow Out of It: Why Is This Widely Denied?” She’s currently finishing her sixth book, Unbroken Brain, “which examines why seeing addiction as a developmental or learning disorder can help us better understand, prevent and treat it.”

Szalavitz referenced an epidemiological study suggesting that a significant proportion of individuals achieve remission from addiction at some point in their lifetime. This study by Lopez-Quintero et al. found that “half of the cases of nicotine, alcohol, cannabis and cocaine dependence remitted approximately 26, 14, 6 and 5 years after dependence onset, respectively.” An article by Gene H. Heyman reviewed four studies, including the Lopez-Quintero one, and suggested that “most addicts were no longer using drugs at clinically significant (emphasis added) levels by the age of 30.” According to Heyman:

The idea that addiction is a disease characterized by compulsive (involuntary) drug use goes hand in hand with the belief that addicts require lifelong treatment and that treatment is necessary for recovery. However, the epidemiological results indicate that most addicts do not take advantage of treatment; nevertheless, most quit. The logical inference is that remission from drug dependence does not require treatment.

Heyman’s and Szalavitz’s interpretation of the research studies they cited has far-reaching implications, particularly for the addiction treatment industry. So I want to take a look at the epidemiological studies that led them to conclude that most addicts quit drug or alcohol use (or enter remission) on their own. Heyman’s review article looked at four national epidemiological surveys of the prevalence of psychiatric disorders. Szalavitz seems to cite these same four studies or other articles by Heyman. So my discussion will engage with Heyman’s article, “Quitting Drugs: Quantitative and Qualitative Features.”

Heyman presented data from four large national epidemiological studies that reported high remission rates of diagnosed substance-related disorders. The studies and their remission rates were as follows: 76% for the NCS, the National Comorbidity Survey; 83% for the NCS-R, the National Comorbidity Survey Replication; and 81% for the NESARC, the National Epidemiologic Survey on Alcohol and Related Conditions. Another study, the Epidemiologic Catchment Area (ECA) survey, reported a lower remission rate of 57%, but had combined the criteria for substance abuse and substance dependence into one category. He concluded: “The results do not support the often heard claim that addiction is a chronic, relapsing disease.”

Now I also have problems with defining addiction in pure medical/disease model terms and would be happy to see a more socially and cognitively nuanced definition of addiction become mainstream. But those self-generated remission rates seemed awfully high. How was this remission quantified?

First, let’s look at a critique of epidemiological miscounts by Allen Frances. Frances was the chair appointed by the American Psychiatric Association for the fourth edition of the DSM, the Diagnostic and Statistical Manual of Mental Disorders that the epidemiological researchers used to quantify their definition of “remission.” He initially pointed to an article by Regier et al., “Limitations of Diagnostic Criteria and Assessment Instruments for Mental Disorders,” published in the Archives of General Psychiatry in 1998. The abstract of the Regier et al. article raised concerns with “significant differences in mental disorder rates from 2 large community surveys”—the ECA and the NCS, two of the studies cited and discussed by Heyman.

Frances also presented his critique of epidemiological studies that use DSM diagnoses in Saving Normal. There he pointed to the “inherent limitations” of defining clinical cases in epidemiological studies. Such studies use lay interviewers who make “diagnoses” by symptom counts, with “no consideration of whether the symptoms are severe or enduring enough to warrant diagnosis or treatment.” As a consequence, the judgment of a clinician is missing. “This results in rates that are always greatly inflated.” Symptoms “that are mild, transient and lacking in clinical significance” are mistakenly counted as symptoms of a psychiatric disorder.

They should never be taken at face value as a true reflection of the real extent of illness in the community. Unfortunately, the exaggerated rates are always reported without proper caveat and are accepted as if they are an accurate reflection of the real prevalence of psychiatric disorder. (Saving Normal, p. 86)

Another problem with these studies was how they defined “remission.” Remission simply meant not reporting the required number of symptoms to meet the diagnosis over the previous year; it had a broader meaning than just “quitting” or abstinence.

The diagnostic criteria for substance abuse and dependence found in the DSM-IV were used by all the studies reported in Heyman. The ECA study, as noted above, included individuals who were “substance abusers” as well as those who were “substance dependent.” The other studies only looked at those who were “substance dependent.” Remission for the ECA study was defined as no reported symptoms, while in the others it was defined as two or fewer. This reflected the separate criteria needed for each diagnosis—only one from the list for substance abuse, but three for substance dependence.
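As a rough illustration of how “diagnosis” and “remission” reduce to symptom arithmetic under these rules, here is a minimal sketch in Python. The thresholds follow the summary above (one criterion for abuse, three for dependence; remission as zero past-year symptoms under the ECA’s rules, two or fewer under the others); the single combined symptom count and the function names are simplifying assumptions for illustration, not code or logic taken from any of the surveys.

```python
# Minimal sketch of the symptom-count logic described above. Thresholds follow
# this post's summary of the studies; the single combined symptom count is a
# simplification (the DSM uses separate criterion lists for abuse and dependence).

def lifetime_diagnosis(lifetime_symptoms: int) -> str:
    # Three or more criteria -> dependence; at least one -> abuse.
    if lifetime_symptoms >= 3:
        return "dependence"
    if lifetime_symptoms >= 1:
        return "abuse"
    return "none"

def in_remission(diagnosis: str, past_year_symptoms: int, eca_rules: bool = False) -> bool:
    # "Remission" means falling below the diagnostic threshold in the past year,
    # not abstinence or quitting.
    if diagnosis == "none":
        return False  # never met criteria, so remission does not apply
    if eca_rules:
        return past_year_symptoms == 0   # ECA: no symptoms reported
    return past_year_symptoms <= 2       # other surveys: two or fewer symptoms

# A respondent who once met dependence criteria but reported two symptoms in the
# past year is counted as "in remission" without ever having quit.
print(in_remission(lifetime_diagnosis(4), past_year_symptoms=2))  # True
```

The point of the example is the last line: a respondent can still be reporting symptoms, and still be using, yet land in the “remitted” column behind the high percentages cited above.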

In Mad Science, Kirk, Gomory and Cohen noted how the DSM’s diagnostic criteria are the de facto definitions of mental disorder in the U.S. However, they said that describing a set of behaviors and labeling them as symptoms or diagnostic criteria does not establish the presence or absence of an illness or disorder.

Descriptive diagnosis is a tautology that distracts observers from recognizing that DSM offers no indicators that establish the validity of any psychiatric illness, although they may typically point to distresses, worries or misbehaviors (Mad Science, p. 166).

So clinical judgment, as Frances pointed out, is essential in diagnosing either the existence or the remission of substance dependence or substance abuse. Given the concerns raised by Frances and by Regier et al. about the inconsistencies and limitations of using diagnostic criteria in epidemiological studies, the reported rates of both substance dependence AND remission are likely to be greatly inflated in the studies reviewed by Heyman.

The conclusion that large populations of individuals with diagnosable addiction problems (substance dependence, according to Heyman) can stop or remit without help at such high rates is suspect. In addition, the “diagnosis” of individuals as substance dependent in these studies is probably inaccurate for many of them. It is likely that many of those labeled as substance dependent were only substance abusers. According to Carlton Erickson in The Science of Addiction, substance abusers are more likely to make changes in their substance use because of “significant impairment or distress in their life as a consequence of their use.” They may quit on their own, without treatment. They may even go back to moderate or controlled drinking, or mature out of the habit.