06/22/18

Corrupted Clinical Trials

© ironstealth | stockfresh.com

Dr. Jason Fung opened his article, “The Corruption of Evidence Based Medicine—Killing for Profit” with the following: “The idea of Evidence Based Medicine (EBM) is great. The reality, though, not so much.” He said if the evidence base was false or corrupted, then evidence-based medicine was completely worthless. “It’s like building a wooden house knowing the wood is termite infested.” He’s not alone in this opinion and he quoted three current or former editors of the two most prestigious medical journals in the world who corroborated his statement.

Richard Horton, editor in chief of The Lancet, said: “The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue.”

Dr. Marcia Angell, former editor in chief of the New England Journal of Medicine (NEJM), said: “It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor.”

Dr. Arnold Relman, the former editor of the NEJM, said: “The medical profession is being bought by the pharmaceutical industry, not only in terms of the practice of medicine, but also in terms of teaching and research. The academic institutions of this country are allowing themselves to be the paid agents of the pharmaceutical industry. I think it’s disgraceful.”

Dr. Fung said: “Physicians and universities have allowed themselves to be bribed.” He went on to say that examples in medicine are everywhere. For instance, medical research is typically paid for by pharmaceutical companies. “Trials run by industry are 70% more likely than government funded trials to show a positive result.” Among the issues he described were the selective publication of clinical trials, rigging the outcomes of those trials, publication bias and industry payments to medical journals and their editors.

Clinical trials with negative results are likely to be suppressed. Using the company Sanofi to illustrate the problem, Dr. Fung noted the company completed 92 studies in 2008, but only published the results of 14. He acknowledged how it would be financial suicide to publish data that would harm your company. “But knowing this, why do we still believe the evidence based medicine, when the evidence base is completely biased?”

With antidepressants, a review published in the NEJM compared the published literature with FDA-registered trials. Judging from the published literature, 94% of the trials conducted were positive. Yet among the FDA-registered trials, 31% were never published and only 51% showed positive results. The authors said:

We cannot determine whether the bias observed resulted from a failure to submit manuscripts on the part of authors and sponsors, from decisions by journal editors and reviewers not to publish, or both. Selective reporting of clinical trial results may have adverse consequences for researchers, study participants, health care professionals, and patients.
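To see how both sets of numbers can be true at once, here is a minimal back-of-the-envelope sketch in Python. Only the 51%, 31% and 94% figures come from the review described above; the assumed registry of 100 trials and the split between shelved and “spun” negative trials are invented purely to make the arithmetic concrete.

```python
# Illustrative only: an assumed registry of 100 trials, split so that the
# percentages quoted above (51% positive, 31% unpublished, 94% positive
# in print) come out right. This is not the NEJM study's actual dataset.

registered = 100
positive = 51                       # ~51% of registered trials were positive
negative = registered - positive    # 49 negative or questionable

published_positive = 51             # assumption: every positive trial is published
spun_to_positive = 14               # assumption: negatives written up as positive
published_negative = 4              # assumption: negatives published as negative
unpublished = negative - spun_to_positive - published_negative   # 31 trials

published = published_positive + spun_to_positive + published_negative
looks_positive = published_positive + spun_to_positive

print(f"Unpublished: {unpublished / registered:.0%} of registered trials")
print(f"Positive in the FDA registry: {positive / registered:.0%}")
print(f"Positive as it appears in journals: {looks_positive / published:.0%}")
```

Nothing dramatic has to happen to any single trial; shelving most of the negative studies and spinning a few of the rest is enough to turn a coin-flip evidence base into a near-unanimous one in print.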

Before the year 2000, pharmaceutical companies doing clinical trials did not have to declare beforehand what their primary outcomes would be. So they would “measure many different endpoints and simply figured out which one looked best and then declared the trial a success.” After the government added that requirement, only 8% of clinical trials conducted after 2000 showed positive results, compared with 57% of those run before 2000. This was evidence that the evidence base was being corrupted by commercial interests.

If a journal publishes a positive article about a Pharma drug, the company may order several hundred thousand copies of the article to distribute to doctors in its marketing efforts. “It’s insanely profitable for journals to take money from Big Pharma.” The NEJM gets 23% of its income from reprints; the Lancet gets 41%; and the American Medical Association gets 53%! “No wonder these journals are ready to sell their readers (ordinary physicians) down the river. It pays.” A cited study found that 50.9% of the editors of prestigious medical journals received at least some payments from the industry, and in some cases “these payments were often large.”

We found that industry payments to journal editors are common and can be substantial. Moreover, many journals lack clear and transparent editorial conflicts of interest policies and disclosures. Given our findings, we would suggest that journals take several steps. Firstly, we would strongly argue that all journals should develop and implement a transparent, publicly accessible editorial conflicts of interest policy. Secondly, editors in chief should consider excluding those with considerable industry relations from editorial positions. While such a stance could be considered drastic, editors play a crucial role in research integrity; even an appearance of conflict can serve to undermine the clinical research enterprise.

In The Chronicle of Higher Education, Batt and Fugh-Berman argued that “Disclosing Corporate Funding Is Not Nearly Enough.” In the U.S. in 2015, industry spent $102.7 billion on health-related research, while federal agencies spent $35.9 billion. “The current administration attempted to further decrease NIH funding, but those efforts were unsuccessful.” They said reliance on industry money limits the scope of research and “it weakens researchers’ ability to act as independent critics.” Pharmaceutical companies fund, publish and promote studies that are favorable to their marketing goals and “suppress or attack research that threatens market share.”

Perhaps most troubling is that if the final results of a study do not support commercial goals, the full study may never be published. In general, industry-funded studies are less likely to be published than non-industry-funded ones. And contrary to expectations, the reason negative studies are unpublished is not because journals rejected them, but because they were never submitted for publication. Although many universities frown on agreements that give funders the right to suppress the publication of findings, policies regarding publishing are not uniform across colleges and universities. In any case, enforcement is nil: Colleges can’t force researchers to publish studies. Industry insiders tell us that when company representatives fail to prevent a researcher from publishing unfavorable results on a drug, they may attempt to persuade the researcher to “bury” the paper in an obscure journal. Or, under the guise of reviewing a manuscript for “accuracy,” a company may soften statements or insert subtle marketing messages into the article to mitigate harm to its marketing goals. We don’t know the extent to which industry funding distorts biomedical literature — and clinical decision-making — but a substantial body of evidence now shows that allowing industry to choose what scientific questions should be asked, and how findings should be analyzed, interpreted, and disseminated, has public-health costs. We need strategies to minimize industry influence on scientific questions, and the resulting impact on policies and medical practice.

Writing for Mad in America, Zenobia Morrill summarized a review article by three researchers, “Industry-corrupted psychiatric trials.” The authors quoted from Marcia Angell’s 2008 article for JAMA, “Industry-sponsored clinical research: A broken system,” where she said:

Over the past 2 decades, the pharmaceutical industry has gained unprecedented control over the evaluation of its own products. Drug companies now finance most clinical research on prescription drugs, and there is mounting evidence that they often skew the research they sponsor to make their drugs look better and safer.

Amsterdam, McHenry and Jureidini, the authors of “Industry-corrupted psychiatric trials,” noted it was common knowledge that pharmaceutical companies “laundered” their promotional efforts through medical communications companies that “ghostwrite articles and then pay academic consultants to sign on to the fraudulent articles.”

The firms set up advisory board meetings with key opinion leaders and marketing executives in advance of the clinical trials. Once a trial is complete, the medical ghostwriter who is employed by the medical communications firm produces a draft of a manuscript – from a summary of the Final Study Report of the clinical trial – and seeks feedback from the corporate sponsor. It is at this stage in the manuscript production that misrepresentation of the trial data frequently occurs, since the medical ghostwriter is under the direction of marketing executives to “spin” the data. The medical ghostwriter then revises a number of drafts with input from the external academic “authors” and internal industry scientists, and once the corporate sponsor is satisfied that the final manuscript draft is “on message,” it is submitted by a corporate-designated lead author to a medical journal for peer review. Once the manuscript is submitted, the medical ghostwriter disappears or is acknowledged in the fine print for “editorial assistance.”

As a result, ghostwriting by the pharmaceutical industry has become a major factor in the “crisis of credibility” in academic medicine. “The integrity of science depends on the trust placed in individual clinicians and researchers and in the peer-review system which is the foundation of a reliable body of knowledge.” If academics allow their names to appear on ghostwritten articles, “they betray this basic ethical responsibility and are guilty of academic misconduct,” according to Amsterdam, McHenry and Jureidini. Ghostwriting extends to include an academic façade for research “that has been designed, conducted and analyzed by industry.” Yet the vast majority of ghostwritten publications won’t be revealed as such.

Key opinion leaders (KOLs) or “thought leaders” are academic physicians who are carefully vetted by the industry on the basis of their receptivity to the sponsor’s products. Pharmaceutical companies say they have engaged these KOLs for expert evaluation and feedback on marketing strategy. However, they essentially are highly paid “product champions” or marketers. “Few physicians and psychiatrists can resist the flattering offer by industry to become KOLs.” The authors note that medical journals are part of the problem here as well.

Medical journals are part of the problem rather than the solution to the problem. Instead of demanding rigorous peer review of submissions and an independent analysis of the data, medical journal editors are pressured to publish favorable articles of industry-sponsored trials and rarely publish critical deconstructions of ghostwritten clinical trials. As medical journals and their owners have become dependent upon pharmaceutical revenue, the journals fail to adhere to the standards of science. Thus the publication of “positive” studies showing drug safety and effectiveness means more pharmaceutical advertising and more orders of reprints for dissemination by the sales force. In contrast, a “negative” study showing poor tolerability or ineffectiveness results in no such revenue.

“Industry-corrupted psychiatric trials” then went on to deconstruct how three studies, “SmithKline Beecham Paroxetine Study 329,” “Forest Laboratory Citalopram Study CIT-MD-18” and “SmithKline Beecham Paroxetine Study 352,” manipulated or misrepresented outcome data. The first two did so to support the use of the SSRI antidepressants paroxetine (Paxil) and citalopram (Celexa) for the treatment of childhood and adolescent depression. The third, “Paroxetine Study 352,” misrepresented and manipulated outcome data in adults diagnosed with bipolar affective disorder. Morrill said: “Misconduct of this study was revealed when academics filed complaints of plagiarism and research misconduct against KOLs at medical research universities across the U.S. as well as pharmaceutical company executives.” Read the review article for further details on how each of these studies was deconstructed.

In closing, let me remind you again of the opinion of Marcia Angell, former editor of the NEJM:

It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. . . . Drug companies now finance most clinical research on prescription drugs, and there is mounting evidence that they often skew the research they sponsor to make their drugs look better and safer.

01/24/17

Herding Pharma “Cats”

© mdfiles | stockfresh.com

The Chinese government released a report in September of 2016 by the State Food and Drug Administration (SFDA) that found fraudulent clinical trial practices on a massive scale. The SFDA concluded that over 80% of clinical trial data was fabricated. The scandal was the result of a “breach of duty by supervision departments and malpractice by pharmaceutical companies, intermediary agents and medical staff.” More than 80% of the applications for the mass production of new medications have been cancelled, with warnings by the SFDA that further evidence of malpractice might still emerge.

Radio Free Asia also reported the SFDA indicated much of the clinical trial data was incomplete at best; it also failed to meet basic analysis requirements or was untraceable. “Some companies were suspected of deliberately hiding or deleting records of adverse effects, and tampering with data that did not meet expectations.” Apparently, this came as no surprise to industry insiders. “Clinical data fabrication was an open secret even before the inspection.”

Many of the new drugs were combinations of existing ones. Clinical trial outcomes were written beforehand, and their data presented so it agreed with the fabricated outcomes. A doctor at a top Chinese hospital said the problem lay with the failure to implement regulations governing clinical trial data. “Guangdong-based rights activist Mai Ke said there is an all-pervasive culture of fakery across all products made in the country.” Reporting for Pharmafile, Ben Hargreaves said:

The root of the issue is then not regulation, with regulation for clinical trials running on similar lines to Western practises, but in the lack of adherence to them. China’s generic drug industry has struggled with quality problems and therefore there is a temptation for companies to manipulate data to meet standards. The report found that many of the new drugs were found to be a combination of existing drugs, with clinical trials outcomes written beforehand and the data tweaked to fit in with the desired outcomes.

Sadly, clinical trial problems are not unique to China. An editorial published in the British journal The Lancet Psychiatry described multiple issues, beginning with how subjects are recruited, moving on to determining what the control group should be, and ultimately defining meaningful outcome measures. Sometimes, trial recruits receive “care” they didn’t agree to. “Researchers and ethics review boards need to examine the ethical arguments and practical procedures from other areas of medicine where consent is problematic.” If such trials are done, regular and rigorous monitoring is essential. Patient safety and autonomy need to be a priority.

In his discussion of the editorial, Justin Carter elaborated on one of the problems with recruiting subjects. An individual was recruited into a study on three antipsychotics while under a forced commitment order from a judge. “The psychiatrist who recruited him was in charge of the study and was his treatment provider and was also empowered to report on the patient’s progress to the judge.” The individual died by suicide during the drug trial.

The work of Irving Kirsch and others has shown the problem with inert placebos (sugar pills). The side effects from medication make it easy for participants to guess which study group they are in.

And when the trial is over and the data are in, do the outcome measures really provide something meaningful for people’s lives? If the ultimate goal is for people to feel better and resume their prior level of functioning, should outcome measures be primarily patient self-reports, clinical assessment, or differences shown by imaging or the as-yet-to-be-clearly-identified biomarkers?

Given the problems running and interpreting psychiatry trials, it is essential to learn how even the most successfully tested interventions work in real clinics with the broad patient population. Implementation, uptake, and effectiveness in real-life settings must be analysed, and delivery of new innovations modified accordingly. Future research should be thought of not as a plain linear process from innovation to trial to implementation, but as a virtuous circle where research feeds into the clinic and vice versa.

Another issue pointed to by Carter was the validity and reliability of the diagnosis or classification system used to determine whom to include and whom to exclude from the trials. The DSM system, now in its fifth edition (DSM-5), is the current U.S. “bible” for assessing and diagnosing the problems that psychiatric medications in clinical trials are supposed to “treat.” Yet there have been questions about the reliability and validity of the DSM dating from an argument raised by Robert Spitzer and others in the 1970s that ushered in changes still embedded in the DSM-5. Rachel Cooper gave a brief history of the reliability questions with the DSM in “How Reliable is the DSM-5?” You can also refer to “Psychiatry Has No Clothes,” “Where There’s Smoke …”, and “The Quest for Psychiatric Dragons,” Parts 1 and 2.

A few weeks before the release of the DSM-5, Thomas Insel, then the NIMH Director, announced the NIMH would be “reorienting” its research away from DSM categories. The agency’s new approach is called the Research Domain Criteria (RDoC) project. For now, RDoC is a research framework and not a clinical tool. But NIMH has high hopes for it: “RDoC is nothing less than a plan to transform clinical practice by bringing a new generation of research to inform how we diagnose and treat mental disorders.” While Tom Insel has moved on to work for Alphabet (Google), RDoC is alive and well within NIMH. You can keep up with news about RDoC on the “Science News About RDoC” page.

The Science Update for February 16, 2016 noted the March 2016 issue of the journal Psychophysiology would be devoted to the RDoC initiative. Dr. Bruce Cuthbert said the special issue was a unique opportunity for researchers to engage with one another and reflect on work being done in various laboratories throughout the country. He thought it was encouraging to see many investigators already engaged in the kind of work RDoC advocates. “What this shows is that while the RDoC acronym may be new, the principles behind RDoC are certainly not new to psychiatric research.”

If the principles behind RDoC are not new to psychiatric research, how can it bring “a new generation of research to inform how we diagnose and treat mental disorders” in order to transform clinical practice? It sounds a lot like using the same deck of cards to just play a new card game. RDoC may not be the transformative framework it’s touted to become.

Added to these issues is the failure of pharmaceutical companies to publicly report the results of clinical trials, as they are required by law to do. New reporting rules will take effect on January 18, 2017. But advocates for transparency in clinical research have cautioned that the success of the new rules will depend upon the willingness and vigor of government enforcement. The failure to enforce the existing rules, which went into effect in 2008, led to widespread noncompliance with reporting requirements. If the FDA had fined the violators, it could have collected an estimated $25 billion.

Reporting for STAT News, Charles Piller said studies have indicated only a small fraction of trials will comply with the law. Yet there are no current plans to increase enforcement staffing at the FDA and NIH. That’s a big problem, according to Ben Goldacre, an advocate for full disclosure in clinical research. Francis Collins, the NIH director, said they are serious about this and will withhold funds, if needed. “It’s hard to herd cats, but you can move their food, or take their food away.”

The legislation that created ClinicalTrials.gov emerged from numerous cases of drug manufacturers withholding negative trial results, making drugs look more effective and less harmful. Efforts to market the antidepressant Paxil for teenagers more than a decade ago stimulated the push for better reporting. A recent analysis in the journal BMJ found that GlaxoSmithKline, Paxil’s manufacturer, failed to disclose 2001 data showing the drug was no more effective than a placebo and was linked to increased suicide attempts by teens.

Writing for Time, Alexandra Sifferlin reported on a new study that suggested many of the medical reviewers for the FDA go to work for the drug companies they oversaw while working for the government. One of the study’s authors said: “I don’t think there is overt collusion going on, but if you know in the back of your mind that a major career opportunity after the FDA is going to work on the other side of the table, I worry it can make you less likely to put your foot down.”

Returning to the Francis Collins metaphor, it seems that the willingness to try and herd Pharma cats is dependent on whether or not you are afraid they will scratch you in the attempt.

10/11/16

Stacking the Deck with Clinical Trials

© photosebia | stockfresh.com

In September of 2007 the “Food and Drug Administration Amendments Act of 2007” became law. This law requires that findings from human testing of drugs and medical devices be made publicly available on the NIH website, ClinicalTrials.gov. But it seems that both drug companies and most research institutions—including leading universities and hospitals—routinely violate the law. An investigation by STAT News found that at least 95 percent of all disclosed research results were posted late or not at all.

Drug companies have long been castigated by lawmakers and advocacy groups for a lack of openness on research, and the investigation shows just how far individual firms have gone to skirt the disclosure law. But while the industry generally performed poorly, major medical schools, teaching hospitals, and nonprofit groups did worse overall — many of them far worse.

Four of the top ten recipients of federal medical research funding from the NIH were among the worst offenders. These four were: Stanford, the University of California, San Diego, the University of Pennsylvania, and the University of Pittsburgh. Researchers, university administrators and hospital executives interviewed by STAT News said they were not intentionally breaking the law. They were just too busy and lacked administrative funding to complete the required data entry on ClinicalTrials.gov. NIH estimated it takes, on average, around 40 hours to submit trial results.

Six organizations — Memorial Sloan Kettering, the University of Kansas, JDRF (formerly the Juvenile Diabetes Research Foundation), the University of Pittsburgh, the University of Cincinnati, and New York University — broke the law on 100 percent of their studies — reporting results late or not at all.

The Director of NIH, Francis Collins, said the findings were “very troubling.” He said pointing to the time demands on posting data to ClinicalTrials.gov was not an acceptable excuse for noncompliance. Beginning in the spring of 2016, after further refinement of the ClinicalTrials.gov rules, Collins said NIH and FDA will have “a firmer basis for taking enforcement actions.” The FDA is empowered to levy fines of up to $10,000 a day per trial for late reporting to ClinicalTrials.gov.

In theory, it could have collected $25 billion from drug companies since 2008 — enough to underwrite the agency’s annual budget five times over. But neither FDA nor NIH, the biggest single source of medical research funds in the United States, has ever penalized an institution or researcher for failing to post data.
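To get a rough sense of how a $10,000-a-day fine compounds into billions, the sketch below simply multiplies the statutory fine by a hypothetical number of delinquent trials and an average delinquency period. The per-day fine comes from the text above; the trial count and days late are assumptions chosen only to show the scale, not the actual inputs behind STAT’s $25 billion estimate.

```python
# Back-of-the-envelope only. The $10,000-per-day-per-trial fine is cited in
# the text above; the number of non-compliant trials and the average days
# of delinquency are hypothetical round numbers.

fine_per_trial_per_day = 10_000     # dollars
delinquent_trials = 5_000           # assumption
average_days_late = 500             # assumption (well over a year)

potential = fine_per_trial_per_day * delinquent_trials * average_days_late
print(f"Potential fines: ${potential:,}")   # $25,000,000,000
```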

When the “Food and Drug Administration Amendments Act of 2007” became law, Senator Charles Grassley said: “Mandatory posting of clinical trial information would help prevent companies from withholding clinically important information about their products. . . . To do less would deny the American people safer drugs when they reach into their medicine cabinets.” But the failure of drug companies and others to post clinical trial results, coupled with the failure of the FDA to hold them accountable via fines when they don’t, means the American people are being denied the ability to see for themselves if the drugs they take are safe and effective. Kathy Hudson, a deputy director for NIH, said:  “If no one ever knows about the knowledge gained from a study, then we have not been true to our word.”

The scarcity of clinical trial results posted to ClinicalTrials.gov is not the only issue with clinical trials and the NIH website. Drug companies and research facilities are also not prospectively registering clinical trials as they should. Scott, Rucklidge and Mulder found that “less than 15% of psychiatry trials were prospectively registered with no changes in POMs [primary outcome measures].” You can see Julia Rucklidge’s discussion of the study here. Also see “Clinical Trial Sleight-of-Hand” on this website.

Writing for Health Care Renewal, Bernard Carroll said there was a disconnection between the FDA’s drug approval process and what gets published in the medical journals. “Pharmaceutical corporations exploit this gap through adulterated, self-serving analyses, and the FDA sits on its hands.” He suggested that independent analyses of clinical trials be instituted, “because we cannot trust the corporate analyses.”

When corporations are involved, there is no point in prolonging the myth of noble and dispassionate clinical scientists searching for truth in clinical trials. It’s over. We would do better to stop pretending that corporate articles in medical journals are anything but marketing messages disguised with the fig leafs of co-opted academic authors and of so-called peer review.

Carroll proposed that Congress mandate the FDA to analyze all clinical trials data strictly according to the registered protocols and analysis plans. This should apply to new drugs as well as approved drugs being tested for new indications. And it should be applied to publications reporting new trials of approved drugs. “Corporations and investigators should be prohibited from publishing their own in-house statistical analyses unless verified by FDA oversight.” (emphasis in the original) Carroll quoted Eric Topol in a recent BMJ editorial as saying: “The disparity between what appears in peer reviewed journals and what has been filed with regulatory agencies is long standing and unacceptable.”

He gave three reasons for prohibiting in-house corporate analyses of clinical trials data. First, the inherent conflict of interest is too great to be ignored. Carroll described Forest Laboratories and citalopram as an example in his article to illustrate this point. Second, when corporate statisticians are encouraged to play around with the statistical analysis of the trial data (i.e., p-hacking), “they are no longer testing the defined study question with fidelity to the methods specified in the IND protocol.” Third, the FDA should monitor the publication of clinical trial reports in medical journals. The FDA inspects production facilities for evidence of physical adulteration; why not verify that what gets published in journals matches what was presented to the FDA for drug approval? “The harms of adulterated analyses can be just as serious as the harms of adulterated products.”

Pharmaceutical corporations are betting on huge profits from drug development. Allowing them to play fast and loose with clinical trial registration and the analysis of trial data is akin to stacking the deck in their favor. It’s time to require pharmaceutical companies to stop trying to rig the clinical trial process.

08/23/16

Clinical Trial Sleight-of-Hand

© ljupco | 123rf.com

In 2005 a researcher named John Ioannidis published a seminal paper on publication bias in medical research, “Why Most Published Research Findings are False.” When Julia Belluz interviewed Ioannidis for Vox ten years later, she reported that as much as 30% of the most influential medical research papers turn out to be wrong or exaggerated. She said an estimated $200 billion, the equivalent of 85% of the global spending on research, is wasted on poorly designed and redundant studies. Ioannidis indicated that preclinical research on drug targets received a lot of attention since then. “There are papers showing that, if you look at a large number of these studies, only about 10 to 25 percent of them could be reproduced by other investigators.”

Ioannidis noted even with randomized controlled trials, there is empirical evidence indicating only a modest percentage can be replicated. Among those trials that are published, about half of the initial outcomes of the study are actually reported. In the published trials, 50% or more of the results are inappropriately interpreted, or given a spin that favors the sponsor of the research. “If you multiply these levels of loss or distortion, even for randomized trials, it’s only a modest fraction of the evidence that is going to be credible.”
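Ioannidis’s “multiply these levels of loss” remark is just compounding fractions. Here is a minimal sketch, with each filter set to the rough one-half figures mentioned in this paragraph; the replication rate is a placeholder rather than his published estimate.

```python
# Placeholder fractions, roughly matching the paragraph above: only a
# modest share of trials replicate, about half of planned outcomes get
# reported, and half or more of reported results are spun.

replicable = 0.5            # placeholder for the "modest percentage"
outcomes_reported = 0.5     # about half of initial outcomes are reported
not_spun = 0.5              # assume half of results escape favorable spin

credible = replicable * outcomes_reported * not_spun
print(f"Surviving all three filters: {credible:.0%} of trial evidence")  # ~12%
```

Even with generous values at each step, compounding them leaves only a small residue of fully credible results, which is his point.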

One of the changes that Ioannidis’s 2005 paper seemed to produce was the introduction of mandatory clinical trial registration guidelines by the International Committee of Medical Journal Editors (ICMJE). Member journals were supposed to require prospective registration of trials before patient enrollment as a condition of publication. The purpose is that registering a clinical trial ahead of time publicly describes the methodology to be followed during the trial. If the published report of the trial afterwards differs from its registration, you have evidence that the researchers massaged or spun their data when it didn’t meet the originally proposed outcome measures. In other words, they didn’t play by the rules they had said ahead of time they would follow if their research didn’t “win.”

Julia Rucklidge and two others looked at whether five psychiatric journals (American Journal of Psychiatry, Archives of General Psychiatry/JAMA Psychiatry, Biological Psychiatry, Journal of the American Academy of Child and Adolescent Psychiatry, and the Journal of Clinical Psychiatry) were actually following the guidelines they said they would follow. They found that less than 15% of psychiatry trials were prospectively registered with no changes in their primary outcome measures (POMs). Most trials were either not prospectively registered, had their POMs or timeframes changed sometime after registration, or had their participant numbers changed.

In an article for Mad in America, Rucklidge said they submitted their research for review and publication to various journals, including two of the five they investigated. Six medical or psychiatric journals rejected it. PLoS One, a peer-reviewed open-access journal, did accept and publish their findings. She said that while the researchers in their study could have changed their outcome measures or failed to preregister their trials for benign reasons, “History suggests that when left unchecked, researchers have been known to change their data.”

For example, an initial clinical trial for an antidepressant could be projected to last for 24 weeks. The 24-week time frame would be one of the initial primary outcome measures—will the antidepressant be more effective than a placebo after 24 weeks? After gathering all the data, the researchers find that the antidepressant was not more effective than placebo at 24 weeks. But let’s say it was more effective than placebo at 18 weeks. What gets reported is the results after 18 weeks; the original 24-week timeframe may disappear altogether when the research results are published.
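The cross-check this implies can be written in a few lines. The sketch below is purely illustrative: the registry and publication records are invented to mirror the hypothetical 24-week/18-week example above, and this is not the actual method or code used by Rucklidge and colleagues.

```python
# Toy registry-vs-publication comparison. Both records are invented to
# mirror the 24-week/18-week example in the text.

registered = {
    "trial_id": "HYPOTHETICAL-001",
    "primary_outcome": "depression score vs placebo",
    "timeframe_weeks": 24,
    "prospectively_registered": True,
}

published = {
    "trial_id": "HYPOTHETICAL-001",
    "primary_outcome": "depression score vs placebo",
    "timeframe_weeks": 18,   # the registered 24-week endpoint has vanished
}

def flag_discrepancies(reg, pub):
    """Return a list of ways the publication departs from the registration."""
    problems = []
    if not reg.get("prospectively_registered"):
        problems.append("trial was not prospectively registered")
    if reg["primary_outcome"] != pub["primary_outcome"]:
        problems.append("primary outcome measure changed")
    if reg["timeframe_weeks"] != pub["timeframe_weeks"]:
        problems.append(
            f"timeframe changed from {reg['timeframe_weeks']} "
            f"to {pub['timeframe_weeks']} weeks"
        )
    return problems

for issue in flag_discrepancies(registered, published):
    print("FLAG:", issue)
```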

People glorify their positive results and minimize or neglect reporting on negative results. . . . At worst, our findings mean that the trials published over the last decade cannot be fully trusted. And given that health decisions and funding are based on these published findings, we should be very concerned.

Looking ahead, Rucklidge had several suggestions for improving the situation with clinical trials.

1) Member journals of the ICMJE should have a dedicated person checking trial registries, trials should simply not be published if they haven’t been prospectively registered as determined by the ICMJE or the journals should state clearly and transparently reasons why studies might be published without adhering to ICMJE guidelines.
2) If authors do change POMs or participant numbers or retrospectively register their trials, the reasons should be clearly outlined in the methods section of the publication.
3) To further improve transparency, authors could upload the full clinical trial protocol, including all amendments, to the registry website and provide the raw data from a clinical trial in a format accessible to the research community.
4) Greater effort needs to be made to ensure authors are aware of the importance of prospectively registering trials, by improving guidelines for submission (3) and when applying for ethical approval.
5) Finally, reviewers should not make decisions about the acceptability of a study for publication based on whether the findings are positive or negative as this may be implicitly encouraging authors to be selective in reporting results.

Rucklidge also mentioned another study, by Mathieu, Chan and Ravaud, that examined whether peer reviewers actually consulted clinical trial registrations. The Mathieu et al. survey found that only one-third of the peer reviewers looked at registered trial information and checked it against the manuscript. “When discrepancies were identified, most respondents (88.8%) mentioned them in their review comments, and 19.8% advised editors not to accept the manuscript.” The respondents who did not look at the trial registry information said the main reason they failed to do so was the difficulty or inconvenience of accessing the registry record.

One suggested improvement by Mathieu, Chan and Ravaud was for journals to provide peer reviewers with the clinical trial registration number and a direct Web link to the registry record; or provide the registered information with the manuscript to be reviewed.

The actions of researchers who fail to accurately and completely register their clinical trials, alter POMs, change participant numbers, or make other adjustments to their research methodology and analysis without clearly noting the changes are akin to the sleight-of-hand practiced by illusionists. And sometimes the effect is radical enough to make an ineffective drug seem like a new miracle cure.

02/5/16

Wolves in Sheep’s Clothing

© Eros Erika | 123rf.com

Atypical antipsychotics are now the largest-selling class of drugs in the U.S., accounting for more than $14.6 billion in annual sales by 2010. They are also the class of psychiatric drugs with the most negative side effects—and that’s saying something when you consider the others, namely antidepressants and anti-anxiety meds. Because schizophrenia affects such a small percentage of the population, the initial market for atypical antipsychotics was limited. The path to increased sales led through finding a wider market than just individuals with schizophrenia. So the pharmaceutical companies began to look at the behavioral disorders.

For the most part, these disorders are less serious than schizophrenia, but many are severe nonetheless, including hyper-activity in children and agitation in elderly patients. Marketing atypical anti-psychotic agents to patients with this broader category of disorders held the promise of sales reaching blockbuster levels.

There were two obstacles to this broader promotion. First, the FDA had only approved atypicals for the treatment of severe psychosis—schizophrenia—in adults. Their use for other disorders was then off-label. FDA regulations prohibit pharmaceutical companies from promoting drugs for such additional uses.

The second obstacle was that they didn’t have a very good safety profile. When used for a serious disorder like schizophrenia, the adverse effects of atypicals were understood to be a trade-off. “But the risk–benefit calculus is much less favorable when milder conditions are involved.” Despite these impediments, the temptation was too much for the manufacturers to resist, and a number of lawsuits over the past few years attest to this. Read “Antipsychotic Medications Are Spelling Legal Trouble for Drugmakers” for more information on this. Robert Field concluded his article with this observation:

 In light of the large number of successful enforcement actions and the continued potential for abuses, prosecutors are likely to remain vigilant concerning the marketing of atypical antipsychotic agents. Repeated violations could generate even larger penalties. Publicity over the large settlements has put physicians and the public on notice about the hazards of indiscriminate use of this class of drugs. In the future, regulators, clinicians and patients should view atypical antipsychotics and marketing claims concerning them with caution.

Over time, antipsychotics have “evolved.” Some are now approved as adjunct medication for treating major depression. Many are also prescribed for the treatment of bipolar disorder. And then there is the off-label market for several behavioral disorders. No longer are they relegated to just the niche market of people diagnosed with schizophrenia.

The FDA-approved uses for antipsychotics now include the treatment of bipolar I disorder, schizophrenia, schizoaffective disorder and as an adjunct treatment for major depression. In addition to their FDA approved uses, several atypicals are used off-label to treat various psychiatric conditions. They have been studied as off-label treatment for the following conditions: ADHD, anxiety, dementia in elderly patients, depression, eating disorders, insomnia, OCD, personality disorder, PTSD, substance use disorders, and Tourette’s syndrome.

Clozapine (Clozaril) was the first atypical developed. Introduced in Europe in 1971, it was voluntarily pulled by its manufacturer when it was shown to cause a condition called agranulocytosis, a dangerous decrease in the number of white blood cells. It was then approved by the FDA in 1989 for the treatment of treatment-resistant schizophrenia. In 2002 the FDA also approved clozapine for reducing the risk of suicidal behavior. However, the FDA also requires it to carry five black box warnings for a series of adverse health effects including cardiovascular and respiratory problems and increased mortality in elderly patients with dementia-related psychosis.

The five main atypical antipsychotics currently used in the US are: Aripiprazole (Abilify), Olanzapine (Zyprexa), Quetiapine (Seroquel), Risperidone (Risperdal) and Ziprasidone (Geodon). There are six newer ones whose off-label use has not been documented or researched as extensively as the preceding five. Four of these are Asenapine (Saphris), Iloperidone (Fanapt), Lurasidone (Latuda) and Paliperidone (Invega). The two brand new antipsychotics, Rexulti (brexpiprazole) and Vraylar (cariprazine), will be discussed below.

There are also six other atypicals that have not been approved for use in the US. They are: Amisulpride, Blonanserin, Melperone, Sertindole, Sulpiride and Zotepine. The following chart lists the FDA-approved indications for atypical antipsychotics.

| Atypicals | Bipolar 1 | Schizophrenia | Schizoaffective | Major depression |
|---|---|---|---|---|
| Aripiprazole | yes | yes | | yes |
| Olanzapine | yes | yes | | yes |
| Quetiapine | yes | yes | | yes |
| Risperidone | yes | yes | | |
| Ziprasidone | yes | yes | | |
| Asenapine | yes | yes | | |
| Iloperidone | | yes | | |
| Lurasidone | yes | yes | | |
| Paliperidone | | yes | yes | |
| Clozapine | | yes | yes | |
| Brexpiprazole | | yes | | yes |
| Cariprazine | yes | yes | | |

Schizophrenia is the primary disorder for which antipsychotics are targeted and bipolar 1 disorder is second. Three of the main antipsychotics, Aripiprazole, Olanzapine and Quetiapine, have been approved as augmentations for antidepressants. Interestingly, the medication guides for most of the antipsychotics seem to downplay the drug class they are in. They only refer to themselves as “antipsychotic” within the warning for a potential side effect called neuroleptic malignant syndrome. Coincidentally, that is the only place the other common term for antipsychotic, neuroleptic, is found.

Here is how the warning for the potential side effect of neuroleptic malignant syndrome was worded for Seroquel (quetiapine): “neuroleptic malignant syndrome (NMS). NMS is a rare but very serious condition that can happen in people who take antipsychotic medicines, including SEROQUEL.” Many of the other antipsychotics have similar wording for this side effect. Abilify never refers to itself as an antipsychotic or neuroleptic in its medication guide. Under the discussion of possible side effects with Abilify is the following:

Neuroleptic malignant syndrome (NMS): Tell your healthcare provider right away if you have some or all of the following symptoms: high fever, stiff muscles, confusion, sweating, changes in pulse, heart rate, and blood pressure. These may be symptoms of a rare and serious condition that can lead to death. Call your healthcare provider right away if you have any of these symptoms.

But you will find a lot of discussion about antidepressants in some of these medication guides. Many of the antipsychotics use language that gives the impression that the drug is an “antidepressant,” not an “antipsychotic.” The medication guides for Abilify (aripiprazole), Seroquel (quetiapine) and Latuda (lurasidone) have an entire section that discusses what someone needs to know about antidepressant medications. Someone unfamiliar with the various classes of medications who is taking one of these drugs might think they are taking an antidepressant and not an antipsychotic.

The following table summarizes the evidence for off-label use of the five primary atypical antipsychotics currently used in the US: Aripiprazole, Olanzapine, Quetiapine, Risperidone and Ziprasidone. The strongest evidence of efficacy is noted as “++”, then “+”. “0” means there have been no clinical trials attempted; “-” represents no efficacy and “+-” is for mixed results. “FDA” represents FDA approval for the condition. Keep in mind these ratings are based upon the data from the drug companies in their quest to expand the antipsychotic market.

| Disorder | Aripiprazole | Olanzapine | Quetiapine | Risperidone | Ziprasidone |
|---|---|---|---|---|---|
| Anxiety | 0 | | ++ | | |
| ADHD | 0 | 0 | 0 | + | 0 |
| Dementia | ++ | + | + | ++ | 0 |
| Depression | FDA | FDA | FDA | ++ | + |
| OCD | 0 | | + | ++ | |
| PTSD | 0 | +- | +- | ++ | 0 |
| Tourette’s | 0 | 0 | 0 | + | |

Risperidone was the first of the main five antipsychotics brought to market, in 1990 by Janssen. Eli Lilly brought olanzapine to market in September of 1996 and AstraZeneca brought quetiapine to market in September of 1997. Pfizer brought ziprasidone to market in June of 2002 and Bristol-Myers Squibb had aripiprazole approved in November of 2002. All five are currently off patent. The patent expiration dates for the newer antipsychotics are as follows: Asenapine (Saphris) in 2020, Iloperidone (Fanapt) in 2027 and Lurasidone (Latuda) in 2018. Paliperidone (Invega) lost its exclusivity on October 6, 2014.

Two brand new antipsychotics, Rexulti (brexpiprazole) and Vraylar (cariprazine) were just approved by the FDA in the summer of 2015.  Vraylar was approved for the treatment of schizophrenia and bipolar disorder in adults. Rexulti was approved as a treatment for schizophrenia and as an add-on treatment for adults with major depression.

The Rexulti medication guide also has a section describing what you need to know about antidepressants. It has the same warning for the potential side effect of neuroleptic malignant syndrome (NMS) found with Abilify. It also lists major depression as the first disorder it is used to treat; schizophrenia is listed second. So it seems that Rexulti is being positioned to be seen more as a treatment for depression than for schizophrenia.

To its credit, Vraylar’s medication guide regularly refers to antipsychotics and the side effects of antipsychotics. And I did not find even ONE reference to “antidepressant.” However, its discussion of NMS is subtle, never explicitly saying it could occur from Vraylar. Under warnings and precautions it says:

Neuroleptic Malignant Syndrome (NMS), a potentially fatal symptom complex, has been reported in association with administration of antipsychotic drugs. Clinical manifestations of NMS are hyperpyrexia, muscle rigidity, delirium, and autonomic instability. Additional signs may include elevated creatine phosphokinase, myoglobinuria (rhabdomyolysis), and acute renal failure.

However, truth in advertising isn’t the only concern, at least with Vraylar. Johanna Ryan described a detailed investigation she did of the Vraylar studies registered with ClinicalTrials.gov. Out of the twenty registered studies, seventeen were completed but still had not shared their results on the government website, a mandatory step in the process. I reviewed all the registered studies for Vraylar on December 4, 2015, almost two months after Ryan’s article was posted on davidhealy.org, and there were still no posted results from the completed clinical trials.

She found at least six published papers directly based on these studies; only two were posted on CT.gov. The average number of listed authors was six to eight, with an academic noted as the “lead” author. The rest were drug company employees. Some papers only had employee-authors.

Overwhelmingly they were contract researchers. Some were freestanding clinical trial businesses. Others were busy medical practices with a thriving research business “on the side.” The first recruited subjects largely by TV, newspaper and online advertising which emphasized free treatment. The second combined some advertising with recruitment among their own patients.

The adverse side effects of antidepressants are increasingly evident, as is their well-documented ineffectiveness. But they are more acceptable to our cultural psyche than antipsychotics. Remember Listening to Prozac? Antipsychotics (neuroleptics) are now the “it” class of psychiatric medications. As they expand their market reach beyond schizophrenia, the term “antipsychotics” has become a liability for sales. Anti “depression” medication is an easier sell than anti “psychotic.” So it seems there has been an intentional effort by some pharmaceutical companies to blur the lines between the drug classes of antidepressants and antipsychotics.

I think a fitting metaphor for what’s happening is to think of this marketing strategy as an attempt to pass off wolves in sheep’s clothing. But you have to wonder just how bad the adverse effects of antipsychotics  (the wolves) are when the less harmful half of the metaphor—the “sheep”—is antidepressants.

04/8/15

Sola Fide with Drugs

© Awakenedeye | Dreamstime.com

Between May of 1999 and July of 2002, a researcher employed at the Stratton Veterans Affairs Medical Center in Albany, New York falsified documents in a clinical trial drug study, and that fraud contributed to the death of a subject. The researcher “knowingly and willfully” misrepresented the results of a blood chemistry analysis to qualify an individual with impaired kidney and liver function for the study. As Charles Seife reported, the study subject died as a direct consequence of the first dose of the treatment. “The researcher pleaded guilty to fraud and criminally negligent homicide and was sentenced to 71 months in prison.”

Although this episode is described in detail in FDA documents as well as court documents, none of the publications in the peer-reviewed literature associated with the chemotherapy study in which the patient died have any mention of the falsification, fraud, or homicide. The publications associated with 2 of the 3 other studies for which the researcher falsified documents also do not report on the violations.

This was just one of the four case examples described by Seife in his JAMA Internal Medicine article, “Research Misconduct Identified by the US Food and Drug Administration.” His study sought to identify publications describing clinical trials to which the FDA had given its severest warning (an OAI—official action indicated—warning) after doing routine inspections. Once the published articles were identified, Seife tried to determine if there was any subsequent acknowledgement of the violation.

From the documents he and his students gathered together, they found approximately 600 clinical trials mentioned as potentially having OAI violations. They then submitted requests for the FDA OAI notifications through the Freedom of Information Act. Because of extensive redactions (censoring for legal or security purposes), most of the trials in the documents could not be identified. When key information was available, they were able to identify 101 trials with one or more OAI grades. From these, they were able to glean 57 trials with 1 or more FDA inspections of a trial site with evidence of “significant departure from good clinical practice.” These violations included actions such as: underreporting adverse events, violations of protocol, violations of recruitment guidelines, and various forms of scientific misconduct.

In 22 of these trials (39%), the FDA cited researchers for falsification or submission of false information; in 14 (25%), for problems with adverse events reporting; in 42 (74%), for failure to follow the investigational plan or other violations of protocol; in 35 (61%), for inadequate or inaccurate recordkeeping; in 30 (53%), for failure to protect the safety, rights, and welfare of patients or issues with informed consent or institutional review board oversight; and in 20 (35%), for violations not otherwise categorized. Examples of uncategorized violations include cases in which the investigators used experimental compounds in patients not enrolled in trials, delegated tasks to unauthorized personnel, or otherwise failed to supervise clinical investigations properly.

The 57 clinical trials in their study had resulted in 78 articles published in peer-reviewed journals. “Of these 78 articles, only 3 publications (4%) included any mention of the FDA inspection violations despite the fact that for 59 of those 78 articles (76%), the inspection was completed at least 6 months before the article was published.”

This led Seife to conclude in “Are Your Medications Safe?” that for more than a decade, the FDA has shown a pattern of burying the details of scientific fraud and misconduct. “The agency doesn’t notify the public, the medical establishment, or even the scientific community that the results of a medical experiment are not to be trusted.” So no one finds out which data are bogus, or “which drugs might be on the market under false pretences.” The FDA has repeatedly hidden evidence of scientific fraud from the public and from its trusted scientific advisors, even as they were attempting to decide whether or not a new drug should be allowed on the market. It even stonewalled a congressional panel investigating a case of fraud regarding a dangerous drug.

The sworn purpose of the FDA is to protect the public health, to assure us that all the drugs on the market are proven safe and effective by reputable scientific trials. Yet, over and over again, the agency has proven itself willing to keep scientists, doctors, and the public in the dark about incidents when those scientific trials turn out to be less than reputable. It does so not only by passive silence, but by active deception. And despite being called out numerous times over the years for its bad behavior, including from some very pissed-off members of Congress, the agency is stubbornly resistant to change. It’s a sign that the FDA is deeply captured, drawn firmly into the orbit of the pharmaceutical industry that it’s supposed to regulate.

Seife’s research and conclusions are disturbing on so many levels. The FDA knows about dozens of scientific papers whose data are questionable, but the agency has said and done NOTHING, even when it is itself “shocked at the degree of fraud and misconduct in a clinical trial.” Seife said the most common excuse given by the FDA is that revealing which drugs’ approvals relied upon tainted data would compromise “confidential commercial information” that could hurt the drug companies if it was revealed. Another excuse is that the FDA doesn’t want to confuse the public by revealing misconduct that, in the FDA’s judgment, doesn’t “pose an immediate risk to public health.”

The FDA wants you to take it on faith that its officials have the public’s best interest at heart. Justification through faith alone [sola fide] might be just fine as a religious doctrine, but it’s not a good foundation for ensuring the safety and effectiveness of our drugs.

11/19/14

Evidence-Based Treatment … Lacks Evidence

Evidence-based medicine (EBM) began in the early 1990s and was seen as a revolutionary movement that would improve patient care. It grew to become the buzz-word for all medical and behavioral health care—make sure treatment is evidence-based! And yet, there is little evidence that EBM has achieved its aim. Health care costs have soared and there is a distinct lack of “high-quality evidence suggesting that EBM has resulted in substantial population-level health gains.”

Given that EBM firmly favours an empirical approach over expert opinion and mechanistic rationale, it is ironic that its widespread acceptance has been based on expert opinion and mechanistic reasoning, rather than EBM ‘evidence’ that it actually works.

The article from which the above critique was taken suggested that the lack of evidence for the overall benefit of EBM was a consequence of it not being implemented effectively. A cornerstone of EBM methodology—the randomized trial—has been corrupted by vested interests.  The authors, Every-Palmer and Howick, defined EBM as “the conscientious and judicious use of current best evidence in conjunction with clinical expertise and patient values to guide health care decisions.” They singled out the field of psychiatry for specific concern, where “the problems with corruption of randomized trials are dramatic.”

Most of the psychiatric evidence base has been funded by the pharmaceutical industry, often without the relationships being disclosed. “Between two-thirds and three-quarters of all randomized trials in major journals have been shown to be industry funded.” One of the consequences of this has been publication bias: positive results are published; negative results are not. The best current estimate is that half of all completed clinical trials have never been published in academic journals. Some trials are never registered.

There is also evidence that industry-funded studies exaggerate the treatment effects in favor of the product preferred by their sponsor. One study reviewed industry-funded studies of atypical antipsychotics and found that 90% of the trials showed superiority of the sponsor’s drug. The studies had been designed “in a way that would virtually guarantee the favoured drug would ‘win.’”

Among their recommendations, Every-Palmer and Howick suggested that all clinical trials should be registered and reported. There needs to be more investment in independent research. Evidence-ranking schemes also need to be modified to account for industry bias. These suggestions would be helpful corrections for the corruption of the randomized trial methodology, but what if there are additional problems? For example, merely correcting problems with the misuse of randomized trials would not address concerns related to clinical expertise or patient values.

If current medical science is reaching its limits with some complex illnesses, as Every-Palmer and Howick said was one possibility for the lack of progress with EBM, then further gains will be hard to come by. This would seem to be true with mental illness and addiction, which are diagnosed with the Diagnostic and Statistical Manual (DSM), 5th edition. DSM diagnoses are consensus-based decisions about clusters of symptoms and do not have any objective laboratory measure. Thomas Insel, the Director of the National Institute of Mental Health (NIMH), said that diagnosis with the DSM was equivalent to “creating diagnostic systems based on the nature of chest pain or the quality of fever.”

A further compounding error could be when the role of clinical judgment is neutralized as a result of an overreliance upon the trump of scientific—real or imagined—evidence. Kiene and Kiene noted how the reputation of clinical judgment in medicine has undergone a “substantial transformation” over the last century with the rise of modern research methodology.  “A primary mission [in medical progress] therefore became ‘to guard against any use of judgement’, and it was executed through clinical trials.”

Giovanni Fava pointed to the increasing crisis in psychiatric research and practice because “Psychopathology and clinical judgment are often discarded as non-scientific and obsolete methods.” He noted how the concept of evidence-based medicine has achieved widespread endorsement in all areas of clinical medicine, including psychiatry. But randomized trials were not intended to answer questions about the treatment of individual patients. “The results may show comparative efficacy of treatment for an average randomized patient, but not for pertinent subgroups formed by characteristics such as severity of symptoms, comorbidity and other clinical nuances.”

An aura of authority is given to collections of “best available evidence”, which can in turn lead to major abuses that produce “inappropriate guidelines” for clinical practice. The risk is especially serious as a result of the substantial financial conflicts of interest in medical societies and with the authors of the medical guidelines for clinical practice within those societies.

Special interest groups are thus using evidence-based medicine to enforce treatment through guidelines, advocating what can be subsumed under the German language term of “Leitkultur”, which connotes the cultural superiority of a culture, with policies of compulsory cultural assimilation. In psychiatry, such process has achieved strong prescribing connotations, with a resulting neglect of psychosocial treatments.

Given the existing crisis within psychiatry, especially with the questionable validity and reliability of diagnosis within the DSM, evidence-based treatment guidelines that were developed and disseminated within such a culture require radical revision or should be used with extreme caution. The evidence for their efficacy is lacking.