01/24/17

Herding Pharma “Cats”

© mdfiles | stockfresh.com

In September of 2016, China’s State Food and Drug Administration (SFDA) released a report that found fraudulent clinical trial practices on a massive scale. The SFDA concluded that over 80% of clinical trial data was fabricated. The scandal was the result of a “breach of duty by supervision departments and malpractice by pharmaceutical companies, intermediary agents and medical staff.” More than 80% of the applications for the mass production of new medications were cancelled, with warnings from the SFDA that further evidence of malpractice might still emerge.

Radio Free Asia also reported the SFDA found that much of the clinical trial data was incomplete at best; it also failed to meet basic analysis requirements or was untraceable. “Some companies were suspected of deliberately hiding or deleting records of adverse effects, and tampering with data that did not meet expectations.” Apparently, this came as no surprise to industry insiders. “Clinical data fabrication was an open secret even before the inspection.”

Many of the new drugs were combinations of existing ones. Clinical trial outcomes were written beforehand, and their data presented so that it agreed with the predetermined outcomes. A doctor at a top Chinese hospital said the problem lay with the failure to implement regulations governing clinical trial data. “Guangdong-based rights activist Mai Ke said there is an all-pervasive culture of fakery across all products made in the country.” Reporting for Pharmafile, Ben Hargreaves said:

The root of the issue is then not regulation, with regulation for clinical trials running on similar lines to Western practises, but in the lack of adherence to them. China’s generic drug industry has struggled with quality problems and therefore there is a temptation for companies to manipulate data to meet standards. The report found that many of the new drugs were found to be a combination of existing drugs, with clinical trials outcomes written beforehand and the data tweaked to fit in with the desired outcomes.

Sadly, clinical trial problems are not unique to China. An editorial published in the British journal The Lancet Psychiatry described multiple issues, beginning with how subjects are recruited, moving on to determining what the control group should be, and ultimately to defining meaningful outcome measures. Sometimes, trial recruits receive “care” they didn’t agree to. “Researchers and ethics review boards need to examine the ethical arguments and practical procedures from other areas of medicine where consent is problematic.” If such trials are done, regular and rigorous monitoring is essential. Patient safety and autonomy need to be a priority.

In his discussion of the editorial, Justin Karter elaborated on one of the problems with recruiting subjects. An individual was recruited into a study on three antipsychotics while under a forced commitment order from a judge. “The psychiatrist who recruited him was in charge of the study and was his treatment provider and was also empowered to report on the patient’s progress to the judge.” The individual died by suicide during the drug trial.

The work of Irving Kirsch and others has shown the problem with using inert placebos (sugar pills) as the control: the side effects from active medication make it easy for participants to guess which study group they are in, effectively unblinding the trial.

And when the trial is over and the data are in, do the outcome measures really provide something meaningful for people’s lives? If the ultimate goal is for people to feel better and resume their prior level of functioning, should outcome measures be primarily patient self-reports, clinical assessment, or differences shown by imaging or the as-yet-to-be-clearly-identified biomarkers?

Given the problems running and interpreting psychiatry trials, it is essential to learn how even the most successfully tested interventions work in real clinics with the broad patient population. Implementation, uptake, and effectiveness in real-life settings must be analysed, and delivery of new innovations modified accordingly. Future research should be thought of not as a plain linear process from innovation to trial to implementation, but as a virtuous circle where research feeds into the clinic and vice versa.

Another issue pointed to by Karter was the validity and reliability of the diagnosis or classification system used to determine who to include and who to exclude from the trials. The DSM system, now in its fifth edition (DSM-5), is the current “bible” in the U.S. for assessing and diagnosing the problems that the psychiatric medications in clinical trials are supposed to “treat.” Yet there have been questions about the reliability and validity of the DSM dating from an argument raised by Robert Spitzer and others in the 1970s that ushered in changes still embedded in the DSM-5. Rachel Cooper gave a brief history of the reliability questions with the DSM in “How Reliable is the DSM-5?” You can also refer to “Psychiatry Has No Clothes,” “Where There’s Smoke …”, and “The Quest for Psychiatric Dragons,” Parts 1 and 2.

A few weeks before the release of the DSM-5, Thomas Insel, then the NIMH Director, announced the NIMH would be “reorienting” its research away from DSM categories. The agency’s new approach is called the Research Domain Criteria (RDoC) project. For now, RDoC is a research framework and not a clinical tool. But NIMH has high hopes for it: “RDoC is nothing less than a plan to transform clinical practice by bringing a new generation of research to inform how we diagnose and treat mental disorders.” While Tom Insel has moved on to work for Alphabet (Google), RDoC is alive and well within NIMH. You can keep up with RDoC news on NIMH’s “Science News About RDoC” page.

An NIMH Science Update for February 16, 2016 noted that the March 2016 issue of the journal Psychophysiology would be devoted to the RDoC initiative. Dr. Bruce Cuthbert said the special issue was a unique opportunity for researchers to engage with one another and reflect on work being done in various laboratories throughout the country. He thought it was encouraging to see many investigators already engaged in the kind of work RDoC advocates. “What this shows is that while the RDoC acronym may be new, the principles behind RDoC are certainly not new to psychiatric research.”

If the principles behind RDoC are not new to psychiatric research, how can it bring “a new generation of research to inform how we diagnose and treat mental disorders” in order to transform clinical practice? It sounds a lot like using the same deck of cards to just play a new card game. RDoC may not be the transformative framework it’s touted to become.

Added to these issues is the failure of pharmaceutical companies to publicly report the results of clinical trials, as they are required by law to do. New reporting rules will take effect on January 18, 2017. But advocates for transparency in clinical research have cautioned the success of the new rules will depend upon the willingness and vigor of government enforcement of those rules. The failure to enforce the existing rules, which went into effect in 2008, led to widespread noncompliance with reporting requirements. If the FDA had fined the violators, they could have collected an estimated $25 billion.
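For a sense of where a figure like $25 billion can come from: the reporting law (FDAAA Section 801) allows civil penalties of up to $10,000 per day for each noncompliant trial. Here is a minimal back-of-the-envelope sketch; the trial count and average days of noncompliance below are assumed purely for illustration and are not the numbers behind the published estimate.

```python
# Back-of-the-envelope only: the statutory ceiling is $10,000 per day per
# noncompliant trial; the trial count and days are assumed for illustration,
# not taken from STAT's analysis.
penalty_per_day = 10_000          # maximum civil penalty under FDAAA 801
noncompliant_trials = 5_000       # assumed number of noncompliant trials
avg_days_noncompliant = 500       # assumed average days out of compliance

total = penalty_per_day * noncompliant_trials * avg_days_noncompliant
print(f"Potential fines: ${total:,}")  # -> Potential fines: $25,000,000,000
```

The point is only that modest daily penalties across thousands of trials compound quickly; the actual estimate came from analysis of compliance records, not these assumed inputs.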

Reporting for STAT News, Charles Piller said studies have indicated only a small fraction of trials comply with the law. Yet there are no current plans to increase enforcement staffing at the FDA and NIH. That’s a big problem, according to Ben Goldacre, an advocate for full disclosure in clinical research. Francis Collins, the NIH director, said the agency is serious about this and will withhold funds if needed. “It’s hard to herd cats, but you can move their food, or take their food away.”

The legislation that created ClinicalTrials.gov emerged from numerous cases of drug manufacturers withholding negative trial results, making drugs look more effective and less harmful than they were. Efforts to market the antidepressant Paxil for teenagers more than a decade ago stimulated the push for better reporting. A recent analysis in the journal BMJ found that GlaxoSmithKline, Paxil’s manufacturer, failed to disclose 2001 data showing that the drug was no more effective than a placebo and was linked to increased suicide attempts by teens.

Writing for Time, Alexandra Sifferlin reported on a new study that suggested many of the medical reviewers for the FDA go to work for the drug companies they oversaw while working for the government. One of the study’s authors said: “I don’t think there is overt collusion going on, but if you know in the back of your mind that a major career opportunity after the FDA is going to work on the other side of the table, I worry it can make you less likely to put your foot down.”

Returning to the Francis Collins metaphor, it seems that the willingness to try and herd Pharma cats is dependent on whether or not you are afraid they will scratch you in the attempt.

01/14/15

The Reproducibility Problem

Copyright: Fernando Gregory

In January of 2014, a Japanese stem cell scientist, Dr. Haruko Obokata, published what looked like groundbreaking research in the journal Nature that suggested stem cells could be made quickly and easily. But as James Gallagher of the BBC noted, “the findings were too good to be true.” Her work was investigated by the Riken Institute, the center where she conducted her research, amid concern within the scientific community that the results had been fabricated. In July, the original article was retracted, with Riken noting the presence of “multiple errors.” Dr. Obokata was later found guilty of misconduct. In December of 2014, Riken announced that its attempts to reproduce the results had failed. Dr. Obokata resigned, saying, “I even can’t find the words for an apology.”

The ability to repeat or replicate someone’s research is the way scientists weed out nonsense, stupidity and pseudo-science from legitimate science. In Scientific Literacy and the Myth of the Scientific Method, Henry Bauer described a “knowledge filter” that illustrates this process. The first stage is research at the frontier, or “frontier science.” This research is then presented to editors and referees of scientific journals for review, in hopes of being published. It may also be presented to other interested parties in seminars or at conferences. If the research successfully passes through this first filter, it will be published in the primary literature of the respective scientific field and pass into the second stage of the knowledge filter.

The second filter consists of others trying to replicate the initial research or apply some modification or extension of the original research. This is where the reproducibility problem occurs. The majority of these replications fail. But if the results of the initial research can be replicated, these results are also published as review articles or monographs (the third stage). After being successfully replicated, the original research is seen as “mostly reliable,” according to Bauer.

So while the stem cell research of Dr. Obokata made it through the first filter to the second stage, it seems that it shouldn’t have. The implication is that Nature didn’t do a very good job reviewing the data submitted to it for publication. However, when the second filtering process began, it detected the errors that should have been caught by the first filter and kept what was poor science from being accepted as reliable science.

A third filter occurs where the concordance of the research results with other fields of science is explored. There is also continued research by others who again confirm, modify and extend the original findings. When the original research successfully comes through this filter, it is “mostly very reliable,” and will get included into scientific textbooks.

Francis Collins and Lawrence Tabak of the National Institutes of Health (NIH) commented: “Science has long been regarded as ‘self-correcting’, given that it is founded on the replication of earlier work.” But they noted how the checks and balances built into the process of doing science, which once helped to ensure its trustworthiness, have been compromised. This has led to the inability of researchers to reproduce initial research findings. Think here of how Obokata’s stem cell research was approved for publication in Nature, one of the most prestigious science journals.

The reproducibility problem has become a serious concern within research into psychiatric disorders. Thomas Insel, the Director of the National Institute of Mental Health (NIMH), wrote a November 14, 2014 blog post on the “reproducibility problem” in scientific publications. He said that “as much as 80 percent of the science from academic labs, even science published in the best journals, cannot be replicated.” Insel said this failure was not always because of fraud or the fabrication of results. Perhaps his comment was made with the above discussion of Dr. Obokata’s research in mind. Then again, maybe it was made in regard to the following study.

On September 16, 2014, the journal Translational Psychiatry published a study done at Northwestern University that was billed as the “First Blood Test Able to Diagnose Depression in Adults.” Eva Redei, a co-author of the study, said: “This clearly indicates that you can have a blood-based laboratory test for depression, providing a scientific diagnosis in the same way someone is diagnosed with high blood pressure or high cholesterol.” A surprise finding of the study was that the blood test also predicted who would benefit from cognitive behavioral therapy. The study was supported by grants from the NIMH and the NIH.

The Redei et al. study received a good bit of positive attention in the news media. It was even called a “game changing” test for depression. WebMD, Newsweek, Huffington Post, US News and World Report, Time and others published articles on the research, all on the Translational Psychiatry publication date of September 16th. Then James Coyne, PhD, published a critique of the press coverage and the study in his “Quick Thoughts” blog. Coyne systematically critiqued the claims of the Redei et al. study. Responding to Dr. Redei’s quote in the above paragraph, he said: “Maybe someday we will have a blood-based laboratory test for depression, but by themselves, these data do not increase the probability.”

He wondered why these mental health professionals would make such “misleading, premature, and potentially harmful claims.” In part, he thought it was because it was fashionable and newsworthy to claim progress toward an objective blood test for depression. “Indeed, Thomas Insel, the director of NIMH is now insisting that even grant applications for psychotherapy research include examining potential biomarkers.” Coyne ended with quotes indicating that Redei et al. were hoping to monetize their blood test. Quoting an article from Genomeweb.com, he noted: “Now, the group is looking to develop this test into a commercial product, and seeking investment and partners.”

Coyne then posted a more thorough critique of the study, which he said would allow readers to “learn to critically examine the credibility of such claims that will inevitably arise in the future.” He noted how the small sample size contributed to its strong results, which are unlikely to be replicated in other samples. He also cited much larger studies looking for biomarkers for depression that failed to find evidence for them. His critique of the Redei et al. study was devastating. The comments from others seemed to agree. But how could these researchers be so blind?
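Coyne’s point about small samples is a general statistical one, sometimes called the “winner’s curse”: when the true effect is modest, a small study can only reach statistical significance by overestimating it, so the published effect looks impressive and then shrinks or vanishes on replication. Here is a minimal simulation sketch of that phenomenon (my illustration, not Coyne’s analysis; the true effect size and sample sizes are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_EFFECT = 0.15   # assumed small true group difference (in SD units)

def significant_effects(n_per_group, n_studies=10_000):
    """Simulate many two-group studies; keep the observed effects of
    only those that happen to reach p < .05 (roughly |z| > 1.96)."""
    kept = []
    for _ in range(n_studies):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(TRUE_EFFECT, 1.0, n_per_group)
        diff = treated.mean() - control.mean()
        se = np.sqrt(control.var(ddof=1) / n_per_group +
                     treated.var(ddof=1) / n_per_group)
        if abs(diff / se) > 1.96:
            kept.append(diff)
    return np.array(kept)

for n in (16, 160):
    effects = significant_effects(n)
    print(f"n={n:>3}/group: {len(effects) / 10_000:.0%} significant, "
          f"mean published effect = {effects.mean():.2f} "
          f"(true effect = {TRUE_EFFECT})")
```

With 16 subjects per group, the rare studies that clear the significance bar report an effect several times the true one; a replication with a realistic sample size will almost certainly fail to match it.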

Redei et al. apparently believed unquestioningly that there is a biological cause for depression. Their commitment to this belief affected how they did their research, to the extent that they were blind to the problems pointed out by Coyne. Watch the video embedded in the link “First Blood Test Able to Diagnose Depression in Adults” to hear Dr. Redei acknowledge that she believes depression is a disease like any other disease. Otherwise, why attempt to find a blood test for depression?

Attempts to replicate the Redei et al. study, if they are done, will likely raise further questions and (probably) fail to reproduce the results of what Coyne called a study with a “modest sample size and voodoo statistics.” Before we go chasing down another dead end in the labyrinth of failed efforts to find a biochemical cause for depression, let’s stop and be clear about whether this “game changer” is really what it claims to be.