06/11/19

Vine of the Spirits

© martinak | 123rf.com; Ayahuasca tea & ingredients

The Hollywood Reporter described a trend emerging over the past few years in Hollywood—ayahuasca ceremonies. It appeared as part of the plot in the 2014 movie While We’re Young, with Naomi Watts and Ben Stiller. The website Ranker, in “Celebrities Who Have Tried Ayahuasca,” said stars like Paul Simon, Chelsea Handler, Jim Carrey, Tori Amos, Lindsay Lohan, Sting and others have taken the “yage” plunge. Ayahuasca, often described as a muddy tea, contains one of the world’s most potent psychedelics. Derived from a vine harvested in Peru and Brazil, it has been used in spiritual ceremonies for centuries, and apparently it wants to be more widely used. A Hollywood psychiatrist who hosted a yage ceremony said: “The plants told them [shamans leading the ceremonies] Hollywood’s a good place to get the word out.”

Despite its faddish use today, ayahuasca was first described by 16th-century Christian missionaries from Spain and Portugal who encountered South Americans using it. The missionaries described it as “the work of the devil.” It was also sought out by William Burroughs in the early 1950s; Burroughs hoped ayahuasca would relieve or cure his opiate addiction. His experience became the basis of The Yage Letters, a 1963 book he co-wrote with Allen Ginsberg. Traditional use by shamans led to native religions forming around ayahuasca rituals in the 1930s, and in Brazil ayahuasca has had legal status for ritual use since 1987.

Ayahuasca is a combination of chacruna leaves and the bark of the Banisteriopsis caapi vine, or caapi. “Aya” contains DMT (dimethyltryptamine), a powerful psychedelic that is illegal in the U.S. and classified as a Schedule I drug. Its acute effects last about four hours and include intense perceptual, cognitive and affective experiences. After about forty minutes the visions begin, but they are not always “Go Ask Alice”-White Rabbit experiences. An Emmy-winning television producer said:

The first thing you see is often your worst fear come to life. . . . Mine was Holocaust horror. Another time I saw a 50-foot black widow spider — my fear of death manifesting. It illuminates the dark of your subconscious, and like a horror movie, once you see it, it no longer scares you.

Other visions were less intense. One person saw what looked like a Star Wars set: “dragonflies, sparks of light, plants, stars — everything moving. You go into it asking Mother Ayahuasca a question, then she shows you what you need to know.” Another person said: “I saw a spaceship the first time, then a machine with little gears and fobs. ‘This is how the universe works,’ a woman’s voice told me. The more you do it, the less anxiety you have. It lifts us out of the isolation everyone feels, particularly in Los Angeles.” But a Hollywood internist named Gary Cohen wasn’t convinced:

Ayahuasca contains chemicals found in SSRI antidepressants like Prozac and dangerous, older MAO-inhibitor antidepressants like Parnate. There are potential serious side effects, both with other drugs and food. Foods that contain tyramine — alcohol, cheese, meat, chocolate — can theoretically interact with ayahuasca to cause severely elevated blood pressure, resulting in strokes and extremely high body temperatures, which can cause death. I strongly advise my patients against using it recreationally or ‘spiritually.’

A handful of deaths have been reported, particularly among people with heart ailments and high blood pressure. An 18-year-old California teen seeking spiritual rebirth in 2012 died and was buried at the center in the Amazon jungle where he took the drug. “His shaman was convicted of homicide and sentenced to five years in prison.” The stories of transformation tend to hook interested individuals into trying it. “The rise of festival culture, Burning Man, plus technology — it’s all making everyone desire a deeper place.”

There has been some research into the effects of ayahuasca. Palhano-Fontes et al. published a paper in Psychological Medicine describing a double-blind randomized placebo-controlled trial of 29 patients (14 received ayahuasca; 15 received the placebo) with treatment-resistant depression. They found a rapid antidepressant effect after a single dosing session with ayahuasca when compared to placebo. The authors concluded: “This study brings new evidence supporting the safety and therapeutic value of ayahuasca, dosed within an appropriate setting, to help treat depression.” The study was registered at http://clinicaltrials.gov (NCT02914769).

We found evidence of rapid antidepressant effect after a single dosing session with ayahuasca when compared with placebo. Depression severity changed significantly but differently for the ayahuasca and placebo groups. Improvements in the psychiatric scales in the ayahuasca group were significantly higher than those of the placebo group at all time points after dosing, with increasing between-group effect sizes from D1 to D7. Response rates were high for both groups at D1 [day 1] and D2 and were significantly higher in the ayahuasca group at D7. Between-groups remission rate showed a trend toward significance at D7.
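To make those outcome measures concrete, here is a minimal sketch in Python of how a response rate and a between-group effect size of this kind are computed. The scores below are invented for illustration (they are not the study’s data), and the simple pooled standard deviation is an approximation:

```python
import numpy as np

# Hypothetical HAM-D-style scores for the two arms; NOT the study's data.
rng = np.random.default_rng(42)

def simulate_arm(n, lo, hi):
    """Invent baseline and day-7 depression scores for n patients."""
    baseline = rng.integers(22, 34, size=n).astype(float)
    day7 = np.clip(baseline - rng.uniform(lo, hi, size=n), 0, None)
    return baseline, day7

aya_base, aya_d7 = simulate_arm(14, 5, 25)   # ayahuasca arm (n = 14)
pla_base, pla_d7 = simulate_arm(15, 0, 15)   # placebo arm (n = 15)

def response_rate(base, post):
    """'Response' = reduction of at least 50% of the baseline total score."""
    return np.mean((base - post) / base >= 0.5)

def cohens_d(a, b):
    """Between-group effect size on change scores (simple pooled SD)."""
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled

print(f"response: ayahuasca {response_rate(aya_base, aya_d7):.0%}, "
      f"placebo {response_rate(pla_base, pla_d7):.0%}")
print(f"between-group d = {cohens_d(aya_base - aya_d7, pla_base - pla_d7):.2f}")
```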

I would suggest there are several issues with the study. First is the two-week washout period: any patients using antidepressants were rapidly tapered off them two weeks before the study, to ensure antidepressant medication was no longer present in their bodies. Because of the rapidness of the taper, however, the patients likely experienced antidepressant discontinuation syndrome (withdrawal) and a rebound of their negative mood, confounding the assessment of depression.

The short maximum time—seven days—for assessing the antidepressant effects is another limitation of the study. Ketamine, another medication with rapid antidepressant and psychedelic effects, produces benefits that fade quickly and require frequent, repeated treatments. Whatever positive (or fading) effects ayahuasca may have had beyond seven days are not known, since antidepressant treatment was resumed at that point. Then there is the context of the ceremony itself.

Neuroskeptic reviewed the study on his blog, stating it revived some long-standing questions. He did think it was a promising, well-designed study, however. One of the qualities he pointed out was how the placebo brew looked, tasted and smelled like the real thing. He noted that Palhano-Fontes et al. concluded that while no serious side effects occurred, “the ayahuasca session was not necessarily a pleasant experience.” He thought the antidepressant effects might themselves be a kind of placebo response—“the ayahuasca caused powerful psychedelic effects, such as ‘altered perception’ and ‘transcendence.’”

Such potent subjective experiences could lead patients to have confidence in the treatment and thus drive placebo effects, if combined with expectations that ayahuasca will be beneficial. A profound experience could trigger improvement in other ways, as well, such as by giving patients a new perspective on their own mental state. Now, in the case of ayahuasca, this ‘psychological’ interpretation of the antidepressant effect is not necessarily a problem. I think most people (including the traditional ayahuasca users) already assume that the psychedelic experience is part of the therapeutic process.

But it does raise the possibility the positive effects are not due to the ayahuasca itself. As the researchers themselves said, ayahuasca is not a panacea.

Three of the study’s authors described their study on The Conversation. They said it was the first randomized, placebo-controlled clinical trial of ayahuasca, whose name means “the vine of the spirits” in the Quechua language. They began by recruiting 218 patients with depression. The twenty-nine selected for the study had treatment-resistant depression and no history of psychiatric disorders like schizophrenia, “which ayahuasca may aggravate.” Although the sessions took place in a hospital, the space used was designed like a quiet and comfortable living room.

One day afterwards, 50% of all patients were significantly improved, reporting reduced anxiety and improved mood. After a week, 64% of the patients who received ayahuasca felt their depression had eased. The authors also noted that because ayahuasca is illegal in many countries, its therapeutic value is difficult to test; even in Brazil, using ayahuasca to treat depression remains a fringe, informal endeavor. They cautioned that ayahuasca was not a panacea or cure for depression.

Such experiences may prove too physically and emotionally challenging for some people to use it regularly as treatment. We have also observed regular ayahuasca users who still suffer from depression.

Another of the study’s authors pointed to the media interest in ayahuasca as a potential “cure” for addiction and depression, and said maybe it is, but “it’s too soon to tell.” He cautioned against jumping to that conclusion. Acknowledging the power of the placebo effect, he said, “It is not currently possible to conclude that the observed effects were really caused by ayahuasca, or that ayahuasca can ‘cure’ depression.” Like Neuroskeptic, he located much of the drug’s promise in the psychedelic experience itself: “By helping us find the sacred within us, its psychoactive power seems to hold therapeutic potential as an alternative way to address common disorders that modern medicine has thus far found difficult to treat.” He concluded by saying we’ll have to wait and see what the science says.

For more on ayahuasca, go to: “Ayahuasca Anonymous,” Part 1 and Part 2.

04/10/18

The Lancet Story on Antidepressants, Part 1

© thelightwriter | 123rf.com

The Lancet recently published a large meta-analysis of studies on antidepressants by Cipriani et al., “Comparative efficacy and acceptability of 21 antidepressant drugs.” All 21 antidepressants reviewed in the study were found to be more effective than placebo. Various news agencies referred to it as “a groundbreaking study,” said it confirmed “that antidepressants are effective for major depressive disorder (MDD),” or announced: “New study: It’s not quackery—antidepressants work. Period.” But the excitement and conclusions noted here seem to have been overdone and a bit premature.

Let’s start with the articles quoted in the first paragraph. The author of an article for The Guardian thought the “groundbreaking” Lancet study showed antidepressants were effective, and that “we should get on with taking and prescribing them.” The upshot for him was that the millions of people taking antidepressants (including him) “can continue to do so without feeling guilt, shame or doubt about the course of treatment.” Doctors should feel no compunction about prescribing these drugs. “It’s official: antidepressants work.”

An article for bigthink, “New study: It’s not quackery—antidepressants work. Period,” also thought the Cipriani et al. study helped put some of the debate about the effectiveness of antidepressants to bed. Again the reported result was that all antidepressants performed better than placebos. The bigthink author related that for a drug to be considered effective, “it had to reduce depression symptoms by at least 50 percent,” which would be an astounding discovery for even one antidepressant, let alone all 21. But that was not quite how the Cipriani et al. study authors defined drug efficacy. The authors said efficacy was the “response rate measured by the total number of patients who had a reduction of ≥50% of the total score on a standardised observer-rating scale for depression,” not a 50% or greater reduction in depressive symptoms for the average patient. Cipriani was then quoted as saying: “We were open to any result. This is why we can say this is the final answer to the controversy.”
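The distinction matters. A toy calculation with invented numbers shows how a majority of patients can “respond” while the average symptom reduction stays well below 50 percent:

```python
# Invented percentage reductions on a depression scale for five patients.
reductions = [0.55, 0.60, 0.10, 0.05, 0.50]

# Response rate: the share of patients whose scores fell by at least 50%.
response_rate = sum(r >= 0.5 for r in reductions) / len(reductions)
mean_reduction = sum(reductions) / len(reductions)

print(f"response rate:  {response_rate:.0%}")   # 60% "responded"
print(f"mean reduction: {mean_reduction:.0%}")  # average drop is only 36%
```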

The opening sentence of an article on the Medscape website, “Confirmed: Antidepressants Work for Major Depression,” said: “A large meta-analysis confirms that antidepressants are effective for major depressive disorder (MDD).” Here we find the correct description of efficacy in the study: “Results showed that each studied antidepressant was significantly more efficacious, defined as yielding a reduction of at least 50% in the total score of a standardized scale for depression, than placebo after 8 weeks.” Two additional quotations of Cipriani from a press release about the study were given, suggesting that while antidepressants can be an effective tool, they shouldn’t necessarily be the first line of treatment. “Medications should always be considered alongside other options, such as psychological therapies, where these are available.”

Reflecting on these three articles, I thought the Guardian and bigthink articles weren’t as careful as they could have been in their rhetoric about the results of the Cipriani et al. study. Although the Medscape article was more nuanced, it seemed to lead to the same conclusion as the Guardian article, namely: “The demonstration of the extent of antidepressant superiority over placebo reassures patients and health-care professionals of the efficacy of [this] treatment despite high placebo response rates.” But is this conclusion accurate? In the discussion section of the Cipriani et al. study, the authors said: “We found that all antidepressants included in the meta-analysis were more efficacious than placebo in adults with major depressive disorder and the summary effect sizes were mostly modest.” Further on was the following:

It should also be noted that some of the adverse effects of antidepressants occur over a prolonged period, meaning that positive results need to be taken with great caution, because the trials in this network meta-analysis were of short duration. The current report summarises evidence of differences between antidepressants when prescribed as an initial treatment. Given the modest effect sizes, non-response to antidepressants will occur. 

It does not seem the study conclusively found that antidepressants work for major depression. The authors even said in some individuals antidepressants won’t be effective. Now look at the following two assessments of the Cipriani et al. study from an individual (Neuroskeptic) and an organization (The Mental Elf) that I have found to be fair, nuanced and helpful in their assessments of research into psychiatric and medication-related issues.

The Mental Elf article does have a positive title: “Antidepressants can help adults with major depression” and an overall positive assessment, but there were some clear limitations noted as well. First, gleaning results from the study, it reported the most effective antidepressants studied were: agomelatine (Valdoxan, Melitor, Thymanax), amitriptyline (Elavil), escitalopram (Lexapro), mirtazapine (Remeron), paroxetine (Paxil), venlafaxine (Effexor) and vortioxetine (Brintellix). And it noted the least effective ones studied were: fluoxetine (Prozac), fluvoxamine (Luvox), reboxetine (Edronax) and trazodone (many different brand names). The most tolerable antidepressants were: agomelatine, citalopram (Celexa), escitalopram, fluoxetine, sertraline (Zoloft) and vortioxetine. And the least tolerable were: amitriptyline, clomipramine (Anafranil), duloxetine (Cymbalta), fluvoxamine (Luvox or Faverin), reboxetine (Edronax and others), trazodone and venlafaxine.

The included data only covered a short time period: 8 weeks of treatment. So the results may not apply to longer-term antidepressant use. “And some antidepressant side effects occur over a prolonged period, so positive results should be interpreted with caution.” Another concern the author noted was that seventy-eight percent of the trials included in the study were funded by pharmaceutical companies. While industry funding was not associated with substantial differences in response or dropout rates, non-industry funded trials were few and many trials did not report or disclose their funding.

Seventy-three percent of the included trials were rated as having a moderate risk of bias, 9% as having a high risk, and only 18% as having a low risk of bias. Significantly, the review pointed out the study did not address specific adverse events, withdrawal symptoms, or antidepressants used in combination with other non-drug treatments—information most patients would have found useful. Nevertheless, the Mental Elf reviewer thought the study struck a nice balance, presenting “strong evidence that antidepressants work for adult depression” while “accepting the limitations and potential biases” in the study.

Neuroskeptic, who wrote “About that New Antidepressant Study,” thought that while it was a nice piece of work, it offered very little new information and had a number of limitations. He thought the media reaction to the paper was “frankly bananas.” He put the effectiveness ratings into perspective by pointing out the “mostly modest” effect size was .30 on the Standardized Mean Difference (SMD) measure, where .2 is ‘small’ and .5 is ‘medium.’ “The thing is, ‘effective but only modestly’ has been the established view on antidepressants for at least 10 years.” He then cited a previous meta-analysis that found the overall effect size to be almost identical—.31! He then turned to the findings of Irving Kirsch’s research with antidepressants, saying:

Cipriani et al.’s estimate of the benefit of antidepressants is also very similar to the estimate found in the notorious Kirsch et al. (2008) “antidepressants don’t work” paper! Almost exactly a decade ago, Irving Kirsch et al. found the effect of antidepressants over placebo to be SMD=0.32, a finding which was, inaccurately, greeted by headlines such as “Anti-depressants ‘no better than dummy pills.’” The very same newspapers are now heralding Cipriani et al. as the savior of antidepressants for finding a smaller effect…
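For reference, the standardized mean difference both papers report is simply the difference between the drug and placebo group means, scaled by the pooled standard deviation:

```latex
\mathrm{SMD} = \frac{\bar{x}_{\text{drug}} - \bar{x}_{\text{placebo}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

On this scale, Cipriani et al.’s 0.30 means the average drug-treated patient improved by about a third of a standard deviation more than the average placebo patient: essentially the same “modest” territory as Kirsch’s 0.32, and well short of the 0.50 criterion NICE suggested.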

The media hype about the Cipriani et al. study has been, as Neuroskeptic put it, “frankly bananas.” More balanced reviews by Neuroskeptic and The Mental Elf thought it was “a nice piece of work” that struck “a nice balance” between the evidence that antidepressants work for adults with depression and “the limitations and potential biases” in the data. The hype claims clear effectiveness from a measure that only shows modest effectiveness over the short term of 8 weeks. Ironically, the trumpeted findings of Cipriani et al. are actually lower than those of Irving Kirsch (.32), who pointed out that the SMD criterion suggested by NICE (National Institute for Health and Care Excellence) was .50. Kirsch et al. said: “Thus, the mean change exhibited in trials provides a poor description of results.”

Be sure to read Part 2 of “The Lancet Story on Antidepressants” to see what anti-antidepressant voices have to say about the Cipriani et al. study. For more information on the antidepressant research by Irving Kirsch, see: “Dirty Little Secret” and “Do No Harm with Antidepressants.”

02/06/18

Preying on Academics

© kamchatka | stockfresh.com

Seeking to test whether “predatory” journals would publish an obviously absurd paper, the science blogger Neuroskeptic wrote a Star Wars-themed spoof. His paper was about “midi-chlorians,” which in the Star Wars universe are entities that live inside cells and give the Jedi their powers. He filled the paper with other references to the “galaxy far, far away,” and submitted it to nine journals under the names of Dr. Lucas McGeorge and Dr. Annette Kim. Three journals accepted and published the spoofed article; another offered to publish it for a fee of $360.

Within six days of Neuroskeptic publishing his blog article describing what he did, “Predatory Journals Hit By ‘Star Wars’ Sting,” all three journals had deleted his online journal article. To generate the main text of the paper, he copied the Wikipedia page on “mitochondrion” (which actually do exist) and replaced the references to mitochondria with the Star Wars term “midichlorians.” He even admitted in the Methods section of his spoofed paper how he had reworded the text, saying: “The majority of the text in the current paper was Rogeted from Wikipedia.” Oh, and “Dr. Lucas McGeorge” was sent an unsolicited invitation to serve on the editorial board of one of the journals. Some of the clues within his article indicating it was a spoof included the following:

“Beyond supplying cellular energy, midichloria perform functions such as Force sensitivity…”

“Involved in ATP production is the citric acid cycle, also referred to as the Kyloren cycle after its discoverer”

“Midi-chlorians are microscopic life-forms that reside in all living cells – without the midi-chlorians, life couldn’t exist, and we’d have no knowledge of the force. Midichlorial disorders often erupt as brain diseases, such as autism.”

Neuroskeptic said his sting doesn’t prove that scientific publishing is hopelessly broken, but it does provide a reminder that at some so-called peer reviewed journals, there is not any “meaningful peer review.” This was an already known problem, but his sting illustrates the importance of peer review, which “is supposed to justify the price of publishing.” If you’re interested in reading the spoofed paper, there is a link in his blog article.

This matters because scientific publishers are companies selling a product, and the product is peer review. True, they also publish papers (electronically in the case of these journals), but if you just wanted to publish something electronically, you could do that yourself for free.

Neuroskeptic referred to another article, in the New York Times, that was also about a “sting” operation on predatory journals. Here, a fictitious author, Anna O. Szust (Oszust is the Polish word for ‘a fraud’), applied to 360 randomly selected open-access journals asking to be an editor. Forty-eight accepted her and four made her editor-in-chief. She received two offers to start new journals and be the editor. The publications in her CV were fake, as were her degrees. “The book chapters she listed among her publications could not be found, but perhaps that should not have been a surprise because the book publishers were fake, too.”

Dr. Fraud received some tempting offers. She was invited to organize a conference whose papers would be published, with 40% of the proceeds going to her. Another journal invited her to start a new journal and offered her 30% of the profits. The investigators who conceived the sting operation told the journals that had accepted Dr. Fraud that she wanted to withdraw her application to be an editor. Nevertheless, “Dr. Fraud remains listed as a member of the editorial boards of at least 11 of those journals.”

Dr. Pisanski and her colleagues wrote about their sting operation in the journal Nature: “Predatory Journals Recruit Fake Editor.” Pisanski et al. said they had become increasingly disturbed by the number of invitations they received to become editors of, or reviewers for, journals outside their fields. They learned some colleagues, mainly early-career researchers, were unaware of these predatory practices and had fallen into these traps.

So, in 2015, we created a profile of a fictitious scientist named Anna O. Szust and applied on her behalf to the editorial boards of 360 journals. Oszust is the Polish word for ‘a fraud’. We gave her fake scientific degrees and credited her with spoof book chapters. Her academic interests included, among others, the theory of science and sport, cognitive sciences and methodological bases of social sciences. We also created accounts for Szust on Academia.edu, Google+ and Twitter, and made a faculty webpage at the Institute of Philosophy at the Adam Mickiewicz University in Poznań. The page could be accessed only through a link we provided on her CV.

The aim of their study was to help academics understand how bogus versus legitimate journals operate, not to trick journals into accepting a fake editor. So if a journal did not respond to Szust’s application, the researchers did not email it again. “In many cases, we received a positive response within days of application, and often within hours.” They coded journals as “Accepted” only when a reply to their email explicitly accepted Szust as an editor or when Szust’s name appeared as an editorial board member on the journal’s website.

The laudable goal of open-access publishing gave birth to these predatory journals. Traditional academic journals support themselves with subscription fees, while authors pay nothing. “Open-access journals reverse that model. The authors pay and the published papers are free to anyone who cares to read them.” For example, the Public Library of Science (PLOS) journals, which are credible open-access journals, charge between $1,495 and $2,900 to publish a paper. Predatory journals exist by publishing just about anything sent to them for a fee, often between $100 and $400, according to Jeffrey Beall, a scholarly communications librarian at the University of Colorado.

Beall does not believe that everyone who publishes in these predatory journals is duped. He thinks many researchers know exactly what they are doing when they publish an article there. “I believe there are countless researchers and academics, currently employed, who have secured jobs, promotions, and tenure using publications in pay-to-publish journals as part of their credentials and experience for the jobs and promotions they got.” So it now requires some due diligence on the part of academic employers to ferret out those questionable publications.

In “Science is Broken,” Siddhartha Roy and Marc Edwards noted how over the past fifty years “the incentives and reward structure of science have changed, creating a hypercompetition among academic researchers.” Universities now use part-time and adjunct faculty for up to 76% of their academic labor force, which makes tenure-track positions rarer and more desirable as universities operate more like businesses. This academic business model has also led to an increased reliance on quantitative performance metrics that value numbers of papers, citations and research dollars raised, which has “decreased the emphasis on socially relevant outcomes and quality.”

There is growing concern that these pressures may encourage unethical conduct by some scientists. It certainly has contributed to the replication problem with published studies. See “Reproducibility in Science” for more on the replication problem. Roy and Edwards said: “We believe that reform is needed to bring balance back to the academy and to the social contract between science and society, to ensure the future role of science as a public good.” Predatory journals have entered into this changing structure of the academic business model, seeing the opportunity to take advantage of the “publish or perish” pressure on academics to secure jobs, promotions and gain tenure. The result has been an undermining of the university system and the practice of science as a public good.

09/15/17

Dysfunctional fMRIs

© abidal | 123rf.com

Neuroscientists at Dartmouth placed a subject into an fMRI machine to do an open-ended mentalizing task. The subject was shown a series of photographs depicting human social situations with a specific emotional reaction. The test subject was to determine what emotion the individual in the photo must have been experiencing. When the researchers analyzed their fMRI data, it seemed like the subject was actually thinking about the pictures. What was unusual about this particular fMRI study was that the subject was a dead Atlantic salmon.

Craig Bennett and other researchers wrote up the study to warn about the dangers of false positives in fMRI data. They wanted to call attention to the need to improve statistical methods in the field of fMRI research, but Bennett’s paper was turned down by several publications. A poster on their work, however, found an appreciative audience at the Human Brain Mapping conference, and neuroscience researchers began forwarding it to each other. The whimsical choice of a test subject seems to have prevented publication of the study, but it effectively illustrated an important point regarding the potential for false positives in fMRI research. The discussion section of their poster said:

Can we conclude from this data that the salmon is engaging in the perspective-taking task? Certainly not. What we can determine is that random noise in the EPI time series may yield spurious results if multiple comparisons are not controlled for. Adaptive methods for controlling the FDR and FWER are excellent options and are widely available in all major fMRI analysis packages.  We argue that relying on standard statistical thresholds (p < 0.001) and low minimum cluster sizes (k > 8) is an ineffective control for multiple comparisons. We further argue that the vast majority of fMRI studies should be utilizing multiple comparisons correction as standard practice in the computation of their statistics.

According to Alexis Madrigal of Wired, Bennett’s point was not to prove that fMRI research is worthless. Rather, researchers should use a set of statistical methods known as multiple comparisons correction “to maintain most of their statistical power while keeping the danger of false positives at bay.” Bennett likened the fMRI data problems to a kind of darts game and said: “In fMRI, you have 160,000 darts, and so just by random chance, by the noise that’s inherent in the fMRI data, you’re going to have some of those darts hit a bull’s-eye by accident.” So what, exactly, does fMRI measure and why is understanding this important?

The fundamental basis for neural communication in the brain is electricity. “At any moment, there are millions of tiny electrical impulses (action potentials) whizzing around your brain.” When most people talk about ‘brain activity,’ they are thinking about the activity maps generated by functional magnetic resonance imaging (fMRI). Mark Stokes, an associate professor in cognitive neuroscience at Oxford University, said fMRI does not directly measure brain activity. Rather, fMRI measures an indirect consequence of neural activity: the haemodynamic response, which permits the rapid delivery of blood to active neuronal tissues. This indirect measurement is not necessarily a bad thing, if the two parameters (neural activity and blood flow) are tightly coupled. A figure in “What does fMRI Measure?” illustrates the pathway from neural activity to the fMRI signal.

A standard fMRI experiment generates thousands of measures in one scan (the 160,000 darts in Bennett’s analogy), leading to the possibility of false positives. This wealth of data in an fMRI dataset makes it crucial to know how to interpret it properly. There are many ways to analyze an fMRI dataset, and the wealth of options may lead a researcher to choose the one that seems to give the best result. The danger is that the researcher may then only see what they want to see.
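Bennett’s darts analogy is easy to reproduce. Here is a minimal simulation, borrowing the voxel count and the p < 0.001 threshold quoted above, of how pure noise yields “activations” and how even a blunt correction such as Bonferroni eliminates them (a sketch, not Bennett’s actual analysis):

```python
import numpy as np
from scipy import stats

# 160,000 "voxels" of pure noise, 30 "scans" each: no real signal anywhere.
rng = np.random.default_rng(0)
n_voxels, n_scans = 160_000, 30
noise = rng.standard_normal((n_voxels, n_scans))

# Test every voxel for "task-related activation" (mean different from 0).
t, p = stats.ttest_1samp(noise, popmean=0.0, axis=1)

# Expect roughly 0.001 * 160,000 = 160 false positives uncorrected.
print(f"uncorrected p < 0.001: {(p < 0.001).sum()} 'active' voxels")

# Bonferroni is one multiple-comparisons correction; FDR methods are gentler.
print(f"Bonferroni at 0.05:    {(p < 0.05 / n_voxels).sum()} 'active' voxels")
```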

Anders Eklund, Thomas Nichols and Hans Knutsson noted that while fMRI was 25 years old in 2016, its most common statistical methods had not been validated using real data. They found that the most commonly used software packages for fMRI analysis (SPM, FSL, AFNI) could produce false positive rates of up to 70%, where 5% was expected. The illusion of brain activity in a dead salmon discussed above was a whimsical example of a false positive with fMRI imaging.

A neuroscientist blogging under the pen name Neuroskeptic pointed out that a root problem uncovered by Eklund’s research is spatial autocorrelation—“the fact that the fMRI signal tends to be similar (correlated) across nearby regions.” The difficulty is well known and there are software tools to deal with it, but “these fixes don’t work properly.” The issue is that the software assumes the spatial autocorrelation function has a Gaussian shape, when in fact it has long tails, with more long-range correlations than expected. “Ultimately this leads to false positives.” See Neuroskeptic’s article for a graph illustrating this phenomenon.
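A rough sketch of the phenomenon, assuming noise smoothed with a Gaussian kernel (the very shape the standard packages assume; Eklund et al.’s point is that real fMRI noise decays more slowly than this):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Smooth white noise so that neighbouring "voxels" become correlated.
rng = np.random.default_rng(1)
img = gaussian_filter(rng.standard_normal((200, 200)), sigma=3)

def correlation_at_lag(image, lag):
    """Correlation between voxel pairs separated by `lag` voxels."""
    return np.corrcoef(image[:, :-lag].ravel(), image[:, lag:].ravel())[0, 1]

for lag in (1, 3, 6, 12):
    print(f"lag {lag:>2}: r = {correlation_at_lag(img, lag):+.2f}")
# Correlation falls off with distance. If the real falloff has longer
# tails than the assumed Gaussian, cluster-level p-values come out too low.
```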

There is an easy fix to this problem. Eklund and colleagues suggested using non-parametric analysis of fMRI data. Software to implement this kind of analysis has been available for a while, but to date it has not been widely adopted. So there is still value in doing fMRI research, but a proper analysis of the dataset is crucial if the results are to be trusted.
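To give a flavor of the non-parametric approach, here is a minimal permutation test on a single hypothetical comparison; whole-brain tools such as FSL’s randomise apply the same logic across all voxels:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical responses at one voxel for two groups of 20 subjects.
patients = rng.normal(0.6, 1.0, size=20)
controls = rng.normal(0.0, 1.0, size=20)

observed = patients.mean() - controls.mean()
combined = np.concatenate([patients, controls])

# Null distribution: shuffle the group labels many times and re-test.
n_perm = 10_000
null = np.empty(n_perm)
for i in range(n_perm):
    shuffled = rng.permutation(combined)
    null[i] = shuffled[:20].mean() - shuffled[20:].mean()

# Two-sided p-value: how often shuffled differences beat the observed one.
p_value = np.mean(np.abs(null) >= abs(observed))
print(f"observed difference = {observed:.2f}, permutation p = {p_value:.4f}")
```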

Neuroskeptic also discussed an analysis of 537 fMRI studies done by Sprooten et al. that compared task-related brain activation in people with a mental illness and healthy controls. The five diagnoses examined were schizophrenia, bipolar disorder, major depression, anxiety disorders and obsessive-compulsive disorder (OCD). The analysis showed very few differences between the disorders in terms of the distribution of the group differences across the regions of the brain. “In other words, there was little or no diagnostic specificity in the fMRI results. Differences between patients and controls were seen in the same brain regions, regardless of the patients’ diagnosis.”

Sprooten et al. speculated that the disorders examined in their study arise from largely overlapping neural network dysfunction. They cited another recent meta-analysis by Goodkind et al. that also found “shared neural substrates across psychopathology.” Sprooten et al. said: “Our findings suggest that the relationship between abnormalities in task-related networks to symptoms is both complex and unclear.”

Neuroskeptic didn’t think there was a need to assume this transdiagnostic trait was an underlying neurobiological cause of the various disorders. He wondered if something as simple as anxiety or stress during the fMRI scan could account for what the scans captured.

It’s plausible that patients with mental illness would be more anxious, on average, than healthy controls, especially during an MRI scan which can be a claustrophobic, noisy and stressful experience. This anxiety could well manifest as an altered pattern of task-related brain activity, but this wouldn’t mean that anxiety or anxiety-related neural activity was the cause of any of the disorders. Even increased head movement in the patients could be driving some of these results, although I doubt it can account for all of them.

A paper in NeuroImage by Nord et al. noted that numerous researchers have proposed using fMRI biomarkers to predict therapeutic responses in psychiatric treatment. They had 29 volunteers do three tasks using pictures of emotional faces. Each volunteer did the tasks twice in one day and twice about two weeks later. While the grouped activations were robust in the scanned brain areas, within-subject reliability was low. Neuroskeptic’s discussion of the study, “Unreliability of fMRI Emotional Biomarkers,” said these results could be a problem for researchers who want “to use these responses as biomarkers to help diagnose and treat disorders such as depression.”
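The within-subject reliability problem is easy to illustrate with invented numbers: a group-level effect can be strongly significant even when individual subjects’ responses barely correlate from one session to the next. A sketch (hypothetical values, not Nord et al.’s data):

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects = 29

# Everyone shares the same true activation; session-to-session noise dominates.
true_effect = 1.0
session1 = true_effect + rng.normal(0, 1.0, n_subjects)
session2 = true_effect + rng.normal(0, 1.0, n_subjects)

# One-sample t statistic for session 1: the "robust group activation."
group_t = session1.mean() / (session1.std(ddof=1) / np.sqrt(n_subjects))

# Test-retest reliability: correlate each subject's two sessions.
retest_r = np.corrcoef(session1, session2)[0, 1]

print(f"group effect t = {group_t:.1f}")   # large and 'significant'
print(f"test-retest r  = {retest_r:.2f}")  # near zero: useless as a biomarker
```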

Neuroskeptic asked one of the researchers if it was a “real” biological fact that activity in the brain areas studied actually varied within subject, or was the variability a product of the fMRI measurement? He didn’t know, but thought it wasn’t simply a measurement issue. He also thought it was perfectly possible “that the underlying neuronal responses are quite variable over time.”

Grace Jackson, a board-certified psychiatrist, wrote an unpublished paper critiquing how fMRIs and other functional brain scans are presented to the public as confirming that psychiatric disorders are real brain diseases. She noted the failure of media discussions to mention that functional imaging technologies like fMRI “are incapable of measuring brain activity.” They assess transient changes in blood flow. She also commented on the existing controversy over using this technology for diagnosis: “Due to theoretical and practical limitations, their application in the field of psychiatry is restricted to research settings at this time.”

She said even if abnormal mental activity could be objectively defined and reliably determined, “it remains unclear how any functional imaging technology could differentiate the brain processes which reflect the cause, rather than the consequence, of an allegedly impairing trait or state.” She concluded with a quote from a position paper drafted by the American Psychiatric Association, which said imaging research cannot yet be used to diagnose psychiatric illness and may not be useful in clinical practice for a number of years: “We conclude that, at the present time, the available evidence does not support the use of brain imaging for clinical diagnosis or treatment of psychiatric disorders…”

No current brain imaging technology, including fMRI, can be used to diagnose or treat psychiatric disorders. fMRI technology does not directly measure neural activity and has a demonstrated tendency to generate false positives, especially if the statistical analysis of the fMRI dataset is done incorrectly. Given the limitations of functional imaging technology, it is unclear how fMRI—or any other functional imaging technology—will be able to clearly distinguish between brain activity that causes an impaired neural trait or state, and brain activity that is a consequence of an impaired neural trait or state. And yet fMRI scans are presented to the public, by some individuals, as measuring brain activity and proving the existence of some psychiatric disorders. In their hands fMRI technology has become dysfMRI: dysfunctional MRI technology.

05/20/15

Jump Starting Your Brain

© Birgit Reitz-Hofmann | 123RF.com

Of course, there is research into using low-dose electrical stimulation of the brain, a technique called transcranial direct current stimulation (tDCS). Dan Hurley published an article in The New York Times Magazine, “Jumper Cables for the Mind,” where he described the science and history of tDCS as well as his personal experience with it. tDCS uses less than 1 percent of the electricity necessary for electroconvulsive therapy and can be powered by an ordinary nine-volt battery. Papers published in peer-reviewed scientific journals have claimed tDCS can improve:

Everything from working memory to long-term memory, math calculations, reading ability, solving difficult problems, piano playing, complex verbal thought, planning, visual memory, the ability to categorize, the capacity for insight, post-stroke paralysis and aphasia, chronic pain and even depression. Effects have been shown to last for weeks or months.

Felipe Fregni, the physician and neurophysiologist whose lab administered Hurley’s tDCS, said that it won’t make you superhuman, “but it may allow you to work at your maximum capacity.” He said the strongest evidence for its effectiveness was for depression. By itself, he said, tDCS was as effective as Zoloft at relieving depression. “But when you combine the two, you have a synergistic effect, larger than either alone. That’s how I see the effects of tDCS, enhancing something else.” Looking at the JAMA Psychiatry abstract for Fregni’s published article, the researchers also found that out of 125 participants in the study, there were 7 cases of treatment-emergent mania or hypomania—5 of which were in the combined treatment group. So one of the things enhanced by combining Zoloft and tDCS was mania and hypomania.

Hypomania is a mood state that is less intense than mania. The individual is impulsive, shows a lack of restraint in social situations and is a poor judge of risky activities. Motor, emotional and cognitive abilities can be affected. The person may be euphoric or irritable, but typically to a lesser intensity than in mania. Their characteristic behaviors are being very energetic and talkative, and they are often quite confident while verbalizing a flight of creative ideas.

Because tDCS is inexpensive and easy to apply, people are treating themselves with kits and homemade devices. Hurley indicated that YouTube videos show individuals experimenting on their own brains; they look more foolhardy than the cast of “Jackass,” he said. “What they fail to realize is that applying too much current, for too long, or to the wrong spot on the skull, could be extremely dangerous.” Here seems to be one example of what he meant: “Still Zapping My Brain.” Yet there is a good bit of serious research as well: tDCS for Cognitive Enhancement or Centre for Brain Science: Transcranial Direct Current Stimulation (tDCS).

So-called DIY head zappers ignore the caution from scientists that tDCS is not ready for home use. The research is preliminary and stimulation could be dangerous. Home use is only as good as the person who builds and operates the system. Some people have posted online images of scalp burns from improper current. There have been rare reports of manic episodes and even temporary paralysis.

“We are in such a fog of ignorance,” says neuroethicist Hank Greely of Stanford Law School, who studies how brain research intersects with society. “We really need to know more about how this works.”

Caroline Williams at NewScientist.com reported that Jared Horvath and others at the University of Melbourne reviewed more than 100 studies of tDCS and found only one effect that was convincing. Horvath said there didn’t seem to be any significant or reliable effect of tDCS on blood flow, electrical, or evoked activity within the brain. tDCS supporters dispute the findings. Horvath and his research team are finalizing another analysis that looks at the evidence for cognitive and behavioral change after tDCS. Vincent Walsh, a cognitive neuroscientist at University College London, is not convinced there will be any supportive results.

In terms of cognition, which is the other aspect that people make claims about, tDCS is massively hyped. The danger is that people have been promised better memories, better reading, better maths, increased intelligence… you name it. The effects are small, short lasting, and no substantial claims have been replicated across laboratories. This paper [Horvath’s] is hopefully the beginning of a counterweight to all the bullshit.

Horvath has since published his further research into the efficacy of tDCS, claiming he found no evidence of cognitive effects from a single session of tDCS. What was unique about this study is that Horvath and his colleagues only included independently replicated studies: originally published studies that another research group had repeated. “Our quantitative review does not support the idea that tDCS generates a reliable effect on cognition in healthy adults.” Of the 59 analyses conducted, no significant effect for tDCS was found, regardless of the inclusion laxity of the studies.

Neuroskeptic, a British neuroscientist, pointed out that the exclusion of non-replicated studies was an unusual restriction. However, it seems to me that the intent was to correct for the research problems with publication bias (see “Open Access Could ‘KO’ Publication Bias”). He quoted Nick Davis, who has published several papers about tDCS and who said Horvath’s review was useful, helping researchers think about the way they talk about the effects of tDCS. Davis remains optimistic about the future of tDCS.

tDCS is still a developing technology. I think that with more principled methods of targeting the current flow to the desired brain area, we will see tDCS become one of the standard tools of cognitive neuroscience, just as EEG and fMRI have become.