01/19/18

Nature, Red in Tooth & Claw, Part 1

© master1305 | stockfresh.com

In this short video, you will see two lions stalking, and then one of them killing, a zebra. The two zebras were caught by surprise, but one got away unharmed. The photographer then seems to have edited the footage to show a somewhat later confrontation between one of the lions and the unharmed zebra over its downed and dying partner. Eventually that zebra runs off when it becomes apparent its partner isn’t getting up. Now let’s see how your interpretation of the opening chapters of Genesis will influence your understanding of what you see in the video.

How does “Nature, red in tooth and claw,” fit into God’s plan? Tennyson wrestled with this question in his 1850 poem, In Memoriam. Under the influence of geologists like Charles Lyell, there was a growing acceptance of uniformitarianism over catastrophism in Victorian England. Uniformitarianism suggested the earth’s geologic processes in the past acted with essentially the same intensity as they do in the present. Catastrophism held that the earth originated through supernatural means, and that a series of catastrophic events, such as the biblical Flood, formed what geologists saw in rock formations and anthropologists found in fossils.

The acceptance of long geologic periods of time in uniformitarianism challenged the belief, originating with Bishop Ussher, that the earth was around 6,000 years old. See “Crumbling Pillars?” and “The Fall of the Chronology of Ussher” for more on Ussher. Catastrophism, however, fit with a younger view of the age of the earth. Belief in the truth of the Bible seemed to be undermined by this new geologic theory. “After the discoveries of Charles Lyell, and other geologists, discoveries which undermined the literal truth of the Bible, could one retain one’s faith in Christianity?”

A better way of stating the above dilemma would be that the discoveries of Lyell and others undermined a literal interpretation of biblical passages that had been used to support a younger age for the earth. An older earth was at odds with interpreting Genesis 1 to mean God accomplished his creative works in the space of six consecutive twenty-four hour days. A variety of approaches to interpret the creation days in Genesis 1 have been suggested, as Vern Poythress reviewed in his booklet, Christian Interpretations of Genesis 1. Some of the approaches are: the mature-creation theory, the gap theory, the intermittent-day theory, the analogical days theory and the day-age theory. See the link for a free pdf of the booklet and a description of the various approaches noted here.

Young earth creationists, like those at Answers in Genesis (AiG), argue that all death, human and animal, was the result of the Fall. Writing for AiG in “Did Death of Any Kind Exist Before the Fall?”, Simon Turpin said, “Human physical and spiritual death, together with the death of animals, came about through the disobedience of one man.” That man, of course, was Adam. Turpin laid out biblical support for linking human and animal death to the Fall by looking at nine “key passages” from Genesis 1, 2 and 3 through Revelation 21-22. Although he gave the impression that he had thoroughly researched and exegeted the issue, I have serious reservations about his discussion of the evidence and his conclusions.

For example, he gave the standard AiG argument for why the days of creation in Genesis must be understood as 24-hour days and should not be understood in any other sense. The genre of Genesis 1:1-2:3 is not poetic, according to Turpin. Genesis 1-11 is historical narrative in the same way Genesis 12-50 is: “There is no transition from non-historical to historical and it is not treated as a separate literary category from Genesis 12–50.” Additionally, “The days of Genesis 1 are six literal 24-hour days (Exodus 20:11) which occurred around 6,000–10,000 years ago.” This is the crux of the AiG argument against old earth creation and their rejection of animal death before the Fall. See “Does Anybody Really Know What Time Is?,” “What’s In A Day?” and “Genealogies In Genesis” for challenges and alternatives to an AiG position on Genesis 1 and the age of the earth.

If Genesis is interpreted through the lens of uniformitarian geology then the fossil record documents that millions of years of earth’s history are filled with death, mutations, disease, suffering, bloodshed, and violence. However, if the days of creation in Genesis 1 were only 24 hours long then there is no room for the millions of years of death, struggle, and disease to have taken place before Adam disobeyed God.

A second reason given by Turpin and AiG that Genesis 1 suggested there was no death before Adam’s Fall was “the vegetarian diet prescribed to both man and animals in Genesis 1:29-30 ruling out any carnivorous behavior before the Fall.” In his commentary on Genesis 1-4, C. John Collins pointed out that while Genesis 1:29-30 does say humans and animals were given plants to eat, “it does not say they ate nothing else.” Moreover, if we take the passage to mean a vegetarian diet for these animals, it only applies to creatures living on the land. “It says nothing about anything that lives in the water, many of which are carnivorous.”

Collins also said it was a mistake to read Genesis 2:17 as implying that physical death did not affect the creation before the Fall. He thought the focus of this death was spiritual death, addressed to Adam alone (the “you” is masculine singular) and then appropriated by the woman in Genesis 3:2-3. “It applies to human beings and says nothing about the animals.”

From all of this we may conclude that Genesis does say that changes have come into human nature as a result of the fall—pain in childbearing, other afflictions of body and soul, death, frustration in ruling creation—but it does not follow that nonhuman nature is affected in the same way.

Turpin also cited Geerhardus Vos’s seminal book, Biblical Theology, a couple of times in support of his assertions. The passage from which both of Turpin’s quotes were taken was a section where Vos addressed “the principle of death symbolized by the dissolution of the body.” Vos was countering the view that human death preceded the Fall and had nothing to say about animal death. We could go on, but my intention was to illustrate how there are alternate possible interpretations of the passages cited by Turpin, and other views of the six creation days of Genesis 1 that can fit with a biblical sense of the text. The AiG way to interpret Genesis 1-11 is not the only biblically legitimate way.

Ted Davis wrote a provocatively titled article for BioLogos, “Does Death Before the Fall Make God a Liar?” He addressed the same young earth creationist (YEC) and AiG claim that animal death was a direct result of the Fall. Davis reflected on a critique of a special issue of the Christian Research Journal devoted to the question, “Where Do We Come From?” The author of the AiG article, “Compromised Creation,” said she appreciated how the articles demonstrated “the impossibility of Darwinian evolution and the bankruptcy of theistic evolution.” But she found the issue dangerously compromised since many of the authors accepted an old earth. There was a general assumption of millions of years of living and dying.

There can be no argument that the fossil record is a graveyard full of evidence of disease, violence, carnivory, suffering, and death. To assume (as many authors implicitly do in this journal) that such miseries were all part of God’s “very good” creation (so named by God in Genesis 1:31) is to impugn God’s character. If God had called a world already full of bloodshed and death “very good,” then He either had a cruel sense of irony or didn’t know what He was talking about, or worse, He is a liar.

Davis pointed to Psalm 104, which praises God for the many wonders in creation, including the young lions who “roar for their prey, seeking their food from God.” There is the sea, which teems with innumerable creatures, both small and great. “These all look to you, to give them their food in due season. When you give it to them, they gather it up; when you open your hand, they are filled with good things” (Psalm 104:27-28). Davis said he does not see how to reconcile this Psalm with YEC theodicy.

In A Biblical Case for an Old Earth, David Snoke said that if you concede the Bible teaches animals died before the Fall, many of the objections to an old earth fade away. “The whole point of an old-earth view is to say that things are as they appear, and the earth is full of fossils and fossil matter such as coral and limestone.” He thought that from a scientific standpoint, either the earth was old, or it simply appeared old. However, the mature creation or appearance of age view creates problems of its own, both for science and for YEC/AiG theology.

In Redeeming Science (pp. 116ff; a pdf copy is linked here) Vern Poythress noted several objections to the appearance of age view. First, a mature creation view implies that God has deceived us. Second, it makes scientific investigation illegitimate. Third, from an AiG perspective, it would falsely imply that death preceded the Fall. Lastly, it would undermine the YEC and AiG understanding of the biblical teaching of Noah’s flood.

Dr. Snoke presented what he saw as two valid interpretive options on the age of the earth from a scientific viewpoint, meaning he accepted that the scientific evidence in both cases would suggest an old earth. Vern Poythress then showed how a consistent mature creation view could lead to both theological and scientific problems for a young earth that only gives the appearance of being old.

So it seems the “nature, red in tooth and claw” illustrated in the opening video can fit within an interpretation of Genesis 1 consistent with an old earth view. Attempting to tie together the origins of human and animal death in the manner YEC and AiG do is both scientifically and theologically invalid. Look for more discussion on this issue in Part 2 of this article.

01/16/18

Biomedical ‘Big Brother’

© lightsource | stockfresh.com

The FDA approved a new pharmaceutical product that combines technology and medication: Abilify MyCite, “the first drug in the U.S. with a digital tracking system.” The pill contains a sensor that tracks whether a patient has taken their medication. The pill’s sensor sends a message to a wearable patch, which then transmits the information to a mobile app, enabling the patient to track their ingestion of the medication on their smartphone. Patients can also “consent” for others to access the information through a web-based portal. As more than one person observed, the old hide-the-pill-under-your-tongue trick doesn’t work with the new technology.

Science Alert said the sensor, called an Ingestible Event Marker (IEM), is about the size of a grain of sand. It’s made of safe levels of copper, magnesium and silicon. When the pill and IEM are swallowed, stomach acid activates the sensor, which sends an electrical signal to the patch. “The patch records the date and time the pill was ingested, and relays this information to a smartphone app.”

A New York Times article on the approval, “First Digital Pill Approved to Worries About Biomedical ‘Big Brother,’” noted that experts estimate nonadherence or noncompliance with medications costs about $100 billion per year. Much of that cost is said to be due to patients getting sicker and needing additional treatment or hospitalization. Dr. William Shrank said: “When patients don’t adhere to lifestyle or medications that are prescribed for them, there are really substantive consequences that are bad for the patient and very costly.” Ameet Sarpatwari said that while the pill has the potential to improve public health, “If used improperly, it could foster more mistrust instead of trust.”

The IEM could be used to monitor medication compliance with post-surgical patients or for individuals required to use a digital medication as a condition for release from psychiatric facilities. “Asked whether it might be used in circumstances like probation or involuntary hospitalization, Otsuka officials said that was not their intention or expectation, partly because Abilify MyCite only works if patients want to use the patch and app.”

Nevertheless, Abilify was said to be an unusual choice for the first sensor-embedded medication. As an antipsychotic or neuroleptic, Abilify is approved for treating schizophrenia and bipolar disorder, and as an adjunct for major depression. Some of these individuals may have delusions or become paranoid about their doctor or what the doctor intends. How receptive will they be to using a system that monitors their behavior and then potentially signals their doctor?

Dr. Paul Appelbaum of Columbia University’s psychiatry department said: “You would think that, whether in psychiatry or general medicine, drugs for almost any other condition would be a better place to start than a drug for schizophrenia.” Dr. Jeffrey Lieberman said many psychiatrists will likely want to try Abilify MyCite, but it has not yet been shown to improve medication adherence. “There’s an irony in it being given to people with mental disorders that can include delusions. It’s like a biomedical Big Brother.”

Many patients aren’t compliant with neuroleptics because of the side effects. These side effects can include: weight gain, diabetes, pancreatitis, gynecomastia (abnormal breast tissue growth), hypotension, akathisia (a feeling of inner restlessness), cardiac arrhythmias, seizures, sexual dysfunction, tardive dyskinesia, and anticholinergic effects (constipation, dry mouth, blurred vision, urinary retention and at times cognitive impairment). For more on the adverse effects of Abilify and other antipsychotics, see “Abilify in Denial,” “Broken Promises with Abilify,” “Antipsychotic Big Bang,” “Wolves in Sheep’s Clothing,” and “Downward Spiral of Antipsychotics.”

Otsuka, the pharmaceutical company with the patent rights to distribute Abilify MyCite, has not yet set a price for the drug. Although Abilify alone is now off patent, Otsuka will have a new patent time frame for Abilify MyCite. This may be why Otsuka invested so much capital in Proteus Digital Health, the California company that developed the IEM technology. Otsuka will continue to profit from Abilify through the newly patented formulation.

The labeling information for Abilify MyCite notes that the product’s ability to improve patient compliance has not yet been demonstrated. Additionally, it should not be used to track drug ingestion in “real-time,” as detection may be delayed or may not occur. Robert McQuade, an executive vice president for Otsuka, confirmed the company does not have current data showing it will improve adherence, but said they will likely study that after sales begin.

Patients’ own views of Abilify MyCite are mixed. One person who takes Abilify for schizoaffective disorder participated in the clinical trial for Abilify MyCite. Compliant with his medication, he doesn’t think he needs digital monitoring; he hasn’t had paranoid thoughts for a long time. Given the chance to take ‘digital Abilify,’ he said, “I wouldn’t take it.” Another person who sometimes stops taking his medication thought the idea behind Abilify MyCite was “overbearing.”

A third person reported his use of Abilify for 16 years to prevent recurrence of episodes of paranoia. He thought some people might use the new drug to avoid the injections of Abilify if they were noncompliant with their pills. But he said he would not use the digital Abilify. He didn’t want an electrical signal, strong enough for his doctor to read, coming out of his body. “But right now, it’s either you take your pills when you’re unsupervised, or you get a shot in the butt. Who wants to get shot in the butt?”

The above are the milder responses to the FDA’s news about Abilify MyCite. What follows are the thoughts of Michael Cornwall, a Jungian/Laingian psychotherapist who specializes in providing psychotherapy for people in psychotic states (which he refers to as ‘extreme states’) without the use of medication. He commented on his own website that digital Abilify “can now automatically send signals to your doctor, family members and the courts, to show them when you comply and swallow the pill.” Not one to pull punches, his article on Mad in America was titled: “The Orwellian New Digital Abilify Will Subjugate Vulnerable People across the US.”

He predicted there will be tragic personal injury coming from the use of Abilify MyCite to control people’s dosing compliance, “something that I believe would even make dystopian visionaries George Orwell and Aldous Huxley shudder.” Singling out California as an example, he said the state was ripe for an even more oppressive mental health “best practice” service model and standard of care for people in “extreme states” who are receiving forced in-home treatment. Like California, most states now have some version of in-home compulsory court-ordered medication treatment. Research has shown that 74% of people in extreme states who are prescribed antipsychotic medications stop taking them by 18 months. Pro-medication people find this unacceptable.

Tremendous pressure, I believe, will also be exerted by mental health care providers for people to voluntarily accept taking the new digital Abilify. I see that pressure being put on people seeking discharge from in-patient units, and pressure will be put on people in extreme states or with such histories, who will be involved anywhere on the spectrum of community mental health services and in jails and prisons too. In both mental health and penal systems, medication compliance, not providing humanistic care, is clearly the highest priority.

Yet there is reliable evidence suggesting long-term use of antipsychotics like Abilify is not necessary. Robert Whitaker wrote a paper, “The Case Against Antipsychotics,” in which he critiqued the research cited by psychiatry as evidence for long-term use of antipsychotics. Additionally, he presented a history of how antipsychotics, on the whole, have actually worsened long-term outcomes. Whitaker described a long-term study by Harrow that followed an original group of patients for 20 years (and counting) after their initial hospitalization for schizophrenia.

Harrow discovered that patients who were off medication regularly recovered from their psychotic symptoms over time. Once this occurred, “they had very low relapse rates.” By contrast, patients who stayed on medication regularly remained psychotic, and even those who did recover relapsed often. “Harrow’s results provide a clear picture of how antipsychotics worsen psychotic symptoms over the long term.” Medicated patients did worse on every domain that was measured. They were more likely to be anxious; they had worse cognitive functioning; they were less likely to be working; and they had worse global outcomes. Also see “Worse Results With Psych Meds” for more on the Harrow study.

Returning to the thoughts of Michael Cornwall, he said treatment with Abilify MyCite was “a morally bankrupt approach that ensures a soul-numbing, hi-tech compliance-monitoring device be in the digestive tract of every DSM-labeled person in an extreme state, in order to keep them in line.” It places the controlling impulse of psychiatric care “within our very guts.” To the uninitiated, Cornwall’s rhetoric may sound extreme. But I suspect that for individuals wanting to cope with their “extreme state” without medication, or struggling to live with the side effects of antipsychotics, it is spot on. Clearly, as the technology behind the IEM improves, it will be used to “convince” individuals that using a digital antipsychotic like Abilify MyCite is in their best interests, particularly if they want to be discharged from a psychiatric facility, or to continue living outside of one. The first Biomedical Big Brother has arrived.

01/12/18

Drunken Monkeys

© dracozlat | 123rf.com

In the article, “What does it mean to be human?”, the Smithsonian noted how DNA studies have demonstrated that, on average, the genetic difference between individual humans is about .1%. When the same kind of study is done comparing chimpanzees and bonobos with humans, the difference is about 1.2%. The DNA difference of humans, chimpanzees and bonobos with gorillas is about 1.6%. “The DNA evidence leaves us with one of the greatest surprises in biology: the wall between human, on the one hand, and ape or animal, on the other, has been breached.” Given the close genetic relations we humans have to chimpanzees, apes and monkeys, perhaps it will be no great surprise to hear that researchers have shown they also have a weakness for ethanol. No, really, monkeys and chimpanzees like to get tipsy.

Watch this short BBC video of monkeys stealing alcoholic drinks from unaware, distracted humans at the beach. Keep watching and you’ll also see a suspiciously tipsy monkey who has trouble standing up. The video claims there are even teetotaling monkeys, at around the same percentage as humans. “In line with human habits, most [monkeys] drink in moderation. Twelve percent are steady drinkers and 5% drink to the last drop.” The theory is that, like monkeys, we developed a taste for alcohol when we scoured the forest for ripe fermenting fruit.

This behavior also exists in the wild, independent of human drinking and fermentation behaviors. Here is a short video of a chimpanzee drinking fermented, alcoholic plant sap in the wild. The BBC also reported on a 17-year study of wild chimpanzees in the African country of Guinea where the chimpanzees repeatedly ingested naturally fermented palm wine from raffia palm trees. Local humans harvest the “palm wine” by tapping the trees at the crown and gathering the sap in plastic containers. Researchers have then seen chimpanzees—often in groups—climbing the trees to drink the fermented palm sap out of the containers.

The chimps even used drinking tools, which the researchers called leaf sponges. They would chew handfuls of leaves and crush them into absorbent sponges. Then they would dip their sponge into the liquid and suck out the contents. In order to measure the extent of the chimps’ drinking, the scientists measured the alcohol content of the wine and filmed the chimps’ “drinking sessions.” Some individuals consumed the alcohol equivalent of a bottle of wine. “[They] displayed behavioural signs of inebriation, including falling asleep shortly after drinking.”

Dr Catherine Hobaiter, from St Andrews University, said: “It would be fascinating to investigate the [behaviour] in more detail: do chimps compete over access to the alcohol? Or do those who drank enough to show ‘behavioural signs of inebriation’ have a bit of a slow day in the shade the next morning?”

The actual study, “Tools to Tipple: Alcohol Ingestion by Wild Chimpanzees Using Leaf-Sponges,” was published in the journal Royal Society Open Science. It reported that the ethanol in the palm wine varied between 3.1% alcohol by volume (ABV) and 6.9% ABV. Over 17 years the researchers observed 51 drinking events among the chimpanzees. They always used a leaf tool to drink, dipping it into the container with the fermented palm sap. “Individuals either co-drank, with drinkers alternating dips of their leaf-sponges into the fermented palm sap, or one individual monopolized the container, while others waited their turn.”

Some of the chimpanzees at Bossou consumed significant quantities of ethanol and displayed behavioural signs of inebriation. . . . Unlike examples of primates ingesting anthropogenic sources of ethanol elsewhere, such as introduced green monkeys at St Kitts targeting tourist cocktails [seen in the above video], chimpanzee attraction to fermented palm sap at Bossou does not result from former provisioning of ethanol by local people.

In another study, “Hominids Adapted to Metabolize Ethanol Long Before Human-Directed Fermentation,” Carrigan et al. suggested that an enhanced ability to metabolize alcohol in the last common ancestor of living African apes and humans may have resulted from an evolutionary change in an enzyme (ADH4). The changed enzyme enabled them to metabolize ethanol and appeared “near the time that they began using the forest floor, about 10 million years ago.” The researchers thought their findings had implications not only for understanding the forces shaping hominin adaptations to ground-based living, but also for understanding the medical complexities of humans and alcohol today.

There is a short audio on The Academic Minute on “Human Alcohol Consumption” summarizing the above research. In much the same way humans are wired to enjoy sugar, salt and fat, Carrigan et al. suggest our genes adapted to promote alcohol consumption. The theory is this gave our ancestors a dietary benefit, as ethanol was present in fermenting fruit that fell from the trees onto the ground.

The results were very clear – the ancestor of humans and our close relatives, the chimpanzee and gorilla, acquired a mutation ~10 million years ago that enable them to metabolize ethanol much more efficiently than previous ancestors. This coincided with a major global climate change that caused the African forests to shrink, and suggests our ancestors adapted to ethanol in fruit to cope with a dwindling food supply. This does not mean our genomes are adapted to the much higher levels of ethanol found in modern alcoholic beverages … and so much like with sugars, salt and fat, we are now at risk of over-consuming something that was once scarce but important.

Another academic paper by Robert Dudley, “Ethanol, Fruit Ripening, and the Historical Origins of Human Alcoholism in Primate Frugivory,” made a similar point from his research in the Republic of Panama. “Sustained evolutionary exposure to low-concentration ethanol will favor the evolution of metabolic adaptations that maximize physiological benefits associated with ethanol ingestion while concomitantly minimizing related costs.” Conversely, exposure to higher concentrations of ethanol not naturally encountered may cause harm. Dudley’s 2004 paper led to the publication of his 2014 book, The Drunken Monkey: Why We Drink Alcohol. Dudley’s work was the origin of what is called “The Drunken Monkey Hypothesis,” which proposed that “A strong attraction to the smell and taste of alcohol conferred a selective advantage on our primate ancestors by helping them locate nutritious fruit at the peak of ripeness.” In the Middle Ages, people learned to distill spirits, concentrating the natural alcoholic content of fermented fruits and grains. “The once advantageous appetite for alcohol became a danger to human health and well-being.”

The last part of the hypothesis, suggesting that the development of distillation in the Middle Ages changed the advantageous appetite for alcohol into a danger to human health, is not accurate. Hundreds, even thousands of years before that time, people understood the dangers that even fermented drink posed to human health and well-being. There is a Chinese proverb that says: “To stop drinking, study a drunkard when you are sober.” An Egyptian proverb says: “Yesterday’s drunkenness will not quench today’s thirst.” The Greek poet Theognis, writing in the sixth century BC, made several comments on the problems of overindulging in wine. Here are a few:

“Surely to drink much wine is an ill.”

“Wine maketh light the mind of wise and foolish alike, when they drink beyond their measure.”

“My head is heavy with drink, Onomacritus, and wine constraineth me; I am no longer the dispenser of my own judgment, and the room runneth round. Come, let me rise and try if haply wine possess my feet as well as my wits. I fear I may do some vain thing in my cups and have great reproach to bear.”

“He that overpasseth the due measure of drinking is no longer master either of his tongue or his mind, but telleth reckless things disgraceful to sober ears, and hath no shame in what he doeth in his cups, a wise man once, but now a fool.”

There are also biblical passages condemning drunkenness, such as Proverbs 20:1, which says: “Wine is a mocker, strong drink a brawler, and whoever is led astray by it is not wise.” Proverbs 23:29-35 is an extended passage about the negative consequences of drunkenness. Verses 31-32 read: “Do not look at wine when it is red, when it sparkles in the cup and goes down smoothly. In the end it bites like a serpent and stings like an adder.” So dividing the history of the human appetite for alcohol into advantageous before distillation and dangerous afterwards seems to miss the point.

However, the allure of alcohol as a motive for our tree-dwelling ancestors to spend more time on the ground looking for it in fermented fruit fits well with another hypothesis for why humans changed from hunter-gatherers to sedentary farmers in the so-called Neolithic Revolution.

In his paper on the origins of brewing technology in ancient Mesopotamia, Peter Damerow noted that the technique of brewing beer has been discussed as a possible motive for the development of human culture in Neolithic times. The theory suggests that rather than using grain for other foodstuffs like bread, the discovery of the intoxicating effect of ethanol in beer “caused the transition from hunting and gathering to living in stable settlements, domesticating animals, and cultivating the soil.” This happened around 7,000 BC. While there is no conclusive evidence to support this hypothesis, “there can be no doubt that the emergence of agriculture was closely related to the processing of grain after the harvest, and that beer brewing soon belonged to the basic technologies of grain conservation and consumption.”

It is intriguing and somewhat perverse to say the lure of intoxication seems to have guided human development at two crucial crossroads.

01/9/18

Their Way or the Highway

© bruno1998 | stockfresh.com

Writing for Christianity Today, Tim Stafford related what he thought was the most sobering moment of the BioLogos “Theology of Celebration” conference held in March of 2012. That was when David Kinnaman of Barna Research presented findings that more than half of U.S. pastors profess a 6-day, 24-hour creation view of Genesis 1. Fewer than one in five followed the BioLogos view, affirming an evolutionary process as God’s method of creation. The cited statistics illustrate the ongoing dispute within conservative Christian circles on how to interpret Genesis 1 and the role (if any) of evolutionary processes in creation.

BioLogos also posted an essay by Tim Keller, who was one of the participants at the 2012 conference, “Creation, Evolution, and Christian Laypeople.” Keller wanted to provide guidance to pastors ministering in a cultural context where “Many secular and many evangelical voices agree on one ‘truism’—that if you are an orthodox Christian with a high view of the authority of the Bible, you cannot believe in evolution in any form at all.” He noted there were many Christians who questioned the underlying premise of this truism, namely that science and faith were irreconcilable. He added that this left “many Christian laypeople … confused because the voices arguing that Biblical orthodoxy and evolution are mutually exclusive are louder and more prominent than any others.”

Keller sought to describe in his essay how Christians could approach three of the main difficulties the current scientific account of biological evolution presents for orthodox Christians: biblical authority, the confusion of biology and philosophy, and the historicity of Adam and Eve. In his concluding thoughts Keller cited Psalm 19 and Romans 1, which teach that “God’s glory is revealed as we study his creation.” Nevertheless, he said, we must interpret the book of nature by the book of God. His conclusion was that Christians who seek to correlate Scripture and science “must be a ‘bigger tent’ than either the anti-scientific religionists or the anti-religious scientists.”

Keller has faced strong criticism of his paper from several creationist sources. For example, Lita Cosner of Creation Ministries International said she was struck by the weakness of Keller’s assertions. She questioned Keller’s understanding of Genesis and implied he had subordinated Scripture to science. E.S. Williams on The New Calvinists said Keller was a firm believer in theistic evolution who promoted “this false view of creation in the Christian Church.”

Ken Ham was more oblique, saying Keller had misrepresented him or taken a shot at him. He also implied Keller had a low view of Scripture for Genesis 1-11 because he didn’t agree with Ham’s (Answers in Genesis’s) interpretation of those chapters. “For Genesis 1–11, they allow man’s fallible beliefs about evolution or millions of years to override the clear words in Scripture so man’s ideas can be accommodated into Scripture.” The message is clear. Any disagreement with a young earth creationist (YEC) understanding of Genesis 1-11 means you have a low view of biblical authority, or you’ve misinterpreted Scripture. It’s their way or the highway.

Ted Davis noted how theistic evolution or evolutionary creation has been controversial among Christians for over one hundred years. “It was contested hotly in the 1920s, when William Jennings Bryan sought to outlaw the teaching of evolution in public schools and universities.” Bryan saw theistic evolution as “an anesthetic which deadens the pain while the patient’s religion is being gradually removed.” Yet Answers in Genesis (AiG) said Bryan himself allowed “compromise on the days of creation.” In an excerpt of the transcript from the Scopes Trial, as Clarence Darrow cross-examined him, Bryan said he did not think the days in Genesis 1 were necessarily twenty-four hour days, and that the creation could have been going on for a very long time. “It might have continued for millions of years.”

Along with Bryan, AiG’s list of past and present “compromised” evangelical leaders includes: Charles Spurgeon, Charles Hodge, B. B. Warfield, James Montgomery Boice, Gleason Archer, Bill Bright, Norman Geisler, William Lane Craig, J. P. Moreland, Billy Graham, Bruce Waltke, and Tim Keller. “Those leaders all made the enormous mistake of interpreting Genesis differently than AiG.” As a result, they failed to contend for “the literal historical truth of Genesis 1–11, which is absolutely fundamental to all other doctrines in the Bible,” according to AiG.

It is astonishing that any given alternative to the YEC interpretation is painted as an unacceptable “compromise” arising from a cowardly desire to mute one’s faith in conformity to the world. This tendency to demonize legitimate differences of opinion or interpretation is surely one of the main reasons why so many young Christians are leaving their faith behind.

Ken Ham and AiG, of course, have a different opinion on why so many young people are leaving the church. In a 2016 article he co-authored for AiG, Ham said young people are not getting solid answers to their questions about the Bible. “Research”  (AiG research?) shows how many of these questions “are related to Genesis and scientific issues such as evolution, long ages (millions of years), dinosaurs, and Noah’s Ark.”

These young people are not getting solid answers from church leaders and parents but, sadly, are often told they can believe in the big bang, millions of years, and evolution; they’re then admonished to reinterpret or ignore Genesis while being told to “trust in Jesus!” These young people recognize the inconsistency of reinterpreting the first book of the Bible and yet being expected to trust the other books that talk about Christ. If we can doubt and reinterpret Genesis, where do we stop doubting and reinterpreting?

AiG (Ham and his co-author) pointed out a Pew Research Center study that looked at “Why America’s ‘nones’ left religion behind.” A ‘none’ is a person who does not identify with a religious group. According to Pew, 78% of religious nones report they were raised in a particular faith before shedding it in adulthood. Forty-nine percent of these said they left their childhood faith over a lack of belief.  But here we run into some apparent difficulties when interpreting the Pew data.

Pew Research said the 49% of religious nones whose lack of belief led them away from religion “include many respondents who mention ‘science’ as the reason they do not believe in religious teachings.” AiG reported this as Pew Research finding the same thing they did: “A large percent of young people are leaving the church because of questions about science that lead to doubts about God’s Word.” The Pew quote was from their above article, but the article itself didn’t give anything more specific than what was quoted. I did some searching on the Pew website and couldn’t find any further data on nones saying science was the reason they don’t now believe religious teachings, so we’ll assume what the article said is all that is available.

I don’t read the above two quotes as saying the same thing, as AiG does. There may be a significant number of young people who say they left the church or don’t believe in religious teachings because of science, but you can’t draw that conclusion from the Pew report. Pew didn’t give any data on that issue; they merely said many respondents gave ‘science’ as a reason they no longer believed in religious teachings. Another factor to consider is that the Pew data reflect all faiths, not just Christianity. So it seems AiG is illegitimately co-opting the Pew findings to support their own views when they say Pew Research found the same thing they did. Then they proclaim: “If we can’t trust the historical portions of the Bible that deal with our origins, why should we trust the message of Jesus Christ? We’ve been saying this for years now—it’s nothing new!”

Research done by the Barna Group on “Six Reasons Young Christians Leave Church” indicated there was no single reason that dominated “the break-up between church and young adults.” However, there were six significant themes for why 59% disconnect after the age of fifteen. One of those six themes was how the church comes across as antagonistic to science. The research showed that many science-minded young Christians are struggling to find ways of staying faithful to their beliefs and to their professional calling in science. The Barna Group findings seem to be in line with Ted Davis’s above opinion on why many young Christians are leaving their faith—because their church demonizes legitimate differences of opinion or interpretation. The most common reasons given by young adults who felt disconnected from church or faith because of perceived antagonism to science were as follows:

“Christians are too confident they know all the answers” (35%). Three out of ten young adults with a Christian background feel that “churches are out of step with the scientific world we live in” (29%). Another one-quarter embrace the perception that “Christianity is anti-science” (25%). And nearly the same proportion (23%) said they have “been turned off by the creation-versus-evolution debate.”

There were five other reasons in addition to how churches come across as antagonistic to science in the Barna Group findings. So perceived antagonism toward science is only one of six significant themes for why young Christians disconnect from church life. It is a factor, but it can’t be said to be the primary reason. Now let’s look at the results of another Pew Research study: The Religious Landscape Study, which “surveys more than 35,000 Americans from all 50 states about their religious affiliations, beliefs and practices and social and political views.” One of the social questions was on the participant’s views on evolution.

Among Christians, 42% said humans always existed in their present form, 5% said they didn’t know, but 54% said humans evolved in one way or another. Twenty-one percent said humans evolved through natural processes, 29% said they evolved due to God’s design, and 4% said they evolved but didn’t know how it happened.

Most evangelical Protestants (57%) said humans always existed in their present form, 5% said they didn’t know, but 38% said humans evolved in one way or another. Eleven percent said humans evolved through natural processes, 25% said they evolved due to God’s design, and 2% said they evolved but didn’t know how it happened.

Another question asked in the Religious Landscape Study was on interpreting Scripture. Among Christians, 39% said the Bible was the Word of God and should be taken literally; 33% said the Bible was the Word of God, but not everything had to be taken literally; 18% said it was not the Word of God; the rest weren’t sure one way or another.

Most evangelical Protestants (55%) said the Bible was the Word of God and should be taken literally; 29% said the Bible was the Word of God, but not everything had to be taken literally; 8% said it was not the Word of God; the rest weren’t sure one way or another.

A literal interpretation of the Bible and believing humans always existed in their present form are beliefs consistent with a YEC position on creation. And the percentages of evangelical Protestants holding those beliefs correspond to the Barna research reported above, that more than half of U.S. pastors profess a 6-day, 24-hour creation view of Genesis 1. Yet there are significant percentages of evangelical Protestants who hold to some form of human evolutionary development (38%) and believe that while the Bible is the Word of God, not everything has to be taken literally (29%).

Despite the detractors, it seems that Tim Keller’s advice in “Creation, Evolution, and Christian Laypeople” is particularly relevant to the church today. When Christians draw the line of orthodoxy at a literal interpretation of Genesis 1 to 11 and deny the possibility of a creation older than a few thousand years, they make their tent too small and in the process send those who can’t agree on their way. Hopefully they will encounter a pastor and a church who are trying to minister in the manner suggested by Keller.

01/5/18

In the Dark About Antidepressants

© tab62 | stockfresh.com

In 2011, antidepressants were the third most commonly prescribed medication class in the U.S. Mojtabai and Olfson noted in their 2011 article for the journal Health Affairs that much of the growth in the use of antidepressants was driven by a “substantial increase in antidepressant prescriptions by nonpsychiatric providers without an accompanying psychiatric diagnosis.” They added how the growing use of antidepressants in primary care raised questions “about the appropriateness of their use.” Despite this concern, antidepressant prescriptions continued to rise. By 2016, they were the second most prescribed class of medications, according to data from IMS Health.

A CDC Data Brief from August of 2017 reported on the National Health and Nutrition Examination Survey. The Data Brief provided the most recent estimates of antidepressant use in the U.S. for noninstitutionalized individuals over the age of 12. As indicated above, there was clear evidence of increased antidepressant use from 1999 to 2014: 12.7% of persons 12 and over (one out of eight) reported using antidepressant medication in the past month. “One-fourth of persons who took antidepressant medication had done so for 10 years or more.”

Women were twice as likely as men to take antidepressants, and use increased with age, from 3.4% among persons aged 12-19 to 19.1% among persons 60 and over. See the following figures from the CDC Data Brief. The first figure notes the increased use of antidepressants among persons aged 12 and over between 1999 and 2014. You can see that women were twice as likely to take antidepressants as men.

Figure 1

The second figure shows the percent of individuals aged 12 and over who took antidepressant medication in the past month between 2011 and 2014. Note how the percentages increase by age group, with the highest percentages of past-month use among adults 60 and over for both men and women.

Figure 2

The third figure shows the length of antidepressant use among persons aged 12 and over. Note that while 27.2% reported using them 10 years or more, 68% reported using antidepressants for 2 years or more. “Long-term antidepressant use was common.” Over the fifteen-year time frame of the data, antidepressant use increased 65%.

Figure 3

The widespread use of antidepressants documented above is troubling when additional information about antidepressants is considered. A February 2017 meta-analysis done by Jakobsen et al., and published in the journal BMC Psychiatry, found all 131 randomised placebo-controlled trials “had a high risk of bias.” There was a statistically significant decrease of depressive symptoms as measured by the Hamilton Depression Rating Scale (HDRS), but the effect was below the predefined threshold for clinical significance of 3 HDRS points. Other studies have indicated that differences of less than 3 points on the HDRS are not clinically observable. See “Antidepressant Scapegoat” for more information on the HDRS. Jakobsen et al. concluded:

SSRIs might have statistically significant effects on depressive symptoms, but all trials were at high risk of bias and the clinical significance seems questionable. SSRIs significantly increase the risk of both serious and non-serious adverse events. The potential small beneficial effects seem to be outweighed by harmful effects.

In his review of the Jakobsen et al. study for Mad in America, Peter Simons noted that these results add to a growing body of literature “questioning the efficacy of antidepressant medications.” He pointed to additional studies noting the minimal or nonexistent benefit in patients with mild or moderate depression, the adverse effects of antidepressant medications, and the potential for antidepressant treatment to worsen outcomes. He concluded:

Even in the best-case scenario, the evidence suggests that improvements in depression due to SSRI use are not detectable in the real world. Given the high risk of biased study design, publication bias, and concerns about the validity of the rating scales, the evidence suggests that the effects of SSRIs are even more limited. According to this growing body of research, antidepressant medications may be no better than sugar pills—and they have far more dangerous side effects.

Peter Gøtzsche, a Danish physician and medical researcher who co-founded the Cochrane Collaboration, wrote an article describing how “Antidepressants Increase the Risk of Suicide and Violence at All Ages.” He said that while drug companies warn that antidepressants can increase the risk of suicide in children and adolescents, it is more difficult to know what that risk is for adults. That is because there has been repeated underreporting and even fraud in reporting suicides, suicide attempts and suicidal thoughts in placebo-controlled trials. He added that the FDA has contributed to the problem by downplaying the concerns, choosing to trust the drug companies and suppressing important information.

He pointed to a 2006 meta-analysis of placebo-controlled trials in which the FDA reported five suicides in 52,960 patients (one per 10,000). See Table 9 of the 2006 report. Yet the individual responsible for the FDA’s meta-analysis had published a paper five years earlier, using FDA data, in which he reported 22 suicides in 22,062 patients (which is 10 per 10,000). Additionally, Gøtzsche found there were four times as many suicides on antidepressants as on placebo in the 2001 study.
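To make the size of that discrepancy concrete, here is the simple arithmetic behind the two reported rates, using only the figures cited above:

$$\frac{5}{52{,}960}\times 10{,}000 \approx 0.9 \text{ per } 10{,}000 \qquad\qquad \frac{22}{22{,}062}\times 10{,}000 \approx 10 \text{ per } 10{,}000$$

In other words, the suicide rate reported in the earlier paper was roughly ten times the rate the FDA reported in its 2006 meta-analysis.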

In “Precursors to Suicidality and Violence in Antidepressants,” Gøtzsche co-authored a systematic review of placebo-controlled trials in healthy adults. The study showed that “antidepressants double the occurrence of events that can lead to suicide and violence.” Maund et al. (of which he was again a co-author) demonstrated that the risk of suicide and violence was 4 to 5 times greater in women with stress incontinence who were treated with duloxetine (Cymbalta).

Although the drug industry, our drug regulators and leading psychiatrists have done what they could to obscure these facts, it can no longer be doubted that antidepressants are dangerous and can cause suicide and homicide at any age. Antidepressants have many other important harms and their clinical benefit is doubtful. Therefore, my conclusion is that they shouldn’t be used at all. It is particularly absurd to use drugs for depression that increase the risk of suicide when we know that psychotherapy decreases the risk of suicide. . . . We should do our utmost to avoid putting people on antidepressant drugs and to help those who are already on them to stop by slowly tapering them off under close supervision. People with depression should get psychotherapy and psychosocial support, not drugs.

Peter Breggin described “How FDA Avoided Finding Adult Antidepressant Suicidality.” Quoting the FDA report of the 2006 hearings, he noted that the FDA permitted the drug companies to search their own data for “various suicide-related text strings.” Because of the large number of subjects in the adult analysis, the FDA did not—repeat, DID NOT—oversee or otherwise verify the process. “This is in contrast to the pediatric suicidality analysis in which the FDA was actively involved in the adjudication.” Breggin added that the FDA did not require a uniform method of analysis by each drug company and an independent evaluator, as was required with the pediatric sample.

Vera Sharav, “a fierce critic of medical establishment,” the founder and president of the Alliance for Human Research Protection (AHRP), testified at the 2006 hearing. She reminded the Advisory Committee that the FDA was repeating a mistake it had made in the past, when the FDA withheld evidence of suicides from the Advisory Committee. German documents and the FDA’s own safety review showed an increased risk of suicides with Prozac. “Confirmatory evidence from Pfizer and Glaxo were withheld from the Committee.” Agency officials “obscured the scientific evidence with assurances.”

What the FDA presented to you is a reassuring interpretation of selected data by the very officials who have dodged the issue for 15 years claiming it is the condition, not the drugs. What the FDA did not show you is evidence to support that SSRI safety for any age group or any indication. They are all at risk. They failed to provide you a complete SSRI data analysis. They failed to provide you peer-reviewed critical analyses by independent scientists who have been proven right. FDA was wrong then; it is wrong now. Don’t collaborate in this. [But they eventually did]

Breggin commented that the FDA controlled and monitored the original pediatric studies because the drug companies did not do so on their own and failed to find a risk of antidepressant-induced suicidality in any age group. “Why would the FDA assume these same self-serving drug companies, left on their own again, would spontaneously begin for the first time to conduct honest studies on the capacity of their products to cause adult suicidality?”

In a linked document of two memos written by an Eli Lilly employee in 1990, Dr. Breggin noted that the individual questioned the wisdom of recommendations from the Lilly Drug Epidemiology Unit to “change the identification of events as they are reported by the physicians.” The person went on to say: “I do not think I could explain to the RSA, to a judge, to a reporter or even to my family why we would do this especially to the sensitive issue of suicide and suicide ideation. At least not with the explanations that have been given to our staff so far.” Those suggestions included listing an overdose taken in a suicide attempt simply as an overdose, even though (here he seems to be quoting from a policy or procedural statement) “when tracking suicides, we always look at all overdose and suicide attempts.” Eli Lilly brought the first SSRI, Prozac, to market in 1986.

Next time you hear someone say that the FDA studies only showed increased suicidality in children and young adults as opposed to adults, remember that the adult studies, unlike the pediatric studies, were not controlled, monitored or validated by the FDA. This is one more example of the extremes the FDA will go to in order to protect drug companies and their often lethal products.

The problems with antidepressants, most of which are SSRIs—selective serotonin reuptake inhibitors—were at least partially known as Prozac and its cousins were being developed and brought to market in the early 1990s. As the above discussion indicated, there seems to have been a disregard of the potential for multiple negative side effects from their use, up to and including the various forms of suicidality. The sleight-of-hand done by the drug companies, and apparently the FDA, means that many individuals are in the dark about the adverse side effects stemming from their SSRI medications.

01/2/18

What is the Future for Kratom?

© YANAWUT DUNTORNKIJ | 123rf.com

Kratom is back in the news as the FDA issued a public health advisory related to mounting concerns regarding the risks associated with its use. The FDA Statement from Scott Gottlieb about kratom singled out its use to treat opioid withdrawal as a particular concern. Gottlieb said: “There is no reliable evidence to support the use of kratom as a treatment for opioid use disorder.” Individuals who use kratom are playing doctor, as there are no dependable instructions for its use and there is no consultation with a healthcare professional about the product’s dangers, adverse effects or interactions with other drugs. “There’s clear data on the increasing harms associated with kratom.”

Calls to U.S. poison control centers about kratom increased 10-fold from 2010 to 2015. There are reports of 36 deaths from kratom or kratom-containing products, according to the advisory. Not surprisingly, given the opioid cocktails appearing on the streets, kratom has reportedly been laced with various opioids. “The use of kratom is also associated with serious side effects like seizures, liver damage and withdrawal symptoms.” There are currently no FDA-approved therapeutic uses of kratom.

Before it can be legally marketed for therapeutic uses in the U.S., kratom’s risks and benefits must be evaluated as part of the regulatory process for drugs that Congress has entrusted the FDA with. Moreover, Congress has also established a specific set of review protocols for scheduling decisions concerning substances like kratom. This is especially relevant given the public’s perception that it can be a safe alternative to prescription opioids.

Gottlieb pointed out that kratom is already a controlled substance in 16 countries, “including two of its native countries of origin, Thailand and Malaysia.” Australia, Sweden and Germany are among the other countries listing kratom as a controlled substance. Several states have banned kratom: Alabama, Arkansas, Indiana, Tennessee, Vermont and Wisconsin. Several others (Florida, Georgia, New York, North Carolina and Oregon) have pending legislation to ban it. Gottlieb encouraged supporters to conduct the research that will help to better understand kratom’s risk and benefit profile. “In the meantime, based on the weight of the evidence, the FDA will continue to take action on these products in order to protect public health.”

In response to the FDA advisory on kratom, the American Kratom Association (AKA) asked the FDA to “review and correct” it, claiming the advisory was based on “discredited, incomplete, and mischaracterized scientific claims” and should therefore be rescinded. Medscape reported the AKA has filed a formal dispute resolution petition with the Department of Health and Human Services challenging what it claimed was the “weak scientific basis” of the FDA advisory.

For years, the FDA has published scientifically inaccurate information on the health effects of consuming kratom, directly influencing regulatory actions by the DEA [Drug Enforcement Administration], states, and various local government entities. AKA believes the FDA health advisory on kratom will lead to more state and local bans, all based on discredited, incomplete, and mischaracterized scientific claims.

The AKA disputed the FDA claim that kratom has opioid-like abuse potential, arguing it is primarily used because it is beneficial, and not as a means to get high. The organization also downplayed the overdose risk with kratom, saying: “The handful of possible kratom-associated deaths in the US involved people taking multiple drugs, with apparent causes of death varying widely, quite unlike what is seen with narcotic-like opioids.” Reversing the FDA concern that unrestricted kratom use could worsen the opioid crisis, the AKA claimed a ban would worsen it. The AKA claimed kratom consumers are afraid they will be forced to seek out illegal opioids if kratom were banned. “It would be an outrageous and unacceptable public health outcome if the effect of the FDA assault on kratom backfires and leads to more opioid addiction and death.”

Despite the claims by the AKA, the FDA public health advisory does not seem ill-advised. A CDC Morbidity and Mortality Weekly Report in 2016 described a study of calls to U.S. poison centers between 2010 and 2015. During the study period, poison centers received 660 calls about reported exposure to kratom. The number of calls increased tenfold, from 26 in 2010 to 263 in 2015. See the figure below.

Among the calls, 90.2% reported ingestion of the drug. Isolated exposure, the use of kratom alone, was reported in 64.8% of cases. “Among calls reporting use of kratom in combination with other substances (multiple exposures), the most commonly reported other substances were ethanol, other botanicals, benzodiazepines, narcotics, and acetaminophen.”

Medical outcomes associated with kratom exposure were reported as minor (minimal signs or symptoms, which resolved rapidly with no residual disability) for 162 (24.5%) exposures, moderate (non-life threatening, with no residual disability, but requiring some form of treatment) for 275 (41.7%) exposures, and major (life-threatening signs or symptoms, with some residual disability) for 49 (7.4%) exposures.

There was one death reported in the CDC Report, but the person had also ingested paroxetine (Paxil) and lamotrigine (Lamictal) in addition to the kratom. While the FDA advisory said it was aware of 36 deaths associated with kratom products, it did not provide further information. The August 2016 DEA announcement to schedule kratom, which was later rescinded, said the DEA was aware of 15 kratom-related deaths between 2014 and 2016, but again did not provide any further information.

An article in BioMed Research International, “Following ‘the Roots’ of Kratom,” described several short-term adverse effects from kratom, including nausea, constipation, sleep problems, itching, sweating, and erectile dysfunction. Long-term adverse effects include: anorexia, dry mouth, darker skin and hair loss. Withdrawal symptoms can include: hostility, aggression, muscle and bone aches, jerky limb movements, anorexia and weight loss, and insomnia. Kratom could be a deadly substance when mixed with other compounds. Multiple fatalities with a kratom product known as “Krypton” have been reported. See “Following ‘the Roots’ of Kratom” for more particulars on the reported deaths. For more information on Krypton, see “Krypton Can Kill You.”

Table 1 of the article listed several substances found mixed with kratom in fatalities, including antidepressants, a mood stabilizer and a hypnotic sleep aid: O-desmethyltramadol (a metabolite of tramadol); propylhexedrine (an analog of methamphetamine); over-the-counter cold medications and benzodiazepines; venlafaxine (Effexor), diphenhydramine (Benadryl), and mirtazapine (Remeron); zopiclone, citalopram (Celexa), and lamotrigine (Lamictal).

According to preclinical data and case reports published in scientific literature as well as anecdotal experiences posted online, kratom is not a safe drug. Its consumption is associated per se with drug dependency, development of withdrawal symptoms, craving, serious adverse effects, and life-threatening effects, especially in a multidrug-intoxicating scenario.

Another article in the International Journal of Legal Medicine, “The Pharmacology and Toxicology of Kratom,” said there was growing international concern over kratom’s effects and safety due to “an increase in hospital visits and deaths in several countries that are said to have been caused by extracts of the plant.” The abuse potential of kratom “requires careful evaluation of its benefits and potential toxicities.”

So what is next with kratom? When the DEA reversed its decision to temporarily classify kratom as a Schedule I Controlled Substance in October of 2016, it said it would solicit public comments (which it did) and review the FDA’s “scientific and medical evaluation” of the proposed scheduling. The FDA public health advisory for kratom indicates the agency concluded there was cause for concern in keeping it unregulated. I’d guess that further action by the DEA to schedule kratom will be delayed, pending the outcome of the AKA’s dispute resolution petition against the FDA.

Gottlieb’s public encouragement of research into kratom’s possible use as a therapy for “a range of disorders” may suggest room for scheduling kratom as something other than a Schedule I Controlled Substance. If kratom were to be placed even temporarily in Schedule I, further research into its potential medical benefits would be difficult to conduct. Funding for kratom research is also hard to come by, and obtaining kratom of the quality needed for research is difficult as well. A researcher referred to the FDA requirements to develop a clinical trial for kratom as a “bureaucratic nightmare.” Edward Boyle, a kratom researcher, said: “Is it an effective treatment for opioid withdrawal, or is it another pathway to addiction? I don’t think anybody has a defined concept of where it actually lies on that continuum.”

See “The Secret of Kratom” and “Kratom: Part of the Problem or a Solution?” for more information on kratom.