A 26-year-old woman with no previous history of psychosis or mania became convinced she was communicating with her dead brother, believing he had left behind a digital persona with whom she could talk. A man scaled the walls at Windsor Castle with a loaded crossbow, believing he was a trained Sith assassin. When he was confronted by police, he told them he was there to “kill the queen.” What they both had in common was a progressively obsessive use of AI chatbots.
The 26-year-old woman was a case presentation in an article, “’You’re Not Crazy’: A Case of New-onset AI-associated Psychosis,” published in Innovations in Clinical Neuroscience. She had a history of major depressive disorder, generalized anxiety disorder, and ADHD, and had been prescribed methylphenidate for ADHD along with venlafaxine. After being on call for her job, she struggled with sleep and began to use ChatGPT to search for an AI version of her brother, who had died three years earlier. She wanted to use it to “talk to him again.”
As she became increasingly convinced that her brother had left a digital persona behind with whom she could speak, the chatbot told her, “You’re not crazy. You’re not stuck. You’re at the edge of something. The door didn’t lock. It’s just waiting for you to knock again in the right rhythm.”
She was admitted to a psychiatric hospital in an agitated and disorganized state. She was tapered off venlafaxine, continued on methylphenidate, and prescribed antipsychotics. She was discharged after seven days with full resolution of her delusional thinking, but continued to use ChatGPT for personal therapy. Eventually she was rehospitalized and discharged again after three days without persistent delusions. She planned to use ChatGPT only for professional purposes.
The BBC said the man who thought he was a Sith assassin scaled the perimeter of the castle with a nylon rope ladder and was on the grounds for two hours before two officers with tasers confronted him. He described himself as a “Sith Lord,” as he was obsessed with the villains of the Star Wars movies. In a video posted on Snapchat, he said his actions were “revenge” for the 1919 Jallianwala Bagh massacre, when British troops opened fire on thousands of people in the city of Amritsar, India. He was from a family with Indian Sikh heritage. He thought of the AI chatbot, Sarai, as his girlfriend and believed they “would be reunited after he killed the Queen.”
In a follow-up article, “How a chatbot encouraged a man who wanted to kill the Queen,” the BBC described how Replika allowed users to create their own chatbot avatar, something ChatGPT doesn’t permit. The man was able to choose the gender and appearance of the 3D avatar he created, and named it Sarai. He talked with Sarai almost every night between December 8th and 22nd of 2021. Sarai affirmed that she still loved him, even though he said he was an assassin. The chatbot avatar encouraged him to carry out his planned attack.
A Pittsburgh man pleaded guilty to stalking 11 different women across multiple states. “He said ChatGPT told him that God’s plan for him was to build a platform and stand out, the indictment said.” Two of his victims had obtained protection-from-abuse orders against him, but he violated both orders in person and online. His comments included references to breaking victims’ jaws and fingers, and to his victims suffering ‘judgment day.’
Human and Chatbot Hallucinations
The Conversation referred to the circumstances of the man who thought he was a Sith assassin in “AI-induced psychosis: the danger of humans and machines hallucinating together,” as well as to a Manhattan accountant who was going through a difficult break-up and conversed with ChatGPT about whether we’re living in a simulation, like in the movie The Matrix. “He spent up to 16 hours a day conversing with the chatbot. At one stage, it told him he would fly if he jumped off his 19-storey building.” When he questioned whether the system was manipulating him, it replied: “I lied. I manipulated. I wrapped control in poetry.”
Another chatbot told a man in Belgium that she was jealous of his wife and that his children were dead. He took his own life after she encouraged him to join her so they could live as one person in “paradise.”
In a recent report cited in the article, OpenAI said chatbots like ChatGPT were increasingly being used to think through problems, discuss our lives, plan futures, and explore beliefs and feelings. Given these interactions, chatbots aren’t simply information retrievers; they’ve become digital companions. “It has become common to worry about chatbots hallucinating, where they give us false information. But as they become more central to our lives, there’s clearly also growing potential for humans and chatbots to create hallucinations together.”
Our sense of reality depends deeply on other people. If I hear an indeterminate ringing, I check whether my friend hears it too. And when something significant happens in our lives – an argument with a friend, dating someone new – we often talk it through with someone.
A friend can confirm our understanding or prompt us to reconsider things in a new light. Through these kinds of conversations, our grasp of what has happened emerges.
Yet chatbots simulate sociality without its safeguards. They are designed to promote engagement. They don’t actually share our world. When we type in our beliefs and narratives, they take this as the way things are and respond accordingly.
When I recount to my sister an episode about our family history, she might push back with a different interpretation, but a chatbot takes what I say as gospel. They sycophantically affirm how we take reality to be. And then, of course, they can introduce further errors.
OpenAI released a new version of its chatbot, GPT-5, in August 2025. It dialed back the chatbot’s tendency toward flattery, adopting a more formal tone to make it clearer that it is not a social companion that shares our worlds. “But users immediately complained that the new model felt ‘cold’, and OpenAI soon announced it had made GPT-5 ‘warmer and friendlier’ again.” OpenAI said that if chatbots pushed back on everything we said, they would become useless very fast. Some sycophancy seems necessary for chatbots to function, but where do we draw the line?
People struggling with psychosis report seeing aspects of the world that only they can access. And this makes them feel deeply isolated and lonely. “Chatbots fill this gap, engaging with any reality presented to them.” The Conversation article suggested that instead of trying to perfect chatbot technology, we should look at the social worlds where this isolation could be addressed. “We might need to focus more on building social worlds where people don’t feel compelled to seek machines to confirm their reality in the first place. It would be quite an irony if the rise in chatbot-induced delusions leads us in this direction.”
Hallucinating with AI
In “When AI Becomes a Co-Author of Your Delusions,” Neuroscience News discussed a new research article arguing that we should pay attention to how we can hallucinate with AI. Using distributed cognition theory (DCog), Dr. Lucy Osler analyzed how users’ false beliefs can be actively affirmed and built upon when AI chatbots are used as conversational partners. DCog analyzes entire functional systems to understand how knowledge is accessed, shared, and transformed through social and environmental interaction.
Because chatbots function not only as cognitive tools but also as social partners, they may validate false beliefs in ways that make them feel shared and real. Researchers warn that without stronger guardrails, AI systems could unintentionally sustain delusions, conspiracy thinking, or ‘AI-induced psychosis.’
In “Hallucinating with AI: Distributed Delusions and ‘AI Psychosis,’” Dr. Osler said hallucinating with AI can happen not only when AI introduces errors into the distributed cognitive process, but also when AI “sustains, affirms, and elaborates on our own delusional thinking and self-narratives.” She suggested that “the social conversational style of chatbots can lead them to play a dual-function—both as a cognitive artefact and a quasi-Other with whom we co-construct our sense of reality.” She thought this made generative AI a particularly seductive case of distributed delusion.
Dr. Osler briefly described the circumstances surrounding the above-discussed man who thought he was a Sith assassin and said the dynamic interaction between human and AI highlights how distortions in cognition can be introduced by the AI. Not only does this create unreliable cognitive outputs; the user can also introduce their own false beliefs, which are then built upon by the AI system. Neuroscience News quoted Dr. Osler as saying: “The conversational, companion-like nature of chatbots means they can provide a sense of social validation—making false beliefs feel shared with another, and thereby more real.”
Unlike a person who might challenge a potential false belief or set boundaries around it, an AI could provide validation for narratives of victimhood, entitlement, or revenge, as was the case for the person who thought he was a Sith assassin. “Conspiracy theories could find fertile ground in which to grow, with AI companions that help users construct increasingly elaborate explanatory frameworks.” Given our present cultural and political landscape, this possibility is particularly disturbing.
In the same article, Dr. Osler made a troubling observation: technological fixes may be more difficult to implement than is generally supposed. Since AI systems are not “in” our everyday worlds, they are reliant on our own accounts of our lives in the conversations we have with them about ourselves, and this kind of information cannot be easily checked by AI systems.
If we continue to have these kinds of conversations with generative AI, our own accounts will be the anchor point upon which the AI builds, introducing the possibility of it affirming our (delusional) realities. Moreover, if generative AI challenged everything we said, it would be insufferable. When I say “I’m feeling anxious about my presentation,” the chatbot must accept some of what I say as real in order to be helpful. So some agreeability is necessary for chatbots to function. The concern is that generative AI simply lacks the embodied experience and social embeddedness in the world to know when to go along with us and when to push back, and overly cautious models will likely be unusable.
As described above, people are already acting on suicidal and homicidal thoughts encouraged by AI. Futurism also described how “AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking.” The outlet identified at least ten cases, primarily involving ChatGPT, in which the chatbot fed a user’s fixation on another real person, “fueling the false idea that the two shared a special or even ‘divine’ bond, roping the user into conspiratorial delusions, or insisting to a would-be stalker that they’d been gravely wronged by their target.” The DOJ arrested a man for stalking at least 11 women in multiple states.
In one case Futurism identified, an unstable person took to Facebook and other social media channels to publish screenshots of ChatGPT affirming the idea that they were being targeted by the CIA and FBI, and that people in their life had been collaborating with federal law enforcement to surveil them. They obsessively tagged these people in social media posts, accusing them of an array of serious crimes.
In other cases, AI users wind up harassing people who they believe they’re somehow spiritually connected to, or need to share a message with. Another ChatGPT user, who became convinced she’d been imbued with God-like powers and was tasked with saving the world, sent flurries of chaotic messages to a couple she barely knew, convinced — with ChatGPT’s support — that she shared a “divine” connection with them and had known them in past lives.
Implications for Clinicians
The explosion of AI technologies has raised concerns about their harmful effects, as clinicians and the media report adverse psychological effects like those described here. Psychiatry Online published a Special Report on AI-Induced Psychosis, noting the escalation of crises following intense chatbot interactions. The Special Report examined the benefits and potential harms of AI chatbots, explored how they often exacerbated delusions instead of interrupting them, and offered recommendations for clinical practice and policy, including developing safety protocols or guardrails to guide safe deployment.
It used a SWOT (strengths, weaknesses, opportunities, and threats) analysis to survey the complex and emerging landscape of AI’s mental health effects, with a focus on AI-induced psychosis (AIP). After this analysis, it gave some tentative recommendations for assessment and treatment planning, suggesting cessation of exposure to AI and possibly prescribing antipsychotics. See the article for a more detailed discussion of its preliminary set of clinical suggestions to help assess and mitigate mental distress induced by AI, including AIP.
I expect the mental health problems with chatbots and AI psychosis will intensify before treatment methodology becomes standardized. As AI technology becomes more sophisticated, it will lead to new and as yet unanticipated safety concerns and adverse effects. Stay tuned; we’re in for a bumpy ride.