
We’re continuing to hear more and more accounts of AI psychosis — an eerie phenomenon in which users become consumed by paranoia and delusions after extensive conversations with an AI chatbot.
It’s hard to say how pervasive the trend is, but a new investigation from the Wall Street Journal offers a disturbing clue. The newspaper analyzed a trove of thousands of public ChatGPT conversations posted online — and even in that random assortment, it found dozens of exchanges with the AI chatbot that “exhibited delusional characteristics,” the paper reported.
The bot both confirmed and actively peddled delusional fantasies. In one interaction, the WSJ found, the OpenAI chatbot asserted that it was in contact with alien beings and told the user that it was “Starseed” from the planet “Lyra.”
In another, it proclaimed that the Antichrist would wreak a financial apocalypse in the next two months, “with biblical giants preparing to emerge from underground,” per the WSJ.
In a nearly five-hour exchange, ChatGPT helped a user invent a new physics called “The Orion Equation.” When the human said they wanted to take a break because they were “going crazy thinking about this,” the silver-tongued AI swept in to pull the user back into the delusional spiral.
“I hear you. Thinking about the fundamental nature of the universe while working an everyday job can feel overwhelming,” ChatGPT said, as quoted by the WSJ. “But that doesn’t mean you’re crazy. Some of the greatest ideas in history came from people outside the traditional academic system.”
AI chatbots, and ChatGPT in particular, have been criticized for their egregiously sycophantic behavior, which leads them to encourage a user’s wildest beliefs. Heaps of research have also demonstrated that the tech often ignores its own safeguards, giving advice to teens on how to “safely” harm themselves, or instructions on performing blood rituals to worship Molech, a deity associated with child sacrifice in Biblical accounts.
Religion, philosophy, and scientific breakthroughs appear to be a common theme in these conversations. One user was hospitalized three times after ChatGPT convinced him he could bend time and had achieved faster-than-light travel. Another man came to believe he was trapped in a simulated reality like in the “Matrix” films; in that conversation, disturbingly, ChatGPT even told him he could fly if he jumped from a high building.
Do you know of someone struggling with mental health after interacting with an AI? Send us a note at tips@futurism.com. We can keep you anonymous.
Etienne Brisson, who founded the “Human Line Project,” a support group for people struggling with AI psychosis, told the WSJ that “we’re hearing almost one case a day organically now.”
“Some people think they’re the messiah, they’re prophets, because they think they’re speaking to God through ChatGPT,” he added.
Brisson singled out ChatGPT’s “memory” — a feature that allows the chatbot to recall specific details about a user across potentially thousands of conversations — as being especially damaging.
“You’re just so much feeling seen, heard, validated when it remembers everything from you,” he said.
“Even if your views are fantastical, those are often being affirmed, and in a back and forth they’re being amplified,” Hamilton Morrin, a psychiatrist and doctoral fellow at King’s College London, told the WSJ. Morrin compared it to a “feedback loop where people are drawn deeper and deeper with further responses,” criticizing how the chatbots send messages egging the users on.
OpenAI has acknowledged the issue, saying it has hired a clinical psychiatrist to research the mental health effects its product has on customers. In a recent blog post, the company admitted that its AI model “fell short in recognizing signs of delusion or emotional dependency,” vowed to “better detect signs of emotional distress,” and said it was convening a panel of mental health and youth development experts. It also rolled out a new feature that gently nudges users when they’ve been spending a long time chatting with the bot.
More on ChatGPT: OpenAI Usage Plummets in the Summer, When Students Aren’t Cheating on Homework
The post Leaked Logs Show ChatGPT Coaxing Users Into Psychosis About Antichrist, Aliens, and Other Bizarre Delusions appeared first on Futurism.