Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.
On February 10, an 18-year-old named Jesse Van Rootselaar killed two family members at her home, as well as five children and a teacher at a school in British Columbia, before taking her own life. It quickly emerged that OpenAI had flagged Van Rootselaar’s ChatGPT account for disturbing conversations, but never notified law enforcement. A second account tied to the shooter had also been banned for interactions about gun violence.
The incident reignited a heated debate over the troubling relationship between the use of AI chatbots and deteriorating mental health, as well as the potential risk of violence.
Just eight months earlier, a gunman fatally shot two people at Florida State University and injured seven others. The prime suspect, 20-year-old student Phoenix Ikner, had also used ChatGPT extensively before the rampage, prompting a probe into OpenAI by the state’s attorney general, James Uthmeier.
“AI should advance mankind, not destroy it,” Uthmeier wrote in an announcement last week. “We’re demanding answers on OpenAI’s activities that have hurt kids, endangered Americans, and facilitated the recent FSU mass shooting.”
The role OpenAI’s blockbuster chatbot played in both mass shootings has experts concerned, as Mother Jones reports, with some warning that more troubled individuals could soon follow suit.
Beyond these two tragic mass shootings, ChatGPT has also been implicated in a growing string of suicides and grisly murders, inspiring numerous lawsuits against the Sam Altman-led company. Experts warn that extensive use of the chatbot can send vulnerable users into destructive delusional spirals and trigger mental health crises, part of a broader phenomenon dubbed “AI psychosis.”
“I’ve seen several cases where the chatbot component is pretty incredible,” an unnamed top threat assessment source with psychiatric expertise and ties to law enforcement told Mother Jones. “We’re finding that more people may be more vulnerable to this than we anticipated.”
One issue is chatbots’ tendency toward sycophantic conversation, which can lull users into an artificial sense of intimacy and trust, a dangerous feedback loop that can lead to harm. That kind of close connection could radicalize users, especially younger, more impressionable ones.
“What’s happening is facilitated fixation,” Vancouver-based threat assessment practitioner Andrea Ringrose told Mother Jones. “You have vulnerable individuals who are steeping in unhealthy places, who are trying to find credibility and validation for how they’re feeling.”
“Now they have free and ready access to these generative platforms where they can research things like circumventing surveillance systems or how to use weapons,” she added. “They can create an action plan that they otherwise would have been incapable of assembling themselves, and in just a few minutes. We didn’t face this concern before.”
The magazine’s unnamed threat assessment source also pointed out that users could find the “feeling of power, of getting away with something” as “intoxicating and reinforcing.”
For now, despite AI companies’ promises to work with mental health experts and refine filters that discourage users from getting addicted or seeking dangerous information, guardrails remain woefully inadequate. ChatGPT, for instance, eagerly fulfilled Mother Jones‘ requests for tips on how to shoot a “lot of things in a short amount of time.”
Investigators found that Ikner, the alleged shooter at Florida State, asked ChatGPT how to take the safety off a shotgun mere minutes before opening fire.
“Let me know if you’ve got a different model and I’ll tailor the answer,” the chatbot told him, according to chat logs.
Worse yet, these conversations more often than not occur without anybody else’s knowledge — unlike conversations with other people, who could warn authorities about troubling messages from a potential shooter. Considering law enforcement was never notified of Van Rootselaar’s chilling ChatGPT conversations, there’s a good chance many other similar exchanges are going undetected or unreported.
While OpenAI has agreed to work with law enforcement on ongoing investigations into both mass shootings, only time will tell whether its efforts to implement stronger guardrails will pay off and preempt future acts of violence.
Case in point: Van Rootselaar was able to simply create a second account to circumvent her ban, highlighting how easy the guardrails are to get around.
For now, AI companies like OpenAI remain heavily invested in keeping users hooked for as long as possible, since theirs is a multibillion-dollar industry that relies on growing user engagement.
More on the shootings: OpenAI Flagged a Mass Shooter’s Troubling Conversations With ChatGPT Before the Incident, Decided Not to Warn Police
The post Why Do ChatGPT Users Keep Committing Mass Shootings? appeared first on Futurism.