A grim scoop from the Wall Street Journal: an automated review system at OpenAI flagged disturbing conversations that a future mass shooter was having with the company's flagship AI, ChatGPT. But despite employees at the company urging leadership to warn law enforcement, OpenAI opted not to.
The 18-year-old Jesse Van Rootselaar ultimately killed eight people, including herself, and injured 25 more in British Columbia earlier this month, in a tragedy that shook Canada and the world. What we didn't know until today is that employees at OpenAI had already been aware of Van Rootselaar for months, and had debated alerting authorities because of the alarming nature of her conversations with ChatGPT.
In the conversations with OpenAI's chatbot, according to sources at the company who spoke to the WSJ, Van Rootselaar "described scenarios involving gun violence." The sources say they recommended that the company warn local authorities, but that company leadership decided against it.
An OpenAI spokesperson didn't dispute those claims, telling the newspaper that the company banned Van Rootselaar's account but decided that her interactions with ChatGPT didn't meet its internal criteria for escalating concerns about a user to police.
“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” the company said in a statement to the paper. The spokesperson also said that the company had reached out to assist Canadian police after the shooting took place.
We've known since last year that OpenAI is scanning users' conversations for signs that they're planning a violent crime, though it's not clear whether that monitoring has ever successfully headed off an incident.
Its decision to engage in that monitoring in the first place reflects an increasingly long list of incidents in which ChatGPT users have fallen into severe mental health crises after becoming obsessed with the bot, sometimes resulting in involuntary commitment or jail, as well as a growing number of suicides and murders that have led to numerous lawsuits.
In a sense, how to deal with threatening online conduct is a longstanding question that every social platform has grappled with. But AI brings difficult new wrinkles to the topic, since chatbots can engage with users directly, sometimes even encouraging bad behavior or otherwise responding inappropriately.
Like many mass shooters, Van Rootselaar left behind a complicated digital legacy — including on Roblox — that investigators are still wading through.
More on OpenAI: AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking