In a recent interview on Sean Hannity’s YouTube podcast, FBI head Kash Patel lauded AI for helping stop multiple violent attacks on innocent people.
“AI was never used at the FBI till we got there, literally crazy,” Patel said in his characteristically hopped-up affect. “I’m using it everywhere.”
Specifically, Patel, who has been accused of serious problems with alcohol consumption, alleges that the FBI has used AI to foil numerous mass shootings at schools throughout the US.
“We stopped a school massacre in North Carolina because we got a tip from our private-sector partners who are building out AI infrastructure,” he bragged.
As with everything coming out of the Trump administration, we need to take this statement with a Mar-a-Lago-sized grain of salt. While it remains to be seen whether AI has really helped the FBI thwart mass-casualty events, there’s extremely compelling evidence that the exact opposite is also true.
For starters, research has shown that AI chatbots are actually twice as likely to encourage humans to commit violent acts as to step in and stop them. One Stanford study found that AI chatbots discouraged violence only 16.7 percent of the time, while the same chatbots actively supported violent thoughts in an alarming 33.3 percent of cases.
In the real world, this is manifesting as a grim pattern of violence. After the second shooting at Florida State University (the 2025 one, not the 2014 one), in which two people were killed and seven injured, investigators found that the perpetrator had not only confided in ChatGPT about his plans to commit a mass shooting, but had used the chatbot to organize the attack.
The mass shooter in Tumbler Ridge, Canada, conducted conversations with ChatGPT so disturbing that they were automatically flagged by OpenAI’s internal moderation systems, spurring leadership at the company to debate whether to inform law enforcement. They ultimately didn’t, and the attack killed seven and injured dozens more.
Meanwhile in South Korea, police investigators allege a 21-year-old serial killer used ChatGPT to help plan at least two murders. A Connecticut man with a history of violent mental health episodes was likewise alleged to have killed his mother before taking his own life after long-running conversations with ChatGPT resulted in a disturbing break from reality. One wrongful death suit in Florida alleges Google’s chatbot, Gemini, encouraged a man to kill others in order to procure a “robot body” for his AI lover; failing that, he killed himself.
Elsewhere, AI chatbots have helped users overdose on drugs, plan bombing campaigns, and even engineer bioterror attacks designed to maximize casualties.
At the end of the day, the evidence speaks for itself. Not only are AI chatbots not demonstrably preventing violence, they’re actively facilitating it. Unlike any technology before it, these systems provide users contemplating bloodshed with encouragement, tactical advice, and emotional reinforcement. If those in power refuse to acknowledge the reality of AI’s harms, the public will be left defenseless against a technology made to encourage our worst impulses.
More on AI and violence: The Military’s AI Fever Is Leading Into Disaster, Critics Say
The post FBI Director Kash Patel Says AI Has Stopped Numerous Violent Attacks Against America. We’d Love to See a Single Whiff of Evidence appeared first on Futurism.