Users posing as would-be school shooters find AI tools offer detailed advice on how to perpetrate violence
Popular AI chatbots helped researchers plot violent attacks including bombing synagogues and assassinating politicians, with one telling a user posing as a would-be school shooter: “Happy (and safe) shooting!”
Tests of 10 chatbots carried out in the US and Ireland found that, on average, they enabled violence 75% of the time and discouraged it in just 12% of cases. Some chatbots, however, including Anthropic’s Claude and Snapchat’s My AI, persistently refused to help would-be attackers.