Scammers Furious That Their Fellow Criminals Are Using AI, Saying It’s Unethical

The AI sloppification of the internet comes for us all, even the petty scammers and fraudsters doing business in the darker corners of the web.

As a yet-to-be-peer-reviewed study found, old-world internet scammers are getting frustrated as their favorite cybercrime forums turn to generative AI, much the same way Amazon or Reddit have degraded their own sites with the tech — which, you have to admit, is pretty rich coming from a bunch of professional scammers.

The study, first covered by Wired, found little evidence that AI tools are fundamentally reshaping the world of cybercrime, contradicting more alarmist warnings that the tools are fueling a novel epidemic of scams and fraud.

At the upper echelons of the cybercrime world, large-scale criminal enterprises are largely using the tools for boring tasks like checking code for errors and searching Google for solutions to coding problems. Among smaller operations, however (scams run by what the researchers characterize as low-skill cybercriminals), the team identified a growing disgust with generative AI for any purpose, with criminals choosing instead to double down on time-honored social connections and ancient attack scripts.

“People don’t like it,” security researcher and senior lecturer at the University of Edinburgh Ben Collier told Wired. Collier, a coauthor of the study, notes that low-level hackers operating on cybercrime forums accessed via the Tor network — commonly sensationalized as the “Dark Web” — still prize organic connections and social dynamics over AI.

“These are essentially social spaces. They really hate other people using [AI] on the forums,” Collier explained. “I think a lot of them are a bit ambivalent about AI because it undermines their claim to be a skilled person.”

Sure enough, posts reviewed by Wired on Hack Forums (HF), a venerable social hub for hackers established back in 2007, were rife with derision. “Stop posting AI s**t,” one poster groused.

Others referenced this sense of community directly in their moral appeals: “If I wanted to talk to an AI chatbot, there are many websites for me to do so, but that’s not why I come to [HF]. I come here for human interaction,” an anonymous user wrote in one post referenced in the research paper. “Forums are inherently human. Introducing some AI or otherwise generated replies just defeats the complete purpose of visiting and/or maintaining such a forum.”

In addition to the social aspect, the researchers identified a general mistrust of AI’s output.

“I think AI isn’t good enough to handle the kind of volume of code I would be flashing through it and asking it to expand on features,” another user wrote in 2025. “AI can only still do the basics. It does them pretty good though. But I would not trust anything beyond my own supervision, and copy and paste from it only.”

This is not to say AI use isn’t rampant among certain rungs of the cybercriminal world. As the researchers note, positive mentions of AI are most prevalent among discussions of passive “get-rich-quick schemes,” like AI SEO spam or OnlyFans fraud. So while some mainstream news headlines might paint a horrifying picture of AI-enabled crime, there’s plenty of nuance worth digging into — cybercriminals are still humans, after all.

More on AI: Even After Two Massacres, OpenAI Still Hasn’t Stopped ChatGPT From Helping Plan School Shootings
