Generative AI is reshaping industries worldwide, but even the dark corners of the internet aren’t immune to its influence. A yet-to-be-peer-reviewed study, first reported by Wired, reveals that traditional cybercriminals are growing frustrated as their favorite forums adopt AI tools—much like mainstream platforms such as Amazon or Reddit have done.

The study found little evidence that AI is fundamentally transforming cybercrime, despite alarmist claims that it could fuel a new wave of scams and fraud. Instead, large-scale criminal enterprises primarily use AI for routine tasks—error-checking code or troubleshooting programming problems—much as they might once have turned to Google.

Among smaller operations—described by researchers as low-skill cybercriminals—there’s a growing backlash against AI. These criminals are doubling down on human connections and time-tested attack methods, rejecting AI-generated content in favor of organic interactions.

AI Undermines Perceived Expertise in Cybercrime Forums

“People don’t like it,” said Ben Collier, a security researcher and senior lecturer at the University of Edinburgh, in an interview with Wired. Collier, a coauthor of the study, notes that low-level hackers operating on Tor-accessed cybercrime forums still value human relationships over AI.

“These are essentially social spaces. They really hate other people using [AI] on the forums,” Collier explained. “I think a lot of them are a bit ambivalent about AI because it undermines their claim to be a skilled person.”

Dark Web Forums Reject AI in Favor of Human Interaction

Posts reviewed by Wired on Hack Forums (HF), a long-standing hacker community established in 2007, were filled with derision toward AI. One user bluntly demanded, “Stop posting AI s**t.” Others framed their objections around community values:

“If I wanted to talk to an AI chatbot, there are many websites for me to do so, but that’s not why I come to [HF]. I come here for human interaction.”

Another anonymous user argued that forums are inherently human spaces, and introducing AI-generated replies defeats their purpose:

“Forums are inherently human. Introducing some AI or otherwise generated replies just defeats the complete purpose of visiting and/or maintaining such a forum.”

Mistrust in AI’s Capabilities Persists Among Criminals

Beyond social concerns, cybercriminals also express skepticism about AI’s reliability. One user wrote in 2025:

“I think AI isn’t good enough to handle the kind of volume of code I would be flashing through it and asking it to expand on features. AI can only still do the basics. It does them pretty good though. But I would not trust anything beyond my own supervision, and copy and paste from it only.”

While AI use is prevalent in certain areas—particularly passive "get-rich-quick" schemes such as AI-driven SEO spam—smaller-scale criminals are pushing back against its integration into their communities.

Source: Futurism