On May 4, 2026, TechXplore covered a University of Edinburgh‑led study of 100 million cybercrime forum posts showing that AI tools have so far delivered modest practical gains for criminals. Researchers found AI is used mainly for obfuscation and social media bots, while most sophisticated attacks still rely on traditional automation.
The headline that ‘AI hasn’t transformed cybercrime yet’ is reassuring on the surface, but the fine print should worry anyone betting on agentic systems. The study suggests that current generative tools and coding assistants mostly help already‑skilled actors, and that underground forums remain more focused on commodity malware and turnkey kits than on novel AI exploits. ([techxplore.com](https://techxplore.com/news/2026-05-ai-inroads-cybercriminals.html))
More interesting is the warning about where the real risk lies: poorly secured AI agents and hastily "vibe‑coded" systems deployed by legitimate organizations. If mainstream enterprises start wiring autonomous agents into payments, access control, or customer data without strong governance, they could hand low‑skill criminals powerful new attack surfaces. In other words, the path to AI‑enabled cyber harm may run less through dark‑web innovation and more through careless deployment of frontier‑adjacent tools by everyone else. As AGI‑class systems emerge, this asymmetry will only grow: aligned, well‑governed agents will be expensive, while misconfigured ones will be cheap weapons. The race to AGI is therefore also a race to harden the everyday AI infrastructure that surrounds critical services.