On January 6, 2026, Security Boulevard highlighted a Netskope analysis showing that the number of individuals accessing generative AI apps via cloud services tripled in a year, while the volume of prompts sent increased sixfold. The report says GenAI data policy violations doubled and that half of organizations lack enforceable data protection policies for such apps.
The Netskope numbers are a reminder that adoption of generative AI inside enterprises is outpacing governance by a wide margin. A tripling of user counts and a sixfold jump in prompts in a single year, paired with a doubling of policy violations, paint a picture of tools that have gone mainstream before security teams have proper visibility or controls. The fact that half of organizations still lack enforceable data protection policies for GenAI is especially striking.
For the AGI trajectory, this matters because it shapes where and how powerful models will actually run. If enterprises keep leaning heavily on external services such as the OpenAI API, Amazon Bedrock, and Google Vertex AI without robust safeguards, the risk of headline-grabbing data leaks, prompt injection incidents, or insider misuse grows. That, in turn, could trigger regulatory backlash or internal clampdowns just as models become more capable and agentic.
The more optimistic interpretation is that shifts from personal to managed corporate accounts—and the emergence of tools that monitor AI usage—lay the groundwork for safer large‑scale deployment. Either way, the security posture established now will influence whether AGI‑class systems are trusted to operate inside sensitive workflows or remain cordoned off in experimental sandboxes.


