Technology
Friday, January 9, 2026

CyberPress flags 91,000 attacks probing AI deployments worldwide

Source: CyberPress

TL;DR

AI-summarized from 2 sources

On January 9, 2026, CyberPress reported that security firm GreyNoise observed more than 91,000 malicious sessions targeting AI infrastructure between October 2025 and early January 2026. The activity split into two campaigns: one exploiting SSRF flaws in tools such as Ollama and Twilio webhooks, the other probing more than 73 large-language-model endpoints for misconfigured API access.

About this summary

This article aggregates reporting from 2 news sources. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.

2 sources covering this story|6 companies mentioned

Race to AGI Analysis

This report is a reminder that as AI infrastructure scales, it becomes its own attack surface. GreyNoise’s telemetry suggests that attackers — or at least very aggressive researchers — are now systematically scanning for misconfigured LLM APIs and abusing features like model registry pulls (in Ollama) to exfiltrate data via SSRF. The list of targeted API formats spans practically every major model family: GPT‑4o, Claude, Llama, DeepSeek, Gemini, Mistral, Qwen, Grok and OpenAI‑compatible endpoints.
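To make the misconfiguration concrete: Ollama exposes a plain HTTP API (port 11434 by default) whose GET /api/tags route lists installed models, so any endpoint that answers that call without credentials is exactly what these scans are hunting for. The sketch below is a defensive self-check against hosts you already own, not GreyNoise's tooling or an attack script; the host list is a hypothetical placeholder.

"""
Minimal sketch: check whether an Ollama endpoint answers unauthenticated
API calls. Ollama's default port is 11434 and GET /api/tags lists the
models installed on the server.
"""
import json
import urllib.request

# Hypothetical hosts from your own inventory; replace with real ones.
HOSTS = ["10.0.0.5", "ai-worker-1.internal"]
PORT = 11434

def list_models_unauthenticated(host: str, port: int = PORT, timeout: float = 3.0):
    """Return model names if the endpoint replies without credentials, else None."""
    url = f"http://{host}:{port}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            data = json.load(resp)
        return [m.get("name") for m in data.get("models", [])]
    except Exception:
        return None  # unreachable, or the request was rejected

if __name__ == "__main__":
    for host in HOSTS:
        models = list_models_unauthenticated(host)
        if models is not None:
            print(f"[!] {host}:{PORT} answers unauthenticated requests; models: {models}")
        else:
            print(f"[ok] {host}:{PORT} did not expose /api/tags")

Running a check like this from outside your network boundary is a quick way to confirm whether a model server is reachable without credentials; anything that answers should sit behind a reverse proxy or network ACL.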

For AGI, robust security is not a nice‑to‑have; it’s a prerequisite. If organizations can’t trust that their model endpoints, prompt logs and fine‑tuning data are safe, they will be far more conservative about deploying stronger, more autonomous systems. Conversely, every successful exploit that leaks prompts, API keys or proprietary data will accelerate the arms race between attackers and defenders around AI‑native infrastructure.

Strategically, this will pull cybersecurity vendors deeper into the AI stack: expect “LLM firewalls,” model‑pull allowlists, and egress controls to become standard features of serious deployments. It also raises the odds that regulators will eventually require explicit safeguards for AI endpoints in critical sectors, just as they do for payment or health systems today.
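As a rough illustration of what a model-pull allowlist could look like, the sketch below shows a policy check a gateway might apply before forwarding a pull request to the model server. The registry names, the host-prefix convention for external registries, and the policy itself are assumptions for illustration, not any vendor's actual API.

"""
Illustrative sketch of a model-pull allowlist, as a gateway in front of a
model server might enforce it. Registry names and policy are assumptions.
"""
ALLOWED_REGISTRIES = {"registry.ollama.ai"}  # hypothetical approved sources

def is_pull_allowed(model_ref: str) -> bool:
    """Allow bare model names (default registry) or explicitly allowlisted hosts.

    e.g. "llama3.2"                          -> allowed (default registry)
         "evil.example.com/library/backdoor" -> rejected
    """
    if "/" not in model_ref:
        return True  # plain name, resolved against the default registry
    host = model_ref.split("/", 1)[0]
    return host in ALLOWED_REGISTRIES

assert is_pull_allowed("llama3.2")
assert not is_pull_allowed("attacker.example/library/exfil")

A check like this only helps if it is paired with egress controls, so the model server cannot reach arbitrary registries even when a request slips past the gateway.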

Who Should Care

Investors|Researchers|Engineers|Policymakers

Companies Mentioned

OpenAI
AI Lab|United States
Valuation: $500.0B

Anthropic
AI Lab|United States
Valuation: $183.0B

Mistral AI
AI Lab|France
Valuation: $13.8B

xAI
AI Lab|United States
Valuation: $200.0B

Google
Cloud|United States
Valuation: $3790.0B
GOOGL|NASDAQ|$328.57

Meta
Consumer Tech|United States
Valuation: $1680.0B
META|NASDAQ|$653.06

Coverage Sources

CyberPress
GreyNoise blog (referenced)