On May 3, 2026, Bloomberg Law reported that Treasury Secretary Scott Bessent said US financial and tech firms are bolstering defenses against artificial intelligence–enabled cyberattacks on bank accounts. He referenced April meetings with bank CEOs about cyber risks from Anthropic’s latest AI model and pledged to keep the financial system safe.
Bessent’s latest comments mark a turning point: frontier AI models are now being treated as concrete, near‑term threats to financial stability, not just long‑run abstractions. When the Treasury Secretary speaks publicly about AI being used to hack bank accounts, it lends official weight to something security researchers have been saying quietly for years—offensive AI can scale reconnaissance, phishing, and exploit discovery far faster than legacy defenses were built to handle.
The specific reference to Anthropic’s newest model and the April meetings with bank CEOs highlights how individual model launches are starting to trigger macro‑prudential concern. That matters for the race to AGI because it tightens the coupling between model capability and regulatory response: each new leap will increasingly be evaluated not only on accuracy benchmarks, but on how easily it can be weaponized.
In practice, this is likely to push banks and cloud providers toward heavier red‑teaming, AI‑assisted defense tools, and more formal incident‑reporting regimes. It may also strengthen the hand of policymakers arguing for licensing or usage‑based controls on the most capable systems. None of that stops AGI research, but it adds friction and compliance overhead that labs and their financial backers will need to factor into deployment timelines.