A new edition of the Future of Life Institute's AI Safety Index concludes that leading AI developers, including Anthropic, OpenAI, xAI and Meta, lack robust strategies for controlling potential superintelligent systems, leaving their safety practices "far short" of emerging international norms. The report, based on an independent expert panel, comes amid growing concern over AI-linked self-harm cases and AI-driven hacking, and has prompted renewed calls from researchers such as Max Tegmark, Geoffrey Hinton and Yoshua Bengio for binding safety standards, and even temporary bans on developing superintelligence until better safeguards exist.
OpenAI, Anthropic, Block and major cloud providers are co-founding the Agentic AI Foundation under the Linux Foundation to fund and govern open, interoperable standards for agentic AI, including MCP, goose and AGENTS.md.
Accenture and Anthropic entered a multi-year strategic partnership, co-investing in a dedicated business group built around Claude, training 30,000 Accenture staff on the model, and co-developing production-grade AI solutions for clients in regulated industries.
OpenAI and Deutsche Telekom agreed to a multi-year collaboration to co-develop AI products and deploy ChatGPT Enterprise across the telecom group.


