A new edition of the Future of Life Institute’s AI Safety Index concludes that leading AI developers, including Anthropic, OpenAI, xAI and Meta, lack robust strategies for controlling potential superintelligent systems, leaving their safety practices "far short" of emerging international norms. The report, based on assessments by an independent expert panel, arrives amid growing concern over AI‑linked self‑harm cases and AI‑driven hacking. It has prompted renewed calls from researchers such as Max Tegmark, Geoffrey Hinton and Yoshua Bengio for binding safety standards, and even temporary bans on developing superintelligence until better safeguards exist.
OpenAI and NEXTDC entered a multi-year agreement under which OpenAI will anchor a hyperscale AI campus and GPU supercluster at NEXTDC’s S7 facility in Sydney to support large-scale AI inference and enterprise workloads.
Snowflake and Anthropic signed a multi-year strategic partnership expansion valued at $200 million to integrate Claude-powered AI agents into Snowflake’s Cortex AI data cloud for enterprise customers.
OpenAI agreed to acquire neptune.ai to integrate its experiment‑tracking and training‑monitoring tools into OpenAI’s frontier model training stack.
OpenAI provided custom holiday tools built on ChatGPT to enhance NORAD’s annual Santa‑tracking program as a public‑facing AI partnership.
OpenAI acquired an equity stake in Thrive Holdings in exchange for dedicated AI research and integration support to modernize Thrive’s accounting and IT services portfolio.