
On December 19, 2025, Google Cloud published a technical blog post explaining how stochastic rounding enables reliable low‑precision training for large generative AI models on its TPUs and on NVIDIA Blackwell GPUs. The post highlights support for 4‑bit and FP8 formats via tools such as JAX, the Qwix quantization library, and A4X VMs with GB200 NVL72 systems.
This article aggregates reporting from one news source. The TL;DR is AI‑generated from the original reporting. Race to AGI's analysis provides editorial context on the implications for AGI development.
This blog post is a window into how the frontier labs are squeezing more out of every FLOP. Moving to 4‑bit and FP8 training is not just an optimization trick; it’s a prerequisite for sustaining current scaling trends without running into memory and power walls. By foregrounding stochastic rounding as a first‑class technique and wiring it into TPU hardware and NVIDIA’s Blackwell stack, Google is arguing that numerical methods are now as strategic as GPUs themselves.
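The core idea is simple: instead of always rounding a value to its nearest representable neighbor, stochastic rounding picks the neighbor above with probability equal to the fractional remainder, so the rounded value is correct in expectation. The blog does not publish its implementation, so the following is a minimal NumPy sketch of the generic technique on a hypothetical uniform grid (the `step` parameter and `stochastic_round` name are illustrative, not Google's API):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_round(x, step):
    """Round each element of x to a multiple of `step`, rounding up with
    probability equal to the fractional remainder, so that the result is
    unbiased: E[stochastic_round(x)] == x."""
    scaled = np.asarray(x, dtype=np.float64) / step
    lower = np.floor(scaled)
    frac = scaled - lower                     # in [0, 1)
    round_up = rng.random(scaled.shape) < frac
    return (lower + round_up) * step

# Example: on a 1/16 grid, 0.3 sits between 0.25 and 0.3125.
# Round-to-nearest would always return 0.3125; stochastic rounding
# mixes both neighbors so the mean stays close to 0.3.
samples = stochastic_round(np.full(100_000, 0.3), step=1 / 16)
```

Averaged over many samples, the mean converges to the true value, which is exactly the property that keeps accumulated quantization error from biasing training.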
For AGI, the implication is straightforward: whoever can train the biggest and most capable models per watt and per dollar will have a compounding advantage. Stochastic rounding helps make ultra‑low‑precision training stable enough to be mainstreamed into production tooling like JAX, Qwix, and Transformer Engine JAX. That lowers the barrier for both Google’s internal teams and external customers to push up context windows, parameter counts, and multi‑agent complexity without linear cost growth. It also subtly shifts competition away from pure chip counts toward integrated hardware–software stacks where numerical tricks, compiler optimizations, and model architectures are co‑designed. If this approach works as advertised, it could effectively extend the runway of Moore‑style scaling for foundation models and keep the AGI race on a steeper trajectory.
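Why stability at ultra‑low precision hinges on this can be shown in a few lines. When gradient updates are much smaller than half the spacing of a coarse format, round‑to‑nearest swallows every update and the weight never moves, while stochastic rounding preserves the updates in expectation. A toy demonstration, assuming an illustrative 1/16 grid rather than any real FP4/FP8 encoding:

```python
import numpy as np

rng = np.random.default_rng(0)
step = 1 / 16      # grid spacing of a hypothetical low-precision format
update = 1e-3      # per-step weight update, far below step / 2

def rtn(x):
    """Deterministic round-to-nearest on the grid."""
    return np.round(x / step) * step

def sr(x):
    """Stochastic rounding on the same grid."""
    scaled = x / step
    lower = np.floor(scaled)
    return (lower + (rng.random() < scaled - lower)) * step

w_rtn = w_sr = 1.0
for _ in range(1000):
    w_rtn = rtn(w_rtn + update)   # the tiny update is discarded every step
    w_sr = sr(w_sr + update)      # the update survives in expectation

# w_rtn is stuck at exactly 1.0; w_sr lands near 1.0 + 1000 * update = 2.0
```

The deterministic weight never learns; the stochastically rounded one tracks the full‑precision trajectory on average, which is why the technique makes 4‑bit and FP8 training viable at all.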
DOE signed nonbinding MOUs with 24 AI and compute organizations to apply advanced AI and high-performance computing to Genesis Mission scientific and energy projects.
Google Public Sector and Google DeepMind will provide Gemini-based AI platforms and tools to DOE’s Genesis Mission, giving all 17 U.S. national laboratories secure access to frontier models such as Gemini for Government and the AI co-scientist system.
Waymo is reportedly negotiating a funding round exceeding $15 billion at around a $100 billion valuation to expand its robotaxi operations.
Google and Google DeepMind committed roughly $13.05 million in grants to India's AI centers of excellence, Wadhwani AI, and several Indic‑language AI startups to accelerate AI deployment in health, agriculture, education, and smart cities.
Nvidia acquired SchedMD, developer of the open-source Slurm workload manager, as part of a broader push to expand its open-source AI software and model stack with Nemotron 3.


