Wednesday, December 31, 2025

AI Futures Model update pushes full coding automation median to early 2030s

Source: AI Futures Project (blog)

TL;DR

On December 31, 2025, the AI Futures Project released a major update to its quantitative AI Futures Model, which forecasts timelines to milestones like fully automated coding and artificial superintelligence. The authors say the new model suggests timelines to full coding automation roughly 2–4 years longer than their previous “AI 2027” forecasts, with a median around 2031–2032.

About this summary

This article aggregates reporting from 1 news source. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.

Race to AGI Analysis

This update is important because it’s one of the few transparent, quantitative attempts to forecast not just when AGI arrives but how fast things accelerate afterward. The authors integrate benchmark‑based extrapolations (like METR’s coding time horizons) with assumptions about compute, data, and AI‑assisted R&D, then explicitly model three stages: automating coding, automating research taste, and a post‑AGI intelligence explosion. Their headline conclusion—that better modeling of AI‑assisted R&D actually lengthens their median timeline for full coding automation by a few years compared with earlier work—cuts against the dominant narrative that every new datapoint must shorten timelines. ([blog.ai-futures.org](https://blog.ai-futures.org/i/182911449/timelines-and-takeoff-forecasts))
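To make the benchmark-extrapolation step concrete, here is a minimal Monte Carlo sketch of how this family of forecast works: sample an uncertain doubling time for a METR-style coding time horizon, count the doublings to an assumed “full automation” bar, and read off a distribution of crossing dates. Every parameter below (a ~4-hour current horizon, a ~1-month bar, a ~9-month median doubling time) is an illustrative assumption, not a number from the AI Futures Model.

```python
# Minimal Monte Carlo sketch of a benchmark-extrapolation timeline forecast.
# Not the AI Futures Model: all parameters are illustrative assumptions
# chosen only to show the shape of the method.
import math
import random

def sample_crossing_year(
    current_horizon_hours: float = 4.0,   # assumed coding time horizon today
    threshold_hours: float = 720.0,       # assumed "full automation" bar (~1 month)
    start_year: float = 2026.0,
) -> float:
    # Sample an uncertain doubling time (in years) for the time-horizon metric;
    # a lognormal with median ~0.74 years is an assumption, not a measurement.
    doubling_time = random.lognormvariate(-0.3, 0.4)
    # Doublings needed to get from today's horizon to the automation bar.
    doublings = math.log2(threshold_hours / current_horizon_hours)
    return start_year + doublings * doubling_time

samples = sorted(sample_crossing_year() for _ in range(100_000))
median = samples[len(samples) // 2]
p10, p90 = samples[len(samples) // 10], samples[9 * len(samples) // 10]
print(f"median: {median:.1f}, 10th-90th percentile: {p10:.1f}-{p90:.1f}")
```

The actual model layers compute, data, and AI-assisted R&D feedback on top of this core extrapolation, and per the authors it is precisely that extra structure that lengthens the median rather than shortening it.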

For Race to AGI readers, the value isn’t in any single date but in having a structured way to think about uncertainty. The model exposes the hinge assumptions: will data become the bottleneck, or compute, or research taste? How fast does AI speed up its own development once it’s good at coding? Those are the levers policymakers, labs, and investors can actually push on. Even if you disagree with the parameters, this kind of explicit modeling is far superior to vibes‑based doom or hype, and it will likely influence how serious safety orgs and some frontier labs talk about planning horizons.
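As a toy illustration of why those levers matter, the sketch below varies a single invented parameter, the maximum AI‑R&D speedup once systems are good at coding, and shows how strongly it moves the crossing date. The linear ramp and all numbers are made up for illustration; the real model’s feedback dynamics are more elaborate.

```python
# Toy sensitivity sketch (illustrative, not from the report): as the coding
# time horizon climbs toward the automation bar, each year of calendar time
# buys up to `max_speedup` times more algorithmic progress. Stronger feedback
# pulls the crossing date in; weaker feedback pushes it out.
import math

def crossing_year(base_doubling_years: float, max_speedup: float,
                  current_hours: float = 4.0, bar_hours: float = 720.0,
                  start_year: float = 2026.0, step_years: float = 0.01) -> float:
    t, horizon = start_year, current_hours
    while horizon < bar_hours:
        # Fraction of the way (in log space) from today's horizon to the bar.
        progress = math.log(horizon / current_hours) / math.log(bar_hours / current_hours)
        # Assumed feedback: effective speedup ramps linearly from 1x to max_speedup.
        speedup = 1.0 + (max_speedup - 1.0) * progress
        horizon *= 2 ** (step_years * speedup / base_doubling_years)
        t += step_years
    return t

for s in (1.0, 2.0, 5.0):
    print(f"max AI-R&D speedup {s:>3.0f}x -> crossing year {crossing_year(0.75, s):.1f}")
```

Running this shows the point made above: the headline date is far less informative than the sensitivity of that date to a handful of disputable parameters.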

Impact unclear

Who Should Care

Investors · Researchers · Engineers · Policymakers