On Jan 15, 2026, Baidu announced that its ERNIE-5.0-0110 model scored 1,460 points on the LMArena text leaderboard, ranking No. 8 globally and first among Chinese models. The model also placed second worldwide on math and has exited its preview phase.
ERNIE 5.0’s jump into the global top‑10 on LMArena, and especially its No. 2 math ranking behind an unreleased GPT‑5.2 variant, is a concrete sign that China’s flagship models are closing the capability gap with Western frontier systems. The model is not a small tweak: Baidu describes a roughly 2‑trillion‑parameter mixture‑of‑experts architecture that activates only a small fraction of its experts per query, pushing the same sparse‑scaling frontier as OpenAI, Google, and others, but with far less fanfare in the West.
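To make "sparse activation" concrete, the sketch below shows generic top‑k mixture‑of‑experts routing: a small router scores every expert for each token, but only the k highest‑scoring experts actually run, so active compute stays a small fraction of total parameters. This is an illustration of the general technique, not Baidu's implementation (which is unpublished); all sizes here, including d_model, n_experts, and top_k, are arbitrary placeholders.

```python
# Minimal, generic top-k mixture-of-experts routing sketch (NumPy only).
# Illustrative of sparse activation in general; NOT ERNIE 5.0's actual code.
# All sizes (d_model, n_experts, top_k) are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 64, 16, 2  # toy dimensions

# Each "expert" here is just a small feed-forward weight matrix.
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
router_w = rng.standard_normal((d_model, n_experts)) * 0.02  # router projection


def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


def moe_forward(tokens: np.ndarray) -> np.ndarray:
    """Route each token to its top_k experts and mix their outputs.

    tokens: (n_tokens, d_model). Only top_k of n_experts run per token,
    which is why active compute is a small fraction of total parameters
    even when the expert pool is huge.
    """
    logits = tokens @ router_w                         # (n_tokens, n_experts)
    top_idx = np.argsort(logits, axis=-1)[:, -top_k:]  # chosen expert indices
    top_logits = np.take_along_axis(logits, top_idx, axis=-1)
    gates = softmax(top_logits, axis=-1)               # renormalize over chosen experts

    out = np.zeros_like(tokens)
    for t, token in enumerate(tokens):
        for slot in range(top_k):
            e = top_idx[t, slot]
            out[t] += gates[t, slot] * (token @ experts[e])
    return out


if __name__ == "__main__":
    x = rng.standard_normal((4, d_model))  # 4 dummy tokens
    y = moe_forward(x)
    print(f"output shape: {y.shape}, experts active per token: {top_k / n_experts:.0%}")
```

In a frontier-scale system the expert pool and hidden sizes are orders of magnitude larger and routing happens inside many layers, but the ratio of active to total parameters is the lever that lets a roughly 2‑trillion‑parameter model serve requests at a fraction of the equivalent dense compute cost.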
Strategically, this strengthens Baidu’s hand inside China’s crowded model market and gives Beijing a more credible answer to questions about whether domestic systems can match or exceed U.S. and European offerings on high‑stakes reasoning tasks. It also underscores that benchmarking ecosystems like LMArena are becoming de facto battlegrounds for model prestige and procurement decisions. Companies and governments picking “sovereign” or “preferred” models will increasingly lean on public leaderboards to justify their choices.
For Western labs, ERNIE 5.0 is a reminder that the race is multipolar. China can now field at least one model that, on some metrics, outperforms widely used GPT‑5.1 and Gemini variants, even if access and safety regimes differ. That reality will shape export‑control debates and the calculus around open vs. closed deployment of next‑gen systems.