On February 5, 2026, UN Secretary‑General António Guterres’ new Independent International Scientific Panel on AI came into focus as outlets reported the 40 experts he has recommended to the General Assembly. Coverage highlighted members such as IIT Madras professor Balaraman Ravindran, Nobel laureate Maria Ressa and Turing Award winner Yoshua Bengio, who will serve three‑year terms providing science‑based advice on AI’s global impacts.
The UN’s Independent International Scientific Panel on AI is an explicit attempt to build an IPCC‑style knowledge backbone for AI governance. By naming 40 experts across machine learning, public health, cybersecurity, child development and human rights, Guterres is trying to create a single, politically neutral reference point for what is actually happening in AI systems and markets, separate from any one government or lab’s narrative. ([un.org](https://www.un.org/independent-international-scientific-panel-ai/zh)) The panel’s first report is due in time to inform a Global Dialogue on AI Governance in July, which means its early framing could heavily shape multilateral talks on safety thresholds, compute governance and cross‑border rules.
For the race to AGI, this matters less as a technical milestone and more as an institutional one. If the panel can produce rigorous, trusted syntheses of emerging capabilities and risks, it will strengthen the hand of policymakers arguing for coordinated guardrails on frontier model evaluations, high‑risk deployment domains, or even compute caps. Conversely, if the panel is politicised or captured, its outputs could be ignored, like many UN reports before it. The presence of leading technical figures such as Yoshua Bengio, alongside voices from the Global South and media figures such as Maria Ressa, signals that the UN understands both the scientific and the information‑integrity stakes.
In practice, AGI‑relevant labs now operate in a world where a standing multilateral body watches their progress and advises governments. That won’t stop the race, but it could gradually normalise the expectation that frontier capabilities must meet globally articulated, evidence‑based standards before being deployed at scale.

