Sent to 16 readers

Race to AGI Daily Digest - Monday, December 29, 2025

TLDR

Nvidia’s reported $20B Groq licensing deal shifts the focus from selling raw GPU capacity to owning high-efficiency inference IP.

Nvidia–Groq licensing deal ->

OpenAI is turning frontier risk into an operational function with a $555k head of AI preparedness role.

OpenAI preparedness role ->

SK Telecom’s 519B-parameter A.X K1 model underscores the rise of national-scale foundation models as strategic assets.

A.X K1 model brief ->

China’s draft rules for anthropomorphic emotional-companion AIs preview how regulators may treat agentic, “human-like” systems.

China companion-AI draft ->

While Palantir and Tesla sold off, Chinese and fabrication-linked names like Alibaba, Baidu, and TSMC inched higher, reflecting investor sorting within the AI stack.

AI market board ->

The Mockito maintainer’s decision to step down after a decade highlights the human maintenance burden behind critical tooling in the AI era.

HN: Mockito maintainer steps down ->

The Full Story

Last week was all about scale and skepticism: tech giants talked up roughly $67.5B of AI build-out in India while investors started asking whether anyone is actually earning their cost of capital. That tension is now a full narrative we're tracking in our Investors Demand Discipline in AI Spending stream (discipline-in-AI-spending tracker ->).

Today's clearest data point on that theme is Nvidia's reported $20B licensing "acqui-hire" of Groq's inference technology (Nvidia–Groq licensing deal ->). Instead of just shipping more GPUs, Nvidia is buying a path to cheaper, faster inference at scale, tightening its grip on the full stack (NVIDIA profile ->).

On the model side, SK Telecom unveiled A.X K1, a 519B-parameter model meant to anchor Korea's AI push (A.X K1 model brief ->). It fits the pattern we've seen with "national champion" models: huge parameter counts paired with big political expectations, and a quieter question about who pays for inference over time.

Safety and governance are also leveling up from vibes to job descriptions. OpenAI is hiring a head of AI preparedness on a reported $555k package, essentially a frontier-risk COO for bad scenarios (OpenAI preparedness role ->). It's a sign that preparedness is becoming an operational discipline, not just a research side project (OpenAI profile ->).

Governments are moving in parallel. China's draft rules for anthropomorphic emotional-companion AIs try to fence off how "human" these agents are allowed to feel and behave (China companion-AI draft ->). So while companies chase engagement, regulators are already asking what happens when people emotionally bond with bots.

Markets are still trying to price all this. Palantir and Tesla slipped more than 2% while Alibaba, TSMC, and Baidu ticked up, a small hint that investors are rotating toward infrastructure and regional plays rather than pure narrative stocks.

And underneath the headlines, the Hacker News post from the Mockito maintainer stepping down after 10 years (HN: Mockito maintainer steps down ->) is a reminder that the AI boom still rests on open-source maintainers and aging Java libraries. This week, we'll be watching how that human side, the safety push, and the hardware land grab all collide with the new demand for discipline.

Get This Delivered Daily

Join thousands of AI professionals who start their day with Race to AGI.

No spam, ever. Unsubscribe anytime.