
Race to AGI Daily Digest - Thursday, January 1, 2026


TLDR

xAI’s MACROHARDRR project pushes toward 2GW of AI supercompute, extending the hardware arms race from chip IP into massive owned infrastructure.

xAI MACROHARDRR plan ->

ByteDance’s planned $14B NVIDIA chip spree deepens the link between Chinese consumer AI demand and US export‑control policy.

ByteDance chip spree ->

China’s new companion AI rules explicitly target emotional risk and teen safety, turning affective AI into a regulated category.

China companion AI rules ->

California’s chatbot law adds safety and disclosure rules for consumer bots starting in 2026, pushing safety from policy papers into product requirements.

California chatbot law ->

Shares of narrative-heavy AI names, including C3.ai, IBM, Palantir, and Qualcomm, fell while TSMC gained, extending a slow rotation toward fabrication and core hardware.

All AI‑exposed companies ->

The Full Story

Following Monday’s NVIDIA–Groq licensing story, Tuesday’s Meta–Manus agents deal, and yesterday’s SoftBank–OpenAI funding splash, today feels like the moment everyone asks: who’s actually in charge of this thing?

On the tech side, xAI just laid out one answer: build your own gravity well. Its MACROHARDRR project targets 2GW of AI supercompute, a data center on the scale of a small power plant xAI MACROHARDRR plan ->. That’s the logical endgame of the inference hardware storyline we’ve been following since NVIDIA’s Groq deal: own the IP, then own the concrete and megawatts too xAI profile ->.

In parallel, ByteDance is doubling down on the chip side with a planned $14B NVIDIA GPU spree to scale its China AI stack ByteDance chip spree ->. Tie that to ongoing US export fees on high-end parts and you get the clearest version yet of our national security narrative: compute isn’t just capex, it’s foreign policy NVIDIA profile -> AI chip exports narrative ->.

Now the guardrails. China released companion AI rules that go straight at emotional risk and teen safety, essentially saying: you can build AI friends, but you’re responsible for their psychological footprint China companion AI rules ->. California’s new chatbot law heads in the same direction from another angle, adding safety and disclosure rules for consumer-facing bots starting in 2026 California chatbot law ->. That’s our “frontier AI safety as an operational discipline” storyline maturing into actual statutes.

Meanwhile, AI quietly seeps deeper into everyday life. Perfect Corp is rolling out new generative beauty tools across YouCam AI beauty tools launch ->, while Reddit debates whether AI‑assisted sculpting is creative and whether AI‑generated fiction is just a shortcut. Users are clearly attracted to the power but wary of what it does to craft and authenticity.

Markets are still sorting the stack. C3.ai, IBM, Palantir, and Qualcomm slipped, while TSMC rose again, another small nudge toward the folks who actually fabricate the chips All AI‑exposed companies ->.

Put it together and the week’s arc is sharp: mega-infra and agents at the top, export controls and state laws underneath, all converging on one question: how much autonomy are we really comfortable handing to machines and the people who run them?

Get This Delivered Daily

Join thousands of AI professionals who start their day with Race to AGI.

No spam, ever. Unsubscribe anytime.