Chinese chipmaker Hygon used its HAIC 2025 conference in Kunshan to lay out a new “dual-chip strategy” that pairs its DCU AI accelerators with general-purpose CPUs. The plan, reported on December 26, 2025, is aimed at system-level AI computing for domestic data centers under export controls.
Hygon’s dual-chip roadmap is another data point in China’s push to build a self-contained AI compute stack in the face of US export controls. By tightly coupling its domestic CPU line with DCU accelerators, Hygon is trying to recreate, on Chinese soil, the kind of CPU–GPU integration that Nvidia, AMD and Intel offer hyperscalers elsewhere. That matters strategically because large AI deployments often hinge as much on host CPU performance, memory bandwidth and networking as on the accelerator die itself.
From a system-design perspective, dual-chip strategies can partially offset process-node disadvantages. If Hygon can orchestrate CPU and DCU to behave as a coherent unit, pooling memory, overlapping data movement with computation, and hiding the complexity in software, it can deliver acceptable AI performance on older nodes while domestic fabs climb the learning curve. For the broader AGI race, this development reinforces that compute capacity is becoming multipolar. Even if Hygon’s hardware lags Nvidia’s on raw FLOPs, its existence increases global aggregate compute and reduces Western leverage over where and how frontier models get trained.
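To make the system-level argument concrete, here is a toy back-of-the-envelope model, not based on any Hygon or Nvidia specifications, of why overlapping host-to-device transfers with compute (the kind of CPU–accelerator orchestration described above) can narrow a raw-throughput gap. All numbers are invented for illustration.

```python
def serial_time_ms(n_batches: int, transfer_ms: float, compute_ms: float) -> float:
    """Naive pipeline: each batch is transferred, then computed, one after another."""
    return n_batches * (transfer_ms + compute_ms)

def pipelined_time_ms(n_batches: int, transfer_ms: float, compute_ms: float) -> float:
    """Double buffering: after the first transfer, each batch's transfer is
    hidden under the previous batch's compute; the slower of the two stages
    sets the steady-state rate."""
    return transfer_ms + n_batches * max(transfer_ms, compute_ms)

# Hypothetical numbers: a "fast" accelerator (6 ms/batch) run naively
# vs. a 1.5x-slower one (9 ms/batch) with transfers fully overlapped.
fast_naive = serial_time_ms(100, transfer_ms=4.0, compute_ms=6.0)      # 100 * 10 = 1000 ms
slow_overlapped = pipelined_time_ms(100, transfer_ms=4.0, compute_ms=9.0)  # 4 + 100 * 9 = 904 ms

print(f"fast chip, naive pipeline:      {fast_naive:.0f} ms")
print(f"slow chip, overlapped pipeline: {slow_overlapped:.0f} ms")
```

In this (deliberately simplified) model, the slower chip finishes the workload first because its host-side orchestration keeps the accelerator busy; real systems add caching, interconnect contention, and software overheads, but the direction of the effect is the point.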