China Daily reports that Ant Group has rebranded its AI health app AQ as “Ant Afu,” adding a health companion function, and says the app now serves over 15 million monthly users, answering more than 5 million health questions per day. Baidu has also upgraded its AI-powered health assistant into a multi-agent system that offers end-to-end services from light consultations to complex disease planning.
Ant Group and Baidu quietly turning their AI health assistants into always-on “health companions” is a big deal for how large language models are tested in the wild. These systems sit at the intersection of high stakes and massive scale: they aren’t legally diagnosing, but they are continuously shaping how millions of users think about symptoms, prevention and treatment. That is precisely the kind of environment where subtle LLM failures (hallucinated advice, biased triage, brittle reasoning) will surface long before regulators or benchmarks catch up. ([global.chinadaily.com.cn](https://global.chinadaily.com.cn/a/202512/24/WS694b48b9a310d6866eb30337.html))
Strategically, the move shows Chinese tech giants leaning into verticalized AI agents rather than just generic chatbots. A multi-agent health system that can understand free-form descriptions, sync device data, coordinate family health records and trigger downstream services is effectively a proto-operating system for personal wellbeing. If Ant and Baidu can prove safety and reliability, they’ll own a critical layer between users, insurers and care providers—not unlike how WeChat became the default OS for daily life. That data and workflow dominance, in turn, feeds back into better models for reasoning over longitudinal, noisy, real-world data.
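Neither company has published its agent architecture, so the concrete design is an open question. As a rough illustration only, here is a minimal Python sketch of the routing layer such a multi-agent system might contain: a triage step classifies a free-form query, then delegates either to a lightweight consultation agent or to a planning agent that sees longitudinal context. Every name here (`HealthContext`, `triage_agent`, and so on) is a hypothetical stand-in, not Ant’s or Baidu’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class HealthContext:
    """Hypothetical longitudinal state a companion app might keep per user."""
    device_metrics: dict = field(default_factory=dict)   # e.g. synced wearable data
    family_records: list = field(default_factory=list)   # shared household history

def triage_agent(query: str) -> str:
    """Crude intent router: decide which specialist agent should handle the query.
    A production system would likely use an LLM call here, not keyword matching."""
    chronic_markers = ("diabetes", "hypertension", "treatment plan")
    return "planning" if any(m in query.lower() for m in chronic_markers) else "consultation"

def consultation_agent(query: str, ctx: HealthContext) -> str:
    """Light-touch Q&A path for everyday symptom questions."""
    return f"[light consultation] General guidance for: {query!r}"

def planning_agent(query: str, ctx: HealthContext) -> str:
    """Heavier path; a real system would coordinate records, labs and
    downstream services here rather than just reading the context."""
    return f"[care planning] Plan drafted for: {query!r} (records on file: {len(ctx.family_records)})"

def handle(query: str, ctx: HealthContext) -> str:
    """End-to-end dispatch: route the free-form query, then delegate."""
    route = triage_agent(query)
    agent = planning_agent if route == "planning" else consultation_agent
    return agent(query, ctx)

ctx = HealthContext(device_metrics={"resting_hr": 62},
                    family_records=["father: hypertension"])
print(handle("I've had a mild headache since this morning", ctx))
print(handle("Help me manage my hypertension treatment plan", ctx))
```

Under these assumptions, the “end-to-end services” claim reduces to routing plus specialist delegation over shared state; the strategic value lies in who owns that shared state and the downstream hooks.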
For the AGI race, healthcare is one of the hardest domains to crack because it demands robust causal reasoning, calibration and value alignment. Whoever can build an AI “health friend” that doctors trust and patients rely on without causing systemic harm will have demonstrated capabilities not far from the general-purpose reasoning AGI proponents talk about.


