
Spanish outlet 65ymás reported on December 21, 2025, that UCLA researchers built an AI model using electronic health records to identify patients with undiagnosed Alzheimer's disease. The semi-supervised tool achieved 77–81% sensitivity across multiple racial and ethnic groups, substantially outperforming traditional supervised models while reducing diagnostic disparities.
This article aggregates reporting from 3 news sources. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.
This is a good example of frontier methods quietly moving into high-stakes domains. UCLA's team didn't just bolt a generic LLM onto electronic health records: they used semi-supervised positive-unlabeled learning with explicit fairness constraints to find Alzheimer's cases the healthcare system is currently missing, especially in underdiagnosed Black, Latino, and East Asian populations. In other words, they're using sophisticated ML not just to predict disease, but to correct systemic bias in who gets diagnosed at all. ([uclahealth.org](https://www.uclahealth.org/news/release/researchers-develop-ai-tool-identify-undiagnosed-alzheimers?utm_source=openai))
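To make the idea concrete, here is a minimal sketch of positive-unlabeled (PU) learning in the classic Elkan-Noto style. This is an assumption on my part: the reporting does not specify UCLA's exact estimator, and all variable names and data below are hypothetical stand-ins for EHR features and recorded diagnoses. The key move is that a classifier trained only to separate *labeled* from *unlabeled* patients can be rescaled into a disease-probability model, which is what lets a tool surface patients who were never diagnosed.

```python
# Sketch of Elkan-Noto positive-unlabeled learning (assumed method, not UCLA's code).
# X: stand-in for EHR-derived features; s: 1 if a diagnosis was ever recorded, else 0.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))                     # synthetic "EHR features"
y_true = (X[:, 0] + X[:, 1] > 1).astype(int)        # hidden true disease status
# Only ~30% of true positives ever get a diagnosis label recorded:
s = y_true * rng.binomial(1, 0.3, size=len(y_true))

X_tr, X_te, s_tr, s_te = train_test_split(X, s, random_state=0)

# Step 1: train a "labeled vs. unlabeled" classifier.
clf = LogisticRegression(max_iter=1000).fit(X_tr, s_tr)

# Step 2: estimate c = P(labeled | truly positive) from held-out labeled cases.
c = clf.predict_proba(X_te[s_te == 1])[:, 1].mean()

# Step 3: rescale to approximate P(truly positive | x) and flag likely
# undiagnosed patients among the unlabeled.
p_pos = np.clip(clf.predict_proba(X_te)[:, 1] / c, 0, 1)
undiagnosed = (p_pos > 0.5) & (s_te == 0)
print(f"flagged {undiagnosed.sum()} possible undiagnosed cases")
```

The design point this illustrates: treating "no diagnosis on record" as a missing label rather than a negative label is exactly what distinguishes PU learning from the traditional supervised baselines the study reportedly outperformed.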
For the race to AGI, this matters in two ways. First, it showcases how advanced model architectures are being tailored to messy, real-world data (patient messages, billing codes, fragmented clinical notes) rather than benchmark-perfect datasets. That kind of engineering discipline is exactly what will be needed to make more general cognitive systems useful outside the lab. Second, it highlights an emerging norm that cutting-edge AI deployments in healthcare must ship with fairness and validation baked in: genetic risk scores, cross-system replication, and rigorous sensitivity metrics across subgroups, not just a single ROC curve, as sketched below.
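A hedged sketch of what that subgroup-level validation looks like in practice, using entirely synthetic labels and group assignments (not the study's data): sensitivity is computed separately per racial/ethnic group instead of reporting one pooled number.

```python
# Per-subgroup sensitivity check (synthetic data; illustrative only).
import numpy as np
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
groups = rng.choice(["Black", "Latino", "East Asian", "White"], size=2000)
y_true = rng.binomial(1, 0.1, size=2000)                        # disease labels
y_pred = np.where(rng.random(2000) < 0.8, y_true, 1 - y_true)   # noisy model

for g in np.unique(groups):
    mask = groups == g
    sens = recall_score(y_true[mask], y_pred[mask])  # sensitivity = recall
    print(f"{g:<12} sensitivity = {sens:.2f}")
```

A model can post a strong pooled ROC curve while quietly underperforming for one group; reporting the per-group numbers, as the UCLA team did with its 77–81% range, is what makes the disparity visible.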
As AGI-class models get woven into clinical decision support, regulators and hospital systems will look to projects like this as proof that AI can actually reduce inequities instead of amplifying them. That raises the bar for every other medical AI team: strong algorithms alone are not enough; you also need a credible story for bias, validation, and clinical workflow integration.



