On January 2, 2026, Kyoto-based startup mui Lab announced it will debut the “mui Calm Sleep Platform” at CES 2026 as the first installment of its “Spatial AI” services. The system combines a wooden, wall-mounted mui Board with millimeter-wave sensing and AI to provide non-wearable, smartphone-free sleep support; pre-orders for the second-generation mui Board open on Indiegogo on January 6.
mui Lab’s “Spatial AI” framing neatly encapsulates a trend we’ve been watching: AI moving from screens into the fabric of physical spaces. Instead of another wristband or phone app, its sleep system embeds sensing and decision-making in a quiet, ambient object that orchestrates lighting, guidance and monitoring around the bedroom. Conceptually, this is very close to the “calm technology” vision of ubiquitous computing, now powered by modern sensing and ML.
Why does this matter for AGI? Because any realistic path to generally intelligent assistants will require tight coupling between models and environments, not just chat windows. Startups like mui Lab are experimenting with that coupling in narrow domains like sleep, but the same pattern (millimeter-wave sensing + embedded models + orchestrated actuators) can generalize to broader home, office and hospitality contexts. The more these deployments normalize continuous, unseen AI decision-making in physical spaces, the more political and technical pressure there will be to develop robust, interpretable and safe agent systems that operate beyond text.
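mui Lab has not published the internals of the Calm Sleep Platform, but the generic sense-infer-actuate loop behind this class of ambient systems is easy to sketch. The toy Python below is a minimal illustration of that pattern under our own assumptions, not mui Lab’s implementation: the radar fields, classifier thresholds and lighting call are all invented for the example.

```python
import random
import time
from dataclasses import dataclass
from enum import Enum, auto

class SleepStage(Enum):
    AWAKE = auto()
    LIGHT = auto()
    DEEP = auto()

@dataclass
class RadarFrame:
    """One frame of (hypothetical) millimeter-wave sensor output."""
    breathing_rate: float   # breaths per minute
    movement_energy: float  # arbitrary motion-intensity units

def read_radar() -> RadarFrame:
    # Stand-in for a real mmWave driver; returns synthetic data here.
    return RadarFrame(
        breathing_rate=random.uniform(10, 18),
        movement_energy=random.uniform(0.0, 1.0),
    )

def classify(frame: RadarFrame) -> SleepStage:
    # Placeholder for an embedded ML model; crude thresholds instead.
    if frame.movement_energy > 0.6:
        return SleepStage.AWAKE
    if frame.breathing_rate < 13:
        return SleepStage.DEEP
    return SleepStage.LIGHT

def set_light_level(level: float) -> None:
    # Stand-in for a smart-lighting actuator call.
    print(f"light -> {level:.0%}")

def control_loop(ticks: int = 5) -> None:
    """Sense -> infer -> actuate: the ambient-computing pattern."""
    for _ in range(ticks):
        stage = classify(read_radar())
        # Dim the room as the occupant drifts deeper into sleep.
        target = {SleepStage.AWAKE: 0.4,
                  SleepStage.LIGHT: 0.1,
                  SleepStage.DEEP: 0.0}[stage]
        set_light_level(target)
        time.sleep(1)

if __name__ == "__main__":
    control_loop()
```

The point of the sketch is the shape, not the details: the model runs continuously against ambient sensor data and its outputs land in the physical environment rather than in a chat window, which is exactly where questions of robustness and interpretability get harder to audit.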

