CIO reported on January 15, 2026 that McKinsey is piloting a hiring process where candidates use the firm’s internal AI assistant, Lilli, during case interviews. The test evaluates how applicants query and interpret AI output, and could be rolled out across all recruiting if the trial proves successful.
This article aggregates reporting from one news source. The TL;DR is AI-generated from the original reporting. Race to AGI's analysis provides editorial context on the implications for AGI development.
McKinsey's use of its own AI assistant in the hiring process is a small but telling cultural shift. The firm isn't just asking, "Can you solve a case?" but "Can you work with an AI partner to solve a case?" That reframes what elite white-collar work looks like in a post-chatbot world: the scarce skill becomes orchestration—asking good questions, judging when to trust or override model output, and integrating machine suggestions into client-ready thinking.([cio.com](https://www.cio.com/article/4117688/mckinsey-begins-testing-candidates-with-an-ai-assistant.html))
As AI systems become more capable, especially at reasoning and multi-step planning, the human–AI division of labor will keep sliding. By embedding Lilli directly into interviews, McKinsey is implicitly signaling to clients and recruits that AI-native consulting is the future baseline, not a side experiment. For the race to AGI, this accelerates the feedback loop between frontier capabilities and high-value workflows: consultants who live inside AI-assisted tools all day will push vendors for more general, more reliable, more controllable systems. They are also exactly the kind of structured users whose behavior can generate rich training data for future agentic systems.