On January 18, 2026, Ukrainian tech outlet dev.ua summarized new Google–Ipsos survey data showing that only about 40% of US adults used AI tools in the past year, below adoption levels in countries such as Nigeria and India. The survey of 21,000 people across 21 countries found rising global optimism about AI’s benefits, especially among regular users, and highlighted shifts from entertainment toward learning and problem‑solving as primary use cases.
This article aggregates reporting from two news sources. The TL;DR above is AI-generated from the original reporting; Race to AGI's analysis adds editorial context on the implications for AGI development.
The Google–Ipsos numbers break a common assumption: that the countries leading AI R&D will also lead in day‑to‑day adoption. Instead, early data suggests richer English‑speaking markets are comparatively cautious, while places like Nigeria and India are leaning in harder. That matters because the behaviors that shape how AI systems actually evolve—what people ask, where they rely on AI in work and life—may increasingly be set outside the US and Western Europe. ([dev.ua](https://dev.ua/en/news/paradoks-ale-amerykantsi-vykorystovuiut-shi-ridshe-za-meshkantsiv-niherii-ta-indii-pro-shcho-svidchat-rezultaty-novoho-velykoho-opytuvannia-1768740030))
From a competitive standpoint, this flips the script on where “AI‑native” work cultures might emerge. If millions of professionals in Lagos or Bangalore are more comfortable using AI for learning, decision support and everyday tasks than their peers in New York or Berlin, you could see emerging markets leapfrog in productivity, entrepreneurship and even in shaping alignment norms. At the same time, lower adoption in the US could reflect justified risk sensitivity in high‑stakes settings, or simply a lag that will close quickly once tools feel more trustworthy.
For the AGI race, adoption patterns won’t change fundamental research trajectories. They will, however, determine where the richest interaction datasets come from and which cultures’ values are over‑represented in the feedback signals used for fine‑tuning and reinforcement learning from human feedback (RLHF).