On January 11, 2026, The Straits Times, citing a Seoul Education Research and Information Institute survey, reported that 94.7% of Seoul middle and high school students have used generative AI tools such as ChatGPT and Gemini. Over 90% of teachers surveyed expressed concern about students’ over‑reliance on these tools and potential plagiarism.
The Seoul survey is one of the clearest data points yet that a whole cohort of teenagers is growing up with generative AI as a default study tool, not a novelty. When roughly 95% of students have used systems like ChatGPT or Gemini, and a majority lean on them for language classes and assignments, AI stops being an add‑on and starts becoming part of basic literacy. ([straitstimes.com](https://www.straitstimes.com/asia/east-asia/nearly-all-seoul-students-use-generative-ai-as-teachers-warn-of-overreliance))
For the AGI trajectory, this accelerates two parallel trends. On the one hand, it normalizes interaction with increasingly capable models from a young age, building intuitions and expectations that both spur demand for more general systems and produce a workforce comfortable orchestrating AI agents. On the other, teachers’ fears about plagiarism and hollowed‑out critical thinking point to a riskier equilibrium in which students outsource their reasoning to black boxes. If entire education systems adapt by reshaping curricula and assessment around AI‑assisted work, humans and models will effectively be trained in tandem, not in isolation.
In the competitive landscape, countries whose school systems move fastest to harness AI productively—without letting it replace actual learning—will likely generate more AI‑fluent talent. South Korea, which already has strong STEM performance and a robust games and content industry, is positioning that talent base right at the frontier of model use and critique.