On January 11, 2026, Korean media highlighted that South Korea will begin enforcing its Framework Act on the Development of Artificial Intelligence and Building Trust on January 22, making it the first country to fully implement a comprehensive AI law. The coverage notes growing unease among companies about the new rules, including watermarking requirements, obligations for "high-impact AI," potential fines, and the scope of grace periods.
This article aggregates reporting from two news sources. The TL;DR is AI-generated from the original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.
South Korea is moving from drafting to enforcing an AI framework faster than any other major economy, leaping ahead of the EU in actual implementation. The law pairs promotional goals—national AI plans, investment support, data centers—with stricter obligations for "high-impact" AI affecting health, finance, and fundamental rights. For frontier developers, this makes Korea an early test case for risk-based regulation: how clear are the category definitions, how heavy is the compliance burden, and does it meaningfully change deployment behavior or just add paperwork?
In the AGI race, early enforcement could cut both ways. On one hand, formal rules and a grace period may give Korean companies more legal certainty than the current patchwork of guidelines in many countries, potentially speeding up deployment in regulated sectors. On the other, firms wary of becoming the first enforcement example may steer away from truly experimental systems or relocate risky research to friendlier jurisdictions. The world will watch how Korea applies its watermarking, auditing, and agent-accountability requirements in practice—and whether they become templates for other governments or cautionary tales.