
On December 10, 2025, Vietnam’s National Assembly approved its first Artificial Intelligence Law, with details published December 17 by the Ministry of Science and Technology. The law requires AI‑generated or AI‑edited audio, images and video that mimic real people or events to be clearly labeled, and clarifies that AI systems are not intellectual‑property right holders.
Vietnam’s first AI law is notable because it goes straight at two hot‑button issues: deepfakes and authorship. The mandatory labeling of AI‑generated or AI‑edited media that imitates real people or events is a strong, concrete content rule in a space where many jurisdictions are still debating principles. At the same time, the law draws a clear line that AI systems are not IP rightsholders: fully autonomous AI outputs don’t get copyright‑style protection, while human‑directed works that use AI as a tool remain protectable.
For the AGI conversation, this is another data point that emerging economies are not waiting for the EU or US to define the rules. Vietnam is embedding transparency and human primacy into its legal framing early, which could influence how local platforms, creatives and startups adopt generative tools. It also surfaces tricky edge cases: what happens when user input is minimal but the AI’s contribution is substantial? The law anticipates this by giving government leeway to define thresholds of human creativity.
As more countries adopt similar labeling and authorship rules, global AI companies will need robust provenance, watermarking and consent systems baked into their products, not bolted on later. That kind of infrastructure, once deployed at scale, could meaningfully shape how future AGI‑class systems are trained, logged and governed.
