Regulation
Wednesday, December 17, 2025

Vietnam passes first AI law mandating labels on AI-generated content

Source: Vietnam Ministry of Science and Technology
Original title (translated from Vietnamese): National Assembly passes the Artificial Intelligence Law for the first time, requiring labels on AI products

TL;DR


On December 10, 2025, Vietnam’s National Assembly approved its first Artificial Intelligence Law, with details published December 17 by the Ministry of Science and Technology. The law requires AI‑generated or AI‑edited audio, images and video that mimic real people or events to be clearly labeled, and clarifies that AI systems are not intellectual‑property right holders.

About this summary

This article aggregates reporting from one news source. The TL;DR is AI-generated from the original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.

Race to AGI Analysis

Vietnam’s first AI law is notable because it goes straight at two hot‑button issues: deepfakes and authorship. The mandatory labeling of AI‑generated or AI‑edited media that imitates real people or events is a strong, concrete content rule in a space where many jurisdictions are still debating principles. At the same time, the law draws a clear line that AI systems are not intellectual‑property rights holders: fully autonomous AI outputs don’t get copyright‑style protection, while human‑directed works that use AI as a tool remain protectable.

For the AGI conversation, this is another data point that emerging economies are not waiting for the EU or US to define the rules. Vietnam is embedding transparency and human primacy into its legal framing early, which could influence how local platforms, creatives and startups adopt generative tools. It also surfaces tricky edge cases: what happens when user input is minimal but the AI’s contribution is substantial? The law anticipates this by giving government leeway to define thresholds of human creativity.

As more countries adopt similar labeling and authorship rules, global AI companies will need robust provenance, watermarking, and consent systems baked into their products rather than bolted on later. That kind of infrastructure, once deployed at scale, could meaningfully shape how future AGI‑class systems are trained, logged, and governed.

Who Should Care

Investors · Researchers · Engineers · Policymakers