On April 2, 2026, the Actors Committee of the China Broadcasting and Television Social Organisation Association issued a formal statement condemning rampant AI face-swapping, voice cloning and unauthorized use of performers’ likenesses to train models. The group warned that such practices seriously harm performers’ rights and disrupt the audiovisual industry.
This article aggregates reporting from four news sources; Race to AGI's analysis provides editorial context on the implications for AGI development.
China’s national actors’ committee stepping in on AI deepfakes is another sign that generative media has moved from novelty to labor dispute and rights issue. The statement doesn’t create new law, but it consolidates a clear industry position: unauthorized AI face-swapping, voice cloning and dataset scraping of performers’ likenesses are to be treated as rights violations, not grey-area experimentation. That will shape how platforms, advertisers and studios assess their risk when deploying synthetic media at scale.
From a race-to-AGI perspective, this doesn’t slow foundational research, but it does constrain one of the easiest on-ramps for applying powerful generative models: entertainment and influencer content. If rights enforcement tightens, whether through lawsuits, takedowns or future regulations, model builders will need cleaner consent frameworks and licensing arrangements for high-quality audiovisual data. Over time, that likely rewards players who invest early in rights-respecting datasets and watermarking rather than those who scrape first and apologize later.
More broadly, the statement is another data point in a global pattern: sectoral bodies (actors’ guilds, news publishers, education ministries) are articulating boundaries well before comprehensive AI laws arrive. That bottom-up pressure will influence how frontier labs and consumer apps design their tools, especially controls for identity cloning and content provenance.