China’s internet watchdog issued new rules that govern synthetic media.
What’s new: Rules issued by the Cyberspace Administration of China limit the use of AI to create or edit text, audio, video, images, and 3D digital renderings. They took effect on January 10.
How it works: The rules regulate so-called “deep synthesis” services:
- AI may not be used to generate output that endangers national security, disturbs economic or social order, or harms China’s image.
- Providers of AI models that generate or edit faces must obtain consent from individuals whose faces were used in training and verify users’ identities.
- Providers must clearly label AI-generated media that might confuse or mislead the public into believing false information. Such labels may not be altered or concealed (see the labeling sketch after this list).
- Providers must dispel false information generated by their models, report such incidents to the authorities, and keep records of violations.
- Providers are required to review their algorithms periodically. Government departments may carry out their own inspections. Inspectors can penalize providers by halting registration of new users, suspending service, or pursuing prosecution under relevant laws.
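The rules mandate labels but leave the technical mechanism to providers. As one illustration of what compliance could look like, here is a minimal Python sketch, assuming the Pillow imaging library, that stamps a visible “AI-generated” mark onto a synthetic image before delivery. The label text, placement, and file names are illustrative choices, not anything the regulation specifies.

```python
from PIL import Image, ImageDraw

def label_synthetic_image(path_in: str, path_out: str,
                          text: str = "AI-generated") -> None:
    """Stamp a visible provenance label in the image's corner."""
    image = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(image)
    # Measure the label so it can be placed with a margin in the
    # bottom-left corner, over a dark backing box for legibility.
    left, top, right, bottom = draw.textbbox((0, 0), text)
    w, h = right - left, bottom - top
    margin = 10
    x, y = margin, image.height - h - margin
    draw.rectangle((x - 4, y - 4, x + w + 4, y + h + 4), fill=(0, 0, 0))
    draw.text((x, y), text, fill=(255, 255, 255))
    image.save(path_out)

if __name__ == "__main__":
    # Hypothetical file names for illustration only.
    label_synthetic_image("generated.png", "generated_labeled.png")
```

In practice, a provider might pair a visible mark like this with an embedded watermark so the label can’t simply be cropped away, since the rules forbid altering or concealing labels.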
Behind the news: The rules expand on China’s earlier efforts to rein in deepfakes, which required social media users to register under their real names and threatened prison time for people caught spreading fake news. Several U.S. states have also passed laws that target deepfakes, and a 2022 European Union law requires social media companies to label disinformation, including deepfakes, and to withhold financial rewards such as ad revenue from users who distribute it.
Why it matters: China’s government has been proactive in restricting generative AI applications whose output could do harm. Elsewhere, generative AI faces a grassroots backlash against its potential to disrupt education, art, and other cultural and economic arenas.
We’re thinking: Models that generate media offer new approaches to building and using AI applications. They’re exciting, but they also raise questions of fairness, regulation, and harm reduction. The AI community has an important role to play in answering them.