California, a jurisdiction that often influences legislators worldwide, passed a slew of new laws that regulate deepfakes.
What’s new: California Governor Gavin Newsom signed into law eight bills that aim to curb the use of generative AI in politics and entertainment.
How it works: The legislation prohibits deceptive AI-generated media in political campaigns; requires permission for using digital stand-ins for actors, musicians, and other entertainers; and criminalizes generation of sexually explicit imagery without the subject’s consent.
- One law prohibits knowingly distributing deceptive AI-generated information about candidates, election officials, or voting processes from 120 days before an election through 60 days afterward. The bill defines “materially deceptive content” as images, audio, or video that were intentionally created or modified but would appear authentic to a reasonable person.
- Two related laws mandate disclosure when AI is used to produce political advertisements. The first requires that AI-generated campaign ads include the statement, “ad generated or substantially altered using artificial intelligence.” The other calls for large online platforms to label or remove AI-generated media related to elections.
- Two further laws protect performers by controlling “digital replicas,” defined as “computer-generated, highly realistic electronic representation[s] of an individual’s voice or likeness.” One voids contracts for the use of digital replicas if performers didn’t have legal or union representation when they made the agreements. The other prohibits commercial use of deceased performers’ digital replicas without permission of their estates.
- Two laws regulate sexually explicit synthetic content. One makes creating or distributing non-consensual, AI-generated sexually explicit content a disorderly conduct misdemeanor. The other requires social media platforms to report sexually explicit deepfakes.
- An additional law requires that AI-generated media include a disclosure of its provenance.
Behind the news: Newsom has not yet acted on Senate Bill 1047, a controversial bill that would impose significant burdens on AI model developers. He has expressed concern that the bill could interfere with innovation, especially with respect to open source projects.
Why it matters: Laws passed in California often point the way for legislators in other U.S. states, the federal government, and consequently other countries. The new laws that regulate deepfakes in political campaigns fill a gap left by the Federal Election Commission (FEC), which has said it lacks authority to regulate the use of AI in political ads. Meanwhile, the Federal Communications Commission (FCC) proposed rules that would mandate disclosure of uses of AI in political ads but has yet to implement them.
We’re thinking: We’re glad to see California target undesirable applications rather than AI models. Regulating applications rather than general-purpose technology that has a wide variety of uses — many of which are beneficial — avoids the dangers of California SB-1047, which is still awaiting the governor’s signature or veto. That bill, which seeks to restrict AI models, would endanger innovation, especially open source development.