California’s Proposed AI Safety Law Puts Developers at Risk
California SB 1047 is intended to make AI safer, but its unclear requirements put developers, innovation, and open source in jeopardy.

Dear friends,

I continue to be alarmed at the progress of proposed California regulation SB 1047 and the attack it represents on open source and, more broadly, on AI innovation. As I wrote previously, this proposed law makes the fundamental mistake of regulating AI technology instead of AI applications, and thus would fail to make AI meaningfully safer. I’d like to explain why the specific mechanisms of SB 1047 are so pernicious to open source.

To be clear, there are routes that regulators should pursue to improve safety. For example, I would welcome outlawing nonconsensual deepfake pornography, standardizing watermarking and fingerprinting to identify generated content, and investing more in red teaming and other safety research. Unfortunately, the proposed bill pursues a less beneficial and more harmful path.

SB 1047’s purported goal is to ensure safety of AI models. It puts in place complex reporting requirements for developers who fine-tune models or develop models that cost more than $100 million to train. It is a vague, ambiguous law that imposes significant penalties for violations, creating a huge gray zone in which developers can’t be sure how to avoid breaking the law. This will paralyze many teams. 

You can read the latest draft of the law here. I’ve read through it carefully, and I find it ambiguous and very hard to follow.

Developers who try to navigate the law’s complex requirements face what feels like a huge personal risk. The law requires developers to submit a certification of compliance with its requirements. But when those requirements are complex, hard to understand, and subject to shifting at the whims of an unelected body (more on this below), how can a developer be sure they are in compliance?

For example, the certification must include many different sections. One is an analysis of “the nature and magnitude of critical harms … the model might reasonably cause or enable.” But given that even leading AI researchers aren’t sure what harms models might cause or enable, how is a team of developers supposed to figure this out and declare — under penalty of perjury — that they meet this requirement? 

Further, some developers will be required to implement “protections to prevent … misuse of, or unsafe post-training modifications of, the covered model and all covered model derivatives … that are appropriate in light of the risks associated with the covered model, including from advanced persistent threats or other sophisticated actors.” Even leading AI researchers don’t agree on how best to “protect” AI models against these supposed risks, or what would be “appropriate.” So how are developers supposed to figure out how to comply with this requirement? 

This creates a scary situation for developers. Committing perjury could lead to fines and even jail time. Some developers will have to hire expensive lawyers or consultants to advise them on how to comply with these requirements. (I am not a lawyer and am not giving legal advice, but one way to try to avoid perjury is to show that you are relying on expert advice, to demonstrate that you had no intent to lie.) Others will simply refrain from releasing cutting-edge AI products.

If this law passes, the fear of a jury trial, with an unpredictable verdict and significant penalties in the event of a conviction, will be very real. What if someone releases a model today after taking what they genuinely felt were reasonable safeguards, and a few years later, when views on AI technology have shifted, an aggressive prosecutor convinces a jury that whatever they did was not, in hindsight, “reasonable”? Reasonableness is ambiguous, and its legal interpretation can depend on case law, jury instructions, and common facts, among other things. This makes it very hard to ensure that what a developer does today will be deemed reasonable by a future jury. (For more on this, see Context Fund’s analysis of SB 1047.)

One highly placed lawyer in the California government who studied this law carefully told me they found it hard to understand. I invite you to read it and judge for yourself — if you find the requirements clear, you might have a brilliant future as a lawyer! 

Adding to the ambiguity, the bill would create a Frontier Model Division (FMD) with a five-person board that has the power to dictate standards to developers. This small board would be a great target for lobbying and regulatory capture. (Bill Gurley has a great video on regulatory capture.) The unelected FMD can levy fees on developers to cover its costs. It can arbitrarily change the computation threshold at which fine-tuning a model becomes subject to its oversight. This can lead to even small teams being required to hire an auditor to check for compliance with an ambiguous safety standard. 

These provisions don’t ensure that AI is safe. They create regulatory uncertainty and more opportunities for vested interests that wish to stifle open source to lobby for shifts in the requirements that raise the cost of compliance. This would lock out many teams that lack a revenue stream, specifically many open-source contributors, who cannot afford the lobbyists, auditors, and lawyers needed to help ensure they comply with these ambiguous and unreasonable requirements.

Open source is a wonderful force that is bringing knowledge and tools to many people, and is a key pillar of AI innovation. I am dismayed at the concerted attacks on it. Make no mistake, there is a fight in California right now for the future health of open source. I am committed to doing what I can to preserve open source, but I don’t assume that the pro-open source side will prevail. I hope you will join me in speaking out against SB 1047 and other laws that threaten to stifle open source. 

Keep learning!

Andrew 
