Innovation Can’t Win
Bureaucracy chokes AI growth as lawmakers tighten grip


Politicians and pundits have conjured visions of doom to convince lawmakers to clamp down on AI. What if terrified legislators choke off AI innovation instead?

The fear: Laws and treaties ostensibly intended to prevent harms wrought by AI are making the development of new models legally risky and prohibitively expensive. Without room to experiment, AI’s benefits will be strangled by red tape.

Horror stories: At least one law that would have damaged AI innovation and open source has been blocked, but another is already limiting access to technology and raising costs for companies, developers, and users worldwide. More such efforts are likely underway.

  • California SB 1047 would have held developers of models above a certain size (requiring 10^26 floating-point operations or $100 million to train) liable for unintended harms caused by their models, such as helping to perpetrate thefts or cyberattacks or to design weapons of mass destruction. The bill required such systems to include a “kill switch” that would enable developers to disable them in an emergency – a problematic requirement for open-weights models, which can be modified and deployed anywhere. Governor Gavin Newsom vetoed the bill in October, arguing that it didn’t target real risks and could have unintended consequences, but legislators may yet introduce (and the governor could sign) a modified bill.
  • The European Union’s AI Act, which took effect in August 2024, restricts applications deemed high-risk, such as face recognition and predictive policing, and subjects models used in essential fields like education, employment, and law enforcement to strict scrutiny. It also requires developers to provide detailed information about their models’ algorithms and data sources. Critics argue that it could stifle European companies’ early-stage research. Meta restricted Llama 3’s vision capabilities in the EU out of concern that they may run afoul of the union’s privacy laws, and Apple delayed launching AI features in Europe due to regulatory uncertainties. Meta, Apple, Anthropic, TikTok, and other leading companies declined to sign the EU’s Artificial Intelligence Pact, which would have committed them to comply with certain provisions of the AI Act before they take effect.
  • In September, the U.S., UK, and many countries in Europe and elsewhere signed the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law. This treaty, which will take effect by the end of the year, requires that AI models respect democracy and human rights. It’s legally binding on signatories and may be enforceable by the Council of Europe’s European Court of Human Rights. In practical terms, though, each member can impose its own definition of democracy and human rights, potentially creating a patchwork of legal uncertainties and burdens for AI companies worldwide.
  • China has passed a number of laws that aim to reduce AI’s potential harms by exerting strong government control. Key laws require companies to label AI-generated output and disclose training sets and algorithms to the government, and they mandate that AI-generated media align with government policies on inappropriate speech. Some companies, including OpenAI and Anthropic, have restricted their offerings in China.

How scared should you be: The veto of SB 1047 was a narrow escape for California and the companies and labs that operate there. Yet regulations like the AI Act are poised to reshape how AI is trained and used worldwide. History suggests that restrictive laws often lead to more caution and less experimentation among technologists.

Facing the fear: AI needs thoughtful regulation that empowers developers to build a better world, avoid harms, and keep learning. But effective regulation of AI requires restricting applications, not the underlying technology that enables them. Policymakers should work with a wide range of developers – not just a few with deep pockets – to address harmful applications without stifling broader progress.
