Angst at the prospect of intelligent machines boiled over into moves to block or limit the technology.
What happened: Fear of AI-related doomsday scenarios prompted proposals to delay research and soul-searching among prominent researchers. Amid the doomsaying, lawmakers took dramatic regulatory steps.
Driving the story: AI-driven doomsday scenarios have circulated at least since the 1950s, when computer scientist and mathematician Norbert Wiener claimed that “modern thinking machines may lead us to destruction.” Such worries, amplified by prominent members of the AI community, erupted in 2023.
- The not-for-profit Future of Life Institute published an open letter that called for a six-month pause in training powerful AI models. It garnered nearly 34,000 signatures.
- Deep learning pioneers Geoffrey Hinton and Yoshua Bengio expressed their worries that AI development could lead to human extinction, perhaps at the hands of a superhuman intelligence.
- Google, Microsoft, and OpenAI urged the U.S. Congress to take action.
- The UK government convened an international AI Safety Summit at Bletchley Park, where 10 countries including France, Germany, Japan, the U.S., and the UK agreed to form a panel that will report periodically on the state of AI.
Regulatory reactions: Lawmakers from different nations took divergent approaches with varying degrees of emphasis on preventing hypothetical catastrophic risks.
- China aimed to protect citizens from intrusions on their privacy without limiting government power. It added requirements to label AI-generated media and prohibited face recognition, with broad exceptions for safety and national security.
- The United States moved to promote individual privacy and civil rights as well as national security under existing federal laws. Although the U.S. didn’t pass national regulations, the White House collaborated with large AI companies to craft both voluntary limits and an executive order that requires extensive disclosure and testing of models that exceed a particular computational threshold.
- The European Union’s AI Act aims to mitigate the highest perceived risks. The bill limits certain AI applications, including biometric identification and determinations of eligibility for employment or public services. It also mandates that developers of general-purpose models disclose information to regulators. The law imposes a lighter burden on smaller companies and provides some exceptions for open source models. Like China’s rules, it exempts member states’ military and police forces.
Striking a balance: AI has innumerable beneficial applications that we are only just beginning to explore. Excessive worry over hypothetical catastrophic risks threatens to block AI applications that could bring great benefit to large numbers of people. Some moves to limit AI would impinge on open source development, a major engine of innovation, while having the anti-competitive effect of enabling established companies to continue to develop the technology in their own narrow interest. It’s critical to weigh the harm that regulators might do by limiting this technology in the short term against highly unlikely catastrophic scenarios.
Where things stand: AI development is moving too quickly for regulators to keep up. Limiting AI’s potential harms without hampering the good it can do will require great foresight, and a willingness to do the hard work of identifying real, application-level risks rather than imposing blanket regulations on basic technology. The EU’s AI Act is a case in point: The bill, initially drafted in 2021, has needed numerous revisions to address developments since then. Should it gain final approval, it will not take effect for another two years. By then, AI likely will raise further issues that lawmakers can’t see clearly today.