Seoul AI Summit Spurs Safety Agreements
AI summit in Seoul achieves safety commitments from companies and governments

Published Jun 12, 2024 · 3 min read

At meetings in Seoul, government and corporate officials from dozens of countries agreed to take action on AI safety.

What’s new: Attendees at the AI Seoul Summit and the AI Global Forum, held concurrently in Seoul, formalized broad-strokes agreements to govern AI, The Guardian reported. Presented as a sequel to November’s AI summit at Bletchley Park outside London, the meetings yielded several multinational declarations and commitments from major tech firms.

International commitments: Government officials hammered out frameworks for promoting innovation while managing risk.

  • 27 countries and the European Union agreed to jointly develop risk thresholds in the coming months. Thresholds may include a model’s ability to evade human oversight or to help someone create weapons of mass destruction. (Representatives from China didn’t join this agreement.)
  • 10 of those 27 countries (Australia, Canada, France, Germany, Italy, Japan, the Republic of Korea, the Republic of Singapore, the United Kingdom, and the United States) and the European Union declared a common aim to create shared policies while encouraging AI development. 
  • In a separate statement, those 10 nations and the EU laid out more specific goals including exchanging information on safety tests, building an international AI safety research network, and expanding AI safety institutes beyond those currently established in the U.S., UK, Japan, and Singapore.

Corporate commitments: AI companies agreed to monitor their own work and collaborate on further measures.

  • Established leaders (Amazon, Google, IBM, Meta, Microsoft, OpenAI, Samsung) and startups (Anthropic, Cohere, G42, Inflection, xAI) were among 16 companies that agreed to continually evaluate advanced AI models for safety risks. They committed to abide by clear risk thresholds developed in concert with their home governments, international agreements, and external evaluators. If a model surpasses a threshold and the risk can’t be mitigated, they will stop developing that model immediately.
  • 14 companies, including six that didn’t sign the agreement on risk thresholds, committed to collaborate with governments and each other on AI safety, including developing international standards.

Behind the news: Co-hosted by the UK and South Korean governments at the Korea Advanced Institute of Science and Technology, the meeting followed an initial summit held at Bletchley Park outside London in November. The earlier summit facilitated agreements to create AI safety institutes, test AI products before public release, and create an international panel akin to the Intergovernmental Panel on Climate Change to draft reports on the state of AI. The panel published an interim report in May. It will release its final report at the next summit in Paris in November 2024.

Why it matters: There was a chance that the Bletchley Park summit would be a one-off. The fact that a second meeting occurred is a sign that public and private interests alike want at least a seat at the table in discussions of AI safety. Much work remains to define terms and establish protocols, but plans for future summits indicate a clear appetite for further cooperation. 

We’re thinking: Andrew Ng spoke at the AI Global Forum on the importance of regulating applications rather than technology and chatted with many government leaders there. Discussions focused at least as much on promoting innovation as on mitigating hypothetical risks. While some large companies continued to lobby for safety measures that would unnecessarily impede dissemination of cutting-edge foundation models and hamper open-source and smaller competitors, most government leaders seemed to give little credence to science-fiction risks such as AI takeover, expressing concern instead about concrete, harmful applications such as using AI to interfere with democratic elections. These are encouraging shifts!
