World Powers Move to Lighten AI Regulation

Global AI summit reveals deep divisions on regulation and governance


The latest international AI summit exposed deep divisions between major world powers regarding AI regulations.

What’s new: While previous summits emphasized existential risks, the AI Action Summit in Paris marked a turning point. France and the European Union shifted away from strict regulatory measures and toward investment to compete with the United States and China. However, global consensus remained elusive: the U.S. and the United Kingdom refused to sign key agreements on global governance, military AI, and algorithmic bias. The U.S. in particular pushed back against global AI regulation, arguing that excessive restrictions could hinder economic growth and that international policies should focus on more immediate concerns.

How it works: Participating countries considered three policy statements that address AI’s impact on society, labor, and security. The first statement calls on each country to enact AI policies that would support economic development, environmental responsibility, and equitable access to technology. The second encourages safeguards to ensure that companies and nations distribute AI productivity gains fairly, protect workers’ rights, and prevent bias in hiring and management systems. The third advocates for restrictions on fully autonomous military systems and affirms the need for human oversight in warfare.

  • The U.S. and U.K. declined to sign any of the three statements issued at the AI Action Summit. A U.K. government spokesperson said the statements lacked practical clarity on global AI governance and did not sufficiently address national security concerns. Meanwhile, U.S. Vice President JD Vance criticized Europe’s “excessive regulation” of AI and warned against cooperation with China.
  • Of the 60 countries in attendance, only 26 agreed to the restrictions on fully autonomous military systems, among them Bulgaria, Chile, Greece, Italy, Malta, and Portugal.
  • France pledged roughly $114 billion to AI research, startups, and infrastructure, while the EU announced a roughly $210 billion initiative to strengthen Europe’s AI capabilities and technological self-sufficiency. France also allocated 1 gigawatt of nuclear power to AI development, with 250 megawatts expected to come online by 2027.
  • Despite the tight regulations proposed at past summits and passage of the relatively restrictive AI Act last year, the EU took a sharp turn toward lighter-touch oversight. Officials emphasized cutting bureaucratic barriers to AI adoption, arguing that excessive regulation would slow Europe’s progress in building competitive AI systems and supporting innovative applications.
  • Shortly after the summit, the European Commission withdrew a proposed law (the so-called “liability directive”) that would have made it easier to sue companies for vaguely defined AI-related harms. The decision followed criticism by industry leaders and politicians, including Vance, who argued that excessive regulation could hamper investment in AI and hinder Europe’s ability to compete with the U.S. and China in AI development while failing to make people safer.

Behind the news: The Paris summit follows previous gatherings of world leaders to discuss AI, including the initial AI Safety Summit at Bletchley Park and the AI Seoul Summit and AI Global Forum. At these summits, governments and companies agreed broadly to address AI risks but avoided binding regulations. Nonetheless, divisions over AI governance have widened in the wake of rising geopolitical competition and the emergence of high-performance open weights models like DeepSeek-R1.

Why it matters: The Paris summit marks a major shift in global AI policy. The EU, once an ardent proponent of AI regulation, backed away from its strictest proposals. At the same time, doomsayers have lost influence, and officials are turning their attention to immediate concerns like economic growth, security, misuse, and bias. These moves make way for AI to do great good in the world, even as they contribute to uncertainty about how AI will be governed.

We’re thinking: Governments are shifting their focus away from unrealistic risks and toward practical strategies for guiding AI development. We look forward to clear policies that encourage innovation while addressing real-world challenges.
