The European Union, United Kingdom, United States, and other countries signed a legally binding treaty that regulates artificial intelligence.
What’s new: The treaty, officially titled the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, provides a legal framework for states to preserve democratic values while promoting AI innovation. It was negotiated by member states of the Council of Europe (an international organization that promotes democracy and human rights and includes nearly twice as many countries as the EU) along with observer states including Australia, Canada, and Mexico, which have not yet signed it. Countries that did not participate include China, India, Japan, and Russia.
How it works: The treaty will enter into force after five signatories, including at least three Council of Europe member states, ratify it. It applies to any use of AI by signatory governments, by private actors working on their behalf, or by other actors within signatories’ jurisdictions. AI is broadly defined as any “machine-based system . . . [that generates] predictions, content, recommendations, or decisions that may influence physical or virtual environments.” The signatories agreed to do the following:
- Ensure that all AI systems are consistent with human-rights obligations and democratic processes, including individual rights to participate in fair debate
- Prohibit any use of AI that would discriminate against individuals on the basis of gender or other characteristics protected by international or domestic law
- Protect individuals’ privacy rights and personal data from misuse by AI systems
- Assess AI systems for risk and impact before making them widely available
- Promote digital literacy and skills to ensure public understanding of AI
- Notify individuals when they are interacting with an AI system
- Shut down AI systems, or otherwise mitigate their effects, when they threaten to violate human rights
- Establish oversight mechanisms to ensure compliance with the treaty and provide remedies for violations
Exceptions: The treaty allows exceptions for national security and doesn’t cover military applications or national defense. It also doesn’t apply to research and development of AI systems that are not yet available for general use, unless testing such systems could interfere with human rights, democracy, or the rule of law.
Behind the news: The Council of Europe oversees the European Convention on Human Rights and the European Court of Human Rights in Strasbourg, France. Its AI treaty builds on earlier initiatives including the European Union’s AI Act, which regulates AI according to risk categories, and other national and international efforts such as the United States’ Blueprint for an AI Bill of Rights and the UK-hosted global AI Safety Summit.
Why it matters: As the first binding international agreement on AI, the treaty can be enforced by signatories’ own laws and regulations or by the European Court of Human Rights. Since so many AI companies are based in the U.S. and Europe, the treaty may influence corporate practices worldwide. Its provisions could shape the design of deployed AI systems.
Yes, but: Like any regulation, the treaty’s effectiveness depends on how its high-level concepts are interpreted. Its core terms (such as accountability, democratic processes, oversight, privacy rights, and transparency) sketch a broad framework, but their meanings are not precisely defined, and interpretation is left to the signatories. Also, the absence of major AI powers like China and large countries like Russia and India raises questions about whether the treaty’s standards can be applied globally.
We’re thinking: The EU and U.S. have very different approaches to AI regulation, with the EU taking a much heavier hand. Yet both agreed to the treaty. This could indicate that the two regions are finding common ground, which could lead to more uniform regulation internationally.