International Guidelines for Military AI

Global coalition endorses blueprint for AI’s military use

Global leaders discuss Responsible AI in the military at the REAIM 2023 summit in The Hague, Netherlands.

Dozens of countries endorsed a “blueprint for action” designed to guide the use of artificial intelligence in military applications.

What’s new: More than 60 countries, including Australia, Japan, the United Kingdom, and the United States, endorsed nonbinding guidelines for military use of AI, Reuters reported. The document, presented at the Responsible Artificial Intelligence in the Military Domain (REAIM) summit in Seoul, South Korea, stresses the need for human control, thorough risk assessments, and safeguards against using AI to develop weapons of mass destruction. China and roughly 30 other countries did not sign.

How it works: Key agreements in the blueprint include commitments to ensure that AI doesn’t threaten peace and stability, violate human rights, evade human control, or hamper other global initiatives regarding military technology.

  • The blueprint advocates for robust governance, human oversight, and accountability to prevent escalation and misuse of AI-enabled weapons. It calls for national strategies and international standards that align with human rights law. It also urges countries to share information and collaborate to manage both foreseeable and unforeseeable risks, and to maintain human control over uses of force.
  • It leaves to individual nations the development of technical standards, enforcement mechanisms, and specific regulations for technologies like autonomous weapons systems.
  • The agreement notes that AI can enhance situational awareness, precision, and efficiency in military operations, helping to reduce collateral damage and civilian fatalities. AI can also support international humanitarian law, peacekeeping, and arms control by improving monitoring and compliance. But the agreement also points out risks like design flaws, data and algorithmic biases, and potential misuse by malicious actors.
  • The blueprint stresses preventing AI’s use in the development and spread of weapons of mass destruction, emphasizing human control in disarmament and nuclear decision-making. It also warns of AI increasing risks of global and regional arms races.

Behind the News: The Seoul summit followed last year’s REAIM summit in The Hague, where leaders similarly called for limits on military use of AI without binding commitments. Other international agreements, like the EU’s AI Act and the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law, regulate civilian AI but exclude military applications. Meanwhile, AI-enabled targeting systems and autonomous, weaponized drones have been used in conflicts in Somalia, Ukraine, and Israel, highlighting the lack of international norms and controls.

Why it matters: The REAIM blueprint may guide international discussions on the ethical use of AI in defense, providing a foundation for further talks at forums like the United Nations. Though it’s nonbinding, it fosters collaboration and avoids restrictive mandates that could cause countries to disengage.

We’re thinking: AI has numerous military applications across not only combat but also intelligence, logistics, medicine, humanitarian assistance, and other areas. Nonetheless, it would be irresponsible to permit unfettered use of AI in military applications. Standards developed by democratic countries working together will help protect human rights.
