Google Joins AI Peers In Military Work

Google revises AI principles, lifting ban on weapons and surveillance applications

Illustration of the Google logo near a futuristic facility with fighter jets flying overhead.

Google revised its AI principles, reversing previous commitments to avoid work on weapons, surveillance, and other military applications beyond non-lethal uses like communications, logistics, and medicine.

What’s new: Along with releasing its latest Responsible AI Progress Report and an updated AI safety framework, Google removed key restrictions from its AI principles. The new version omits a section in the previous document titled “Applications we will not pursue.” The deleted text pledged to avoid “technologies that cause or are likely to cause overall harm” and, where the technology risks doing harm, to “proceed only where we believe that the benefits substantially outweigh the risks” with “appropriate safety constraints.”

How it works: Google’s AI principles no longer prohibit specific applications but promote developing the technology to improve scientific inquiry, national security, and the economy.

  • The revised principles state that AI development should be led by democracies. The company argues that such leadership is needed given growing global competition in AI from countries that are not widely considered liberal democracies.
  • The new principles stress “responsible development and deployment” to manage AI’s complexities and risks. They state that AI must be developed with safeguards at every stage, from design and testing to deployment and iteration, and those safeguards must adapt as technology and applications evolve.
  • The revised principles also emphasize collaborative progress, stating that Google aims to learn from others and build AI that’s broadly useful across industries and society.
  • Google emphasizes the need for “bold innovation,” stating that AI should be developed to assist, empower, and inspire people; drive economic progress; enable scientific breakthroughs; and help address global challenges. Examples include AlphaFold 3, which predicts how biological molecules interact, a key step in designing drugs and other compounds that act on them.
  • The revised principles are buttressed by the 2025 Responsible AI Progress Report. This document outlines the company’s efforts to evaluate risks through measures that align with the NIST AI Risk Management Framework, including red teaming, automated assessments, and input from independent experts.

Behind the news: Google’s new stance reverses a commitment it made in 2018 after employees protested its involvement in Project Maven, a Pentagon AI program for drone surveillance, from which Google ultimately withdrew. At the time, Google pledged not to develop AI applications for weapons or surveillance, which set it apart from Amazon and Microsoft. Since then, the company has expanded its work in defense, building on a $1.3 billion contract with Israel. In 2024, Anthropic, Meta, and OpenAI removed their restrictions on military and defense applications, and Anthropic and OpenAI strengthened their ties with defense contractors such as Anduril and Palantir.

Why it matters: Google’s shift in policy comes as AI plays an increasing role in conflicts in Israel, Ukraine, and elsewhere, and as global geopolitical tensions rise. While Google’s previous position kept it out of military AI development, defense contractors like Anduril, Northrop Grumman, and Palantir, as well as Google’s AI-giant peers, stepped in. The new principles recognize the need for democratic countries to take the lead in developing the technology and standards for its use, as well as the massive business opportunity in military AI as governments worldwide seek new defense capabilities. Still, no widely accepted global framework governs uses of AI in combat.

We’re thinking: Knowing how and when to employ AI in warfare is one of the most difficult ethical questions of our time. Democratic nations have a right to defend themselves, and those of us who live in democracies have a responsibility to support fellow citizens who would put themselves in harm’s way to protect us. AI is transforming military strategy, and refusing to engage with it doesn’t make the risks go away.
