Big AI Pursues Military Contracts

Meta and Anthropic open doors for AI in U.S. defense and national security

Two top AI companies changed their stances on military and intelligence applications.

What’s new: Meta made its Llama family of large language models available to the U.S. government for national security purposes — a major change in its policy on military applications. Similarly, Anthropic will offer its Claude models to U.S. intelligence and defense agencies.

How it works: Meta and Anthropic are relying on partnerships with government contractors to navigate the security and procurement requirements for military and intelligence work.

  • Meta’s partners in the defense and intelligence markets include Accenture, Amazon, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI, and Snowflake. These companies will integrate Llama models into U.S. government applications in areas like logistics, cybersecurity, intelligence analysis, and tracking terrorists’ financial activities.
  • Some Meta partners have built specialized versions of Llama. For example, Scale AI fine-tuned Llama 3 for national security applications. Called Defense Llama, the fine-tuned model can assist with tasks such as planning military operations and analyzing an adversary’s vulnerabilities.
  • Anthropic will make its Claude 3 and 3.5 model families available to U.S. defense and intelligence agencies via a platform built by Palantir, which provides big-data analytics to governments, and hosted by Amazon Web Services. The government will use Claude to review documents, find patterns in large amounts of data, and help officials make decisions.

Behind the news: In 2018, Google faced backlash when it won a contract with the U.S. government to build Project Maven, an AI-assisted intelligence platform. Employees protested, resigned, and called on the company to eschew military AI work. Google withdrew from the project, and Palantir took it over. Since then, many AI developers, including Meta and Anthropic, have prohibited use of their models for military applications. Llama’s new availability to U.S. military and intelligence agencies is a notable exception. In July, Anthropic, too, began to accommodate use of its models for intelligence work. Anthropic still prohibits using Claude to develop weapons or mount cyberattacks.

Why it matters: The shift in Meta’s and Anthropic’s policies toward military uses of AI is momentous. Lately, AI has become a battlefield staple in the form of weaponized drones, and AI companies must take care that their new policies are consistent with upholding human rights. Military uses of AI include not only weapons development and targeting but also potentially life-saving search and rescue, logistics, intelligence, and communications. Moreover, defense contracts represent major opportunities for AI companies, and the revenue can fund widely beneficial research and applications.

We’re thinking: Peace-loving nations face difficult security challenges, and AI can be helpful in meeting them. At the same time, the militarization of AI brings challenges to maintaining peace and stability, upholding human rights, and retaining human control over autonomous systems. We call on developers of military AI to observe the guidelines proposed by Responsible Artificial Intelligence in the Military Domain (REAIM), which are endorsed by more than 60 countries and call for robust governance, oversight, accountability, and respect for human rights.
