The U.S. Department of Defense issued new ethical guidelines for contractors who develop its AI systems.
What’s new: The Pentagon’s Defense Innovation Unit, which awards contracts for AI and other high-tech systems, released guidelines that contractors must follow to ensure that their systems work as planned without harmful side effects.
How it works: The authors organized the guidelines around three phases: AI system planning, development, and deployment. At each phase, questions arranged in a flowchart prompt contractors to satisfy the Defense Department’s ethical principles for AI before moving on to the next stage (the first sketch after the list below illustrates the gating logic).
- During planning, contractors work with officials to define a system’s capabilities, what it will take to build it, and how they expect it to be deployed.
- During development, contractors must explain how they will prevent data manipulation, assign responsibility for changes in the system’s capabilities, and outline procedures for monitoring and auditing.
- During deployment, contractors must perform continuous assessments to ensure that their data remains valid, the system operates as planned, and any harm it causes is documented.
- In a case study, the guidelines helped a team realize that its system for examining x-ray images could deny critical care to patients with certain rare conditions. To address the issue, the team tested the model on rare classes of x-ray images (the second sketch below shows one way to run such a per-class check).
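The flowchart itself isn’t code, but the gating idea is easy to picture. Below is a minimal sketch in Python, with phase names and checklist questions paraphrased from the bullets above (nothing here is quoted from the actual DIU document): a project can’t advance past a phase until every question in that phase is answered affirmatively.

```python
# Hypothetical sketch of a phase-gated ethics checklist. Phase names and
# questions are illustrative paraphrases, not taken from the DIU document.

PHASES = {
    "planning": [
        "Are the system's capabilities defined?",
        "Is it clear what building the system will take?",
        "Is the expected deployment context documented?",
    ],
    "development": [
        "Are safeguards against data manipulation explained?",
        "Is responsibility for capability changes assigned?",
        "Are monitoring and auditing procedures outlined?",
    ],
    "deployment": [
        "Is the data still valid?",
        "Does the system operate as planned?",
        "Is any harm it causes documented?",
    ],
}

def current_gate(answers):
    """Return the first phase whose checklist isn't fully satisfied.

    `answers` maps phase -> question -> bool. A project may not move
    on to the next stage until every question in its phase passes.
    """
    for phase, questions in PHASES.items():
        done = answers.get(phase, {})
        if not all(done.get(q, False) for q in questions):
            return phase  # blocked here until all questions pass
    return "complete"

# Example: planning is finished, development is not.
answers = {"planning": {q: True for q in PHASES["planning"]}}
print(current_gate(answers))  # -> "development"
```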
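The case study’s fix, checking rare classes separately, is also straightforward to express. Here’s a minimal sketch, using made-up labels rather than any real radiology model, that computes per-class recall so that a strong aggregate score can’t hide failures on rare conditions.

```python
from collections import Counter, defaultdict

def per_class_recall(y_true, y_pred):
    """Recall per class, so rare classes are judged on their own terms.

    A classifier can post high overall accuracy while missing nearly
    every example of a rare condition; per-class recall exposes that.
    """
    support = Counter(y_true)          # number of true examples per class
    hits = defaultdict(int)            # correct predictions per class
    for truth, pred in zip(y_true, y_pred):
        if truth == pred:
            hits[truth] += 1
    return {cls: hits[cls] / n for cls, n in support.items()}

# Made-up labels: "normal" is common, "rare_condition" is not.
y_true = ["normal"] * 95 + ["rare_condition"] * 5
y_pred = ["normal"] * 100              # model predicts "normal" every time

print(per_class_recall(y_true, y_pred))
# -> {'normal': 1.0, 'rare_condition': 0.0}, despite 95% overall accuracy
```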
Behind the news: The Pentagon adopted its ethical principles for AI in February 2020 after 15 months of consultation with experts in industry, academia, and government. The document, which applies to service members, leaders, and contractors, defines ethical AI through five principles: It must be responsible, equitable, traceable, reliable, and governable.
Why it matters: The Department of Defense (DOD) invests generously in AI. One estimate projects that military spending on machine learning contracts will reach $2.8 billion by 2023. But the department has had difficulty collaborating with big tech: In 2018, over 4,000 Google employees protested the company’s involvement in a DOD program called Project Maven, highlighting qualms among many AI professionals about military uses of their work. DOD’s new emphasis on ethics may portend a smoother relationship between big tech and the military.
We’re thinking: The document doesn’t mention fully autonomous weapons, but they lurk in the background of any discussion of military AI. While we acknowledge the right of nations to defend themselves, we support the United Nations’ proposal to ban such systems.