The UK’s electronic surveillance agency published its plan to use AI.
What’s new: Government Communications Headquarters (GCHQ) outlined its intention to use machine learning to combat security threats, human trafficking, and disinformation — and to do so ethically — in a new report.
What it says: GCHQ said its AI will augment, rather than supplant, human analysts. Moreover, the agency will strive to use AI with privacy, fairness, transparency, and accountability by emphasizing ethics training and thoroughly reviewing all systems. Such systems will:
- Analyze data on large computer networks to prevent cyberattacks, identify malicious software, and trace attacks back to their origins.
- Intercept sexually explicit imagery featuring minors and messages from sexual predators.
- Combat drug smuggling and human trafficking by analyzing financial transactions and mapping connections between the individuals behind them.
- Counter misinformation using models that detect deepfakes, assist with checking facts, and track both content farms that pump out fake news and botnets that spread it.
Behind the news: While intelligence agencies rarely detail their AI efforts, several examples have come to light.
- German law enforcement agencies use AI-generated images of minors to trap online predators.
- The U.S. National Reconnaissance Office is developing a system to guide surveillance satellites.
- The U.S. National Security Agency is training models to audit regulatory compliance by other intelligence agencies as they search for international criminals and look for warnings of emerging crises.
Why it matters: The GCHQ plan emphasizes the utility of AI systems in securing nations and fighting crime — and highlights the need to ensure that sound ethical principles are built into their design and use.
We’re thinking: GPT-007 prefers its data shaken, not perturbed.