A crime-fighting AI company altered evidence to please police, a new investigation claims — the latest in a rising chorus of criticism.
What’s new: ShotSpotter, which makes a widely used system of the same name that detects the sound of gunshots and triangulates their location, modified the system’s findings in some cases, Vice reported.
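For readers curious how acoustic gunshot location works in general, such systems typically estimate a source position from differences in the sound's arrival times at several microphones. The snippet below is a minimal illustrative sketch of that idea, not a description of ShotSpotter's actual method; the sensor layout, sound speed, and solver are assumptions made for the example.

```python
# Illustrative sketch of acoustic multilateration: estimate a sound source's
# position from arrival times at fixed microphones. NOT ShotSpotter's
# algorithm; sensor positions, sound speed, and solver are assumptions.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # meters per second, assumed constant

# Assumed sensor positions (x, y) in meters.
sensors = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])

def simulate_arrivals(source, t0=0.0):
    """Arrival time at each sensor for a sound emitted at `source` at time t0."""
    dists = np.linalg.norm(sensors - source, axis=1)
    return t0 + dists / SPEED_OF_SOUND

def residuals(params, arrivals):
    """Predicted minus observed arrival times for candidate (x, y, t0)."""
    x, y, t0 = params
    return simulate_arrivals(np.array([x, y]), t0) - arrivals

# Simulate a sound at an unknown location, then recover its position.
true_source = np.array([320.0, 140.0])
observed = simulate_arrivals(true_source)

fit = least_squares(residuals, x0=[250.0, 250.0, 0.0], args=(observed,))
print("estimated position:", fit.x[:2])  # approximately [320, 140]
```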
Altered output: ShotSpotter’s output and its in-house analysts’ testimony have been used as evidence in 190 criminal cases. But recent court documents reveal that analysts reclassified sounds the system had attributed to other causes as gunshots and changed the locations where the system had determined gunshots occurred.
- Last year, ShotSpotter picked up a noise around one mile from a spot in Chicago where, at the same time, police believed someone was murdered. The system classified the sound as a firecracker. Analysts later reclassified it as a gunshot and modified its location, placing it closer to the scene of the alleged crime. Prosecutors withdrew the ShotSpotter evidence after the defense asked the judge to examine the system’s forensic value.
- When federal agents fired at a man in Chicago in 2018, ShotSpotter recorded only two shots, both fired by the agents. Police asked the company to re-examine the data manually, and an analyst found five additional shots, presumably those fired by the perpetrator.
- In New York in 2016, after being contacted by police, a company analyst reclassified as a gunshot a sound that the algorithm had attributed to helicopter noise. A judge later threw out the conviction of a man charged with shooting at police in that incident, saying ShotSpotter’s evidence was unreliable.
The response: In a statement, ShotSpotter called the Vice report “false and misleading.” The company didn’t deny that the system’s output had been altered manually but said the reporter had confused two different services: automated, real-time gunshot detection and after-the-fact analysis by company personnel. “Forensic analysis may uncover additional information relative to a real-time alert such as more rounds fired or an updated timing or location upon more thorough investigation,” the company said, adding that it didn’t change its system’s findings to help police.
Behind the news: Beyond allegations that ShotSpotter has manually altered automated output, researchers, judges, and police departments have challenged the technology itself.
- A May report by the MacArthur Justice Center, a nonprofit public-interest legal group, found that the vast majority of police actions sparked by ShotSpotter alerts did not result in evidence of gunfire or gun crime.
- Several cities have terminated contracts with ShotSpotter after determining that the technology missed around 50 percent of gunshots or was too expensive.
- Activists are calling on Chicago to cancel its $33 million contract with the company after its system falsely alerted police to gunfire, leading to the shooting of a 13-year-old suspect.
Why it matters: ShotSpotter’s technology is deployed in over 100 U.S. cities and counties. The people who live in those places need to be able to trust criminal justice authorities, which means they must be able to trust the AI systems those authorities rely on. The incidents described in legal documents could undermine that trust, and potentially trust in other automated systems as well.
We’re thinking: There are good reasons for humans to analyze the output of AI systems and occasionally modify or override their conclusions. Many systems keep humans in the loop for this very reason. It’s crucial, though, that such systems be transparent and subject to ongoing, independent audits to ensure that any modifications have a sound technical basis.
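One lightweight way to make such human overrides auditable is to record every reclassification alongside the original machine output, who changed it, and the stated justification. The sketch below is purely illustrative; the field names and append-only log format are assumptions for the example, not any vendor's actual schema.

```python
# Illustrative sketch: logging human overrides of an automated classification
# so modifications can be audited later. Fields and storage format are
# assumptions for the example, not a description of any vendor's system.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class OverrideRecord:
    event_id: str
    original_label: str        # what the automated system reported
    original_location: tuple   # (lat, lon) from the automated system
    new_label: str             # analyst's reclassification
    new_location: tuple
    analyst_id: str
    justification: str         # technical basis for the change
    timestamp: str

def log_override(path, record):
    """Append a timestamped override record to a JSON-lines audit log."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_override("overrides.jsonl", OverrideRecord(
    event_id="evt-001",
    original_label="firecracker",
    original_location=(41.88, -87.63),
    new_label="gunshot",
    new_location=(41.87, -87.64),
    analyst_id="analyst-42",
    justification="waveform review showed multiple impulsive peaks",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```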