What if you could identify just about anyone from a photo? A controversial startup is making this possible.
What happened: Hundreds of U.S. law enforcement agencies are using a face ID service that matches photos against a database of billions of images, the New York Times reported.
How it works: Clearview AI scraped photos from Facebook and other social media sites, employment sites, mugshot archives, news sites, and message boards. The company’s promotional materials say it holds over 3 billion images, a repository far larger than any law-enforcement database.
- The company trained a neural network to convert faces into vectors that encode facial geometry: the distance between a person’s eyebrows, the angle of the cheekbones, and so on.
- The system compares the vector computed from a submitted photo with those in the database and returns matching photos along with the URLs they came from. Frequently these URLs point to social media pages, making it possible to connect a name to the face (a sketch of this embed-and-match pattern follows this list).
- More than 600 U.S. law enforcement agencies have licensed the application, which has been used to investigate crimes from shoplifting to murder. The company also contracts with corporate customers.
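The pipeline the Times describes amounts to embedding faces as vectors and running nearest-neighbor search over them. The sketch below illustrates that general pattern in Python; it is not Clearview AI’s code, and embed_face is a hypothetical stand-in (here a random-projection placeholder) for a trained face recognition network.

```python
# Minimal embed-and-match sketch. Not Clearview AI's system: embed_face is a
# hypothetical placeholder for a trained face-recognition network.
import numpy as np

EMBEDDING_DIM = 128  # assumed embedding size; real systems vary

def embed_face(image: np.ndarray) -> np.ndarray:
    """Placeholder embedding: maps an image to a unit-length vector.
    A real system would run the image through a trained network."""
    rng = np.random.default_rng(int(image.sum()) % (2**32))
    v = rng.standard_normal(EMBEDDING_DIM)
    return v / np.linalg.norm(v)

def build_index(photos: list[np.ndarray], urls: list[str]) -> tuple[np.ndarray, list[str]]:
    """Embed every scraped photo and remember the URL it came from."""
    return np.stack([embed_face(p) for p in photos]), urls

def match(probe: np.ndarray, vectors: np.ndarray, urls: list[str], top_k: int = 5):
    """Return the top_k closest gallery photos (by cosine similarity) and their URLs."""
    v = embed_face(probe)
    scores = vectors @ v  # cosine similarity, since all vectors are unit length
    best = np.argsort(scores)[::-1][:top_k]
    return [(urls[i], float(scores[i])) for i in best]
```

In practice, a database of billions of vectors would require an approximate nearest-neighbor index rather than the brute-force dot product shown here.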
Behind the news: Clearview AI was founded in 2016 by Australian programmer Hoan Ton-That with backing from tech investor Peter Thiel. The company has raised $7 million, according to the funding tracker PitchBook.
Yes, but: The New York Times outlines a number of concerns.
- Scraping photos violates the terms of service of most social media platforms, including Facebook, Instagram, and Twitter.
- Some experts worry the service invites misuse. “Imagine a rogue law enforcement officer who wants to stalk potential romantic partners,” one expert said.
- Clearview AI doesn’t report an error rate. The model could make false matches, putting innocent people in jeopardy (a sketch of how such error rates are measured follows this list).
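For reference, biometric systems typically quantify this risk as a false match rate (different people incorrectly matched) and a false non-match rate (the same person missed). A minimal sketch, with invented scores, labels, and threshold:

```python
# Sketch of the standard error metrics a face-ID vendor could report.
# The scores, labels, and threshold below are invented for illustration.
import numpy as np

def error_rates(scores: np.ndarray, same_person: np.ndarray, threshold: float):
    """scores: similarity per evaluated pair; same_person: ground-truth label per pair."""
    predicted_match = scores >= threshold
    fmr = np.mean(predicted_match[~same_person])   # impostor pairs wrongly accepted
    fnmr = np.mean(~predicted_match[same_person])  # genuine pairs wrongly rejected
    return float(fmr), float(fnmr)

# Four hypothetical evaluated pairs: similarity scores and whether the pair
# truly shows the same person.
scores = np.array([0.91, 0.72, 0.55, 0.30])
labels = np.array([True, False, True, False])
print(error_rates(scores, labels, threshold=0.6))  # -> (0.5, 0.5)
```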
We’re thinking: We need regulations that balance development and deployment of useful technologies against their potential for abuse and harm. Face identification vendors should be required to report performance metrics, and police departments should be required to use models that meet federally established guidelines and to perform background checks on personnel who have access to the technology.