Hundreds of law enforcement agencies across the US have started using a new facial recognition system from Clearview AI, an investigation by The New York Times has revealed. The database is made up of billions of images scraped from millions of sites, including Facebook, YouTube, and Venmo. The Times says that Clearview AI’s work could “end privacy as we know it,” and the piece is well worth a read in its entirety.
The use of facial recognition systems by police is already a growing concern, but the scale of Clearview AI’s database, not to mention the methods it used to assemble it, is particularly troubling. The Clearview system is built upon a database of over three billion images scraped from the internet, a process which may have violated websites’ terms of service. Law enforcement agencies can upload photos of any persons of interest from their cases, and the system returns matching pictures from the internet, along with links to where these images are hosted, such as social media profiles.
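In broad strokes, this kind of search works by converting each face into a numerical embedding and then ranking a gallery of scraped images by how close they are to the probe photo. The sketch below illustrates that general idea using the open-source face_recognition library; the library choice, file paths, and example URLs are assumptions for illustration only, not details of Clearview’s proprietary pipeline.

```python
# Minimal sketch of an embedding-based face search (illustration only --
# Clearview's actual system is proprietary; the library, paths, and URLs
# below are assumptions).
import face_recognition
import numpy as np

# Hypothetical "scraped" gallery: image path -> URL where it was found.
gallery = {
    "scraped/photo_001.jpg": "https://example.com/profile/alice",
    "scraped/photo_002.jpg": "https://example.com/profile/bob",
}

# Index step: compute a 128-dimensional encoding for each gallery face.
known_encodings, known_sources = [], []
for path, url in gallery.items():
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip images where no face was detected
        known_encodings.append(encodings[0])
        known_sources.append(url)

# Query step: encode the probe photo and rank gallery faces by distance.
probe = face_recognition.load_image_file("probe/person_of_interest.jpg")
probe_encodings = face_recognition.face_encodings(probe)
if probe_encodings:
    distances = face_recognition.face_distance(known_encodings, probe_encodings[0])
    for idx in np.argsort(distances):
        print(f"{known_sources[idx]}  (distance: {distances[idx]:.3f})")
```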
The NYT says the system has already helped police solve crimes including shoplifting, identity theft, credit card fraud, murder, and child sexual exploitation. In one instance, Indiana State Police were able to solve a case within 20 minutes by using the app.
The use of facial recognition algorithms by police carries risks. False positives can incriminate the wrong people, and privacy advocates fear their use could help to create a police surveillance state. Police departments have reportedly used doctored images that could lead to wrongful arrests, and a federal study has uncovered “empirical evidence” of bias in facial recognition systems.
Using the system involves uploading photos to Clearview AI’s servers, and it’s unclear how secure these are. Although Clearview AI says its customer-support employees will not look at the photos that are uploaded, the company appeared to be aware that Kashmir Hill (the Times journalist who reported the piece) was having police search for her face as part of her reporting:
While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.
The Times reports that the system appears to have gone viral with police departments, with over 600 already signed up. Although there’s been no independent verification of its accuracy, Hill says the system was able to identify photos of her even when she covered the lower half of her face, and that it managed to find photographs of her that she’d never seen before.
One expert quoted by The Times said that the amount of money involved in these systems means they need to be banned before their abuse becomes more widespread. “We’ve relied on industry efforts to self-police and not embrace such a risky technology, but now those dams are breaking because there is so much money on the table,” said Woodrow Hartzog, a professor of law and computer science at Northeastern University. “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”
Yup, laws need to be passed ASAP. This is a disgusting overreach, especially since we know how racist facial recognition already is.
So basically, law enforcement are using a version of Google image search? A photo is taken from a crime scene, it’s run through an image search to look for similar images (in the public domain?), the police can then review the matches against the profile, and potentially put a name to the face.
This doesn’t feel like an area where bias or risk will creep in significantly without abuses from the wider system (the search result couldn’t be the only evidence used to convict). There’s the worrying point raised where Clearview can see who’s being searched for – that’s a clear abuse of the system, but other than that, what’s creepy (other than the clickbait headline)?
I’ll make an argument. There are rules of engagement for photo acquisition between you and Facebook. But there is no particular opt-in or opt-out between you and Clearview. More importantly, there’s no opt-in, or even continuing consent, for using your image. The fact that many people in law enforcement will be able to use this tech to find criminals within a certain match would make sense if you could guarantee the accuracy, which you can’t, or the removal of bias, which you can’t. The reason you can’t is that there’s no authority or outside referee actually checking for all of these issues.
If the photo is publicly available, whether that’s on a public Facebook post, or a photo of you at the fair on the local news site, then you’ve lost control of that image, and I don’t see why the Police can’t use it… you can bet that a lot of other companies already are. If it’s a private post, and they’ve been obtained through ‘hacks’ then that’s another matter, and I agree that they shouldn’t be used.
As to the accuracy – this tech is the equivalent of a partial number plate. If, for example, the police can use a match to find all the white station wagons with 23 in their number plates, then cross them off their list as they figure out whether they could have been in the area at the time of the crime, why can’t they do that with photos of faces? They don’t need 100% accuracy to be able to follow up and do some investigating. But if your photo is a broad match, and you don’t have an alibi, and you could have been in the area, and you own clothes matching the clothes worn by the perp, then maybe that’s enough to get a warrant to do a search, etc. That’s the modern equivalent of doing an artist’s sketch and posting it on the wanted board.
I wonder how many times this tech has been, and will be, used as an attack on journalists or whistleblowers.