Clearview AI's calamitous press campaign hit another mishap yesterday when the ACLU slammed the company's claim that it had been rated "100% accurate" in a test based on a method created by the civil rights group.
The facial recognition company has been embroiled in a storm of controversy since The New York Times revealed that it was helping police match photos of suspects' faces to images scraped from websites and social media platforms to aid in their investigations. Clearview says that it has amassed a database of more than three billion images and that its software has already been used by more than 600 law enforcement agencies.
Clearview's use of people's photos without their consent has heightened privacy concerns about facial recognition software. But the company has also attracted awe for its astonishing claims of accuracy. If those claims prove false, Clearview's technology could become a whole lot scarier: false matches could lead to innocent people being accused of crimes they didn't commit, particularly if they are women or people of color.
In a report obtained by BuzzFeed News, Clearview declared that an independent test had rated its image-matching software as "100% accurate." It added that this accuracy was consistent across all racial and demographic groups.
The report states that this test used the same basic methodology the American Civil Liberties Union had applied to test the accuracy of Amazon's Rekognition software, a claim that the ACLU has disputed.
[Read: Facial recognition company CEO says he doesn't need permission to use your face]
In a blog post, ACLU Northern California attorney Jacob Snow wrote that Clearview's test "couldn't be more different" from the ACLU's work, and called for the company to "stop manufacturing endorsements."
"The report is absurd on many levels and further demonstrates that Clearview simply does not understand the harms of its technology in law enforcement hands," Snow told BuzzFeed News.
The basis of Clearview's claim is a 2018 report by the ACLU on how law enforcement agencies were using Amazon's Rekognition software. To test the tool's accuracy, the ACLU used it to match photos of members of Congress against a database of mugshots. It fell far short of perfection, incorrectly identifying 28 members of Congress as other people who had been arrested for a crime.
The results also revealed that false matches disproportionately affect ethnic minorities: nearly 40% of the false matches were of people of color, even though they make up only 20% of Congress. Numerous other studies have found that facial recognition systems are far more likely to misidentify people of color and women than white men.
Furthermore, Clearview's test compared the photos of members of Congress to images in its own enormous database, rather than to mugshots. This led to further questions from the ACLU about the software's accuracy. If police search for someone whose photo isn't in the database, error rates could increase. Matching is also less likely to be accurate against photos taken in real-world settings, such as grainy surveillance footage, than against clean reference images.
False matches could lead to wrongful arrests of innocent people. And the risks could grow far greater if Clearview can achieve its reported ambition to supply its software to authoritarian regimes with long track records of human rights abuses.
Published February 11, 2020 — 17:40 UTC