Despite backlash, US police are still using Clearview AI face recognition software
Most Americans have long been uneasy about their police forces tracking their faces, but today the BBC reported that one of the best-known companies in the business is busier than ever.
In what seems like a surprising admission given the controversy surrounding face recognition technology, Clearview AI Inc. Chief Executive Hoan Ton-That told the BBC that the company has run close to a million searches for U.S. law enforcement against a database that now holds close to 30 billion images, all of them scraped from people’s social media accounts without their permission.
There’s a dystopian element to all of this, and it has privacy advocates worried. The issue came to the fore in 2020, when Clearview made headlines after its database of ordinary people’s faces was breached. At the time, it emerged that the company had been selling its surveillance technology to private companies. Soon after, it said it was “canceling the accounts” of every customer not tied to a government entity.
Then, in 2022, after the American Civil Liberties Union sued Clearview in Illinois for illegally scraping people’s social media accounts, the company was banned from selling its products to private entities across the U.S., as well as to law enforcement agencies in Illinois. “Other companies would be wise to take note, and other states should follow Illinois’ lead in enacting strong biometric privacy laws,” the ACLU said at the time.
For a while after that, it seemed the technology might become a thing of the past, and that people could breathe a sigh of relief that they weren’t effectively standing in a police lineup everywhere they went. The U.K. then banned the software, saying it breached the country’s data protection laws, and Australia, France and Italy imposed bans of their own.
But today, the BBC reported that Clearview, headquartered in New York City, hasn’t taken its foot off the gas. Police departments all over the U.S. are still using its products, even though the technology is not only widely frowned upon but can also produce false positives, often where Black people are concerned. Ton-That said the software has a near-perfect accuracy rate and blamed “poor policing,” not algorithmic error, for wrongful arrests.
The Miami Police Department admitted to the BBC that it uses Clearview AI’s technology “for every type of crime.” Hundreds of U.S. police departments use it, the BBC said, and for the most part they don’t tell the public. There are exceptions: face recognition technology is banned outright in some U.S. cities, including Portland, San Francisco and Seattle.
Although the Big Brother aspect clearly concerns many people, Miami’s assistant police chief said the department has used the software about 450 times a year on average, and that so far it has led to arrests, including for murder.
Police have always said face recognition technology is by no means conclusive and only helps at the start of an investigation, but some would argue that once innocent people’s faces are matched, they are already at risk of wrongful arrest. Miami police say the software acts only as a “tip” that a “traditional” inquiry then follows up on, but plenty of people have spent time in prison after investigators developed tunnel vision.
Photo: Bernard Hermant/Unsplash