Clearview AI has been feeding a dystopian police database in the United States with 30 billion photos the company collected from Facebook and other social networks, an assault on citizen privacy amid fears that a "perpetual police lineup" is forming: a kind of permanent exposure of tens of millions of people, innocent people included, at the disposal of different police departments.
Clearview AI's chief executive recently acknowledged as much, and even highlighted the technology's potential, citing as examples the possibility of identifying the assailants who stormed the United States Congress on January 6, rescuing child victims of abuse or exploitation, and helping to exonerate people wrongly accused of crimes.
However, critics of the massive police use of this technology point to privacy violations and wrongful arrests fueled by faulty facial recognition matches, including cases in Detroit and New Orleans, as reasons for concern.
In fact, Hoan Ton-That, CEO of Clearview AI, acknowledged last month in an interview with the BBC that his company took photos without users' knowledge, and that this allowed the company's huge database, marketed on its website to law enforcement as a tool "to bring justice to victims," to expand very quickly.
Records indicate that US police have accessed Clearview AI's facial recognition database nearly a million times since the company's founding in 2017, though relations between law enforcement and Clearview AI remain opaque, and the true figure could be even higher.
In his defense, Ton-That asserts that "the Clearview AI database is used for post-crime investigations by security forces, and is not available to the general public," so that "every photo in the data set is a potential lead that could save a life, bring justice to an innocent victim, prevent misidentification, or exonerate an innocent."
This technology has long drawn criticism for its intrusiveness from both privacy advocates and digital platforms, and major social media companies, including Facebook, sent Clearview cease and desist letters in 2020 for violating their users' privacy.
A Meta spokesperson went so far as to publicly accuse Clearview AI of "invading people's privacy," and the company did not let the matter rest: "We banned its founder from using our services and sent a legal demand to stop accessing any data, photos or videos," denouncing that the company was scraping user photos and working with security forces.
How does Facebook try to prevent scraping? When this unauthorized practice is detected, the company can take measures "such as sending cease and desist letters, disabling accounts, filing lawsuits or requesting help from hosting providers" to protect user data, the spokesperson explained.
However, even despite such policies, once Clearview AI has extracted a photo, biometric fingerprints are taken from the face and cross-referenced in the database, linking people to their social media profiles and other identifying information forever, and the people in the photos have little recourse to remove themselves from the database.
International media outlets such as CNN and the BBC reported last year that more than 3,100 US law enforcement agencies, including the FBI and the NSA, are working with Clearview AI; in the specific case of the Miami Police, it is being used to solve crimes ranging from shoplifting to unsolved murders.
Faced with this panorama, which years ago would have seemed worthy of a dystopian novel, Matthew Guariglia, an expert at the digital rights organization Electronic Frontier Foundation, highlights "the fear that a police officer will take out his phone at a protest, scan the faces of the crowd, and suddenly get their social media profiles, all the photos they've been in, their identities."
While many law enforcement departments have worked with material posted on social media for years, the difference is that the use of Clearview AI is unmonitored and not subject to national regulation. For many analysts and citizen privacy advocates, the conclusion is clear: its use must be prohibited.