PROVIDENCE — Facebook has said it will shut down its face-recognition system and delete the faceprints of more than 1 billion users, a move that comes amid growing concerns over the technology’s misuse by police and governments.
“This change will represent one of the largest shifts in facial recognition usage in the technology’s history,” Jerome Pesenti, vice president of artificial intelligence at Facebook, wrote in a blog post on Tuesday. The shutdown will result in the deletion of more than a billion people’s individual facial recognition templates.
He said the company was trying to balance the technology’s benefits against growing societal concerns, particularly since regulators have yet to provide clear rules for its use.
The change comes after a busy few weeks for Facebook. On Thursday, it announced that the company, though not the social network, would be renamed Meta. It said the new name reflects a focus on building technology for the “metaverse,” which it sees as the next generation of the internet.
The company also faces perhaps its most serious public relations crisis after leaked documents from whistleblower Frances Haugen showed that it knew about the harms its products caused but did little to reduce them.
More than a third of Facebook’s daily users, roughly 640 million people, had signed up to have their faces recognised by the social network’s system. Facebook had already begun scaling back its use of facial recognition after introducing it more than a decade ago.
In 2019, the company ended its practice of using face recognition software to identify users’ friends in uploaded photos and automatically suggesting they “tag” them. Facebook also faced a lawsuit in Illinois over the tag suggestion feature.
Kristen Martin, a professor of technology ethics at the University of Notre Dame, said the decision was “a good example of trying to make product decisions which are good for both the user and company”. She added that the move also shows the power of regulatory pressure, since the face recognition system has been criticized for more than a decade.
Facebook’s parent company, Meta Platforms Inc., does appear to be looking into new ways of identifying people, however. Pesenti said Tuesday’s announcement was part of a company-wide shift away from broad identification and towards narrower forms of personal authentication.
Facial recognition is especially valuable, he wrote, when it operates privately on an individual’s own device. This method of on-device facial recognition, which does not require sending face data to an external server, is the approach used in most systems that unlock smartphones.
Privacy activists and researchers have raised concerns for years about the tech industry’s use of face-scanning software, citing studies that found it performs unevenly across racial, gender, and age lines. One concern is that the technology can incorrectly identify people with darker skin.
Face recognition poses another problem: to use it, companies must create unique faceprints of large numbers of people, often without their consent.
He said, “This is a hugely significant recognition of the fact that this technology is inherently dangerous.”
Concern has also grown over the extensive surveillance system used by China’s government, particularly its deployment in a region that is home to one of China’s predominantly Muslim ethnic minorities.
Fears over civil rights violations, racial bias, and invasions of privacy have led at least seven states and almost two dozen cities to limit government use of the technology. According to data compiled in May by the Electronic Privacy Information Center, debates over additional reporting requirements, limits, and bans have been under way in around 20 state capitals.
Meta’s newly cautious approach to facial recognition follows decisions last year by other U.S. tech giants such as Amazon, Microsoft, and IBM to end or pause their sales of facial recognition software to police, citing concerns about false identifications and amid a wider U.S. reckoning over policing and racial justice.
In October, President Joe Biden’s science and technology office launched a fact-finding mission examining facial recognition and other biometric tools used to identify people or assess their mental or emotional state.
European legislators and regulators have taken steps to prevent law enforcement from scanning facial features in public spaces, as part of larger efforts to regulate the most dangerous applications of artificial intelligence.
Facebook’s face-scanning practices also contributed to the $5 billion fine and privacy restrictions that the Federal Trade Commission imposed on the company in 2019. In its settlement with the FTC after a year-long investigation, Facebook promised to provide “clear notice” before facial recognition technology was applied to people’s photos and videos.