From unlocking our smartphones and the apps installed on them, to paying in stores, to transforming our appearance with Instagram or TikTok filters, digital identification has carved out a permanent place in our lives. It has done so through complex technology that keeps improving thanks to advances in graphics and in algorithms trained with machine learning and artificial intelligence. In parallel, questions keep arising about where all the data derived from our biometric traits ends up. How could it be used? What are the limits of its use?
Biometrics is the set of techniques that make it possible to determine a person's identity by analyzing physical or behavioral traits. In recent years, biometric applications have multiplied: beyond the authentication codes used, for example, for online banking, today's phones carry hardware sensors such as fingerprint readers, voice recognition, and special 3D cameras for facial recognition, which let us authenticate ourselves just by looking at the screen. This last case is the one that has transformed digital identification most quickly and profoundly in recent years.
Facial recognition analyzes the geometry of our face, including details such as the distance between our eyes and the size of our nose, and creates a unique encrypted digital model for each user. Our face is then scanned in real time and mathematically compared with that previously stored model, which allows verification without the need for passwords or cards. Some systems have become so refined that they can recognize us even with a mask on, in the places where we still wear one.
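As an illustration of that comparison step, here is a minimal sketch in Python of the 1:1 matching logic, under assumptions that are not in the article: some face-analysis network (not shown) has already reduced each face to a fixed-length embedding vector, and two faces are declared a match when the cosine similarity of their embeddings clears a tuned threshold. The 128-dimensional embeddings and the 0.8 threshold are purely illustrative, and the encryption and secure storage of the template are omitted.

```python
import numpy as np

THRESHOLD = 0.8  # hypothetical decision threshold; real systems tune this

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled_template: np.ndarray, live_scan: np.ndarray) -> bool:
    """1:1 check of a live scan against the template stored at enrollment."""
    return cosine_similarity(enrolled_template, live_scan) >= THRESHOLD

# Toy usage: random vectors stand in for embeddings a real model would produce.
rng = np.random.default_rng(0)
template = rng.normal(size=128)                          # stored at enrollment
same_face = template + rng.normal(scale=0.1, size=128)   # small variation
stranger = rng.normal(size=128)
print(verify(template, same_face))   # True: high similarity
print(verify(template, stranger))    # False: near-orthogonal vectors
```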
The positive side of this technology is that it helps prevent theft and other illegal practices, and it is fast and convenient, both on the phone (to pay, for example) and in check-in or access applications at airports, hotels, offices, and events.
But there is also a negative side: it erodes our privacy by retaining our data, it can end up serving purposes such as mass surveillance, and it can produce biased results. There are already examples of all three.
In 2010, Facebook began applying facial recognition automatically to suggest names when tagging people in photos uploaded to the platform. Users were not explicitly asked whether they wanted this recognition activated, and the story ended in a class action lawsuit and a fine against Facebook, which now does require express consent from users who want to turn the feature on.
More recently, the Chinese social network TikTok updated its privacy policy to collect biometric data from users, and did so without notifying them. The new policy stated that the app could collect biometric data, such as features and attributes of the face and body, audio, and the text of the words users speak in their content, for purposes such as "enabling special video effects, content moderation, demographic classification, and ad recommendations."
There are doubts, however, that this data is handled properly, given the track record of ByteDance, the Chinese company behind the popular social network. In 2021, the company agreed to pay $92 million to settle class actions under the Illinois Biometric Information Privacy Act, after being accused of illegally collecting users' personal information, including from children under the age of 13 (their names, locations, and email addresses), without seeking parental consent.
At the end of February this year, the European Commission decided to ban the platform from its staff's official work phones, and several countries have since joined the initiative, the most recent being New Zealand. The social network is also at odds with the United States on this front: the Biden administration and some legislators fear the app could be used to spy on American users or hand information to the Chinese government.
Mass surveillance, whether by governments or private companies, is illegal under international human rights law, so countries must adapt their legislation to guarantee citizens' rights and to ensure that only a judge can authorize the interception of private communications. In practice, however, the legality and scope of mass surveillance vary with each nation's laws and judicial system, and it is often justified as a way to combat terrorism and protect national security or, on a smaller scale, to curb the social unrest caused by common crime.
A 2020 report by European Digital Rights (EDRi), an international network of 44 non-profit civil rights organizations, denounced that at least 15 European countries had experimented with biometric identification technology in public spaces, deployed by both public and private actors, with practices that would clearly amount to mass surveillance and that, furthermore, fell outside the legal requirements needed to justify that level of interference with rights. Although facial recognition as such is not prohibited, its use must be justified, as set out in the EU Charter of Fundamental Rights, the General Data Protection Regulation (GDPR), and Directive (EU) 2016/680 on the processing of personal data by competent authorities. According to the report, mass surveillance measures disproportionately affect groups that already enjoy less privacy, such as migrants or people with few resources.
In 2020, Mercadona went so far as to run a facial recognition pilot for several months in 48 of its 1,640 supermarkets, with the aim of detecting people with a final criminal conviction for theft or robbery and a restraining order barring them from its stores. The system worked by capturing faces and comparing them with those previously loaded into a database. According to the company, the process took only 0.3 seconds and stored no information of any kind. Even so, Mercadona had to halt the trial after proceedings by the Spanish Data Protection Agency and pay a penalty of 2.5 million euros.
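The screening just described is 1:N identification rather than 1:1 verification: each captured face is compared against every entry on a watchlist, and only a score above a threshold counts as a hit. The sketch below illustrates that logic under the same illustrative assumptions as before (embedding vectors, cosine similarity, a hypothetical 0.8 threshold); it is not Mercadona's actual implementation, which the article does not describe.

```python
import numpy as np

THRESHOLD = 0.8  # hypothetical decision threshold

def best_match(capture, watchlist):
    """Compare one captured embedding against every watchlist template.

    Returns (entry_id, score) for the closest entry if its cosine
    similarity clears the threshold, otherwise None (no hit; a system
    claiming not to store data would discard the capture here).
    """
    best_id, best_score = None, -1.0
    for entry_id, template in watchlist.items():
        score = float(np.dot(capture, template)
                      / (np.linalg.norm(capture) * np.linalg.norm(template)))
        if score > best_score:
            best_id, best_score = entry_id, score
    return (best_id, best_score) if best_score >= THRESHOLD else None

# Toy usage: random vectors stand in for real face embeddings.
rng = np.random.default_rng(1)
watchlist = {"entry_001": rng.normal(size=128),
             "entry_002": rng.normal(size=128)}
shopper = rng.normal(size=128)                       # not on the list
print(best_match(shopper, watchlist))                # likely None
print(best_match(watchlist["entry_001"] + rng.normal(scale=0.1, size=128),
                 watchlist))                         # likely a hit
```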
As noted, another danger of this technology is that, however refined it becomes, no system is 100% effective: there can be false positives, discrimination based on race or sex, or ingenious ways of circumventing it. These biases in facial recognition algorithms stem from the characteristics of the databases on which the artificial intelligence is trained, databases assembled by humans with their virtues but also their flaws and, therefore, their biases.
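Such disparities are typically quantified by measuring error rates separately for each demographic group, for instance the false match rate: the share of non-matching pairs that the system wrongly declares a match. The sketch below shows that bookkeeping on synthetic data; the group labels and counts are purely illustrative, not results from any real system.

```python
from collections import defaultdict

def false_match_rate_by_group(trials):
    """trials: iterable of (group, predicted_match, actual_match).

    Returns the false match rate per group: among pairs that do NOT
    actually match, the fraction the system wrongly declared a match.
    """
    non_matches = defaultdict(int)
    false_matches = defaultdict(int)
    for group, predicted, actual in trials:
        if not actual:                  # only non-matching pairs count
            non_matches[group] += 1
            if predicted:
                false_matches[group] += 1
    return {g: false_matches[g] / non_matches[g] for g in non_matches}

# Synthetic illustration of a system with unequal error rates.
trials = (
    [("group_a", False, False)] * 990 + [("group_a", True, False)] * 10 +
    [("group_b", False, False)] * 950 + [("group_b", True, False)] * 50
)
print(false_match_rate_by_group(trials))  # {'group_a': 0.01, 'group_b': 0.05}
```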
For example, a 2018 study by the scientists and digital activists Joy Buolamwini and Timnit Gebru showed that facial data processing technologies perform significantly worse on Black people, especially Black women. That study and others prompted numerous lawsuits and brought some large technology projects to a halt.