Before deciding whether facial recognition is legal, the debate over its use must still answer a key question: to monitor a few people, is it acceptable to process the data of millions of citizens? That is "the core of the matter," says Leandro Núñez, a lawyer at Audens and an expert in technology law. "It is the unknown that remains to be resolved," he insists.

So things are not at all clear. Or perhaps they are, depending on how you look at it. Just ask Mercadona, fined 2.5 million euros by the Spanish Data Protection Agency (AEPD) for installing a facial recognition system in its stores. "In this case, that was the punitive answer to the question we asked ourselves," says Núñez, "since the company processed the faces of all the customers (thousands) who entered its establishments for days in order to control the access of only a few dozen people: repeat offenders subject to restraining orders barring them from Mercadona stores."

Another unresolved and no less serious obstacle for this branch of artificial intelligence is that "the technology still cannot rule out false positives (erroneous identifications), with the disastrous consequences those would have for the people affected," the lawyer adds. The system is still not infallible when processing the data (extracted from a database of facial features), and it has been shown to err at times, mainly when confusing ethnic groups.

The regulation on biometric recognition that the EU is now drafting, which covers facial recognition, "does, of course, take the possibility of these errors into account," says Núñez. "Notable biases have been found against people of color," this expert continues, "because in the West the algorithms behind these programs have been trained on databases dominated by Caucasian subjects, which makes their margin of error climb significantly when they have to identify people with other features."

This racist bias, Núñez recalls, "was exposed during the Black Lives Matter protests, after which moratoriums were placed on the sale of this technology and its use by the police was banned."

The only thing that is clear in Spain right now, as the AEPD's guidance states, "is that in the absence of a specific regulation authorizing facial recognition for security purposes, the technology cannot be used for that purpose," the lawyer adds. To pave the way, he continues, "the legislature would therefore need to pass a law that not only spells out the essential public interest at stake (for example, preventing or prosecuting criminal conduct) but also incorporates measures guaranteeing the fundamental rights of the people whose faces are recorded and processed."

For now, says Leandro Núñez, "we are in a regulatory limbo, and until the European Committee's report is final, the AEPD can hardly clarify its position further." And a word to the wise: "Until that happens, my advice," the lawyer concludes, "is not to invest in this technology, much less install it without permission." The multimillion-euro penalty the AEPD imposed on Mercadona should therefore serve as a warning.

The caution with which the European Union moves when setting general rules for the use of this artificial intelligence contrasts with the far less transparent policies of countries like China and Russia, where facial recognition is spreading regardless of citizens' opinions.