The debate over the use of facial recognition still hinges on a key question that must be answered before the technology's legality can be decided: to monitor a handful of people, is it necessary to process the data of millions of citizens? This is "the heart of the matter", says Leandro Núñez, a lawyer at Audens and an expert in law related to new technologies. "It's the unknown that needs to be resolved", he reiterates.
So things are far from clear. Or perhaps they are, depending on how you look at it. Just ask Mercadona, which has been fined 2.5 million euros by the Spanish Data Protection Agency (AEPD) for installing a facial recognition system in its stores. "In this case, the fine was the answer to the question we posed", says Leandro Núñez, "since for days this company processed the faces of every customer (thousands of them) who entered its establishments in order to control the access of only a few dozen people: repeat thieves under orders to stay away from Mercadona stores".
Another unresolved obstacle with this advance in artificial intelligence, no less serious, is that "this technology still cannot guarantee protection against false positives (misidentifications), with the dire consequences these would have for those affected", the lawyer adds. The system is still not infallible when processing data (extracted from a database of facial features), and it has been shown to make mistakes, chiefly when it confuses people of different races.
The regulation on biometric recognition that the EU is currently drafting, which covers facial recognition, "certainly takes the possibility of these errors into account", Núñez points out. "Significant biases against racialized people have been discovered", the legal expert continues, "because in the West the algorithms these programs use have been trained on databases dominated by Caucasian subjects, which causes the margin of error to rise significantly when they have to identify people with other features".
This racial bias, Núñez recalls, "was exposed during the Black Lives Matter protests, after which moratoriums were placed on the sale of this technology and its use by the police was banned".
The only thing that is clear in Spain right now, according to the AEPD's guidance, "is that since there is no specific regulation allowing the use of facial recognition for security purposes, the technology cannot be used to that end", adds the lawyer. To smooth the way, he continues, "the legislator therefore needs to pass a law that not only sets out the essential public interest at stake (for example, preventing or prosecuting criminal conduct) but also incorporates measures guaranteeing the fundamental rights of the people whose faces are recorded and processed".
For now, says Leandro Núñez, we are in regulatory limbo, and until the European Committee's report is final, the AEPD can hardly clarify its position further. And a word of warning: "Until that happens, my advice", the lawyer concludes, "is not to invest in, much less install, this technology without permission". The multimillion-euro fine the AEPD imposed on Mercadona should therefore serve as a warning.
The caution with which the European Union is moving to set rules on the widespread use of this artificial intelligence contrasts with the far less transparent policies of countries such as China or Russia, where facial recognition is deployed broadly regardless of citizens' opinions.