Let anyone who has never used a messaging app to send a racy message or picture cast the first stone. We do it believing that we are in a space protected from others: what is said or done stays between us and the encryption. According to digital rights activists, this could be about to change in the European Union, in the name of the fight against child sexual abuse. The European Commission aims to approve before October a regulation (that is, a rule directly applicable in all EU countries) that would oblige all messaging platforms, including encrypted ones, to monitor communications in search of suspicious behavior. Almost all digital rights organizations and a large number of technology experts warn of the loss of fundamental rights and the dangers to network security that its approval would entail. Nevertheless, both the European Commissioner for Home Affairs, Ylva Johansson, and the European Parliament’s rapporteur for the project, the Spaniard Francisco Javier Zarzalejos, affirm that the Regulation to Prevent and Combat Child Sexual Abuse is not about privacy but about children’s rights, and accuse the activists of “distorting the meaning of the project”.

“It’s not about children. I’ve read the entire draft, and in 140 pages it doesn’t talk about child sexual abuse, only about control of the internet,” denounces Sergio Salgado, an activist with the digital rights platform Xnet, which today publishes the report together with Political Watch, The Commoners, Eticas, Inter_ferencias and Guifi.net.

The current bill proposes the creation of algorithms that would scan communications, in particular sexually explicit material (including between adults), to detect child sexual abuse material or predatory behaviour. There is talk of a “neutral technology”: the algorithm in question would navigate, so to speak, with its eyes closed, and would only open them if it detected suspicious cases. Flagged material would then be sent to a data processing center, where human reviewers would make a first selection. From there, if necessary, it would be forwarded to the authorities of the respective countries. The problem, say the law’s detractors, is that such technology does not currently exist, and everything tested so far has been discarded because of the unacceptable number of false positives it generated. “The approach is not to say what technology each platform must use, but to demand that certain standards be met,” defends Zarzalejos in conversation with La Vanguardia. The MEP states that each service has its own particularities and will therefore have to apply different measures to control criminal use by its users, and that the aim of the regulation is “not to become obsolete”.
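In engineering terms, the funnel the draft sketches can be summarised in a few lines. The following is only an illustrative outline (the names, threshold and placeholder logic are hypothetical; the draft sets obligations and standards, not an implementation):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    content_id: str
    score: float  # confidence reported by the automated scan

def human_reviewer_confirms(detection: Detection) -> bool:
    """Stub for the first human selection at the data processing center."""
    return detection.score > 0.99  # placeholder logic, not a real criterion

def route(detection: Detection, threshold: float = 0.9) -> str:
    """Automated scan -> processing-center review -> national authorities."""
    if detection.score < threshold:
        return "ignored"  # the algorithm keeps its "eyes closed"
    if not human_reviewer_confirms(detection):
        return "discarded as a false positive"
    return "forwarded to the authorities of the respective country"
```

Every stage of that funnel depends on the quality of the initial automated scan, which is where the dispute over error rates begins.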

“We know for a fact that the best technologies in the world to do this have an error rate of between 10% and 20%. Some 10 billion messages are sent daily on WhatsApp alone; a 10% to 20% error rate means billions of mistakes. The error rates will be enormous,” points out Ella Jakubowska, digital human rights expert at the European Digital Rights (EDRi) network. Zarzalejos replies that “a success rate of 90%, bearing in mind that there would be a human review afterwards, is not scandalous”.
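The arithmetic behind that warning is easy to check. Below is a minimal back-of-the-envelope sketch using only the figures quoted above (the 10 billion daily messages and the 10–20% error rate come from Jakubowska’s statement, not from independent verification):

```python
# Back-of-the-envelope check of the error figures quoted above.
# Assumptions (taken from the quotes, not independently verified):
#   - roughly 10 billion messages per day on WhatsApp alone
#   - an error rate between 10% and 20% for the best classifiers

DAILY_MESSAGES = 10_000_000_000  # Jakubowska's figure

for error_rate in (0.10, 0.20):
    misclassified = DAILY_MESSAGES * error_rate
    print(f"{error_rate:.0%} error rate -> "
          f"{misclassified:,.0f} misclassified messages per day")

# 10% error rate -> 1,000,000,000 misclassified messages per day
# 20% error rate -> 2,000,000,000 misclassified messages per day
```

Whether human reviewers could plausibly triage flags at that scale is precisely the point of contention between Jakubowska and Zarzalejos.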

However, when we talk about false positives, we are not talking in the abstract: some platforms, such as Facebook or Gmail, already use similar technologies to detect nudity and known abuse imagery. Salgado cites, for example, the case of a man in the United States who suffered legal problems for a decade after Google identified photos of his son, which he had sent for a medical consultation, as child sexual abuse material. Jakubowska goes further: Irish police, she says, falsely identified hundreds of people as potential abusers after they shared images of children on the beach, or even consensual images between adults.
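For known material, the technologies in question are typically hash-matching systems. As a hedged illustration (the blocklist here is hypothetical, and real deployments such as Microsoft’s PhotoDNA use perceptual hashes that survive resizing and re-encoding, rather than the cryptographic hash used below), the lookup pattern looks roughly like this:

```python
import hashlib

# Hypothetical blocklist; in practice, a database of hashes of known
# abuse material maintained by organisations such as NCMEC.
KNOWN_ABUSE_HASHES: set[str] = set()

def flag_image(image_bytes: bytes) -> bool:
    """Return True if the image's hash matches an entry in the blocklist.

    A cryptographic hash is used here only to show the lookup logic;
    real systems use perceptual hashes tolerant of small edits.
    """
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_ABUSE_HASHES
```

Matching of this kind only catches material already catalogued; spotting new images or suspicious behaviour requires statistical classifiers, which is where the error rates discussed above, and cases like the ones Salgado and Jakubowska describe, come in.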

On May 11, 30 of Europe’s top IT and cybersecurity experts published an open letter to European leaders warning that the Commission’s proposal was based on highly imprecise technology and would “put everyone’s security at risk, including that of the most vulnerable: children”. But Zarzalejos disagrees, insisting that this technology will have to “adapt and calibrate”. “Algorithms can be as specific as you want. It seems suspicious to me that there is technology for everything and not for this,” he says.

The report on Chat Control (as the project is known among activists) criticizes the fact that efforts are being wasted on “unnecessary and even harmful” measures when those dedicated to combating these crimes report that they cannot even keep up with removing the child abuse images already detected on the web. “In fact, these kinds of criminals don’t use mainstream messaging,” recalls Salgado, who also warns that opening back doors into encrypted communication not only violates the right to privacy of communications but weakens the security of the entire network. “There are no back doors just for the good guys,” he explains. “A less secure internet, with the excuse of protecting children, is a less secure internet for everyone: more open to interference from intelligence services, criminals, foreign powers and… pederasts.”

“Commissioner Johansson has been saying for some time that they want to discourage the use of encryption because it prevents the police and governments from getting into citizens’ private conversations,” says Jakubowska. Asked about this, Zarzalejos admits that the encryption of communications is one of the project’s thorniest issues. Although he assures that his proposal “does not foresee any measures that break encryption”, he acknowledges that there is a “debate” about “technical solutions”, such as scanning communications on the device before they are encrypted and sent, an approach experts call client-side scanning. “Before an image is sent, for example, it could be detected whether it is sexual abuse material,” he explains. But he insists: “Encryption will not be touched, nor will anyone be required to access the content.”
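What such a “technical solution” would mean in practice can be sketched in a few lines. Everything below is a hypothetical illustration (the function names and the toy cipher are stand-ins, not anything taken from the draft regulation); the only point is the order of operations Zarzalejos describes, with the scan happening before the encryption:

```python
from itertools import cycle

def looks_suspicious(plaintext: bytes) -> bool:
    """Hypothetical on-device detector (a hash match or classifier)."""
    return False  # stub: a real detector would apply techniques like those above

def encrypt_end_to_end(plaintext: bytes, key: bytes) -> bytes:
    """Toy XOR cipher standing in for a real end-to-end protocol."""
    return bytes(b ^ k for b, k in zip(plaintext, cycle(key)))

def send_message(plaintext: bytes, recipient_key: bytes) -> bytes:
    # The contested step: content is inspected in plaintext *before*
    # the cipher is applied, so the encryption itself is never "touched".
    if looks_suspicious(plaintext):
        pass  # a deployed system would forward the match for human review
    return encrypt_end_to_end(plaintext, recipient_key)
```

This ordering is exactly what critics like Salgado and Jakubowska object to: the cipher remains mathematically intact, but whatever the detector flags leaves the device before the protection applies, which in their view circumvents end-to-end encryption rather than preserving it.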