The qualitative leap that generative artificial intelligence (AI) has made in just a few months demands safeguards to mitigate its risks “immediately”, before the legislation currently in progress enters into force, says the European Commission, which will today propose that companies in the sector join a voluntary code of conduct to guarantee a safe transition. The most immediate measure that Brussels demands of the platforms is that, when they incorporate AI-based services, they “clearly label” all content generated by these methods.

“AI-based technologies can be a force for good for society,” but “their dark side should not be overlooked, because they pose new risks and possible negative consequences for society, such as disinformation,” said the Vice President of the European Commission responsible for Values and Transparency, Věra Jourová. “In a matter of seconds, they can generate complex content: images of things that never happened, voices of people based on a sample of a few seconds…”, Jourová explained, presenting the new measures as a necessary response to “the new challenges in the fight against disinformation”.

Brussels proposes, firstly, that platforms adopt measures to prevent their services from being used by “malicious actors” to generate disinformation and, secondly, that they develop technologies to identify AI-generated content and label it “clearly” for users, so that it is immediately apparent that a machine, not a person, is behind it. This measure should be adopted “immediately”, Jourová declared in a brief press appearance in Brussels today. “I always say that we must protect freedom of expression and information, but I don’t think machines have it.”

Content labeling aims above all to limit the impact of so-called ‘deepfakes’ and to prevent the already blurred border between reality and fiction from being further eroded by the appearance of texts and images that look highly credible but are generated by machines and, depending on the interests of their creator, are not always faithful to the facts. These innovations, the European Commission warns, can have serious consequences for the political and personal lives of European citizens.

Vice President Jourová and the Commissioner for the Internal Market, Thierry Breton, presented the proposal today to representatives of more than 40 companies, among them all the ‘big tech’ firms, from Google and Meta to TikTok and Microsoft, that have signed the European code of conduct on disinformation. Brussels hopes that those that incorporate AI into their services will distinguish this type of text, photo or video content “clearly” for the user. Twitter recently abandoned the European code of conduct. Jourová reiterated that she believes Elon Musk’s company has made “a mistake” by opting for “confrontation”, and recalled that, although the agreement is voluntary, compliance with the new European legislation on digital content that comes into force at the end of August is mandatory. “Its actions will be scrutinized in detail,” she warned.

The proposals for digital platforms to mitigate the risks of new technologies come a few months before the EU is expected to agree on the Artificial Intelligence Act, a world first that many countries, lagging behind in regulatory terms, are watching with interest. But, as the vice president and head of the Competition portfolio, Margrethe Vestager, warned last week in Luleå (Sweden) at a meeting with the US Secretary of State, Antony Blinken, the new rules “will not arrive, in the best of cases, for up to three years”, hence the initiative to propose a voluntary code of conduct for the sector “that can be applied immediately and anticipates the advance of artificial intelligence”.