The EU approves the first law that regulates Artificial Intelligence

The European Union reached an agreement on Friday night to close the European Artificial Intelligence Law, a world-first standard that for the first time regulates this technology according to the level of risk its use entails, after three days and a total of 36 hours of negotiations.

“Historic! The European Union becomes the first continent to approve clear rules for the use of AI!”, announced the Commissioner for the Internal Market, Thierry Breton, on his account on the social network X. For her part, the president of the European Commission, Ursula von der Leyen, argued that the new law offers “a single legal framework for the development of Artificial Intelligence that can be trusted.”

The point that most complicated the negotiations was biometric surveillance, such as facial recognition cameras that use artificial intelligence in real time in public spaces.

After a tug of war between the European Parliament, which called for a total ban, and the member states, which insisted on exceptions, a balance was finally struck. The practice will be prohibited in general and allowed only in three cases: searching for victims of kidnapping or human trafficking; preventing a terrorist attack or responding to one already under way; and identifying a person suspected of terrorism, murder, rape or sex trafficking. For the European Parliament’s negotiators it is “a success” (and they admitted it was not easy to win from the countries) that this system can only be used with prior judicial authorization.

The member states insisted from the beginning on its use in cases of national security or to prevent terrorism, but Parliament was concerned that citizens’ fundamental rights could be violated.

On the other hand, a total ban has been agreed on biometric categorization systems based on sensitive characteristics, such as ideology or sexual orientation, as well as on the recognition of emotions in the workplace or in educational institutions. Likewise, systems that “score” citizens (known as “social scoring”) will not be allowed: artificial intelligence tools that rate a person’s credibility or reputation based on their interactions on the Internet or on a series of traits, a technology used massively in China.

The legislators also agreed on a list of systems considered high risk; their use will not be prohibited, but it will be limited, with obligations imposed on the companies that want to enter the European market.

The European Commission proposed this law in 2021, when ChatGPT had not even been born and experts had not yet warned of its disruptive power. But the speed at which this technology advances made regulation necessary, and that was a compelling reason for the negotiators of the European Parliament and the governments, aware that approving nothing would mean keeping the status quo, in which there is currently nothing to limit its use.

Another of the issues of most concern was precisely the regulation of foundation models, the generative Artificial Intelligence behind ChatGPT, Bard or the recent Gemini, which the European Parliament asked to include in the law. Against all odds, it was one of the agreements reached at the first meeting on Thursday night. Generative Artificial Intelligence models such as ChatGPT will have to meet transparency requirements, such as complying with European intellectual property law and avoiding copied content.

Both institutions agreed on the need for transparency requirements, such as disclosing when specific content has been created with this technology. It was important for the countries to regulate and establish order, for example, in how these systems are used in the film industry, and to avoid an avalanche of lawsuits (or strikes, such as the one recently called in Hollywood, in which the use of Artificial Intelligence was one of the main points of contention).

The negotiating table also discussed how the law can affect high-impact technologies such as deepfakes: highly realistic videos of people talking that are very difficult to distinguish from real footage or from content created by Artificial Intelligence. The same applies to altered images, such as those of the minors who were victims of manipulated photos last September in Almendralejo. In both cases, it was agreed to include a label making it clear that the content is the product of AI.

Because the law regulates a technology in constant evolution, and although it is intended to stand the test of time, changes will be allowed without having to reform the entire regulation from top to bottom. “We have managed to maintain an extremely delicate balance: between promoting innovation and strengthening artificial intelligence in Europe and, at the same time, fully respecting the fundamental rights of our citizens,” said the Secretary of State for Digitalization and Artificial Intelligence, Carme Artigas, in charge of leading the negotiations for the Spanish rotating presidency of the Council, after admitting that the negotiations were “tough and stressful” but “worth it”.

The puzzle of reaching an agreement was not only about closing technical aspects, significant as they were; it was also important to strike a balance between the benefits of Artificial Intelligence and its economic impact, and not to lose competitiveness against giants such as China or the United States, which invest massively. The European Union accounts for only 7% of annual investment, while the United States and China together reach 80%. For this reason, both France and Germany insisted that the law must not limit competitiveness. Spain thus closes one of the key pieces of legislation of the European legislative term, one it had marked as a milestone for its presidency of the Council of the EU.

Following the agreement reached today, the text must now be ratified by the member states and the European Parliament. It will fully come into force in 2026, although some provisions will apply earlier, such as the prohibition of certain systems and the obligation for models such as ChatGPT to comply with the transparency rules.
