Today, Europe takes its definitive vote on the world’s first artificial intelligence law. Drafting this legislation has been complex, but it makes it possible to bring order to a sector that is advancing at breakneck speed in the hands of a handful of companies, without regulations drawing red lines for them. The final text strikes a delicate balance between those who want more guarantees for citizens’ rights and those who hope that artificial intelligence (AI) will have uses such as policing.

In yesterday’s plenary session, prior to today’s vote, the Internal Market Commissioner, Thierry Breton, highlighted the work that the Spanish Secretary of State for Digitalization and Artificial Intelligence, Carme Artigas, did on the final agreement of the law last December, noting that “during the Spanish presidency we reached a historic compromise” which the commissioner considers “balanced and will stand the test of time”.

The European artificial intelligence law states that a future European Office of Artificial Intelligence will classify AI systems according to their risks and will determine what requirements companies and organizations that use them will have to meet.

Lawmakers consider that most AI providers will fall into the minimal-risk category, allowing businesses broad use of these systems across the economy.

Another category is high risk, which includes systems that could have a negative impact on people’s safety or their fundamental rights.

Beyond that, the law also establishes a detailed list of unacceptable-risk uses. This group includes the use of AI for social scoring by public or private actors, exploiting people’s vulnerabilities using subliminal techniques, and real-time remote biometric identification in public places by security forces. This last point is one of the most controversial, because in the final wording limited exceptions were introduced for a series of serious crimes.

Other uses considered inadmissible and prohibited are the categorization of people based on biometric data to deduce their ethnicity, political opinions, trade union affiliation, religious or philosophical beliefs, or sexual orientation. Nor will it be possible to use AI for individual predictive policing or for emotion recognition in the workplace and in schools, although there will be medical and safety exceptions.

Systems that pose a transparency risk will also be classified: with conversational bots, users will need to be made aware that they are interacting with a machine.

The text also anticipates the systemic risks of general-purpose AI models, such as large generative AI models of the same type as ChatGPT, because they can be used for many different tasks. Their widespread use by the population could have far-reaching consequences, for example, if they spread a harmful bias or are compromised by a cyberattack.

All companies that use AI in Europe will have to register with the future European Office of Artificial Intelligence, which will be created 20 days after the text is officially published.

The rest of the implementation of the law, once it passes today, will not be so fast. The process will be gradual and will take time to be fully in place, around the spring of 2026. Some parts will be adopted much earlier, such as the bans, which will come into force in six months. Governance rules and obligations for general-purpose AI models will be implemented in one year.

The last intervention in the European Parliament yesterday, before today’s vote, came from one of the rapporteurs of the law, Dragoș Tudorache, who asked: “Have we found the right balance between protection and innovation? Do we have adequate safeguards for citizens against the risks that technology can bring, or against possible abuses by governments or bad actors? Have we secured this legislation sufficiently for the future? Given the incredible speed at which all of this evolves, I have one answer to these questions: absolutely yes.”