The European Union has just reached a first provisional agreement on how it will regulate artificial intelligence (AI). With this new law, Europe becomes the first region in the world to define the scope of AI applications and to regulate them. The regulation divides AI systems into three groups based on the risk they may pose to society: prohibited, high-risk and low-risk. However, it will not apply to matters of national security, military systems or research and development.
The first group covered by the legislation comprises systems that pose an unacceptable risk and are therefore prohibited. These include systems that can subliminally manipulate vulnerable groups, such as minors or people with disabilities, causing them physical or psychological harm. A question arises here: could applications like TikTok, which keep young people engaged for hours, be considered to subliminally manipulate minors? Also prohibited are remote biometric identification systems, capable of identifying us from video or photographs. With this technology, AI systems can locate us at any time through cameras in public spaces; such systems are currently used in Japan to solve crimes and apprehend fugitives in a matter of hours. In this case, the regulation stipulates that they may only be used exceptionally by security forces.
Finally, the law also prohibits citizen scoring systems, which rate citizens according to their behavior. China has been scoring its citizens for years and using these scores to decide their access to loans, housing or even air travel. The second category covers high-risk systems, such as those used to grant loans, calculate life insurance premiums or decide access to social assistance. The law will only affect large companies, so startups and other small businesses should not be penalized by the regulation. Companies and institutions that use these systems will have to complete extensive documentation on the data used to train their AI models, their intended application, their reliability… This documentation will be filed in an EU-wide database, which the EU will use to decide whether each system's use is correct and legitimate and to carry out any necessary audits. The third category covers low-risk applications, which for the moment will not be affected by the regulation.
The EU has taken a strong step forward to protect consumers from dangerous uses of AI, just as it did with the General Data Protection Regulation (GDPR). It remains to be seen how the law will be implemented in practice and what effects it will have on companies. Europe lags far behind the US and China in the application of AI and risks falling even further behind if this regulation ends up discouraging its use or slowing its adoption.