It’s time to move from words to deeds. That is the message the European Commission has been sending, through different channels, to companies active in the promising yet unsettling field of generative artificial intelligence, urging them to commit to some basic principles that would guarantee a deployment of these technologies in line with European principles and values for the defense of rights and freedoms.
“With a technology as powerful as this, we can’t wait for things to develop on their own; we can’t take that risk. Just because something is uncertain and we don’t have all the answers doesn’t mean we should stop doing what we think makes sense,” argued European Commission Vice-President Margrethe Vestager yesterday at a meeting with several European media outlets at the headquarters of the EU executive. “We need to move from debates to commitments as soon as possible” in order to “mitigate the risks” of AI and “enjoy its potential benefits.”
The European Union is working on several fronts, through different and sometimes overlapping initiatives, to erect a kind of common set of “security fences” for the whole industry to guide the deployment of the next generations of AI. Brussels, for example, has decided to include this type of technology among the factors to be considered in its work with the large technology platforms against disinformation, and yesterday it asked companies to “identify and clearly label” all content that has been generated by machines.
“AI-based technologies can be a force for good for society,” but “their dark side should not be overlooked, because they pose new risks and can have negative consequences for society, such as disinformation,” added Věra Jourová, the European Commission Vice-President responsible for Values and Transparency, after a meeting with the 44 signatories of the code of good practice against disinformation created in 2018, which include all the sector’s leading companies (Meta, Google, Microsoft, TikTok…) as well as representatives of civil society.
“We want the platforms to label content generated by AI so that the normal user, who is usually distracted by many different things, can see it clearly,” explained Jourová, who asked the companies to act immediately. “In a matter of seconds, generative AI can produce complex content, images of things that have never happened, people’s voices based on a sample of a few seconds…,” recalled the Czech commissioner, who is in charge of addressing these risks from the point of view of the fight against disinformation, one of the areas where AI is most effective.
Meanwhile, the artificial intelligence law proposed by the European Commission two years ago has entered the final stretch of its legislative process. The European Parliament will set its negotiating position next week and, if an internal agreement is reached, will be able to negotiate the final version of the regulation with the member states (the Council) from September, a process that will be steered by the Spanish presidency of the EU. The European Commissioner for the Internal Market, Thierry Breton, who is responsible for the legislative initiative, has also proposed that companies join a “pact” to prepare for the entry into force of the new law, possibly in 2026.
Precisely because the implementation period of the European regulation will be around three years, Vestager is committed to working with companies so that they immediately take on commitments about what kinds of sources they may use, what types of tests are carried out, and what channels are put in place to monitor these services or resolve problems.
Accordingly, the G7 summit in Hiroshima asked the European Commission’s representative and her counterparts in the United States to prepare, “before the end of the year,” a voluntary code of conduct, with contributions from the industry itself, but only up to a point. Although there may be “an alignment of interests” in terms of shared concern about how this technology can be used, it is important “not to let the process devolve into lowest-common-denominator measures, because that would not work,” says Vestager.
“I think there is a high level of awareness of how powerful this technology is and how it can make many things easier for us, but at the same time we know that it can produce very bad results,” she says, before recalling the case of a lawyer who asked ChatGPT to find legal precedents to defend a client seeking compensation from an airline; when the opposing party reviewed the information, it discovered that none of the cases were real. “If you don’t trust it, will you use it?”, she asks.