Francesca Bria is an innovation economist and digital policy expert. She served as Barcelona's digital commissioner and now collaborates with Hamburg and other cities. Among other positions, she is a high-level advisor to the New European Bauhaus, the initiative led by Ursula von der Leyen. Throughout her career she has actively advocated for data to remain in the public sphere rather than be captured by large corporations. She answers this interview from her home in Rome, after giving the inaugural lecture at the presentation of the European Commission's Algorithmic Transparency Centre in Seville.
Do governments still have time to act against the lack of control over advanced artificial intelligence (AI) models? Do you agree with the Italian data protection agency's decision to block ChatGPT?
Regulating and governing AI to address present and future harms is crucial, and it's not too late… the time is now. We have seen a forceful reaction that has reminded us of the urgency of a robust and holistic approach to AI regulation. Thousands of technology experts and academics have called for a temporary moratorium on advanced AI systems because of safety concerns. The unauthorized use of copyrighted material in AI-generated music and the collision of art and AI in landmark litigation further underline the need for effective government oversight of AI. In my view, the motivation behind the Italian data protection authority's ban on ChatGPT is clear: it suspended the service because personal data was being collected on a massive scale, without a legal basis, to train the company's AI algorithms, violating people's privacy. However, regulation must take place at the European level.
Along these lines, we are witnessing an interesting phenomenon: a wave of lawsuits that threatens the future of the large social media platforms. Could the same thing happen with AI?
AI companies may face similar challenges in the future. As AI systems become entrenched in many industries and in everyday life, it is crucial to prevent the companies that own them from spreading harmful content or promoting discrimination and disinformation on a large scale. Some experts argue that a certain level of opacity is an inevitable consequence of the methodology these systems employ. That calls for a proper public debate to redefine what transparency, trust and experience mean in this new context. It is a huge social task that we must tackle. I believe regulators should adopt even stricter liability measures for AI companies in order to safeguard the public interest and protect our culture and our democracy. It is also obvious that we should not leave these kinds of critical technologies solely in the hands of Big Tech; we should propose democratically governed alternatives.
This reminds me of a telling (and also terrifying) quote from investor Ian Hogarth in the Financial Times about those driving advanced AI: “In private, many admit they still don’t know how to slow down. I think they would sincerely appreciate governments stepping in.” You belong to that world. Do you think that statement is true?
Well, I think the real intention of these companies is to steer technology regulation, not to be regulated… They ostensibly advocate fair regulation, but we should discuss what that means in practice. In recent years there have been cases of AI companies firing workers who advocated for ethics within the company, such as Timnit Gebru and Margaret Mitchell, co-leads of Google’s AI ethics team, and many others who are less well known. There are certain debates these companies do not want to have, such as data protection, algorithmic bias and AI ethics, monopoly power, taxation and content moderation… We must not forget that the big technology companies spent millions of dollars lobbying against the European General Data Protection Regulation and other rules.
Is there a European way to solve this problem? Can we expect something from Europe?
The European way should be to strike a balance between regulation, innovation and the public interest. The European Parliament will soon debate the Artificial Intelligence Act, presented by the Commission in April 2021, which is, mind you, the first comprehensive regulation of AI systems in the world. The Act includes notable elements such as risk-based categorization and the prohibition of certain AI practices. It sets out transparency, accountability and security requirements, as well as public databases that allow scrutiny by society. However, it also faces challenges and limitations that need to be examined further.
I suppose you mean that Europe is, so to speak, the world champion of digital regulation, but that it has serious difficulties competing in innovation with the United States or Asia…
To avoid being overshadowed by the new Cold War rivalry between the US and China, Europe must work to improve its industrial competitiveness and economic resilience. Commissioner Thierry Breton says as much when he stresses that the current de-industrialisation must be reversed and critical technologies made in Europe must be promoted, shifting attention towards scientific, technological and industrial innovation. Europe can return to leading global innovation by backing its high-tech pioneers, recruiting and training one million high-tech researchers and fostering a strong network of technology transfer centres.
We come back to the idea from the beginning: there is still time…
In fact, we are at a crossroads. The most frequently used metaphor is that of the Sputnik moment, which refers to the rivalry between the two superpowers, the United States and China, in a new cold war for technological supremacy. In line with the best European scientific tradition based on global cooperation, we should follow the example of CERN, a symbol of unity and scientific progress, to develop technological infrastructure, knowledge and artificial intelligence applications to tackle the greatest challenges we face.
You have said that it is not possible to “uninvent” AI. Have the controls to which the pharmaceutical industry is subject, with gradual trials supervised by the FDA or the European Medicines Agency, been held up as an example of good practice? Do you think that would be a more suitable system for developing AI than the current total liberalization?
Of course. I have long advocated greater democratic control of digital technologies, data and artificial intelligence. We must adopt a forward-looking regulatory strategy to anticipate potential challenges and ensure that Europe remains a leader in open-source AI research and development. Today the driving force behind AI research is primarily financial, with unprecedented investment recorded this year. However, venture capital tends to emphasize advancing AI capabilities rather than understanding the inner workings of these systems. As a result, the underlying models, unsupervised and unrestricted, are available only to private companies, leaving governments and academic institutions out of the game. This trend leads to more powerful AI systems, but not necessarily systems that are more secure, robust or ethically sound. AI models should only be openly released when their implications are fully understood and appropriate governance mechanisms are in place.
You have given a lecture at the presentation of the future algorithmic transparency centre in Seville. What role can Spain and its cities play in this debate?
Look, cities are central to the European strategy. They reinforce digital citizenship and develop and test digital tools. As Barcelona's former digital chief, and now a collaborator with Hamburg and other cities, I believe I have helped drive a new social pact on data. This initiative encourages the sharing of data between companies and society, ultimately turning data into a common good. As for your specific question… Spanish cities, led by Barcelona, must continue to be at the forefront of this critical movement to establish a citizens' pact on data and artificial intelligence, fostering innovation in the public interest and collaboration across Europe.
The breakneck development of AI also comes at a high cost to the planet. Little is said about this.
Especially in the midst of the climate and energy crisis… AI systems require considerable energy and computing power, and that translates into greenhouse gas emissions. The growing demand for AI and increasingly complex models exacerbate energy consumption and CO2 emissions. It is vital to develop efficient AI models, use renewable energy in data centres, ensure the responsible recycling of hardware, and promote more research and development into green AI and efficient supercomputing.
On another note, how great a risk do you see that the dizzying advances in AI will blow up concepts such as intellectual property or copyright?
It is clear that our copyright system needs to be reformed, and we have to rise to the challenge. We have to be able to enforce regulation against the big platforms that use that data, information and knowledge for free, while making it easier for public institutions, artists and creators to use the data. Right now we do the opposite: we make it easy for big companies to take advantage of the knowledge created by content and information producers for free, while we penalize artists and public institutions.
You yourself are someone who makes a living from the ideas you create…
Yes, and as a creator of ideas I hope we can live in a world where power relations in the copyright system are fairer and more creative. We have to use AI technology to open up new creative and artistic possibilities and expand human capabilities, rather than to concentrate even more corporate power.
Finally, what effect could this have on the world of work? There are apocalyptic predictions…
The EU Regulation on Artificial Intelligence classifies as “high risk” AI systems intended to “make decisions on the promotion and termination of contractual employment relationships, for the assignment of tasks and for monitoring and evaluating the performance and behavior” of workers. Proposing fair, transparent and inclusive regulation of data and algorithms means proposing a new vision of labor law and of democratic algorithmic control in the era of AI and automation, to guarantee workers better protection of their rights. This approach has been followed here in Spain in the labor reform of Yolanda Díaz.