2024 will be the year of artificial intelligence (AI), especially now that 2023 has shown the technology entering a critical phase thanks to extraordinary advances in its development. It is a fascinating moment, but one that exposes us to risks and uncertainties and demands that we establish, as soon as possible, a minimal governance with which to face the future with relative confidence, since full confidence will be impossible to guarantee. The reason is that two factors bear down on AI and prevent it from being controllable.

On one hand, there is the geopolitical struggle between the US and China to expand their AI capabilities. It is a rivalry that escalates as both internalize that the technological and planetary hegemony they are fighting for will belong to whoever leads AI research: on it will depend having more competitive companies, more lethal weapons, and governments more effective at social control. On the other hand, there is the utopian gene that beats in the synthetic DNA of AI. This factor is more decisive than we assume, since it has been operating since the field was born seventy years ago, and it makes AI something more than an enabling technology. It is a finalist technology, one that wants something to become someone. For this reason it endows AI with an ever more powerful statistical intelligence that aspires to achieve mental states similar to those of its creator.

The sum of these two factors points to an increasingly disturbing denouement. The year now ending confirms it: it has seen a proliferation of high-profile initiatives insisting that AI research carries increasingly serious risks. The most striking thing is that these risks are never spelled out concretely; only their possible sources are pointed to. So it is with the manifesto signed by more than a thousand AI scientists last March 29. It says that advanced AI "could represent a profound change in the history of life on Earth" and "should be planned for and managed with commensurate care and resources". That, the signatories note, is not happening, because of the aggressive competition among companies and countries, which resembles an "out-of-control race" that may produce "ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control".

Some will find this exaggerated, especially because the text concluded by calling for a moratorium on research that would allow a stocktaking of where it stood and where it was heading. But if we look at what the technology corporations leading AI development in the United States are themselves saying, we see that they too are worried about their research. They do not speak with the dystopian voice of the scientists, but their concern is evident: in the declaration they signed on July 21 they insist that, at a minimum, self-regulation setting limits on development is urgent. Among other things, because innovation cannot come "at the expense of the rights and safety of Americans".

That is why the signatories (Google, Meta, Microsoft, OpenAI, Amazon, Anthropic and Inflection AI) commit to being transparent when testing the safety of their AI systems, to publishing the results of those tests, and to avoiding biases that produce discrimination and violations of privacy. These theses are also in line with the US presidential executive order of October 30 of this year. That order, besides turning the occupant of the White House into a sort of "AI commander in chief" who supervises and coordinates public and private research in the field, insists that AI must be used responsibly if it is to become a promise for all of humanity. Otherwise it could "exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security".

More forceful still are the statements of reasons that accompany both the Bletchley Declaration of November 2 and the text of the common proposal on AI regulation agreed by the Commission, the Council and the European Parliament on December 7. All of them stress the imperative need for a governance that neutralizes the extraordinary risks that uncontrolled AI research can bring, risks described as potentially "catastrophic" for humanity.

I will not analyze the text of the final proposal for the European regulation in detail; that remains for the next installment. I will note, however, that Europe has let itself be dragged along by the anxiety of competing with China and the US to develop its own AI, guided by a burst of geopolitical realism. It forgets that if Europe wants to be a global actor, it must act in the name of all those excluded by the competitive logic that has led the Chinese and the Americans to shake off ethics, treating it as an obstacle to research.

The most worrying thing about these initiatives, however, is that they approach the design of a governance that would make AI safe as if we were dealing with just another enabling technology, when we are not. In doing so we fall victim to a scientistic mindset unable to appreciate that this is a bastion of utopian research that wants to imitate the human brain in order to perfect it artificially, whatever the consequences. Here lies the problem: in the Faustian mentality that accompanies an AI for which no governance will be possible if it is designed to be a will to power in itself. A nihilistic AI for a world dominated by nihilism.