Artificial intelligence (AI) is on everyone’s lips. It even dominates the public conversation about digitalization. In that conversation, the opportunities of AI are mostly invoked, although in recent months the risks associated with it have also begun to be mentioned. Those who cite it in speeches, classes, conferences, symposiums, boards of directors, the media or casual conversations use verbs that weave a more or less optimistic story about it. Most emphasize, in positive terms, the need to “develop,” “enhance,” “favor,” “promote” or “maximize” it. Many talk about “regulating” and “supervising” it, although it is rarely specified how, and almost never to what extent. Some invoke the urgency of “controlling” and “limiting” it. And a few do not shy away from saying that it must be “paralyzed” or, if necessary, “banned.”
The focus of this public conversation is clear. It is framed within technical and economic perspectives led by scientists dedicated to AI, as well as by entrepreneurs who want to take advantage of the profitability associated with its economic development. At the same time, some democratic governments are beginning to articulate public policies that identify a general interest capable of redirecting the private interests pushing towards the inevitability of its universalization. Not so much out of political principle as out of a social utility linked to the pragmatic opportunity to regulate.
In any case, it is a discursive phenomenon that occurs only in the world’s most advanced democratic societies. Those that are still “full” democracies, although we do not know for how long. But admitting that AI is part of our conversation does not help much. Let us remember that our technologists, businessmen and rulers talk about it to convince us of its timely necessity. Something most people already sense: without AI there will be no future for anyone. It is unavoidable. We see it in mobility, finance, infrastructure, administration, health, security, education and even culture.
But recognizing this does not mean that the conversation should not be more ambitious. At least as long as it remains monopolized by those with private interests of a professional, economic or political nature around it. That is what happens with the scientific community that works on it, the businessmen who implement it and the rulers who seek to supervise it in line with the interests of other groups.
The result is that we talk about AI from a simplistic perspective, focused on explaining what it does and what it can do, as well as the transformative impact it will have on the accelerating digitalization of our societies, governments and companies thanks to platform or cognitive capitalism, which is based on the algorithmic knowledge derived from it. It is true that, in recent years, a layer of complexity has been added through an ethical angle that appeals to the need to introduce moral limits, and even regulations, to avoid negative externalities such as inequality.
There has even been some social concern about the risks that the development of unsupervised AI may entail. Although here, as seen with the European Regulation on AI, what is invoked is a collective self-help ethic for mass political consumption rather than a deep philosophical regulation: one grounded in purposes and equal to what the emergence of a technology that will alter the moral and, perhaps, existential foundations of our species will mean for the human condition.
We are taking steps towards an artificial civilization and we are not thinking about it. The blame lies with a public conversation about AI that is part of the problem. Among other things, because it does not register that we are facing a technology evolving disruptively towards a nihilistic futurism, one that will be perfected if no one remedies it. At least if the future of AI ends up being defined by one of these two options: either the neoliberal silicon Calvinism of the GAFAM (Google, Apple, Facebook-Meta, Amazon and Microsoft) or the synthetic Confucianism of the Chinese Communist Party.
In this sense, Europe must reclaim its right to think, which is what it has been doing for two and a half millennia, ever since it gave philosophy to the world. From there, it has to make the conversation evolve and steer it towards a genuine debate. It must claim the right to decide about AI. That is, to discuss why humanity wants it and to define a purposeful meaning that helps us obtain from it what we hope it holds in store for us. Something that will only arise if we think about AI. If we try to critically understand what underlies it: specifically, the utopian and Hobbesian logic that drives its development towards its own apotheosis in symbolic terms.
We cannot keep talking about it the way we do. Seeking to exploit and maximize capabilities without purpose turns it into a will to power that facilitates and maximizes actions with no meaning behind them. The current conversation about AI is useless, at least if we want to defend a human condition open to the search for happiness and a sense of transcendence. And not just because it is mediocre, uncritical and lacking in intellectual ambition, but because it leads us towards a future that takes away our right to stand on the shoulders of the gigantic titan that AI will be, in order to see further from up there.