If there is one inevitable thing in our future, it is artificial intelligence (AI). At least if we want to successfully manage the enormous complexity and volume of data released by our automated societies. But what kind of AI do we want? Where is AI taking us? Or, if you prefer, what do we want from it?
AI has become fashionable with the popularization of applications based on it. The fact that ChatGPT is free has triggered a surge of media visibility, even though it has long coexisted with other powerful models of linguistic interaction with humans: Copilot, AlphaCode, LaMDA, GPT-J, GPT-NeoX, Wu Dao 2.0, the recently launched Bard and, soon, the anticipated Ernie Bot.
In all of them, AI demonstrates capabilities that engage with human creativity on equal terms, or even exceed it, in its combinatorial, exploratory, and transformational dimensions. As Margaret Boden admits, no aspect of our lives will escape AI. This possibility confronts us with disturbing fundamental questions that some of us raised years ago, but which now lead to more complex and riskier dilemmas, for which linear and naive interpretations inspired by strictly technological and economic perspectives are useless.
To relate to AI successfully, it is essential to accept its inevitability, which is bound up with the systemic automation of humanity. Regardless of culture, language, social and educational status, religion, or income level, all human beings, in one way or another and at varying speeds and intensities, are watching our lives become automated. An experience that will only deepen.
Five billion people inhabit the internet through smartphones, generating data that feeds a platform capitalism which uses algorithms to design the business models of the 21st century. Today, the global economy is a cognitive capitalism based on the information that humanity produces when it interacts in the infosphere, and that machines produce when they operate with each other through the Internet of Things.
It is well known that data underpins the information that platform capitalism manages. Data that grows with the digitalization of human beings, of societies, and of the global economy, which associates greater productivity and competitiveness with the intensive use of exponential technologies. The volume of data we generate as a species demonstrates this: from 2 zettabytes in 2010 to 16 in 2015; in 2020, the year of the pandemic, we reached 67, and the forecast for 2025 is 180.
Managing this enormous growth of data creates a complexity that leads us to what Niklas Luhmann defined as a critical transition risk, a condition that tests the viability of technological civilization. Why? Because the tsunami of information we produce overwhelms the individual and collective capacities of human intelligence. This can lead us to collapse. So if we want to seize the extraordinary opportunities for innovation and prosperity offered by the automation of the human species and the vastness of data it unleashes, we need the help of AI. In fact, only by relying on it will we be able to manage the information contained in the 180 zettabytes of data we will generate in 2025 and transform it into knowledge.
This will make us advance and progress if we are able to reform the design of the human organizations that manage our applied intelligence, from companies to political institutions. This circumstance, although never made explicit as a collective story, is what lies behind the development of more evolved and autonomous AI systems. It is also what stimulates the fierce competition between technology corporations and the hegemonic struggle between the US and China around AI. It explains, too, the innovative push toward the goal of achieving a strong or general AI by 2050: an AI with information-processing capabilities superior to human intelligence that also generates and understands the environment in which it operates by contextualizing, showing intentionality, and being creative.
Until now, AI progress has been based on algorithms that process, according to logical rules, the information available to them. They learn, reach progressively synthetic conclusions, recognize human emotions, and make headway in creative contexts, as evidenced by applications such as DALL-E or Midjourney.
The question is what will happen tomorrow. Especially if AI evolves driven by reinforcement learning, maximizing rewards and avoiding punishments in ways that let it understand its own contexts, establish scales of behavioral values, attribute shifting meanings to its performance in reality, and finally approach a notion close to common sense.
Achieving this strong AI requires data that better reflects human cognitive complexity; specifically, the psychic mechanisms that make it possible: a record of our emotionality captured by applications with immersive environments that probe our neural activity, as happens, for example, with the metaverse. The horizon behind this is an AI that replicates our intelligence. For what purpose? To professionally replace human beings? To give them more free time? To transform the notion of human work? We do not know, although in 2020, 67% of work rested on humans and 33% on machines, and the forecast for 2025 is 53% human and 47% machine. The improvement of AI contributes to this shift.
This is where the debate on the meaning we want to give AI in our lives should open: a collective reflection that accepts its inevitability while seeking to get right the questions and purposes that drive us to use it, identifying not only the whys but also the what-fors. Do we want an AI that confirms the inferiority of human intelligence, or an AI that helps to augment it? Let us not delay in formulating these questions and discussing them. AI is designed not to wait for humans.