AI is more than just an acronym. In two letters it captures the tension of our time, concentrating the forces that are pushing humanity toward one of its decisive moments. It is a moment that demands an accurate diagnosis of what lies beneath the surface of AI news, because identifying its underlying drivers can help us anticipate the trends that innovation is inevitably unleashing.
The battle waged inside OpenAI around the figure of Sam Altman is a reflection of this. So was the desperate reaction of many technologists who, back in March, promoted a manifesto calling for a moratorium on generative AI research. And what can we say about the presidential executive order issued by the White House a few weeks ago, justified by the extraordinary urgency of governing both the development and the use of AI according to criteria of security and accountability? These are terms that invite alarm, and they also run, albeit with less rhetorical gravity, through the Bletchley Declaration, endorsed under the British Government's auspices at the beginning of the month.
That declaration, subscribed to even by China, proclaims the urgency of global public-private oversight of AI research, with an emphasis on "frontier" AI, that is, the systems at the cutting edge of innovation in the generative field. The same debate is shaking Europe as the final stretch of negotiations unfolds in the trilogues that precede the imminent approval of the European AI regulation.
What is going on behind this news? AI is reaching a point of no return in the capabilities it is acquiring, capabilities that may make decisive progress toward strong or general AI viable much sooner than expected. That is, an AI with cognitive capacities resembling our common sense and our consciousness, but with a statistical intelligence behind it vastly superior to our own. It was once predicted that this would be reached around 2050, but that horizon may have moved forward by two decades.
This phenomenon is driven by the pressure of global geopolitical warming, fueled by the fierce competition between China and the United States for technological hegemony. The Chinese pursue it through vertical, state-directed planning of AI research, in which the State controls the entire process. The Americans pursue it through horizontal competition among the famous GAFAM companies (Google, Amazon, Facebook-Meta, Apple and Microsoft), as reflected in the backing Microsoft has given OpenAI.
Let us remember that the American design of technological innovation rests on the winner-takes-all principle. It is a neoliberal model that has worked successfully since the birth of the digital market in the United States and that has allowed efficient competition between monopolies, which is what the GAFAM companies are. The problem is that this competition can break down if a single company dominates the disruptive change AI is undergoing, which is what could happen with Microsoft's eventual control over OpenAI.
Recall that OpenAI was born as a non-profit, with an open, collaborative and ethical approach. It sought to develop a generative AI governed by those principles, yet driven by the same utopian logic that has powered research in this field since Alan Turing: to reproduce a human brain without the defects that so often make it fail. In a short time, OpenAI's advances in applied deep learning produced a prototype, ChatGPT, that has begun to change things, to the point of coming very close to offering generative AI at an entry price so low it could drive any competitor out of the market for similar services.
It is a commercial initiative that could generate a monopoly capable of projecting itself across the entire digital market. This became apparent on November 6, when Sam Altman announced the launch of a multi-sided AI platform, with GPT-4 Turbo at the front and an app store behind it to monetize cross-selling of services.
This decision sparked a battle within OpenAI. On one side were those who wished to keep the company true to its original approach, slowing down the research so it could be supervised ethically in the face of the growing risks associated with the advancing generative capabilities of the AI systems the company was developing. On the other side were those who, with Altman at the helm, wanted to take the research to its ultimate consequences: the breakthrough that would produce a strong AI and hand its main shareholder, Microsoft, a monopoly over the digital ecosystem. It is a battle that, as we have seen, Altman and Microsoft have won.
This outcome, curiously, comes a month after the approval of the presidential order that confers on the occupant of the White House the status of AI commander-in-chief, and it must be read alongside the oversight of AI innovation established in 2022 by the CHIPS and Science Act. That said, it is no surprise that the presence of Larry Summers, former Secretary of the Treasury and former president of Harvard, on OpenAI's new board helps us understand what has happened in the aforementioned geopolitical key. That key is already decisive. It reflects a will to power built around an "innovator-industrial complex" in AI that is fighting determinedly for the United States to be the first to achieve strong AI by 2030.