There seems to be a consensus that the race to achieve human-like artificial intelligence (AI) has many similarities to the race for atomic energy. Both are technologies with great transformative potential, capable of generating enormous well-being if used well, or of endangering our existence otherwise. Another parallel is the race between blocs: where the race for nuclear energy was led by the United States and the Soviet Union, in AI it is the USA and China competing for supremacy.
There are many similarities, but there is one big difference: private companies also participate in this race, and in fact they are the ones leading it. Alongside centuries-old entities like the USA, or millennia-old ones like China, we must add entities that are only a few decades old, like Microsoft, Google or Facebook, or that are not even ten years old, as is the case with OpenAI. On the Chinese side, we must count Baidu, Alibaba, Tencent and ByteDance, the parent company of the ubiquitous TikTok.
Perhaps another difference, and a fundamental one, is that AI might not actually pose an existential threat to humanity. NYU professor and head of AI at Meta Yann LeCun recently commented on this on Twitter. In one thread, he argued that we don't have an AI system that even remotely approaches the intelligence of a dog, and that worrying today about the negative effects of an eventual superhuman AI is "as if in 1920 they had worried about the safety of jet engines".
It is clear that AI, like any technology, carries risks. So did Ford's Model T, airplanes and nuclear power. It is also true that when such technologies emerge we don't dwell on the worst-case scenarios; we refine them, add safeguards and regulate as we detect the risks. When we were young, seat belts were not mandatory; cars were slower and there were fewer of them, so the risk was lower. They became mandatory when the risk of dying in a crash increased. Similarly, in the 1950s nuclear-powered cars and airplanes were discarded because they were too dangerous, and hydrogen-filled airships even earlier.
The idea that AI could endanger our existence is, at the moment, more science fiction than science. For this to happen, three conditions would have to be met: 1) its "intelligence" would have to be considerably higher than that of a dog (it is not known if that will ever happen); 2) that AI would have to be designed expressly with the aim of ending our existence (and since it is not known whether the first condition will ever be met, no one knows how this could be done); and 3) as a species, we would have to prove incapable of designing an AI aligned with humanistic values.
We need more AI and more poetry.