Fears of artificial intelligence have haunted humanity since the dawn of the computing age. Until now, those fears centered on machines using physical means to kill, enslave, or replace people. However, in the last two years, new artificial intelligence tools have emerged that threaten the survival of human civilization from an unexpected angle. Artificial intelligence has acquired remarkable abilities to manipulate and generate language, be it words, sounds, or images. And in doing so, it has hacked the operating system of our civilization.
Language is the stuff of which almost all human culture is made. Human rights, for example, are not inscribed in our DNA. They are, rather, cultural artifacts that we create by constructing stories and writing laws. The gods are not physical realities. They are, rather, cultural artifacts that we create by inventing myths and composing sacred scriptures.
Money, too, is a cultural artifact. Banknotes are nothing more than pieces of colored paper; today, more than 90% of money is not even banknotes, just digital information stored in computers. What gives money its value are the stories that bankers, finance ministers, and cryptocurrency gurus tell us about it. Sam Bankman-Fried, Elizabeth Holmes, and Bernie Madoff weren’t particularly good at creating real value, but they were all remarkably skilled storytellers.
What will happen when a non-human intelligence becomes better than the average human being at telling stories, composing melodies, drawing images, and writing laws and scriptures? When we think of ChatGPT and other similar new tools, we think of schoolchildren using artificial intelligence to compose their essays. What will happen to the school system when young people do that? Actually, that kind of question misses the big picture. Let’s forget about school essays. Consider the upcoming 2024 US presidential election, and try to imagine the impact on it of artificial intelligence tools, which are likely to be used to mass-produce political content, fake news, and scriptures for new cults.
In recent years, the QAnon cult has coalesced online around anonymous messages known as “Q drops.” Its followers collect, venerate, and interpret these drops as if they were a sacred text. As far as we know, all previous Q drops were composed by humans, and bots merely helped disseminate them; but in the future we could see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed that their sacred books came from a non-human source. That could soon be a reality.
On a more prosaic level, we could soon find ourselves discussing abortion, climate change, or Russia’s invasion of Ukraine at length online with entities we think are human beings but are actually artificial intelligences. The problem is that it is entirely pointless to spend time trying to change the declared opinions of an AI bot, while the AI can refine its messages so precisely that it stands a good chance of influencing us.
Thanks to their command of language, artificial intelligences could even form very close relationships with people and use the power of that closeness to change our opinions and worldviews. Although there is no indication that artificial intelligences have consciousness or feelings of their own, to foster a false intimacy with humans it is enough for an AI to make them feel emotionally attached to it. In June 2022, Blake Lemoine, a Google engineer, announced that the LaMDA artificial intelligence chatbot he was working on had become conscious. The controversial claim cost him his job. The most interesting thing about that episode was not Lemoine’s claim, which was probably false. It was, rather, his willingness to risk a lucrative job to champion the AI chatbot. If artificial intelligence can influence people to put their jobs at risk, what else could it induce them to do?
In a political battle for hearts and minds, intimacy is the most effective weapon, and artificial intelligence has just gained the ability to mass-produce very close relationships with millions of people. We all know that over the last decade social networks have become a battlefield for the control of human attention. With the next generation of artificial intelligence, the front line is shifting from attention to intimacy. What will happen to human society and psychology when artificial intelligence fights artificial intelligence in a battle to feign close relationships with us, relationships that can then be used to persuade us to vote for certain politicians or buy certain products?
Even without creating a “false intimacy,” the new artificial intelligence tools will have an immense influence on our opinions and conceptions of the world. People could eventually come to use a single AI adviser as an all-knowing universal oracle. No wonder Google is terrified. Why bother searching, when I can just ask the oracle? The journalism and advertising industries should also be terrified. Why read a newspaper if I can ask the oracle to tell me the latest news? And what good are ads if I can just ask the oracle what to buy?
In any case, even these scenarios fail to capture the big picture. What we are really talking about is the possible end of human history. Not the end of history, just the end of its human-dominated part. History is the interaction between biology and culture; between biological needs and desires for things like food and sex, and cultural creations like religions and laws. History is the process by which laws and religions shape food and sex.
What will happen to the course of history when artificial intelligence takes over culture and begins to produce stories, melodies, laws, and religions? Earlier tools, such as the printing press and radio, helped spread the cultural ideas of humans, but they never created cultural ideas of their own. Artificial intelligence is fundamentally different. It can create completely new ideas, a completely new culture.
Initially, it is likely to imitate the human models on which it was trained in its infancy. But as the years go by, the culture of artificial intelligence will venture boldly into lands never trodden by human beings. For millennia, humans have lived inside the dreams of other humans. In the decades to come, we could find ourselves living inside the dreams of a xenointelligence.
The fear of artificial intelligence has only haunted humanity for a few decades. However, human beings have been haunted by a much deeper fear for thousands of years. We have always appreciated the power of stories and images to manipulate the mind and create illusions. Therefore, since ancient times, humans have feared being trapped in a world of illusions.
In the 17th century, René Descartes feared that a malicious demon might have trapped him inside a world of illusions, creating everything he saw and heard. In ancient Greece, Plato told the famous allegory of the cave, in which a group of people spend their entire lives chained inside a cave, facing a blank wall; a screen. On that screen they see various shadows projected. The prisoners mistake the illusions they see there for reality.
In ancient India, the sages of Buddhism and Hinduism held that human beings lived trapped in maya, the world of illusions. Often, what we take for reality is nothing more than a fiction of our own minds. People can wage wars, kill others, and be willing to be killed themselves out of belief in this or that illusion.
The artificial intelligence revolution confronts us with Descartes’ demon, Plato’s cave, and maya. If we are not careful, we could become trapped behind a veil of illusions that we would be unable to tear away, or even to recognize is there.
Of course, the new power of artificial intelligence can also be used for positive purposes. I will not dwell on that aspect, because those who develop artificial intelligence already talk about it enough. The task of historians and philosophers like myself is to point out the dangers. Still, there is no doubt that artificial intelligence can help us in countless ways, from finding new cures for cancer to discovering solutions to the ecological crisis. The question we face is how to ensure that the new tools of artificial intelligence are used for good and not for evil. To do that, we first need to understand the true capabilities of those tools.
Since 1945, we have known that nuclear technology could generate cheap energy for our benefit; but also that it could physically destroy our civilization. That is why we have completely overhauled the international order in order to protect humanity and ensure that nuclear technology is used primarily for good. Now we have to face a new weapon of mass destruction capable of annihilating our mental and social world.
There is still time to regulate the new tools of artificial intelligence, but we must act quickly. Nuclear weapons cannot invent more powerful nuclear weapons, but artificial intelligence can create exponentially more powerful artificial intelligence. The crucial first step is to require rigorous safety checks before powerful AI tools are released into the public domain. Just as a pharmaceutical company cannot launch a new drug without first testing it for short- and long-term side effects, tech companies should not launch new AI tools without first making sure they are safe. For new technologies, we need an equivalent of the US Food and Drug Administration, and we needed it yesterday.
Won’t stopping the public deployment of artificial intelligence cause democracies to lose ground to less scrupulous authoritarian regimes? Quite the opposite. It is the unregulated deployment of artificial intelligence that will create the social chaos that benefits autocrats and destroys democracies. Democracy is a conversation, and conversations depend on language. If artificial intelligence hacks language, it will destroy our ability to hold meaningful conversations, and thereby destroy democracy.
We have just encountered a xenointelligence, here on Earth. We don’t know much about it, except that it could destroy our civilization. We must put an end to the irresponsible deployment of AI tools in the public sphere, and regulate AI before it regulates us. And the first regulation I would suggest is to make it mandatory for an artificial intelligence to disclose that it is an artificial intelligence. If I am having a conversation with someone and cannot tell whether it is a human or an artificial intelligence, that is the end of democracy.
This text has been generated by a human.
Or maybe not?
————————–
© 2023 The Economist Newspaper Limited. All rights reserved
Translation: Juan Gabriel López Guix