Philosopher Marshall McLuhan (with the invaluable help of his friend Professor John Culkin) is credited with a statement that, far from being outdated, grows more valid every day: “We shape our tools, and then our tools shape us.” It is as if these two visionaries, who met in the sixties at Fordham University in the Bronx, had anticipated the digital revolution that would lead to artificial intelligence (AI), perhaps the technological advance with the greatest capacity to modify the behavior of our species.
This week saw a new milestone in this rapid evolutionary escalation: the arrival of an AI chatbot, free for now, that creates the illusion of holding spoken conversations with an intelligent being. It is easy to imagine that thousands of people around the world have tried out their new friend in recent days. Some, most likely, in the hope of starting a romantic relationship that will rescue them from the misadventures of real life, in the style of the character played by Joaquin Phoenix in Spike Jonze’s 2013 film Her.
It is not bad news that, in a world ravaged by the epidemic of unwanted loneliness, technological solutions are emerging to make life easier. In this sense, virtual assistants such as OpenAI’s ChatGPT-4o can keep company with people who have no one to talk to, and thereby become an effective complement to another booming field: assistive robotics.
The problem, the hidden face, becomes apparent when we widen the focus and apply what we already know about the disruptive potential of these tools to this new advance. In other words, when we ask ourselves, openly, where the mass use of an assistant designed more to satisfy users’ wishes than to provide them with truthful, verified information might lead.
You only need to chat with one of these chatbots for a few minutes to realize how easy it is to get them to say exactly the kind of things we like to hear, the ideas that reinforce our own ideological positions without anyone offering a counterpoint.
In a society increasingly compartmentalized by the ideological bubbles that social networks create, the development of AI can give rise to true sectarian bunkers: virtual fortresses inhabited by like-minded individuals who will prefer the applause and camaraderie of fellow believers to the spaces of shared thought that the media have traditionally provided.
As Éric Sadin warned in his book La era del individuo tirano (Caja Negra, 2022), “we are experiencing the advent of a personal resentment that is both isolated and extreme and is nevertheless felt on a wide scale”. For this French techno-critical philosopher, this era of the individual tyrant entails “the progressive abolition of all common ground to make room for a swarm of scattered beings who seek […] to occupy a preponderant position by right”.
At the opposite end of this exacerbation of the individual that chatbots like OpenAI’s will encourage lies the experience of AI as a way to create community. This is what the Barcelona collective Domestic Data Streamers is proposing with the Citizens’ Office of Synthetic Memories, which has just opened its window at the Design Hub Barcelona. The DDS, which has been investigating the use of data for the common good for a decade, proposes using AI to recreate the past experiences of the city’s residents in order to assemble an exhibition that will grow ever richer and become a kind of collective memory of Barcelona.
To take part in the project and, in this way, reconstruct hazy images of one’s own biography, an appointment is required. The author of this text has already requested one.