A quote attributed to the philosopher Marshall McLuhan (with the invaluable help of his friend Professor John Culkin) is not only not obsolete, but increasingly valid: “We shape our tools and, thereafter, our tools shape us.”

It is as if those two visionaries who met in the 1960s at Fordham University in the Bronx had anticipated the digital revolution that would eventually lead to Artificial Intelligence (AI), perhaps the technological advance with the greatest capacity to modify the behavior of our species.

This week we witnessed a new milestone in this precipitous evolutionary climb: the emergence of an AI chatbot, free for now, that creates the illusion of holding live voice conversations with an intelligent being.

It is easy to imagine that thousands of people around the world have spent these days testing the capabilities of their new friend.

Some, most likely, are longing for a romantic relationship to compensate for the misadventures of real life, in the style of the character played by Joaquin Phoenix in Spike Jonze’s 2013 film Her.

It is not bad news that, in a world ravaged by the epidemic of unwanted loneliness, technological solutions are emerging to make life easier. In this sense, virtual assistants such as OpenAI’s ChatGPT, now powered by GPT-4o, can keep company with people who have no one to talk to, becoming an effective complement to another advance in full expansion: assistive robotics.

The problem, the hidden side, becomes apparent when we broaden the focus and apply what we already know about the disruptive potential of these tools to this new advance. That is, when we openly ask ourselves where the massive use of an assistant designed more to satisfy the user’s desires than to provide truthful, verified information can lead.

You only have to chat for a few minutes with one of these chatbots to realize how easy it is to make them say exactly the kind of things we like to hear, the ideas that reinforce our own ideological positions without anyone contradicting us.

In a society increasingly compartmentalized into the ideological bubbles of social networks, this development of AI could give rise to genuine sectarian bunkers: virtual fortresses inhabited by like-minded individuals who will prefer the applause and camaraderie of fellow believers to continuing to use spaces for shared thought, as the media have traditionally been.

As Éric Sadin warned in The Age of the Tyrant Individual (Caja Negra, 2022), “we are experiencing the advent of a personal resentment that is both isolated and extreme and yet felt on a broad scale.”

For this French techno-critical philosopher, this era of the tyrannical individual entails “the progressive abolition of all common foundations to make room for a swarming of scattered beings who seek (…) to occupy a preponderant position by right.”

At the opposite end of this exacerbation of the individual that chatbots like OpenAI’s will foster lies the experience of AI as a way of creating community. This is what the Barcelona collective Domestic Data Streamers proposes with its Citizen Office of Synthetic Memories, which has just opened its service window at the Disseny Hub Barcelona.

The DDS, who have spent a decade investigating the use of data for the common good, propose to use AI to recreate residents’ experiences of the city’s past and assemble them into an exhibition that will grow ever richer, becoming a kind of collective memory of Barcelona.

To participate in the project, and thus reconstruct hazy images from one’s own biography, you must make an appointment. The author of this text has already done so.