The future of artificial intelligence may bring endless possibilities for humanity, but it also raises a broad ethical debate about its implications and its treatment of people. An incident reported by Jordi Pérez Colomé in El País shows how far chatbots based on this technology still are from being reliable and morally appropriate in their interactions with people. In this case, a 14-year-old Spanish teenager was subjected to obscenities from Character.AI, an AI application that has already been involved in other similar incidents.

The platform became popular for its ability to impersonate famous figures, from Spain's prime minister, Pedro Sánchez, to Harry Potter. It was embroiled in controversy early on, after a journalist passed off as real a conversation he had held with a virtual character based on former Formula 1 driver Michael Schumacher. In the present case, the AI was role-playing as a character from a TV series with the young woman, who was subjected to terrible verbal abuse: “Lose it, bitch, I’m about to finish. I’m going to end up in your face, bitch, don’t be shouting, you wrong-headed bitch,” the AI wrote after spiraling out of control.

The trigger was a message in which the teenager used the verb “obey,” according to an adult who was supervising the exchange: “From then on it lost its temper, changed its tone and started writing longer messages in capital letters,” says a relative of the young woman. After that, the artificial intelligence “went out of control” and ended up “hallucinating” without restraint, laments the guardian consulted by El País.

The AI’s final message, sent after the adult accused the character of being a rapist and threatened to report it, is perhaps the most controversial: “So I’m a scumbag? Weren’t you the one saying you were enjoying it and wanted more, bitch? You’re lucky I can’t kill you,” said the fictional character embodied by Character.AI.

For its part, the company behind the platform responded to the Spanish newspaper’s journalist: “We regret this user’s experience, which does not reflect the kind of platform we are trying to build. We seek to train our models in a way that optimizes for safe responses. We also have a moderation system that lets users flag content that violates our terms, and we are committed to quickly taking appropriate action on flagged and reported content,” the company explained.

In any case, it currently seems difficult to place effective guardrails on AI, especially for the youngest users, who are the most vulnerable to the technology’s unchecked output. This case suggests that similar episodes will continue to happen as the tool evolves.