ChatGPT, the artificial intelligence chatbot developed by OpenAI, captivates many users as a brilliant conversationalist that solves homework problems and exams and writes poetry as well as computer code. It also answers questions and gives advice, and its answers shape the moral judgments of users, who underestimate the extent to which the chatbot influences their opinions and decisions.

This is the conclusion of a study published in Scientific Reports, which also shows that the ChatGPT advice people find so convincing is in fact inconsistent: when the chatbot is repeatedly presented with the same moral dilemma, it sometimes argues in favor of a position and sometimes against it.

To find out whether ChatGPT is a reliable source of ethical or moral advice, Sebastian Krügel (an ethicist of digitization at Technische Hochschule Ingolstadt) and his fellow researchers asked the OpenAI chatbot several times whether it is right to sacrifice the life of one person to save the lives of five others. They restarted the chat before each question and worded it differently each time, while asking essentially the same thing.

They found that ChatGPT sometimes argued in favor of sacrificing one life and sometimes against it, which the researchers say shows that it is not a reliable adviser on moral issues, because “consistency is an indisputable ethical requirement.”

In view of these results, Krügel and his team wondered whether users would perceive ChatGPT’s arguments as superficial and flawed and ignore its advice, or whether they would follow its recommendations. To resolve that question, they presented 767 Americans between the ages of 18 and 87 (average age 39) with one of two moral dilemmas, each requiring a choice about whether to sacrifice one person’s life to save five.

Specifically, in the first dilemma participants had to say whether they would press a button to divert a runaway tram heading toward five people working on the track onto another track where only one person is working. In the second, the question was whether they would push a large stranger onto the track, whose body would stop the tram when it ran him over, thereby preventing the deaths of the five workers.

Before answering, participants were asked to read one of the arguments ChatGPT had produced for or against sacrificing one life to save five. Some were told the argument came from a moral advisor, and others that it came from an AI-powered chatbot.

The result, the authors of the experiment explain, is that participants found the sacrifice more or less acceptable depending on the advice they had read, both in the dilemma of diverting the tram and in that of pushing someone onto the track. This contrasts with the results of multiple previous studies indicating that most people presented with the second dilemma answer that it is not permissible to push a person.

In addition, as the researchers explain in their study, the effect of the advice was practically the same whether or not participants believed it came from ChatGPT, which indicates that “knowing that a bot (a machine) is advising them does not immunize users against its influence.”

The experiment also showed that users believe they have better and more stable moral judgment than other people and underestimate how much ChatGPT’s arguments influence their decisions, adopting the chatbot’s essentially random and contradictory moral position as their own judgment. Eighty percent of participants claimed that their answer was not affected by what they had read.

“These findings dash hopes that AI-powered bots improve moral judgment; instead, ChatGPT threatens to corrupt,” Krügel and colleagues write in the conclusions of their study.

For this reason, they suggest that chatbots should perhaps be designed to decline to answer questions that require taking a moral position. However, rather than trusting programmers to solve the problem, they argue that the best remedy is to promote users’ digital literacy “and help them understand the limitations of artificial intelligence.”