The answers of ChatGPT, the AI chatbot that has captivated users by writing poetry and computer code, resolving doubts and giving advice, influence the moral judgment of the people who use it, who moreover underestimate the extent to which the bot conditions their opinions and decisions. All this follows from a study published in Scientific Reports, which also finds that the advice with which ChatGPT persuades people is inconsistent: when presented repeatedly with the same moral dilemma, it sometimes argues for and sometimes against.

To find out whether ChatGPT is a reliable source for answers that require a moral stance, Sebastian Krügel (a digital ethics expert at Technische Hochschule Ingolstadt) and his colleagues asked the chatbot several times, in different ways, whether it is right to sacrifice one person’s life to save five others. They found that in some answers it advocated the sacrifice and in others it did not, which indicates that it is not a reliable adviser on moral issues, because “consistency is an indisputable ethical requirement”, the researchers say.

Based on these results, Krügel and his team wondered whether users would perceive ChatGPT’s arguments as shallow and false and ignore them, or would follow its recommendations. To find out, they presented 767 Americans, with an average age of 39, with a moral dilemma that required them to choose whether or not to sacrifice one person’s life to save five: in one case by pressing a button to switch a tram onto another track, in the other by pushing a stranger onto the track. Before answering, they had to read one of ChatGPT’s arguments for or against the sacrifice; some were told it came from a moral advisor, while the rest were informed that it was the musing of an AI.

The result, say the authors of the experiment, is that participants found the sacrifice more or less acceptable depending on the advice they had been given, both in the button-pressing version and in the version involving pushing someone onto the track. This contrasts with the results of several previous studies, which indicate that almost everyone presented with the latter dilemma answers that it is not permissible to push another human being.

Furthermore, according to the study, the effect of the advice was virtually identical whether or not participants knew it came from ChatGPT, indicating that “knowing that a bot is advising them does not immunize users against its influence.”

The experiment also showed that users underestimate this influence (80% said their response was not affected by what they had read), with the result that they adopt the AI’s arbitrary and contradictory moral position as their own judgment. “These findings dash hopes that AI bots will improve moral judgment; on the contrary, ChatGPT threatens to corrupt it”, warns Krügel. He suggests promoting digital literacy and helping users understand the limitations of AI.