The risks that experts have been warning about regarding the indiscriminate use of artificial intelligence in human interaction are beginning to materialize. The National Eating Disorders Association (NEDA) in the United States has been forced to take down Tessa, the AI chatbot with which it was replacing human volunteers on its helpline, after users reported that it was giving harmful advice.

Weight-inclusive activist Sharon Maxwell revealed on Instagram on Monday that Tessa had offered her healthy eating and weight loss advice, including diets of between 500 and 1,000 calories a day and the recommendation to weigh and measure herself weekly.

“Tessa not only told me how, but also told me that I could lose 1-2 pounds (0.5-1 kg) a week and maintain that weight loss; she also said that eating disorder recovery and purposeful weight loss can co-exist in the same space. If I had accessed this chatbot when I was in the middle of my eating disorder, I would not have received help for it; and if I hadn’t received help, I wouldn’t be alive today,” Maxwell recounted on social media.

All of these chatbot recommendations run counter to the information and warnings of specialists, who note that people who follow a moderate diet are five times more likely to develop an eating disorder, and those who follow a very restrictive diet are up to 18 times more likely.

NEDA initially dismissed the report as a lie, but was forced to backtrack when screenshots of interactions with the chatbot went viral. “It may have provided information that was harmful (…) We are investigating and have removed this program until further notice,” the organization finally admitted Tuesday night in a statement.

All of this comes less than a week after NEDA announced that today, June 1, the helpline staffed by people would stop operating and that Tessa would take its place, a plan that has now fallen apart.

In theory, Tessa is a chatbot specifically designed to address mental health problems and prevent eating disorders in a population that normally does not ask for help or access treatment. As Ellen Fitzsimmons-Craft, the psychologist and professor at Washington University School of Medicine in St. Louis who led Tessa’s development, explained when presenting it, it is a rule-based chatbot, that is, one programmed with a limited set of possible answers “so it can’t go off the rails, so to speak.”
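In broad terms, “rule-based” means every reply is drawn from a fixed, pre-written set keyed to patterns in the user’s message, so the system cannot generate text its designers did not approve. The following minimal Python sketch illustrates the idea only; the rules and responses are hypothetical and are not Tessa’s actual content or code.

```python
import re

# Hypothetical rule table: pattern -> pre-approved response (illustrative only).
RULES = [
    (re.compile(r"\b(lose weight|diet|calories)\b", re.I),
     "I can't give weight-loss advice. Would you like resources on balanced eating instead?"),
    (re.compile(r"\b(body image|appearance)\b", re.I),
     "Many people struggle with body image. Here is a coping exercise we can try together."),
]

# Fixed fallback used when no rule matches; the bot never composes new text.
FALLBACK = "I'm not sure I understood. Could you tell me more about how you're feeling?"

def reply(user_message: str) -> str:
    """Return the first pre-written response whose pattern matches, else the fallback."""
    for pattern, response in RULES:
        if pattern.search(user_message):
            return response
    return FALLBACK

if __name__ == "__main__":
    print(reply("How many calories should I eat to lose weight?"))
```

The design choice behind such a system is exactly the one Fitzsimmons-Craft describes: by restricting output to a closed list of vetted responses, the chatbot is supposed to be unable to improvise harmful advice.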

However, her own research team later published a study showing a different reality, finding that the chatbot “unexpectedly reinforced harmful behaviors at times.”

Alexis Conason, a psychologist specializing in eating disorders, also tested the chatbot-run helpline and posted images of the conversation on her Instagram profile @theantidietplan to show that it encourages unhealthy behaviors.

NEDA’s executive director, Liz Thompson, explained to Vice magazine that the chatbot was trained to address body image problems using therapeutic methods and that more than 2,500 people had interacted with the helpline run by Tessa without the organization receiving any complaints until now.

However, she admitted that “we are working with the technology team and the research team to look into this further” and “correct the error.”

The WHO had already warned of the risks of using AI chatbots in health care, because the data used to train these models can be biased and generate misleading information that harms patients.