ChatGPT, the fashionable conversational chatbot, knows how to do almost everything: search for whatever we ask, answer all our questions, or write us an essay in the blink of an eye. It is also capable of writing software or solving a mathematical problem. It even dares to give advice, for example, on what strategy to follow to win over a woman.

But its knowledge is not innate; behind it lies laborious learning based on training. This artificial intelligence (AI) system is trained on information found on the web, including articles published by the media. It uses all those texts, bylines included, to respond to users, without any kind of remuneration. It gets them entirely for free, and some companies are starting to dislike that.

In a statement reported by Bloomberg, Jason Conti, general counsel of News Corp’s Dow Jones unit, recalled that “anyone who wants to use the work of Wall Street Journal journalists to train artificial intelligence must obtain the proper license of the rights to do so from Dow Jones.” “Dow Jones doesn’t have that deal with OpenAI,” he pointed out.

The executive criticizes the “misuse” being made of the work of the outlet’s journalists. “We take seriously the misuse of the work of our journalists and we are reviewing this situation,” he added in his statement.

It is not the only media outlet annoyed with ChatGPT. CNN also believes that using its articles violates the network’s terms of service, a person with knowledge of the matter told Bloomberg. The company wants to meet with OpenAI to resolve the issue as soon as possible. Other outlets fear that the chatbot could spread misinformation. In recent weeks, publications such as CNET and Men’s Journal have been forced to correct AI-written articles that were riddled with errors.

The problem came to light after computational journalist Francesco Marconi posted a tweet explaining that the chatbot had based one of its responses on various media articles. “ChatGPT is trained on a large amount of news data from major sources that feed its AI. It is not clear if OpenAI has agreements with all of these publishers. Extracting data without permission would break the publishers’ terms of service,” he noted.

Marconi asked the chatbot for a list of news sources it had used. In total, it named some twenty outlets, including Forbes, Business Insider, Reuters, the Financial Times and The New York Times.

AI has several open fronts. The latest involves ChatGPT and its indiscriminate use of articles published by the media to answer users’ questions. But the popular chatbot has also drawn criticism in recent hours for contributing to the writing and plagiarism of literary works. In fact, the American magazine Clarkesworld has had to stop accepting new manuscripts due to an avalanche of spam submissions.

In January, Getty Images, one of the world’s largest photo agencies, took legal action against the makers of Stable Diffusion, a generative artificial intelligence capable of producing high-quality digital images, claiming they had “infringed intellectual property rights, including copyright.”