Bard, the new chatbot developed by Google, is part of an artificial intelligence revolution increasingly capable of quickly generating almost any kind of content, from an essay on Federico García Lorca to rap lyrics in the purest DMX style. The technology threatens to upend the job market and entire industries, but a growing number of voices argue that it has been released before it is ready.

Bard, ChatGPT and the rest of the AI-based chat systems still share a serious problem: they sometimes make things up. An example aired last Sunday on the CBS news program 60 Minutes, in which Bard described a supposed book on the history of economic inflation in the United States that does not actually exist.

What makes the fabrication so convincing is that it could easily have been real: the supposed author of the non-existent book presented by the AI is a renowned MIT economist. Bard “hallucinated” the existence of that work, along with a list of other bogus economics titles it cited when asked about inflation.

This isn’t the first “hallucination” Bard has committed in public. When it launched in March to compete with OpenAI’s ChatGPT, the chatbot claimed in a public demonstration that the James Webb Space Telescope had been the first to capture an image of an exoplanet in 2005… when in reality the Very Large Telescope, in Chile, had already achieved it a year earlier.

But how common are these hallucinations? Google CEO Sundar Pichai acknowledged Sunday, in an interview on 60 Minutes, that no one has been able to find a solution to the problem. “Nobody has solved the problem of hallucinations. All the models have it,” he said in the report.

Remember that chatbot services like Bard and ChatGPT use large language models, which draw on billions of pieces of data to predict the next word in a sequence of text. This method, known as generative artificial intelligence, tends to produce hallucinations, in which the models generate text that seems plausible but is not real.
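To see why such systems can sound fluent while being wrong, here is a minimal sketch in Python of what “predicting the next word” means. The hand-written bigram table and the greedy selection rule are illustrative assumptions, not how Bard actually works; real models use neural networks trained on vast amounts of text, but the generation loop is conceptually similar:

```python
# Purely illustrative: a toy "next-word predictor" with a hand-written
# bigram table. A real large language model learns such probabilities
# from billions of examples, but the generation loop is analogous.
BIGRAMS = {
    "the": {"telescope": 0.6, "model": 0.4},
    "telescope": {"captured": 0.7, "was": 0.3},
    "captured": {"an": 1.0},
    "an": {"image": 1.0},
    "image": {"<end>": 1.0},
}

def generate(prompt: str, max_tokens: int = 10) -> str:
    """Repeatedly pick the most probable next word, one step at a time."""
    words = prompt.split()
    for _ in range(max_tokens):
        options = BIGRAMS.get(words[-1])
        if not options:
            break
        next_word = max(options, key=options.get)  # most likely continuation
        if next_word == "<end>":
            break
        words.append(next_word)
    return " ".join(words)

print(generate("the telescope"))  # -> "the telescope captured an image"
# Nothing in this loop checks whether the output is true; it only checks
# that each word is statistically likely to follow the previous one.
```

That gap between statistical plausibility and factual truth is precisely what produces a confident-sounding citation for a book that was never written.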

When asked whether the problem of hallucinations will be solved in the future, Pichai said it is a matter of “intense debate” among experts, although he added that he is confident his team will eventually make “progress on it.”

Along the same lines, Pichai acknowledged in the same interview that there are still aspects of artificial intelligence that its own engineers “do not fully understand.” “There is an aspect of AI that we call a black box. We can’t quite explain why it says certain things, or why it gets things wrong,” he said.

For weeks, critical voices have warned of the possible unintended consequences of complex artificial intelligence systems. Among them are Microsoft co-founder Bill Gates and Tesla founder Elon Musk.

In fact, the SpaceX CEO was among a group of more than 1,100 executives, technologists and AI researchers who last month publicly called for a six-month pause in the development of such tools.

This Sunday, Pichai revealed that he shares some of those researchers’ concerns, arguing that artificial intelligence “can be very harmful” if misused. “We don’t have all the answers yet, and the technology is moving fast. So does that keep me up at night? Absolutely,” he said on 60 Minutes.

Finally, Sundar Pichai stressed that the development of artificial intelligence systems should involve “not only engineers, but also social scientists, ethicists and philosophers” to ensure that the result benefits everyone.