In the race to bring artificial intelligence (AI) to the mass public through the Internet, Microsoft has gained the advantage over Google. It all came to a head last February, when the Redmond-based company announced that its Bing search engine and Edge browser would integrate OpenAI’s language models, which had been publicly available for months through ChatGPT. The truth, however, is that Google had a project at least as advanced as OpenAI’s.

Google’s reaction did not come until days after its rival’s move, when it introduced Bard, its chatbot based on the Language Model for Dialogue Applications (LaMDA) conversational technology. More than a few experts maintain that the Mountain View company acted with excessive caution, which helped OpenAI and Microsoft take the lead in a field that seems destined to drive the main advances, and investments, of the large technology companies in the short and medium term.

According to The New York Times, two Google employees whose job is to review the company’s artificial intelligence products tried to prevent Google from launching its chatbot, believing that the tool generated inaccurate and dangerous statements. According to the newspaper, Jen Gennai, the director of Google’s Responsible Innovation group, modified the final document to remove their recommendation and play down the potential risks.

This modus operandi is not new at Google. It is worth remembering the case of Blake Lemoine, an engineer whom the search company suspended last summer for claiming in an interview that the LaMDA artificial intelligence chatbot was “sentient.” The engineer was fired a few weeks later. Barely half a year on, the debate he opened about AI’s ability to feel, reason, express moral opinions or even become emotional dominated conversations on social networks and in the news media. These controversies now figure among the concerns of some Western governments, such as those of Italy, Germany and Canada.

In response to The New York Times, Gennai confirmed that she had “corrected inaccurate assumptions” in the analysts’ report and denied that it was the analysts’ role to set a release date or timeline for the technology. Bard was ultimately released in February, although only to a limited number of users. The multinational says it continues to improve its chatbot with the aim of making it the benchmark in the market.

Ten months earlier, ethicists and other employees had raised similar concerns at Microsoft. There were also reports warning of ChatGPT’s potential for disinformation. And just a couple of weeks ago, hundreds of experts and business leaders signed an open letter calling for a pause in, and regulation of, AI development.

Even so, the technology companies do not seem overly concerned with ethical or security issues. In recent months, Microsoft has reduced its ethics and society team to a bare minimum after several rounds of layoffs. Those affected told Platformer that they believed they had been fired because the company was more interested in getting ahead of the competition than in long-term “socially responsible thinking.”

The urgency to lead the AI race crystallized in an internal email seen by The New York Times, in which Sam Schillace, a Microsoft technology executive, wrote that it would be an “absolutely fatal mistake in this moment to worry about things that can be fixed later.”