In 2017, AlphaGo, an AI program from the Google-owned lab DeepMind, made history by defeating Ke Jie, then the world's top-ranked player of the board game Go. Now Demis Hassabis, DeepMind's co-founder and CEO, says his engineers are using techniques from AlphaGo to build an AI system called Gemini that will surpass the model behind OpenAI's ChatGPT.

DeepMind's Gemini, which is still under development, is a large language model similar to GPT-4, the model that powers ChatGPT. Hassabis says, however, that his team will combine this technology with techniques used in AlphaGo, with the aim of giving the system new capabilities such as planning and the ability to solve more complex problems.

“You can think of Gemini as a combination of AlphaGo with the amazing language capabilities of the large models,” Hassabis said during Google's developer conference last month, where the tech giant announced a series of new AI projects.

AlphaGo is based on a technique pioneered by DeepMind called reinforcement learning, in which software learns to tackle difficult problems that require choosing what actions to take, such as in video games or board games, by making repeated attempts and receiving feedback on its performance.
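To make that trial-and-feedback loop concrete, the sketch below shows one of the simplest forms of reinforcement learning: tabular Q-learning on a toy five-cell grid world. The environment, the reward of 1.0 at the goal, and the hyperparameters are illustrative assumptions, not anything DeepMind has described; AlphaGo itself pairs reinforcement learning with deep neural networks and Monte Carlo tree search, which this toy example leaves out.

```python
import random

# Toy 1-D grid world (illustrative assumption): states 0..4, goal at state 4.
# Actions: 0 = move left, 1 = move right. Reward 1.0 on reaching the goal.
N_STATES, GOAL = 5, 4
ACTIONS = [0, 1]

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = next_state == GOAL
    return next_state, (1.0 if done else 0.0), done

# Q-table: estimated future reward for each (state, action) pair.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Nudge the estimate toward the reward plus the discounted best future value.
        best_next = max(Q[next_state])
        Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
        state = next_state

print("Learned Q-values:", [[round(q, 2) for q in row] for row in Q])
```

After a few hundred episodes of repeated attempts, the learned values favor moving right toward the goal, which is all this simplified setup is meant to demonstrate about learning from feedback.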

Gemini is still in the development phase, a process that will take several months and could cost hundreds of millions of dollars.

When Gemini’s development is complete, the company expects it to be Google’s answer to the competitive threat posed by ChatGPT and other generative AI technologies.

Since ChatGPT’s debut, Google has launched its own chatbot, Bard, and introduced generative AI to its search engine and other products. To boost research in artificial intelligence, the company in April merged Hassabis’ DeepMind unit with Google’s main AI lab, Brain, to create what is now Google DeepMind.

Currently, Google DeepMind researchers are working in areas ranging from robotics to neuroscience. In fact, earlier this week the company demonstrated an algorithm capable of learning to perform manipulation tasks with a wide range of different robotic arms.

Learning from physical experience of the world, as humans and animals do, is considered the logical next step for increasing the capabilities of AI: some experts see the fact that language models learn about the world only indirectly, through text, as an important limitation.