Google has decided to address the threat to its business posed by proprietary artificial intelligence (AI) apps such as OpenAI’s ChatGPT, and has boosted many of its flagship products with powerful generative AI features and tools, plus some new ones that compete directly with applications from Sam Altman’s company.

The company announced them on Tuesday at the Google I/O developer conference, and the most important ones, aimed precisely at protecting its business, are those that affect its search engine. Faced with the risk of users abandoning Google Search in favor of ChatGPT, users of the search engine in the United States will have new options at their disposal starting this Tuesday.

These are AI Overviews: AI-generated summaries created by its Gemini model, which appear at the top of the search results, alongside the traditional link-based ones. Google’s head of Search, Liz Reid, said AI Overviews would be available to “more than a billion people” by the end of the year.

In this way, Google intends to respond to more complex and specific searches that its search engine could not handle until now, but that the paid versions of ChatGPT could; for the time being, Google intends to keep this new function free. For example, it will now be possible to ask the search engine to find the best Pilates studios less than a 20-minute walk from home – or from any location we specify – and to show us their rates and whether they have any offers. AI Overviews can present a summary with the requested information, saving us the work of scrolling and clicking through the various links. Another example: we can also ask it to help us organize any event in our lives, from a dinner at a restaurant to celebrate a birthday – with various options for restaurant types, prices, cuisines and group menus – to a dinner at home – with recipes that take more or less time to prepare, tailored to our preferences and those of our guests – or even to help us plan a trip anywhere in the world. In addition, the user can always choose between three levels of summary, from the simplest to the most extensive.

Google has also improved Lens – its image-based search – which can now, for example, be shown a photo or a video of a device that isn’t working and asked why, so that it shows us a solution. Lens can detect the product’s model and create a summary of the steps needed to solve the problem.

The technology also brings AI to the Google Photos application, where it will no longer matter so much that our photos are disorganized: we can now ask it to select all the photos from a given place that meet a certain requirement – for example, that a particular person appears in them – and to choose the best one from that whole selection.

Google also revealed a new artificial intelligence assistant, still in development under the tentative name Project Astra. The company showed off a preliminary version of the voice tool, which can use a smartphone’s camera to verbally identify locations, read and explain programming code, and create alliterative sentences [Three sad tigers eat wheat in a wheat field]. During the conference, the assistant interacted by voice with a Google employee and, using the camera, was able to pinpoint the employee’s location from nothing more than the view through a window: the King’s Cross area of London, where Google has its AI unit.

Finally, Google also introduced Veo, a cinematic-quality text-to-video generator created by DeepMind that aims to compete with Sora, a tool that does exactly the same thing and that OpenAI introduced in February. With Veo, users will be able to generate 1080p videos longer than one minute, which is the limit of its rival. Veo does not yet have a release date.