Platforms such as Midjourney and Stable Diffusion have fueled an explosion of images generated by Artificial Intelligence (AI), content that can be dangerous because it depicts moments or scenes that never occurred. To avoid confusion, starting this summer Google will add new tools to help users determine whether an image is real or was created with AI.
This feature, called ‘About this image’, will work much like the ‘About this result’ panel that appears next to links in regular Google search results.
As of this summer, first in the US and later in other countries, users will be able to see relevant details about an image: when it was first indexed, where it originated, and the web pages where it first appeared or has been seen since.
“You can find this tool by clicking the three dots on an image in Google Images results, searching with an image or screenshot in Google Lens, or by swiping up in the Google app when you’re on a page and come across an image you want more information about,” the company explains on its blog.
Later, this option will also be available by right-clicking or long-pressing an image in Chrome, both on desktop and mobile.
In addition to this tracking system, Google will also offer the option of adding a label indicating that an image was generated by AI, as platforms such as Midjourney and Shutterstock already do.
According to a 2022 Poynter study, 72% of people believe they encounter misinformation on a daily or weekly basis. With this announcement, the company aims to bolster user trust: “We continue to build easy-to-use tools and features in Google Search to help you spot misinformation online, quickly evaluate content, and better understand the context of what you’re seeing.”