Google has taken a crucial step in its quest to make AI-created content more transparent by announcing the launch of SynthID, a tool that adds digital watermarks to algorithmically generated images, allowing them to be easily identified.
According to a blog post by DeepMind, Google's AI division, SynthID embeds an identifier directly into the pixels of an image, making it imperceptible to the human eye. Despite its subtle presence, the watermark does not affect image quality and can withstand further editing without losing its effectiveness: even after filters, cropping, or format changes are applied, the watermark remains embedded in the file.
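As a rough intuition for how data can live invisibly in pixel values, here is a toy least-significant-bit (LSB) sketch in Python. To be clear, this is not SynthID's method: SynthID's watermark is produced by a learned model and is designed to survive edits, whereas LSB hiding is fragile and breaks under cropping or recompression. All function names below are hypothetical, invented for illustration.

```python
# Toy illustration only: hides a bit string in the lowest bit of each
# pixel value, changing each pixel by at most 1, which is invisible to
# the eye. This is NOT how SynthID works; it merely shows the idea of
# embedding information directly in pixels.

def embed_bits(pixels, bits):
    """Hide a bit string in the least significant bit of each pixel."""
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the lowest bit
    return out

def extract_bits(pixels, n):
    """Read back the first n hidden bits."""
    return [p & 1 for p in pixels[:n]]

# Example: a tiny 8-pixel grayscale "image" and a 4-bit mark.
image = [200, 13, 77, 54, 255, 0, 128, 91]
mark = [1, 0, 1, 1]
watermarked = embed_bits(image, mark)

print(extract_bits(watermarked, 4))  # -> [1, 0, 1, 1], the mark survives
print(max(abs(a - b) for a, b in zip(image, watermarked)))  # -> 1
```

A scheme like this fails the moment the image is resized or re-encoded, which is exactly why SynthID trains a model to spread the watermark robustly across the image instead.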
As AI models like Midjourney and Stable Diffusion generate realistic content that can easily fool non-expert users, the need to mark and track algorithmically generated images has grown. Viral fakes, such as the images of Donald Trump being arrested or of Pope Francis wearing a puffer coat, have shown how easily these creations can spread.
SynthID not only embeds a watermark but can also analyze an image to determine whether it was generated by AI. The tool is based on two deep learning models, one for watermarking and one for identification, which were trained together on a diverse set of images to achieve their effectiveness. DeepMind engineers explain that the identifier is visually matched against the original content to verify its presence.
However, it is important to note that SynthID is not infallible against extreme image manipulations. Despite this, Google will continue to refine the tool, and its possible application could be extended to other types of content, such as audio, video or text.
SynthID is currently available to a limited number of users of Imagen, a model on Google's Vertex AI capable of creating photorealistic images from text descriptions. This digital watermark complements metadata-based identification methods, such as those found in programs like Photoshop.
Google has confirmed its intention to expand the use of SynthID to third-party generative AI models in the near future. The company is committed to continuing the development of this tool to incorporate improvements in future versions, using the feedback from users during the testing phase to refine the models.
Amid discussions about the regulation of artificial intelligence, the identification of content generated by algorithms is a fundamental issue. Governments and organizations are struggling to find systems that can detect synthetic images and texts that could be used to misinform. With the rise of deepfakes and other means of digital manipulation, the need for tools like SynthID becomes essential to maintain the integrity of information and visual content online.