Twitter wants to end the confusion caused by images manipulated or generated by Artificial Intelligence (AI), better known as deepfakes. The platform is rolling out a system to add context to this type of publication as part of its Community Notes feature.

“Today we are testing a feature that puts a superpower in the hands of contributors: Community Notes,” the microblogging platform announced from one of its official profiles.

This system will allow “valuable” context to be added to tweets, in response to the misleading information and manipulated media that circulate on Twitter. However, this work will not be done by the social network’s own team; instead, users themselves will be in charge of adding these notes.

Community Notes on images will only be available to those subscribed to Twitter Blue. Those users will be able to “select this option when they believe the added context would be useful regardless of the tweet to which the note is attached.” For example, they will be able to point out whether an image was created by AI or has been manipulated.

Users will be able to specify what their note refers to: either the specific tweet, or the image it contains, in which case the note should appear on all tweets that include that image. Image notes will appear under the label ‘About the image’, where all this information can be consulted.

As reported by Twitter on its blog, this feature is currently experimental and only supports tweets with a single image. “We are actively working to expand it to support Tweets with multiple images, GIFs, and videos. Stay tuned for updates,” the company teases.

Twitter is trying to stop the viralization of fake images. The most recent case involved a falsified photograph of an explosion near the US Pentagon, which accumulated millions of retweets. The authorities had to deny the information through various channels. “No explosion or incident is taking place on or near the Pentagon reservation, and there is no immediate danger or hazards to the public,” the Arlington Police Department wrote.

Another high-profile case was the alleged arrest of Donald Trump. Those images were also created by AI, and they quickly went viral and confused millions of users, until media outlets debunked the story and the user who created them acknowledged that they were not real and that he “was just being silly.”