The image, with its vaguely Gothic air, looks like a painting, but it is not. At the center, a dying young woman is watched over by two other women while a man leans against the window with his back to the scene. Titled Fading Away, it is the work of the British photographer Henry Peach Robinson. Produced in 1858, it is the first known photomontage; to achieve it, its author combined images from five different negatives. Trained as a painter, Robinson obviously had an artistic goal, not a documentary one (so to speak of manipulation would make no sense).

A century ago, however, photomontages – whether in their most technically elaborate version or in the cruder cut-and-paste variety – began to be used systematically to falsify reality for political ends. Under Stalin’s merciless dictatorship, Soviet power was not satisfied with purging opponents, dissidents and wayward comrades – by sending them to the gulag or the firing squad – but set out to erase their very memory from history. Stalinism famously devoted itself to systematically removing the image of Trotsky, one of the historic Bolshevik leaders, from all photos of the revolutionary era, before ordering his murder.

With the appearance of the photo editing program Photoshop in 1990, retouching images became child’s play. Or almost. Suddenly you could not only enhance light or color, but also erase and move elements. The era of photo manipulation had arrived. Since then the technology has only improved, and the development of generative artificial intelligence (AI) threatens to take counterfeiting to uncontrollable extremes.

What happened with the botched photo of Kate Middleton’s return to public life illustrates power’s permanent temptation to touch up or sweeten reality. After weeks without news of the Princess of Wales following surgery, the British royal house wanted to put an end to the rumors by releasing an idyllic family picture of Catherine surrounded by her children. But the photo was doctored, and the effect was the opposite of the one intended. The fact that the montage was sloppy and easily detected does not detract from the seriousness of the matter. Next time they will do it right – the means exist – and we won’t find out.

In the United States, in the middle of the campaign for November’s presidential election, the swirl of false images circulating on social networks is beginning to take on worrying dimensions. A BBC investigation has found that supporters of Donald Trump – although there is no evidence that his campaign team is involved – are spreading dozens of AI-generated fake images across the networks in which the Republican candidate appears smiling with groups of African Americans, with captions suggesting he has growing support in the Black community. Some of those spreading the images, identified and contacted by the BBC’s journalists, have hundreds of thousands of followers on social networks. One of the images – in which Trump is seen sitting on a porch with a group of young Black men – was originally created by a satirical website critical of the former president, but was later used with the opposite aim.

The choice of subject is not accidental. The fight for the Black community’s vote will be essential in this second confrontation between Donald Trump and the Democrat Joe Biden. A recent poll by The New York Times and Siena College indicated that in six swing states, 71% of Black voters would vote for the current president this time, whereas in 2020, 92% supported him.

The British organization Center for Countering Digital Hate ran a test in February to check how vulnerable AI image generators are to manipulation for political disinformation. The experiment, focused on the American elections and carried out with ChatGPT, Midjourney, Stability AI and Microsoft’s Image Creator, showed that safeguards could be circumvented in 41% of cases, producing false images such as Trump being arrested by the police or Biden admitted to a hospital. Falsehoods of all kinds were already circulating four years ago, but now it will be much faster and much easier.

In a recent article published in Foreign Affairs, cybersecurity officials from the US Department of Homeland Security warned that generative AI “is a threat to democracy”, since it adulterates reality with astonishing ease. In the coming months, millions of people all over the planet will go to the polls. It will be extremely easy to disseminate false images of politicians in invented situations, and even to create videos in which fictitious statements are attributed to them – in their own voice. Maneuvers of this kind, aimed at discrediting candidates, can then be used to call into question the integrity of the process and the electoral result.

In Indonesia’s most recent presidential election, held on February 14, the Functional Groups Party (Golkar) – which ruled the country between 1971 and 1999 – released a fake AI-generated video in which the late dictator Suharto appeared resurrected, with his face and his voice, asking for votes. It is as if Franco reappeared in a Spanish campaign ad. In such a case, the deception would be very hard to pull off… Or would it?