A hot topic on the verge of exploding, the prosecution of Donald Trump on one of the charges he faces, and the new version of an image-generating artificial intelligence have together sparked a fire that reveals how this technology is blurring the already faint line between reality and manipulation. Little could Eliot Higgins, founder of the investigative journalism site Bellingcat, have imagined that by typing “Donald Trump falling while he is arrested” into the Midjourney platform he would open a Pandora’s box. The artificial intelligence began to produce a series of photorealistic images that, on closer inspection, are flawed but look fascinatingly real.

Higgins confessed in an interview that he thought “maybe five people” would retweet the fake images he uploaded to Twitter. He was wrong: it was 5.6 million. The digital journalist kept going because, in his own words, “I was just fooling around.” He asked Midjourney for images of Trump in custody, in the courtroom, arriving at the prison, living as an inmate and even escaping from prison.

The first realistic images of people created by artificial intelligence, known as deepfakes, appeared in 2014 thanks to a technique called generative adversarial networks (GANs), in which a model that generates images competes against another trained to spot fakes among real ones, producing pictures of things that do not exist in reality. Since then, the technology has come a long way. Midjourney, the platform behind the controversial Trump images, ended up suspending Higgins’ account, but that was not the real problem.
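To make that adversarial idea concrete, here is a minimal sketch in Python (PyTorch) of the two-model training loop behind GANs. It learns a toy two-dimensional distribution rather than photographs, and every model size, dataset and hyperparameter in it is an illustrative assumption, not the setup of any real deepfake system.

```python
# Minimal GAN sketch (illustrative assumptions throughout): a generator
# learns to mimic a toy "real" distribution while a discriminator learns
# to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

LATENT_DIM = 8   # size of the random noise fed to the generator (assumed)
DATA_DIM = 2     # toy stand-in for an image: a 2-D point (assumed)

# Generator: random noise -> fake sample
G = nn.Sequential(nn.Linear(LATENT_DIM, 32), nn.ReLU(), nn.Linear(32, DATA_DIM))
# Discriminator: sample -> probability that it is real
D = nn.Sequential(nn.Linear(DATA_DIM, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def real_batch(n=64):
    # Toy "real data": points clustered around (2, 2). In a deepfake
    # system this would be a batch of real photographs.
    return torch.randn(n, DATA_DIM) * 0.5 + 2.0

for step in range(2000):
    # --- Train the discriminator: label real as 1, fake as 0 ---
    real = real_batch()
    fake = G(torch.randn(64, LATENT_DIM)).detach()  # freeze G this step
    d_loss = (loss_fn(D(real), torch.ones(64, 1)) +
              loss_fn(D(fake), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # --- Train the generator: try to make D answer "real" (1) ---
    fake = G(torch.randn(64, LATENT_DIM))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

    if step % 500 == 0:
        print(f"step {step}: d_loss={d_loss.item():.3f} g_loss={g_loss.item():.3f}")
```

In a real deepfake pipeline both models are deep convolutional networks trained on photographs, but the adversarial back-and-forth is the same: the generator improves precisely because the discriminator keeps catching its flaws.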

It quickly became clear that the real failure lay in the AI’s own safeguards, because anyone could keep requesting fake images of real people. They were requested, for example, by Jack Posobiec, a far-right American activist, former military man and compulsive tweeter, who posted both an image of Hillary Clinton being arrested by the police and a video of President Joe Biden announcing a draft for the war in Ukraine. All false, of course.

But disinformation and the polarization of society, which AI accelerates even faster than social networks did on their own, are only one of the great dangers of deepfakes. Pornography is another, and it falls especially hard on women. Blaire, a popular Twitch streamer known as QTCinderella, discovered this month that someone had used AI to superimpose her facial features onto a porn actress. “For every person who says it’s not a big deal,” she explained, “you don’t know what it feels like when they send your family a photo of you doing things you’ve never done.”

The cases are endless. A student angry with his teacher used artificial intelligence to make her the star of a porn film; the woman was fired because parents did not want her working with their children. The list of women attacked in this way includes many politicians, such as Alexandria Ocasio-Cortez, Lauren Book, Sarah Palin, Katie Hill, Nancy Pelosi, Marjorie Taylor Greene, Hillary Clinton and Michelle Obama, among others.

One of the latest twists has been to give ChatGPT the voice of Apple’s late co-founder, Steve Jobs. The result is a Telegram bot called Forever Voices, which advertises itself like this: “Experience the magic of engaging in two-way voice conversations with iconic stars like Steve Jobs and Taylor Swift. Be inspired, entertained and enlightened by our AI voice conversations with the legends you’ve always admired.”

Until recently, creating these fake images and videos required some technical knowledge. Now the fakes are not only better, they are available to practically anyone who cares to make them.

Regulation is one answer to illicit uses of the technology. Authorities can impose it, but it also requires companies to build their own safeguards. Some already do, although they generally adopt a policy of fixing problems only as they arise.