One of the challenges for those who create content on social networks is keeping their mental health intact. Their exposure to the opinions of others, praise and attacks alike, is enormous. Sometimes unbearable.

But now another danger arrives, one that could further devalue the figure of the influencer: being used like Truman Burbank in The Truman Show, a remote-controlled face deployed to increase advertising revenue.

This is not the future. It is already happening in China, as MIT Technology Review has reported: influencers are being cloned with AI as deepfakes so that they can broadcast content 24 hours a day. Disturbing? There is more.

Many public figures lend their image or voice to advertising campaigns, and even appear synthetically in productions such as video games and movies. But it is one thing for Disney to do this and quite another for anyone next door to do it. The fact that producing photorealistic digital images of someone is now so easy and cheap confronts us with a new and worrying horizon.

On Tinder it is already hard to tell whether a portrait has been retouched. But the manipulations are no longer the grotesque self-portraits we generated with Facetune just five years ago; now they are hyperrealistic images. Imagine what it could mean if deepfakes of this kind ran rampant on TikTok, YouTube and, most dangerous of all, WhatsApp.

For now, this is only half possible. The influencers cloned with AI in China cannot be shown full-body: it remains a huge challenge to synthetically generate a character doing anything in any environment.

Things are much simpler when it comes to cloning a "talking head", as the shot in which we see a person only from the waist up is known. That constraint simplifies the task enormously.

We asked Johan Bolin, Chief Business Officer of the audiovisual content creation company Agile Content, by email whether he considers deepfakes a valid tool for commercial campaigns. "Yes, they are, if you want to adapt the content to specific audiences or narratives. Editing content, even improving it, is a common practice, and AI opens up a new dimension, but it carries risks," says the expert.

Bolin explains that “perhaps the biggest threat is that AI will be used to create deepfakes that tell fake stories and news, thereby influencing opinions. This is a particularly significant threat to the media industry.”

This reflection brings to mind influencers who talk about politics, directly or indirectly. If they lend their image to be synthetically generated, as in China, they run the risk of having messages put in their mouths that they never uttered and may not even agree with.

Some may even be forced to accept this for purely economic reasons. It is a dangerous game whose beginning is known but whose end is not. What happens when AI is eventually asked to generate the influencer's speech as well?

Johan Bolin explains that "one could argue that, at least in the short term, AI is probably not going to be very unique or innovative, so the best differentiation against AI is to be unique and innovative. However, this could change as AI evolves."

José Antonio Pinilla is chairman and CEO of Asseco Spain Group, a digital business services company whose clients now rank cybersecurity as their main concern. We asked him by email whether deepfakes can be considered a new type of malware.

Pinilla agrees and explains that "we must have cybersecurity and digital identity protection systems in place that allow possible deepfakes to be detected in time (…) with this type of content, there must be clear evidence that the content is not organic, and I have no doubt that this will be the case in the future."

It now remains to be seen what measures will be taken to regulate these practices in the field of information, and even in labor relations. Should an influencer be paid while their double is on screen? That and other big questions remain open.