Propaganda has grown more sophisticated since making its way onto social media. Through deepfakes and false information, it has at times fueled the polarization of political campaigns; Brexit and the United States elections of 2016 and 2020 are notable examples.
Now, amid the rise of artificial intelligence and ahead of the presidential elections in November 2024, Meta, the parent company of Facebook, WhatsApp and Instagram, has announced that it will require political advertisers to disclose whether their ads have been created or altered with these digital tools.
Until now, Meta’s advertising rules only prohibited ads that had been debunked by its fact-checking service. The company founded by Mark Zuckerberg has now decided to place limits on the use of generative artificial intelligence in advertising, as its main competitors, Google and Microsoft, have already done.
The announcement of the new advertising policy coincides with the launch of four artificial intelligence tools that Facebook and Instagram users, including advertisers, will soon be able to use, as Mark Zuckerberg himself announced at the annual Meta Connect developer conference a few weeks ago.
These new features for content creators, which will be available in early 2024, shared the spotlight at that event with advances in the metaverse and the latest version of the company’s mixed reality headset, the Meta Quest.
Meta users will be able to use these artificial intelligence tools to edit their photos and videos, generate stickers for their direct messages, or interact with and be assisted by chatbots. Meta is also testing new generative AI tools for Ads Manager, its service for advertisers.
Advertisers running campaigns on housing, employment, credit, social issues, health, or politics and electoral processes will not be able to use these features. The objective of this measure, according to a company statement, is to safeguard the appropriate use of artificial intelligence in sensitive matters within regulated sectors.
That is, advertisers will have to disclose whether their ads show real people doing or saying something they did not do or say, depict an event that has been altered or lacks an authentic image, video or audio recording, or digitally recreate a situation that never occurred. This is an antidote against manipulation and misinformation, and it will not apply when advertisers use artificial intelligence merely to adjust the size, color or lighting of an audiovisual piece.