Politics is supposed to be about persuasion, but it has always been stalked by propaganda. Campaigners dissemble, exaggerate and lie, broadcasting falsehoods from the bald-faced to the white by whatever means are available. Anti-vaccine conspiracies were once spread through pamphlets rather than podcasts. A century before covid-19, anti-vaxxers in the era of the Spanish flu ran a disinformation campaign, sending fake messages purporting to come from the health authorities by telegram (not yet Telegram). Since people are not angels, elections have never been free of falsehood and mistaken belief.

But as the world looks ahead to a crowded calendar of votes in 2024, a new phenomenon is causing great concern. In the past, disinformation has always been created by humans. Advances in generative artificial intelligence, with models that can produce sophisticated text and create realistic images from text prompts, now make synthetic propaganda possible. The fear is that disinformation campaigns will be supercharged in 2024, a year in which elections will be held in countries that are home to a collective population of some 4 billion people, including the United States, Great Britain, India, Indonesia, Mexico and Taiwan. How worried should their citizens be?

It is important to be precise about what generative-AI tools like ChatGPT do and do not change. Disinformation was already a problem in democracies before they came along. The corrosive idea that the 2020 US presidential election was rigged brought protesters to the Capitol on January 6, but it was spread by Donald Trump, Republican elites and the conservative media through conventional channels. In India, activists from the Bharatiya Janata Party (BJP) have spread rumors through WhatsApp threads. Chinese Communist Party propagandists push talking points into Taiwan through apparently legitimate media outlets. None of this requires generative-AI tools.

What might large language models change in 2024? One thing is the quantity of disinformation: if the volume of nonsense were multiplied by 1,000 or 100,000, it might persuade many people to vote differently. A second concerns quality. Hyper-realistic deepfakes could sway voters before the fake audio, photos and videos can be debunked. A third is microtargeting. With artificial intelligence, voters could be inundated with highly personalized propaganda at scale. Networks of propaganda bots would be harder to detect than existing disinformation efforts. Voters' trust in their fellow citizens, which has been declining in the United States for decades, may suffer if people come to doubt everything.

This is worrying, but there are reasons to believe that artificial intelligence is not about to wreck humanity's 2,500-year-old experiment with democracy. Many people think that others are more gullible than they themselves are. In fact, voters are hard to persuade, especially on salient political questions such as whom they want to be president. (Ask yourself what deepfake would change your choice between Joe Biden and Trump.) The multibillion-dollar US campaign industry, which uses human beings to persuade voters, can produce only marginal changes in their behavior.

Tools for producing plausible fake images and text have existed for decades. Generative artificial intelligence may save internet troll farms some work, but it is not clear that effort has so far been the binding constraint on the production of disinformation. The new image-generation algorithms are impressive, but without tuning and human judgment they are still prone to producing pictures of people with six fingers on each hand, which for now keeps personalized deepfakes out of reach. Even if these AI-boosted tactics were to prove effective, they would soon be adopted by many interested parties, and the cumulative effect of such influence operations would be to make social media even more cacophonous and unusable. It is hard to show that mistrust translates into a systematic advantage for one party over another.

Social-media platforms, where disinformation spreads, and artificial-intelligence companies say they are focused on the risks. OpenAI, the company behind ChatGPT, says it will monitor usage to try to detect political influence operations. Big-tech platforms, criticized both for propagating disinformation in the 2016 election and for taking down too much in 2020, have become better at identifying suspicious accounts (though they remain reluctant to arbitrate the truthfulness of content generated by real people). Alphabet and Meta ban the use of manipulated media in political advertising and say they respond quickly to deepfakes. Other companies are trying to forge a technology standard that establishes the provenance of real images and videos.

Voluntary regulation has limits, however, and involuntary regulation carries risks. Open-source models such as Meta's Llama (which generates text) and Stable Diffusion (which creates images) can be used without oversight. And not all platforms are created equal: TikTok, the video-sharing social-media company, has ties to the Chinese government, and its app is designed to promote virality from any source, including new accounts. Twitter (now called X) cut its oversight team after being bought by Elon Musk, and the platform is a haven for bots. The agency that regulates elections in the United States is considering a disclosure requirement for campaigns that use synthetically generated images. That is sensible, but malicious actors will not comply. Some in the United States advocate a system of extreme regulation along Chinese lines, under which artificial-intelligence algorithms must be registered with a government body and somehow embody core socialist values. Such heavy-handed control would erode the lead the United States enjoys in AI innovation.

Technological determinism, which pins all of people's flaws on the tools they use, is tempting. But it is also wrong. It is important to be mindful of generative artificial intelligence's potential to disrupt democracies, but panic is not warranted. Even before the technological advances of the past couple of years, people were quite capable of transmitting all manner of destructive and terrible ideas to one another. The 2024 US presidential campaign will be marred by disinformation about the rule of law and the integrity of elections. But its progenitor will not be something new like ChatGPT. It will be Donald Trump.

© 2023 The Economist Newspaper Limited. All rights reserved 

Translation: Juan Gabriel López Guix