Do you know that the wildfires that devastated Hawaii last summer were caused by a secret “weather weapon” being tested by the US military, and that US NGOs are spreading dengue fever in Africa? That Olena Zelenska, the Ukrainian first lady, went shopping in New York and spent $1.1 million on Fifth Avenue? Or that Narendra Modi, Prime Minister of India, is supported in a new song by Mahendra Kapoor, an Indian singer who died in 2008?
All of these stories are, of course, false. They are examples of disinformation: falsehoods intended to deceive. Such lies are spread across the planet through increasingly sophisticated campaigns, which use artificial intelligence (AI) tools and complex networks of social media accounts to create and share disturbingly convincing photos, videos and audio that mix reality and fiction. In a year in which half the world is holding elections, this fuels fears that technology will make disinformation impossible to combat and mortally wound democracy. How worried should we be?
Disinformation has existed for as long as arguments have had two sides. Ramses II did not win the Battle of Kadesh in 1274 BC. It was, at best, a draw; but you would never guess that from the monuments the pharaoh built in honor of his triumph. Julius Caesar’s account of the Gallic Wars is as much political propaganda as historical narrative. The era of the printing press brought no improvement. During the English Civil War of the 1640s, controls on the press collapsed, prompting widespread concern about “false and defamatory pamphlets.”
The Internet has greatly aggravated the problem. False information can be distributed cheaply on social media, and AI makes it cheap to produce as well. Much of this disinformation is murky in origin: it is planted and spread through a network of social media accounts and websites. The Russian campaign against the wife of the president of Ukraine, for example, began as a video on YouTube before moving to various African fake-news websites and being amplified by other websites and social media accounts. The result acquires a veneer of verisimilitude.
Amplifier accounts build audiences by posting about football or the British royal family; having won their followers’ trust, they then slip in disinformation. Much of the research on disinformation tends to focus on a specific topic, on a specific platform, in a single language. Yet most campaigns work in similar ways. The techniques used in Chinese disinformation operations to discredit South Korean companies in the Middle East, for example, closely resemble those used in Russian-led efforts to spread falsehoods in Europe.
The objective of many operations is not necessarily to favor one political party over another. Sometimes the goal is simply to contaminate the public sphere, or to sow distrust in the media, in governments, and in the very idea that the truth can be known. Hence the Chinese fables about weather weapons in Hawaii, or Russia’s attempts to promote conflicting narratives to obscure its role in the downing of a Malaysian passenger plane.
All of this raises fears that technology, by making misinformation invincible, will end up threatening democracy itself. However, there are ways to minimize and manage the problem.
Encouragingly, technology is a force that serves both good and evil. AI makes the production of disinformation considerably cheaper, but it can also help with its monitoring and detection. Even as campaigns become more sophisticated, and each disseminating account varies its language enough to be credible, AI models are capable of detecting stories that appear similar. Other tools can recognize suspicious videos by identifying spoofed audio or looking for signs of real heartbeats, as revealed by subtle variations in skin color on a person’s forehead.
Better coordination can also help. In some ways, the situation resembles climate science in the 1980s, when meteorologists, oceanographers, and earth scientists each saw something happening but could only see part of the picture. Only when they pooled what they knew did the true magnitude of climate change become clear. Likewise, academic researchers, NGOs, technology companies, media outlets and government agencies cannot tackle the problem of disinformation alone. If they coordinate, they can share information and detect patterns, enabling technology companies to label, demote or remove misleading content. For example, Facebook’s parent company Meta shut down a disinformation operation in Ukraine in late 2023 after receiving a tip from Google.
A deeper understanding, however, also requires better access to data. In today’s world of algorithmic feeds, only the technology companies are in a position to know who reads what. Under US law, these companies are not required to share data with researchers. Europe’s new Digital Services Act, by contrast, mandates data sharing and offers a possible model for other countries. Companies worried about divulging sensitive information could let researchers submit programs for them to run internally, instead of handing the data over to the researchers.
Such coordination will be easier to carry out in some places than in others. Taiwan, for example, is considered the benchmark in the fight against disinformation campaigns. It helps that the country is small, trust in the government is high, and the threat from a hostile foreign power is clear. Other countries have fewer resources and weaker trust in institutions. In the United States, unfortunately, political polarization means that coordinated attempts to combat disinformation have been presented as evidence of a vast left-wing conspiracy to silence right-wing voices online.
The dangers of disinformation must be taken seriously and studied carefully. But it is worth remembering that those dangers remain uncertain. There is so far little evidence that disinformation alone can swing the outcome of an election. For centuries there have been people who spread false information, and people who wanted to believe it. Yet societies have usually found ways to cope. Disinformation may be taking a new and more sophisticated form today, but it has not yet proved an unprecedented or unassailable threat.
© 2024 The Economist Newspaper Limited. All rights reserved
Translation: Juan Gabriel López Guix