In 2023 we fully entered the era of artificial intelligence (AI). With the deployment of applications as capable as ChatGPT, Bard or Midjourney v5, the possible applications have multiplied, not only in text generation but also in the production of images, audio, video and more. Many of these tools equal or exceed human performance in complex tasks. GPT-4, for example, already scores around the 90th percentile of human performance on the SAT, the university entrance exam in the United States and the equivalent of Spain's selectividad; on the LSAT, the entrance test for US law schools; and on the bar exam, the test for admission to the legal profession in that country, considered one of the most demanding in the world. This means that roughly 90% of human test-takers obtain worse results than these AI systems. We are, therefore, faced with software that demonstrates undoubted characteristics of intelligence; at least, intelligence and ability as we have measured them to date.

Aside from the limitations still observed in AI systems, and especially in generative AI systems, it is evident that the consequences of their development and deployment will be broad. The debate about their impact on democracy has only just begun, and everything seems to indicate that not even the creators of these systems are capable of anticipating it. I believe, in any case, that the consequences can be organized into four broad categories:

The first is in the area of security. AI will enable the development of sophisticated new offensive and defensive weapons. There is already a debate today, for example, about the need to ban fully autonomous lethal weapons: weapons whose entire cycle of use, from target identification to authorization and execution of the kill order, is carried out by an AI. It is likely that a global consensus will be sought to ban these weapons due to the risk they may pose. Nobody wants an AI with full authority to kill people.

We also know that weapons equipped with AI can be used to attack the strategic infrastructure of states, such as the electrical grid or the telecommunications system. Likewise, the interaction between AI and biological weapons can lead to the proliferation of the latter. Anyone with access to sufficiently powerful AI could learn how to design a bioweapon and build it from products available to the general public. In the past, the great barrier to such developments was the lack of knowledge about biology and weapons; AI dismantles that barrier. This is one of the fundamental risks of AI: given its low and decreasing cost, it will eliminate barriers to access to multiple types of weapons. This is bad news for states, including advanced democracies, that until now held a monopoly on certain types of military capabilities.

AI also poses a very particular challenge to democracies: its ability to amplify disinformation. In recent months we have seen how easily AI can generate completely fake images and videos. This was already possible before, but the new tools make it even simpler and cheaper. It is very likely that our electoral processes will be besieged by more sophisticated disinformation campaigns, with extensive use of fake videos, audio and images. The acid test will be 2024, a year of elections in the European Union, the United States, Mexico, India and other democratic countries. One can imagine these interference campaigns even being personalized: clearly fake social media profiles generated at scale, interacting individually with thousands of people at a time, and transmitting subversive messages and calls to damage the electoral process or erode the legitimacy of democratic institutions. If we add the fact that younger generations increasingly get their news through social networks, one begins to gauge the scale of the challenge.

There are, of course, solutions to these risks. It will be necessary to work with the developers of AI tools to ensure that their use is not harmful and that their activity can be tracked. Also with the social networks, so that they control the content distributed through their channels and begin to act as true content mediators rather than mere bulletin boards, something they stopped being a long time ago. States will also have to strengthen their monitoring capacity and intervene when campaigns reach a certain scale. Ultimately, we will need citizens who are more aware of the risks of social networks and accustomed to checking everything they see and hear, seeking reliable sources, and comparing information. It is possible, and desirable, that this process of noise on the networks will help restore the value of traditional media, which live off the veracity of what they share with the public.

The second area of impact of AI will be the economy and the generation and distribution of income. AI applications are going to create and destroy jobs. The aggregate effect is still uncertain, but it seems reasonable to estimate that they will displace quite sophisticated and high-paying jobs. AIs will be able to perform customer service tasks, draft contracts and monitor their compliance, give advice on tax issues, write texts, generate complex images, or assist in medical consultations. We have already seen a screenwriters' strike in Hollywood driven in part by concern about the use of AI by film studios. It is also possible that these technologies will contribute even more to the concentration of income in certain types of capital; in this case, highly productive technological capital. If so, the AI revolution could continue to fuel the hollowing out of the Western working middle class that we have already experienced over the last three decades. This emptying of the center of our income distribution has been accompanied by the emptying of the political center in the West and the rise of national populism. The impact of AI on employment and the production model could therefore aggravate the social and political crisis in advanced democracies.

Proper governance of these technologies therefore becomes essential. It will be necessary to ensure that this capital is taxed appropriately, with effective capital taxes; that possible market-concentration effects derived from the deployment of AI systems are contained; and that the population is provided with training opportunities to access the jobs of the future.

The third category of impact of AI on democracy includes its effects on the legitimacy of the political model itself. Democracies have enjoyed great legitimacy because they are systems that, for better or worse, listen to their citizens. Indeed, in the democratic model it is the citizens who, through their participation, express their preferences and shape the body politic. Freedom of the press, of expression, of association, and the right to vote are mechanisms through which citizens' preferences are inserted into the consensus-building and decision-making process. That is, individual freedom is the cornerstone of the democratic political system. Now, with the advent of advanced AI systems, a different model is emerging: one in which citizens' preferences are not heard, but rather inferred from their monitored behavior. This system, which has its greatest expression in China, begins by collecting as much information about citizens as it can. Some have called this reality the surveillance state, since practically all aspects of a citizen's life are subject to monitoring. Once collected, this information, which comes from public and also private sources, since private companies are forced to hand over their clients' data to the State, is aggregated by public entities in extensive databases. There is no segregation of information here, nor limits on the level of aggregation. From there, data analytics and AI systems are used to understand and, if possible, anticipate the behavior of citizens. The kindest version of this system depicts it as something capable of governing complex societies and generating positive results for its members. The most critical version speaks of a technological Leviathan.

This second interpretation may gain strength when the surveillance state adds to its arsenal of control the advances coming in behavioral science and neuroscience. Indeed, the line between knowing how an individual behaves and being able to shape that behavior is becoming ever thinner. That is, the State that knows everything can become the State that orders everything. Even more so if neuroscience and neurotechnology break the barrier of mind reading. That prospect is not so far off, and it has already led UNESCO, for example, to work on recommendations on the ethics of neurological data.

Given China's extraordinary economic development in recent decades, this non-democratic model is gaining prestige. Many countries wonder whether the democratic path is the one that produces the best results. Indeed, democracies face a challenge to the legitimacy of their results; or, to put it another way, to their ability to meet the needs of their citizens. In the coming decades we will see democracies update themselves and look for ways to integrate technology in a manner consistent with their values. The field of GovTech, technology for the deployment of public action, will see enormous development. So will the field of technology ethics. And, ultimately, international initiatives to regulate the use, export and control of sensitive technologies will proliferate. In fact, this international action will require democracies to develop a much more robust technological diplomacy, one that takes seriously the implications of AI and other technologies for the political model and for individual rights and freedoms.

Before closing this essay, it is worth mentioning the fourth and final category of AI risk for democracies. This one is broader and includes the existential risk that these technologies can pose. Here the risk overflows the democratic sphere; it is, really, a civilizational risk.

We are still at the beginning of this debate, but it is very likely to occupy a greater part of our time in the coming years. It cannot be ruled out that AI will escape human control. This could occur in many ways and could lead an AI, for example, to pursue a specific objective in a mechanical way that is highly harmful to humanity.

Let us imagine something that Oxford University philosopher Nick Bostrom has proposed: an AI put in charge of producing paperclips decides to produce them in such quantity that it exhausts resources fundamental to the survival of the human species. It is a simplistic example, but it illustrates well the problem of controlling a highly capable AI. Some already say that we are close to the singularity, the moment at which an AI generally surpasses a human being in intelligence. If that same AI had the capacity to improve itself, it would soon possess what has been called civilizational intelligence; that is, the intelligence of all human beings since the beginning of time. How could we control such an AI? What kind of behavior would it have? What would its objectives be? How would it perceive human beings? A simple solution would be to implant in the AI the obligation to care for human beings and not harm them. But what if it concluded that humanity's worst enemy is humanity itself, and that it must therefore deprive us of our freedom? This brief intellectual exercise serves to outline the challenge we face in this field. For the first time in human history, we are within reach of creating an intelligence superior to our own. An inexhaustible and constant intelligence.

AI opens up immense possibilities for humanity. In the coming years we will see major technological advances supported by the work of AI. But AI will also present us with important governance challenges: in the fields of security, employment, the political model and others. Much of our future will depend on addressing these challenges with foresight, skill and effectiveness.

Manuel Muñiz Villa is international rector of IE University and dean of its School of Politics, Economics and Global Affairs. He served as Secretary of State in Spain's Ministry of Foreign Affairs, European Union and Cooperation between 2020 and 2021.