It’s 2035 and artificial intelligence (AI) is omnipresent. AI systems run hospitals, operate airlines, and litigate against each other in court. Economic productivity has skyrocketed to unprecedented levels, and countless previously unimaginable businesses have grown at breakneck speed. New products, cures and innovations come to market every day as science and technology accelerate their advances. And yet, the world is increasingly unpredictable and fragile. Terrorists are finding new ways to threaten societies with ever-evolving intelligent cyberweapons, and professional workers are losing their jobs en masse.
Just a year ago, such a scenario would have sounded like pure fantasy; today, it seems almost inevitable. Generative AI systems can already write more clearly and convincingly than most humans and can produce original images, art and even computer code from simple text prompts. And generative AI is just the tip of the iceberg. Its arrival marks a true big bang, the beginning of a technological revolution that will change the world and reshape politics, economies and societies.
As with previous technological waves, AI will pair extraordinary opportunities with immense risks. Unlike previous waves, however, it will also trigger a radical shift in the structure and balance of global power, as AI threatens the status of nation states as the world’s leading geopolitical actors. Whether they admit it or not, the creators of AI are themselves geopolitical actors, and their sovereignty over AI further entrenches a nascent technopolar order in which technology companies wield, within their own domains, the kind of power previously reserved for nation states. Over the past decade, large technology companies have become independent, sovereign players in the digital spheres they have created. AI accelerates that trend and extends it far beyond the digital world. The complexity of the technology and the speed of its advance will make it almost impossible for governments to develop relevant rules at a reasonable pace. If governments do not catch up soon, they may never catch up.
Unfortunately, much of the debate over AI governance remains trapped in a dangerous false dilemma: harness AI to expand national power, or rein it in to avoid its risks. Even those who diagnose the problem accurately try to solve it by shoehorning AI into historical or existing governance frameworks. But AI cannot be governed like any previous technology, and it is already changing traditional notions of geopolitical power.
A challenge as unusual and pressing as that posed by AI requires an original solution. Before governments can begin to design an appropriate regulatory structure, they will need to agree on a set of basic principles for governing AI. From the outset, any governance framework must be precautionary, agile, inclusive, watertight and specific. Building on those principles, policymakers should create at least three overlapping governance regimes: one to establish facts and advise governments on the risks posed by AI, another to prevent an all-out arms race among them, and a third to manage the disruptive forces of a technology unlike anything the world has seen.
Whether we like it or not, 2035 is already close. Whether that year is defined by the positive advances AI makes possible or by the negative disruptions it causes will depend on what policymakers do now.
AI is different. Different from other technologies, and different in its effect on power: it does not just pose policy challenges; its hyper-evolutionary nature also makes those challenges progressively harder to solve. Such is the paradox of AI power.
The pace of progress is stunning. Take Moore’s Law, which has successfully predicted the doubling of computing power every two years. The new wave of AI makes that pace of progress look quaint. When OpenAI released its first large language model, GPT-1, in 2018, it had 117 million parameters, a measure of the system’s scale and complexity. Five years later, the company’s fourth-generation model, GPT-4, is thought to have more than a trillion. The amount of computation used to train the most powerful AI models has increased tenfold every year for the past ten years. In other words, today’s most advanced AI models (also known as frontier models) use five billion times the computing power of cutting-edge models from a decade ago. Processing that once took weeks now happens in seconds. Within a couple of years we will have models capable of handling tens of trillions of parameters. Brain-scale models with more than 100 trillion parameters (roughly the number of synapses in the human brain) will be viable within five years.
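To make the contrast concrete, a rough back-of-the-envelope comparison, using only the figures cited above and taking each at face value, looks like this:

\[
\underbrace{2^{5} = 32\times}_{\substack{\text{Moore's Law: doubling every}\\ \text{two years, over a decade}}}
\qquad \text{versus} \qquad
\underbrace{\sim 5\times 10^{9}\times}_{\substack{\text{growth in frontier-model training}\\ \text{compute over the same decade}}}
\]

On these assumptions, the cumulative growth in AI training compute is roughly a hundred million times larger than what Moore’s Law alone would have delivered.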
With each new order of magnitude, unexpected capabilities emerge. Few predicted that training on raw text would allow large language models (LLMs) to produce coherent, novel, and even creative sentences. Fewer still expected language models to be able to compose music or solve scientific problems. Soon, AI developers will likely be able to create systems that improve themselves, a critical juncture in the technology’s trajectory that should give everyone pause.
AI models also do more with less. Yesterday’s cutting-edge capabilities run today on smaller, cheaper and more accessible systems. Just three years after OpenAI released GPT-3, open-source teams have created models with the same level of performance at less than one-sixtieth the size; in other words, models that are sixty times cheaper to run in production, completely free, and available to everyone on the internet. The large language models of the future are likely to follow that efficiency trajectory, becoming available as open source just two or three years after leading AI labs have spent hundreds of millions of dollars developing them.
As with any software or code, AI algorithms are far easier and cheaper to copy and share (or steal) than physical assets. The proliferation risks are obvious. Meta’s powerful LLM Llama-1, for example, leaked online in March, just days after its debut. Although the most powerful models still require sophisticated hardware to run, mid-range versions run on computers that can be rented for a few dollars an hour. Soon, such models will run on smartphones. No technology this powerful has become so widely accessible so quickly.
AI also differs from previous technologies in that almost all of it can be characterized as dual-use, having both military and civilian applications. Many systems are inherently general, and that generality is in fact the primary goal of many AI companies: they want their applications to help as many people, in as many ways, as possible. But the same systems that drive cars can drive tanks, and an AI application built to diagnose diseases might be able to create (and weaponize) a new one. The lines between what is civilian and safe and what is military and destructive are therefore inherently blurred, which partly explains why the US has restricted exports of the most advanced semiconductors to China.
All of this plays out on a global terrain: once released, AI models can and will be everywhere. And it takes only a single malicious or uncontrolled model to wreak havoc. For that reason, AI cannot be regulated piecemeal. There is little point in regulating AI in some countries if it remains unregulated in others. Given how easily it proliferates, AI governance cannot have loopholes.
What’s more, there is no obvious limit to the damage AI could cause, even as the incentives to build it (and the benefits of doing so) keep growing. AI could be used to generate and spread toxic disinformation, eroding social trust and democracy; to monitor, manipulate and subjugate citizens, undermining individual and collective freedom; or to create digital or physical weapons that threaten human lives. AI could also destroy millions of jobs, deepening existing inequalities and creating new ones; entrench discriminatory patterns and distort decision-making, amplifying feedback loops of misinformation; or trigger unforeseen and uncontrollable military escalations that end in war.
It is also unclear on what time frame the greatest risks will arrive. Online disinformation is an obvious short-term threat, just as autonomous warfare seems plausible in the medium term. Further out on the horizon loom the promise of artificial general intelligence, the still-uncertain point at which AI surpasses human performance at any given task, and the danger (admittedly still speculative) that artificial general intelligence could escape human control and become self-directed, self-replicating and self-improving. All these dangers must be factored into the governance architecture from the beginning.
AI is not the first technology with some of these powerful features, but it is the first to combine them all. AI systems are not like cars or airplanes, which are built on hardware that can be improved incrementally and whose most costly failures come in the form of individual accidents. Nor are they like chemical or nuclear weapons, which are difficult and expensive to develop and stockpile, let alone share or deploy covertly. As their enormous benefits become ever more evident, AI systems will only grow bigger, better, cheaper and more ubiquitous. They will even become capable of near-autonomy (able to pursue concrete goals with minimal human supervision) and, potentially, of improving themselves. Any one of these features would challenge traditional governance models; all of them together render those models inadequate.
As if that were not enough, by shifting the structure and balance of global power, AI complicates the very political context in which it must be governed. In some cases, AI will undermine existing authorities; in others, it will entrench them. Moreover, its advance is propelled by irresistible incentives: every country, corporation and individual will want some version of it.
AI will allow those who use it to surveil, deceive and even control populations, boosting the collection and commercial use of personal data in democracies and perfecting the tools of repression used by authoritarian governments to subjugate their societies.
AI will also be the focus of intense geopolitical competition. Whether due to its repressive capabilities, economic potential, or military advantage, AI supremacy will be a strategic objective of any government with the resources to compete.
The vast majority of countries have neither the money nor the technological know-how to compete for AI leadership. Their access to cutting-edge AI will instead be determined by their relationships with a handful of already rich and powerful corporations and states. That dependency threatens to aggravate current geopolitical power imbalances.
The most powerful governments will compete to control the world’s most valuable resource, while developing countries are left behind. That does not mean only the richest will benefit from the AI revolution. As with the internet and smartphones, AI will proliferate without respect for borders, and so will the productivity gains it generates. And, like energy and green technology, AI will benefit many countries that do not control it, including those that help produce AI inputs such as semiconductors.
Now, at the other end of the geopolitical spectrum, the competition for AI supremacy will be fierce. At the end of the Cold War, the superpowers were able to cooperate to allay each other’s fears and stop a potentially destabilizing technological arms race. However, today’s tense geopolitical environment makes such cooperation much more difficult. Rightly or wrongly, the two most important players (China and the US) perceive the development of AI as a zero-sum game that will provide the winner with a decisive strategic advantage in the coming decades.
From Washington’s and Beijing’s point of view, the risk of the other side gaining an edge in AI outweighs any theoretical risk the technology might pose to society or to their own domestic political authority. That is why both the US and Chinese governments are pouring vast resources into developing AI capabilities while striving to deprive each other of the inputs needed for next-generation advances. (So far, the US has been far more successful than China at the latter, notably through its export controls on advanced semiconductors.) This zero-sum dynamic, and the lack of trust on both sides, means that Beijing and Washington are focused on accelerating AI development rather than slowing it down. In their view, a pause in development to assess the risks, as some prominent figures in the industry have urged, would amount to foolish unilateral disarmament.
That perspective, however, assumes that states can retain at least some control over AI. It may hold in China, which has woven its technology companies into the fabric of the state. In the West, by contrast, AI is more likely to undermine public power than to strengthen it. Outside China, a handful of large AI companies now control every aspect of the new technological wave: what AI models can do, who can access them, how they can be used and where they can be deployed. And because those companies jealously guard their computing power and algorithms, only they understand (most of) what they are creating and (most of) what those creations can do. Those few companies may keep their edge for the foreseeable future, or they may be eclipsed by smaller developers as low barriers to entry, open-source development and near-zero marginal costs lead to the uncontrolled proliferation of AI. Either way, at least for the next few years, AI’s trajectory will be determined largely by the decisions of a handful of private companies, regardless of what policymakers in Brussels or Washington do. In other words, it will be technologists, not politicians or bureaucrats, who exercise authority over a force that could profoundly alter both the power of nation states and the way they relate to one another. That makes the challenge of governing AI unlike anything governments have faced before.
Governments are already behind the curve. Most proposals for governing AI treat it as a conventional problem amenable to twentieth-century, state-centered solutions: agreements on standards negotiated by political leaders. That will not work for AI.
For global AI governance to work, it must be tailored to the specific nature of the technology, the challenges it poses, and the structure and balance of power in which it operates. But because AI’s evolution, uses, risks and rewards are unpredictable, its governance cannot be fully specified at the outset, or indeed at any single point. It must be as innovative and evolutionary as the technology it seeks to govern, sharing some of the very characteristics that make AI such a powerful force.
The overall goal of any global AI regulatory architecture should be to identify and mitigate risks to global stability without stifling innovation and the opportunities that flow from it. Call this approach technoprudentialism, a mandate much like the macroprudential role played by global financial institutions such as the Financial Stability Board, the Bank for International Settlements and the International Monetary Fund, whose objective is to identify and mitigate risks to global financial stability without endangering economic growth.
A technoprudential mandate would work in a similar way, requiring institutional mechanisms to address the various aspects of AI that could threaten geopolitical stability. Those mechanisms would in turn be guided by common principles tailored to AI’s unique characteristics and reflecting the new balance of technological power that has put tech companies in the driver’s seat. Such principles would help policymakers draw up more detailed regulatory frameworks to govern AI as it evolves and becomes an increasingly ubiquitous force.
The first, and perhaps most important, principle of AI governance is caution. As the term itself suggests, technoprudentialism is guided at its core by the precautionary creed: first, do no harm. Constraining AI entirely would mean forgoing its opportunities and benefits; unleashing it entirely would mean exposing ourselves to all of its potentially catastrophic risks. In other words, AI’s risk-reward profile is asymmetric. Given radical uncertainty about the scale and irreversibility of some of AI’s potential harms, governance should aim to prevent those risks before they materialize rather than mitigate them afterward. That matters all the more because AI could weaken democracy in some countries and make it harder for them to enact regulation at all.
Governance must also be agile, able to adapt as AI evolves and improves itself. Public institutions often calcify to the point of being unable to respond to change. And in the case of AI, the breakneck pace of technological progress will quickly outstrip the ability of existing governance structures to keep up.
In addition to being precautionary and agile, AI governance must be inclusive, inviting the participation of every actor needed to regulate AI in practice. That means it cannot be focused solely on states, because governments neither understand nor control AI. Private technology companies lack sovereignty in the traditional sense, but they wield real (even sovereign) power and agency in the digital spaces they have created and de facto govern. Non-state actors should not enjoy the same rights and privileges as states, which are internationally recognized as acting on behalf of their citizens; but they should have a seat at international summits and be signatories to any agreement on AI. That broadening of governance is necessary because any regulatory structure that excludes AI’s real power brokers is doomed to fail.
Technology companies need not have a say in everything; some aspects of AI governance are best left to governments, and it goes without saying that states should always retain final veto power over policy decisions. Governments must also guard against regulatory capture, ensuring that tech companies do not use their influence within political systems to advance their own interests at the expense of the public good. But an inclusive governance model would ensure that the actors who will actually determine AI’s fate are bound by, and involved in, the standard-setting process.
AI governance should also be as watertight as possible. Because global AI governance will only be as good as the worst-governed country, company or technology allows, it must be airtight everywhere. A single loophole, weak link or unscrupulous defector will open the door to widespread leakage, rogue actors or a regulatory race to the bottom.
In addition to covering the entire planet, governance must cover the entire supply chain. That means technoprudential regulation and oversight across all nodes of the AI value chain, from microprocessor production to data collection, from model training to end use. Such airtightness ensures that there are no gray areas ripe for exploitation.
Finally, governance will have to be specific rather than one-size-fits-all. Because AI is a general-purpose technology, it poses multidimensional threats, and no single governance tool can address all the sources of AI risk. In some applications AI will be evolutionary, aggravating existing problems such as privacy violations; in others it will be revolutionary, creating entirely new harms. Sometimes the best point of intervention will be where data is collected. At other times, it will be the point at which advanced microprocessors are sold, to ensure they do not fall into the wrong hands. Combating disinformation and misinformation will require different tools than those needed to address the risks of artificial general intelligence and other uncertain technologies with potentially existential ramifications. In some cases a light regulatory touch and voluntary guidance will suffice; in others, governments will need to enforce compliance strictly.
All of this requires deep understanding and up-to-date knowledge of the technologies in question. Regulators and other authorities will need to monitor and have access to key AI models; indeed, they will need an audit system capable not only of tracking capabilities remotely but also of directly accessing core technologies, which in turn will require the right talent. Only such measures will ensure that new AI applications are proactively evaluated both for obvious risks and for potentially disruptive second- and third-order consequences. In other words, specific governance must be well-informed governance.
Building on these principles, there should be at least three governance regimes, each with different objectives, mechanisms and participants. All will have to be novel in design, but each could draw on existing arrangements for addressing other global challenges, such as climate change, arms proliferation and financial stability.
A scientific organization
The first regime would focus on fact-finding and would take the form of a global scientific body to objectively advise governments and international organizations on questions as basic as what AI is and what kinds of policy challenges it poses. Without consensus on the definition of AI or the possible extent of its harms, effective policy will be impossible. Here, climate change is instructive. To create a shared knowledge base for climate negotiations, the United Nations created the Intergovernmental Panel on Climate Change (IPCC) and gave it a simple mandate: to provide policymakers with “regular assessments of the scientific basis of climate change, its impacts and future risks, and options for adaptation and mitigation.” AI needs a similar body to regularly assess its state, impartially evaluate its potential risks and impacts, forecast scenarios and study technical solutions that protect the global public interest. Like the IPCC, this body would enjoy global backing and scientific (and geopolitical) independence. And its reports could inform multilateral negotiations on AI, just as the IPCC’s reports inform United Nations climate negotiations.
International consensus
The world also needs a way to manage tensions between the major AI powers and prevent the proliferation of dangerous advanced systems. The most important international relationship in AI is that between the US and China. Cooperation between the two rivals is hard to achieve even in the best of circumstances, and amid heightened geopolitical competition, an uncontrolled race in AI would doom any hope of forging an international consensus on its governance. One area where Washington and Beijing could find it fruitful to collaborate is on measures to contain the proliferation of powerful systems that could endanger the authority of nation states. Ultimately, the threat of runaway, self-replicating artificial general intelligences, should they be invented in the coming years, would provide strong incentives to coordinate on safety and containment.
On all these fronts, Washington and Beijing should aim to create areas of common ground and even guardrails proposed and monitored by third parties. There, the oversight and verification approaches typically found in arms control regimes could be applied to AI’s most important inputs, particularly computing hardware, including advanced semiconductors and data centers. Regulating such key chokepoints helped contain a dangerous arms race during the Cold War, and it could help contain a potentially more dangerous race now.
A council for stability
Even so, the decentralized nature of AI development and the technology’s fundamental characteristics, such as open-source proliferation, increase the likelihood that it will be weaponized by cybercriminals, state-backed actors and lone wolves. That is why the world needs a third governance regime that can react when dangerous disruptions occur. As a model, policymakers could look to the approach financial authorities use to maintain global financial stability. The Financial Stability Board, composed of central bankers, finance ministries and supervisory and regulatory authorities from around the world, works to prevent global financial instability by assessing systemic vulnerabilities and coordinating the actions needed to address them among national and international authorities. A similar technocratic body for AI risks (call it the Geotechnology Stability Council) could work to maintain geopolitical stability amid rapid AI-driven change. With the support of national regulators and international standards bodies, it would pool knowledge and resources to prevent or respond to AI-related crises, reducing the risk of contagion. It would also collaborate directly with the private sector, recognizing that the major multinational technology players are as central to maintaining geopolitical stability as systemically important banks are to maintaining financial stability.
A regime designed to maintain geotechnological stability would also fill a dangerous gap in the current regulatory landscape: responsibility for governing open-source AI. Some level of online censorship will be necessary: if someone uploads an extremely dangerous model, that body must have the clear authority (and the ability) to take it down or to direct national authorities to do so. This is another area for possible bilateral cooperation. China and the US should work together to build safety restrictions into open-source software, for example limiting the extent to which models can instruct users on how to develop chemical or biological weapons or create pandemic pathogens. There may also be scope for Beijing and Washington to cooperate on global counterproliferation efforts, including through the use of interventionist cyber tools.
Each of these regimes should work universally and involve the main AI actors. Regimes should be specialized enough to deal with real AI systems and dynamic enough to adapt as AI evolves. Working together, these institutions could take a decisive step toward technoprudential management of the nascent world of AI.
Promote the best, prevent the worst
None of these solutions will be easy to implement. For all the talk from world leaders about the need to regulate AI, the political will to do so is still lacking. Few powerful constituencies currently favor containing AI, and every incentive points toward continued inaction. Still, well designed, a governance regime like the one described here could satisfy all stakeholders and enshrine principles and structures that promote the best of AI while heading off the worst. The alternative, uncontained AI, would not only pose unacceptable risks to global stability; it would also be bad for business and contrary to every country’s national interest.
A strong governance regime would both mitigate the social risks posed by AI and ease tensions between China and the US by reducing the degree to which AI is an arena (and an instrument) of geopolitical competition. And such a regime would achieve something even more profound and lasting: it would establish a model for addressing other emerging and disruptive technologies. AI will be far from the last disruptive technology humanity has to confront. Quantum computing, biotechnology, nanotechnology and robotics also have the potential to radically transform the world. Successfully governing AI will help the world successfully govern those technologies as well.
The 21st century will bring with it few challenges as daunting and few opportunities as promising as those offered by AI. In the last century, policymakers began to build an architecture of global governance that they hoped would be equal to the tasks of the time. Now, they must build a new governance architecture to contain and harness the most formidable and potentially decisive force of this era. The year 2035 is just around the corner. There is no time to lose.
Ian Bremmer is President and Founder of Eurasia Group and GZERO Media. He is the author of ‘The Power of Crisis: How Three Threats and Our Response Will Change the World’. Mustafa Suleyman is CEO and Co-Founder of Inflection AI. A co-founder of DeepMind, he is also the author of ‘The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma’.