Artificial intelligence (AI) is the simulation of human intelligence by machines. That broad goal encompasses many techniques and technologies that allow computers and machines to perform a wide range of tasks: making decisions, learning, recognizing specific elements in images or videos, understanding the human voice, and reacting to specific stimuli.

Some applications of AI are already very familiar. Virtual assistants like Siri or Alexa recognize natural language, process our questions or requests, and respond to them. Streaming platforms make personalized recommendations based on our previous choices. Fraud detection algorithms flag suspicious transactions by spotting deviations in spending patterns. Navigation apps use real-time traffic information to suggest the best routes. Ultimately, all these AI applications share two common elements: they collect digital information and analyze it using sophisticated methods.

The potential of applying AI to many productive activities has led analysts to ask all kinds of questions. Will machines replace humans? Will AI revolutionize production processes and lead to unprecedented growth in living standards? Will it transform organizations, eliminating middle management and replacing it with an army of technologists in charge of training algorithms? Will it make the distribution of income more unequal and lead to an increase in social tensions?

Although these questions invite extensive reflection, any concrete answer today is mere speculation, given the uncertainty about the future evolution of AI, its applications, and the way we will use them. An added difficulty is that, unlike technologies whose use is evident to the naked eye (cars or wind turbines, for example), most AI applications are not easy to observe while jogging in a park or working at the office, which makes it difficult even to measure their current spread and their impact on the economy.

With these caveats in place, it is possible to make conjectures about the answers to some of these questions because, like electricity or computers, AI is what we call a general purpose technology (GPT). GPTs are technologies with a wide range of applications in different sectors and the potential to change production processes throughout the economy. The development and diffusion of GPTs follow common patterns; therefore, by looking at the past we can learn something about what lies ahead.

The first characteristic of GPTs is that they are not isolated technologies, but rather a group of complementary technologies that, used together, provide many more benefits than when used separately. One consequence is that GPTs only become widely used after a significant number of applications have been developed. Hence their long diffusion lag relative to non-GPT technologies. For example, the demand for electricity only reached notable levels after the invention of a set of household appliances (the radio, the washing machine, the refrigerator, and the electric oven) in the first two decades of the 20th century. That is, about four decades after Edison invented the first commercially viable light bulb.

The development of radically new technologies such as GPTs is long and complex. Thomas Edison, for example, tested more than six thousand different materials to find the filament for his light bulb in 1879. And that happened decades after other inventors developed the first models of light bulbs, but those bulbs had such short lifespans that they were not commercially viable.

Due to the complementarity of the technologies that make up a GPT and the difficulty of developing them, their impact on the economy is very gradual and only shows up in productivity statistics decades after their introduction. The classic example is computers. In 1987, sixteen years after the first personal computer was brought to market, Nobel laureate Robert Solow formulated his famous paradox: “You can see the computer age everywhere but in the productivity statistics.” It was not until the mid-1990s that American productivity growth picked up, and then, for a decade, it kept a pace comparable to that of the golden 1960s.

Beyond the effect on productivity, much of the interest in GPTs centers on their possible distributional effects. Does their diffusion create winners and losers? And if so, can we predict who they will be? Economists have thoroughly studied the impact of certain GPTs on the relative demand for workers with a university education versus those without one. For example, companies took advantage of improvements in computing power and of new software to incorporate into their production processes new tasks that required workers to use computers. New occupations thus appeared, such as programmer, chip designer, or information technology consultant; and the fact that, in both new and old occupations, college-educated workers had an advantage in using computers boosted their relative demand and caused their wages to rise relative to those of non-college-educated workers.

However, GPTs do not always increase the relative demand for skilled workers. Electricity facilitated the creation of larger, more efficient factories, which raised the productivity of relatively low-skilled production workers. Transportation-related GPTs, such as cars, trucks, and airplanes, allowed companies to reach new and more distant markets and to increase the scale of their operations, resulting in greater efficiency. And that had a symmetrical impact on the productivity of all workers, skilled and unskilled. The historical evidence therefore does not support a consistent distributional bias caused by GPTs, even though some may produce one.

What can we extrapolate to AI from these historical regularities of GPTs? What similarities and differences are there between AI and previous GPTs?

Let’s start with the speed of diffusion and the time it will take for AI to appear in aggregate productivity data. We can already say that the diffusion of AI is following the pattern of previous GPTs. The techniques used to analyze data (for example, machine learning or neural networks) have existed for several decades. As with electricity or computers, new applications are being developed that increase the attractiveness of incorporating AI into production processes. Clearly, though, AI applications are far from polished. For example, I asked ChatGPT to write this article for me, and the result was so disappointing that here I am, typing.

In my opinion, there are three aspects to consider when evaluating how long we will have to wait before AI shows up in productivity statistics.

AI applications may take longer to develop than applications of other GPTs, for two reasons. First, machine learning and neural network techniques are very effective at identifying valuable nonlinear patterns in data, but they require a lot of data to work. For example, developing a neural network algorithm that predicts loan defaults more accurately than the traditional econometric models used by banks requires financial transaction data on several million borrowers. That type of data does not exist in most contexts, and collecting it is difficult and expensive.
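To make the data-hunger point concrete, here is a minimal sketch (my illustration, not the author’s; the data are synthetic and the borrower features and effect sizes are invented): a neural network barely improves on a traditional logistic regression with a few thousand loans, and only pulls ahead when it has enough data to learn the nonlinear pattern.

```python
# Toy illustration: neural networks need large samples to beat a
# traditional logistic regression at predicting loan defaults.
# All data are synthetic; features and coefficients are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_loans(n):
    income = rng.normal(50, 15, n)      # hypothetical borrower features
    debt = rng.uniform(0, 1, n)
    delinq = rng.poisson(0.5, n)
    # Default risk includes a nonlinear interaction a linear model cannot capture.
    logit = -2 + 3 * debt + 0.5 * delinq - 0.04 * income + 4 * debt * (income < 35)
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
    return np.column_stack([income, debt, delinq]), y

for n in (2_000, 200_000):              # small sample vs. large sample
    X, y = make_loans(n)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    linear = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
    net = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(32, 16),
                                      max_iter=500, random_state=0)).fit(X_tr, y_tr)
    print(f"n={n}: logistic AUC="
          f"{roc_auc_score(y_te, linear.predict_proba(X_te)[:, 1]):.3f}, "
          f"neural net AUC="
          f"{roc_auc_score(y_te, net.predict_proba(X_te)[:, 1]):.3f}")
```

The exact numbers will vary with the random seed, but the qualitative gap between small and large samples is the point: the nonlinear edge of the network is only worth paying for when data are abundant.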

Second, the results of the algorithms are opaque. An algorithm may predict the probability that a potential customer will default on a loan, but it sheds no light on how it arrived at that estimate or on the relevance of the different variables considered in the application. Additionally, algorithms can be affected by confounding factors that have predictive power over a variable of interest. For example, an applicant’s race can play an important role in an algorithm even though it is not a cause of default per se, because race is correlated with causal factors for default that the algorithm has not taken into account. Algorithms may also be biased by the data used in the training phase. A recent study1 has shown that AI language models carry different political biases depending on the data used to train them, and that it is virtually impossible to clean the training data a priori to avoid these biases a posteriori. I predict that the opacity and potential biases of the algorithms that underpin AI applications will create resistance to application development and curb the potential impact of AI on the economy.
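The confounding problem is easy to reproduce. In this hedged sketch (synthetic data, variable names of my choosing), a “group” label has no causal effect on default, but because it is correlated with an unobserved income shock that does cause default, a black-box model trained without the shock assigns the label real predictive weight:

```python
# Sketch of confounding: "group" does not cause default, but it is
# correlated with an unobserved income shock that does. A model trained
# without the shock gives "group" genuine predictive importance.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, n)            # proxy variable with no causal role
shock = rng.normal(group * 1.0, 1.0)     # unobserved causal factor, correlated with group
debt = rng.uniform(0, 1, n)
default = (rng.random(n) < 1 / (1 + np.exp(-(-2 + 2 * debt + 1.5 * shock)))).astype(int)

X = np.column_stack([group, debt])       # the shock is left out of the model
model = GradientBoostingClassifier(random_state=0).fit(X, default)

# Permutation importance is one of the few probes available for a black box:
# it measures how much accuracy drops when each column is shuffled.
imp = permutation_importance(model, X, default, n_repeats=10, random_state=0)
for name, score in zip(["group", "debt"], imp.importances_mean):
    print(f"{name}: {score:.3f}")        # "group" gets a clearly nonzero score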

However, there is a countervailing force that will speed the spread of AI applications relative to older technologies. Over the past two hundred years, the rate of diffusion of new technologies has steadily accelerated.2 Technologies invented ten years later have spread, on average, four years faster. That trend began with the industrial revolution and was not altered by the arrival of digital technologies, which, being newer, have spread faster than any previous technology. AI applications are newer still and will no doubt spread faster than any other technology we have experienced. The example of ChatGPT, with 1 million users 2 days after its launch, 100 million 9 months later, and 200 million (predicted) 13 months later, is consistent with that prediction.
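As a back-of-envelope reading of that trend (only the slope of four years per decade comes from the text; the 1850 anchor of 80 years is an invented illustration, not an estimate from the cited study):

```python
# Stylized extrapolation: adoption lags shrink by roughly 0.4 years per
# year of invention date, i.e. four years per decade.
def adoption_lag(invention_year, anchor_year=1850, anchor_lag=80.0, slope=0.4):
    """Hypothetical average years from invention to widespread adoption."""
    return max(anchor_lag - slope * (invention_year - anchor_year), 0.0)

for year in (1850, 1900, 1950, 2000, 2022):
    print(year, f"{adoption_lag(year):.0f} years")
```

Under these illustrative numbers, a technology invented in 1850 would take about 80 years to diffuse widely, one invented in 1950 about 40, and one invented in 2022 about a decade, which is the sense in which AI applications can be expected to spread faster than any predecessor.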

A different issue is the impact of AI on the long-term growth of the economy. Some analysts have surmised that AI will change the way we innovate and accelerate the economy’s long-term growth. Their reasoning is as follows: artificial brains will replace human brains in the development of ideas; and, since artificial brains will be more powerful and not subject to diminishing returns, the pace of creation of new ideas will increase, leading to a new era of faster technological change and higher rates of productivity growth.
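One way to make that reasoning concrete is the textbook idea-production function from growth theory (my formalization, in the spirit of Jones’s semi-endogenous growth model, not an equation from this article):

$$\dot{A} = \theta \, L_A^{\lambda} A^{\phi},$$

where $A$ is the stock of ideas, $L_A$ the research input (human or artificial brains), and $\phi$ measures returns to the existing knowledge stock. With diminishing returns ($\phi < 1$) and research input growing at rate $n$, the growth of ideas settles at $g_A = \lambda n / (1 - \phi)$. If AI removed the diminishing returns ($\phi \geq 1$) while research inputs kept growing, the same equation would deliver ever-accelerating growth. The optimists’ claim is, in effect, a claim about $\phi$ and about how cheaply $L_A$ can be scaled.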

As appealing as that story sounds, I have doubts about its plausibility. Innovation is a complex process, and we are far from understanding it well. One thing we do know about innovation is that it is not just about having new ideas. Ideas must be embodied in prototypes before they can be marketed and used. And the part of the innovation process that consumes the most time and resources usually consists of experimenting with prototypes and tweaking them until they become viable machines, software, products, or processes. Such tinkering is not easy to automate or digitize, and AI is unlikely to change that.

Furthermore, good ideas do not arise simply from combining concepts in a reasoned manner. Chefs do not create new recipes by methodically mixing ingredients; they rely on intuition, the “ability to understand something immediately, without the need for conscious reasoning.” It is one thing to learn which word comes next in a text, or which concept pairs well with another, based on what humans have done in the past. It is a far more demanding challenge to develop the instinct that guides good researchers toward great innovations. That is why I am skeptical about AI transforming the innovation process and delivering the extraordinary riches some dream of.

The significant impact of information technologies on the skill premium and on inequality has sparked interest in the distributional effects of AI. Even in this initial phase, there are already some relevant observations that point to what can be expected in the future. According to some recent research, AI technologies are flattening organizations, eliminating middle levels of management and increasing the demand for workers with technical and scientific training.3 This emerging trend will affect workers’ career paths, as intermediate management positions will no longer be a natural step up for young professionals.

Evidence also suggests that AI is changing the relative demand for undergraduate- and graduate-level workers trained in STEM fields (science, technology, engineering, and mathematics) relative to workers trained in the social sciences and humanities. Furthermore, companies that start out with a higher proportion of STEM workers are the most likely to adopt AI technologies, and they show greater potential to grow and capture market share in their sectors than companies less dependent on STEM workers. Consequently, we are beginning to see a STEM premium in wages, raising the pay of skilled workers trained in those fields relative to workers who are less able to adapt or whose work can be more easily replaced by AI. Along those lines, it is revealing that one of the key demands of Hollywood screenwriters in ending their strike was to limit the use of generative AI in scriptwriting.

Despite the plausibility of these trends, it is important to keep in mind that they are based on projections made at a very early stage of AI’s spread, and that the margin of error is considerable. After all, in the most recent year for which I have seen an indicator (2018), less than 0.1% of the US workforce was employed in AI-intensive occupations. So it may still be a little early to become obsessed with AI.

Diego Comín is a professor of Economics at Dartmouth College