The mathematician Norbert Wiener, father of cybernetics, wrote in 1960 that to avoid the negative consequences of machines, progress in building them and understanding of their operation had to advance together. Otherwise, by the time we realized we needed to correct course, we would already have lost control, as is now feared with artificial intelligence (AI). Some even think that its development will lead to the extinction of the human species. I don't think so, although it may exacerbate several current problems: amplifying misinformation, increasing bias in decisions, and eroding privacy and intellectual property rights.
Machine learning is very good at making predictions based on patterns, on regularities in the data, but it does not understand why things happen: it is a black box whose predictions even its own developers cannot explain. It works well in situations where we have good data and understand the underlying process, but not when we don't. For example, if the machine sees that demand for hotel rooms is high when prices are high, it may predict that demand is high because prices are high, when the causality runs in the opposite direction.
Generative AI (such as ChatGPT-4) takes text, images, audio or video and generates answers to questions in the same formats, consistent with what it has been trained on. This type of conversational system has been described, not without debate, as a "stochastic parrot". Its answers can be entirely correct or be "hallucinations". For example, it can answer the questions of a university-level exam very well yet invent the publications of a scientist. It can help write computer code and be a good co-pilot to the researcher, or it can mislead with confident but erroneous advice.
It is clear that AI has great potential and could revolutionize the economy and society, just as major technological changes such as the internet or the smartphone have done. Like any revolution, it will lead to a restructuring of the productive system, destroying some jobs and generating new ones. The question is to what extent we should regulate it, and how quickly.
Some argue that action needs to be taken now, that a moratorium should be instituted, and that regulators should treat AI like drugs, which require a trial period before being approved. It seems that China will take this path (also for reasons of political control; it is a leader in facial recognition). Others think that very strict regulation will slow AI's development, which will then shift to wherever regulation is lighter (the United Kingdom, for example). The European Union is attempting a middle way; it cannot afford to be left behind.
The regulation of AI touches on many issues: privacy, consumer protection, intellectual property rights and the defense of competition. The latter may be threatened, as the big tech platforms are the ones with the data, resources, computing capacity, skills and expertise needed to take it forward. They already enjoy the advantage of being pioneers: Microsoft and Alphabet in particular, but also Amazon in the cloud, and Meta. Keep in mind that AI will need enormous resources to develop, and Big Tech has them. An oligopoly will control AI, and regulators must stay alert to defend and encourage competition for the benefit of society.