The mathematician Norbert Wiener, the father of cybernetics, wrote in 1960 that to avoid the negative consequences of machines, progress in building them had to go hand in hand with our understanding of how they work. Otherwise, by the time we wanted to correct course, we would have lost control, as many now fear with artificial intelligence (AI). Some go so far as to claim that its development will lead to the extinction of the human species. I do not think so, although it can aggravate several existing problems: spreading misinformation, amplifying bias in decisions, and eroding privacy and intellectual property rights.
Machine learning is very good at making predictions based on patterns, on regularities in the data, but it does not understand why things happen: it is a black box whose predictions even its developers cannot explain. It works well in situations where we have good data and understand the underlying process, but it fails when we do not. For example, if the machine sees that room demand is high whenever hotel prices are high, it may predict that demand is high because prices are high, when the causality runs the other way: high demand is what drives prices up.
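A minimal sketch, with invented numbers, of that hotel example: the data below are generated so that demand drives price, yet a pattern-based learner fits an excellent "price predicts demand" model all the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: demand is the true driver; hotels raise prices when demand is high.
demand = rng.uniform(40, 100, size=500)             # rooms requested per night
price = 50 + 1.5 * demand + rng.normal(0, 5, 500)   # price reacts to demand

# A pattern-based learner only sees the correlation, so it happily
# learns to "predict" demand from price...
slope, intercept = np.polyfit(price, demand, deg=1)
print(f"learned: demand ~ {slope:.2f} * price + {intercept:.1f}")

# ...and the fit looks excellent, even though raising prices would not
# create demand in the real world.
predicted = slope * price + intercept
print(f"correlation with actual demand: {np.corrcoef(demand, predicted)[0, 1]:.2f}")
```

The prediction is accurate only while the world keeps behaving as it did in the training data; a hotel that raised prices hoping to raise demand would discover the model had the causality backwards.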
Generative AI (such as GPT-4, the model behind ChatGPT) takes text, images, audio or video and generates answers to the questions asked in those same formats, consistent with the data on which it was trained. This type of conversational system has been described, not without debate, as a “stochastic parrot”. Its answers can be very accurate, or they can be “hallucinations”.
For example, it can answer the questions on a university-level exam very well, yet invent a scientist’s publications. It can help write computer code and be a good copilot for researchers, or it can go off the rails and offer baffling advice.
AI clearly has great potential and could revolutionize the economy and society, just as great technological changes like the internet and the smartphone have done. Like any revolution, it will entail a restructuring of the productive system, destroying some jobs and creating new ones.
The question is to what extent we should regulate it, and how quickly. Some believe we must act now, establish a moratorium, and have regulators treat AI like pharmaceuticals, which require a trial period before they are authorized. China seems set to take this path (partly for reasons of political control; it is a leader in facial recognition). Others think that if AI is regulated too strictly, its development will slow down and shift to wherever it is less regulated (the United Kingdom, for example). The EU is trying a middle path and cannot afford to be left behind.
AI regulation touches on many areas: privacy, consumer protection, intellectual property rights and competition policy. The last of these may be under threat, since the large Big Tech platforms are the ones with the data, resources, computing capacity, skills and expertise needed to push AI forward. They already enjoy a first-mover advantage: Microsoft and Alphabet in particular, but also Amazon in the cloud and Meta.
It must be borne in mind that AI will need enormous resources to develop, and Big Tech has them. An oligopoly stands to control AI, and regulators must stay alert to defend and encourage competition for the benefit of society.