Now that artificial intelligence has put almost all of us in a vulnerable position, we should prepare to withstand its potential impacts with as little damage as possible. The early signs are not encouraging. It is enough to see the helplessness with which the senators of the United States Judiciary Committee expressed themselves on Tuesday, when they invited Sam Altman, CEO of OpenAI, the company behind ChatGPT, to an informative hearing. From their questions it was clear that they do not know what to do. The worrying thing is that they belong to the legislative branch of the most powerful country on Earth.
The only conclusion on which both parliamentarians and witnesses agreed (besides Altman, the witnesses included Professor Gary Marcus and IBM's chief privacy officer Christina Montgomery) is that we have a dire need for AI systems to be regulated by the authorities. Another cause for concern is that no one knows how, and even if they did, no one seems to be in any hurry.
Delaying decisions because of the slow bureaucratic procedures that democratic countries have given themselves is becoming dangerous. Complying with legal guarantees should not mean shooting ourselves in the foot. The European Union is preparing legislation that will be clear and will set the rules for AI companies, but the process means its likely date of application will not arrive before the beginning of 2025. It is worth remembering that ChatGPT has been within everyone's reach for six months.
A large part of artificial intelligence systems are in the hands of powerful private companies. We can naively believe slogans claiming that their mission is to take the world to new heights of progress: solving climate change, achieving incredible advances in medicine and giving us tools that will ultimately make us happier.
The reality is that the main objective of all these companies, above any other consideration, is to make money. If the purpose of achieving an artificial general intelligence (AGI) that solves everything is human well-being, let us use it only for that and establish it by law; but among its possible everyday uses are disinformation and its potential to damage society and increase inequalities.
Trained without permission, and without payment, on humanity's legacy of art and information, AI systems are black boxes whose inner workings even their creators do not fully understand. The authors of ChatGPT have admitted in a paper that they do not understand how their model makes decisions. By the time they find out, it may be too late. That is the problem with AI: we do not know much of what we should already know.