Of the many challenges facing humanity in the year that has just begun, artificial intelligence is one of the most uncertain. In the little more than a year since OpenAI unveiled ChatGPT to the world, progress has accelerated. AI will shape events with major impacts across many areas, but the scope of those effects remains unpredictable.

The large language models that power generative AI bots are such a recent technology that even the experts who build them still have much to discover. No one has been able to explain, for example, why the machine makes a particular decision when choosing an option, or why, after a chain of coherent reasoning, the algorithm drifts off course or gives a false answer, a failure known as "hallucination".

One of the challenges raised most often over the last twelve months, and one that will remain very present in day-to-day life, is regulation. Governance preoccupies political leaders in every country. Europe raced against the clock to reach an agreement on a world-leading AI law, although it will take around two years to come into force. That timeline is an eternity for a technology in which leaps happen practically every week.

While waiting for lawmakers from both major US parties to reach an agreement, President Joe Biden signed an emergency executive order in late October compelling AI companies to share critical safety information about their systems with the federal government. The presidential initiative reportedly gained urgency after Biden saw the risks posed by a rogue AI in the latest Mission: Impossible movie. The decree also seeks to guard against the use of AI to design dangerous biological material and misleading content.

The vast majority of what happens with AI will happen inside company laboratories, so the fight against bias (of ethnicity and gender, for example), against errors, and against the use of this technology's tools to mount massive cyberattacks will depend on the responsibility of the companies themselves. This is what they call "self-regulation", an approach that has had disastrous consequences in other sectors.

AI will also have to face the challenge of sustainability this year. Connecting enormous supercomputing power to millions of online users can mean enormous energy expenditure. One of the approaches of 2024 will therefore be smaller AI models for everyday use that run directly on mobile devices and computers, so that the user's request never has to leave the device. This shift toward small AIs will also bring a new generation of devices with chips capable of handling these workloads with ease.
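As an illustration only, and not something described in the article, here is a minimal Python sketch of the on-device idea, assuming the open-source Hugging Face transformers library and the small distilgpt2 model as stand-ins: after a one-time download, generation runs entirely on the local machine, with no request sent to an external service.

```python
# Minimal sketch of local text generation with a small model.
# Assumes: pip install transformers torch
# "distilgpt2" is an illustrative choice of small model; once it has been
# downloaded, inference runs on the local CPU/GPU and nothing is sent
# to an outside server.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

result = generator(
    "On-device AI means that",
    max_new_tokens=30,   # keep the demo short
    do_sample=True,      # sample rather than always picking the top token
)
print(result[0]["generated_text"])
```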

The progressive rollout of AI tools that will automate many jobs previously done by humans also poses a huge economic and social challenge, because it can mean mass layoffs, with reports indicating that prosperous technology companies such as Google could put up to 30,000 workers out of work in order to restructure their organizations around artificial intelligence. Let's fasten our seatbelts.
