In recent years, artificial intelligence has taken a significant step, moving from algorithms with predefined or statistical operating rules to advanced generative models. Generative artificial intelligence grew out of decades of research in machine learning and neural networks, with the aim of creating programs capable of generating original and creative content. From a technical point of view, one of the milestones in this evolution was the development of recurrent and convolutional neural networks, which made it possible to learn more complex patterns, incorporate context and create new data.

The emergence of large-scale language models such as GPT (Generative Pre-trained Transformer) has marked a notable advance in the generation of coherent, contextually relevant text. ChatGPT, from OpenAI, built on this architecture, brought the ability of large language models (LLMs) to understand and generate natural responses in conversation to the general public. Large amounts of data, advances in machine learning algorithms and increased computational capacity have driven this generative AI.

LLMs work by learning from large amounts of text to identify linguistic and contextual patterns. They use neural network architectures in which words are turned into numerical representations in order to generate coherent responses. During training, the models adjust the parameters of their neural connections. Their knowledge is not stored in a database but in those parameters, which cannot be read off directly. As they are trained on more data, LLMs develop a more careful use of language and generate more precise and better-contextualized text, and they can improve with feedback from users and developers. Even so, they have no conscious understanding of their errors and no human-like learning process.
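As a rough illustration of this paragraph, and not a description of any real LLM, the following Python sketch trains a toy next-word predictor on a tiny made-up corpus. It shows the three ideas above: words become numerical vectors, the only "knowledge" lives in adjustable parameters, and training nudges those parameters so the model assigns higher probability to the words that actually follow. The corpus, sizes and learning rate are hypothetical choices for the example.

```python
# Illustrative toy only: a next-word predictor on a tiny corpus.
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D, lr = len(vocab), 8, 0.3          # vocabulary size, embedding size, step size

rng = np.random.default_rng(0)
E = rng.normal(0.0, 0.1, (V, D))       # word embeddings (learned parameters)
W = rng.normal(0.0, 0.1, (D, V))       # output projection (learned parameters)

pairs = [(idx[a], idx[b]) for a, b in zip(corpus, corpus[1:])]

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for _ in range(300):                   # training: repeatedly adjust parameters
    for x, y in pairs:
        h = E[x]                       # numerical representation of the word x
        p = softmax(h @ W)             # predicted distribution over the next word
        grad = p.copy()
        grad[y] -= 1.0                 # gradient of the cross-entropy loss
        grad_E = W @ grad
        W -= lr * np.outer(h, grad)    # the "learning" is only this adjustment
        E[x] -= lr * grad_E

# All the model "knows" is now encoded in the numbers stored in E and W.
p = softmax(E[idx["the"]] @ W)
print({w: round(float(p[idx[w]]), 2) for w in vocab})
```

Real LLMs differ enormously in scale and architecture, but the basic loop of numerical representations plus parameter adjustment against a prediction error is the same idea.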

These algorithms, for now, also lack the ability to make decisions in the human sense, weighing cognitive, emotional and social factors. They appear to make decisions in dialogue, but what they produce is a statistical prediction conditioned on what they are asked. In short, these algorithms have no consciousness, no sense of transcendence, no intentions and no autonomous decision-making capacity like people. Their creativity is an imitation of patterns.
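To make the "statistical prediction" point concrete, the short sketch below (with invented tokens and probabilities) shows what an apparent decision amounts to: the model scores every candidate continuation and one is drawn at random according to those scores.

```python
# Illustrative toy only: candidate words and probabilities are invented.
import numpy as np

rng = np.random.default_rng(1)
candidates = ["yes", "no", "maybe"]
probs = np.array([0.62, 0.30, 0.08])   # hypothetical model output after softmax

choice = rng.choice(candidates, p=probs)
print(choice)   # reads like a decision, but it is a weighted random draw
```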

The data used in their training include books, articles, news, websites, social networks and other online content. As a result, their responses reflect the biases present in the data on which they were trained (gender or cultural biases and social stereotypes based on ethnicity, nationality or religion). This can influence people's perception and behavior and reinforce and amplify existing prejudices and discrimination.

This already has an impact today. Being excluded as a candidate for renting an apartment, being denied a mortgage or being sidelined in a hiring process for having black skin or being a woman are examples of the training biases of some algorithms. The activist Joy Buolamwini coined the term "excoded", a fusion of "excluded" and "encoded", to describe this phenomenon and how artificial intelligence can make those who were already quite fragile even more vulnerable.

The Pope has expressed concern about developments in artificial intelligence that do not address the ethical and social risks of their advancement and application. He points to the interests that may be tied to their development and to how they can increase inequality and promote conflict rather than improve lives. We must create bodies that examine the ethical issues related to AI, draw red lines where people are concerned – freedom, the perception of reality – and work to ensure responsible application.

AI does not have free will. It is essential to become aware of its nature, its limits and its uses, so that we can work with these systems as instruments that make certain kinds of work more efficient, accepting that they are now part of daily life and that their use has been democratized. We urgently need a public dialogue about what space we want to give to this technological trend, of which today we see only the tip of the iceberg.