Google Meet has just released a tool that lets artificial intelligence attend meetings for you. In other words: let it do the work for you. There are more and more AI tools, and increasingly advanced ones, that ease human work or replace people outright. We are still at an early stage in the development of this technology, but many voices are already warning that, depending on how it is used, there is a risk of falling into what is being called 'technological paternalism'.

The idea of paternalism stems from a kind of authority that the machine, or computational system, exercises over the user. This subordination can imply a loss of control on the part of the person operating the machine, either out of trust (being told that the error rate is small) or sometimes out of convenience, explains Efraín Fandiño-López, PhD in Law from Université Paris Cité and author of a thesis entitled 'Automated creations and copyright. Reflections on works created with artificial intelligence systems'.

The World Economic Forum published a report in July predicting that 25% of jobs will change radically. It concluded that within five years there will be a net loss of 14 million jobs: 83 million will disappear and only 69 million will be created. Given the changes AI will bring to society, some are fascinated by how far it can go, while others fear the future.

Fandiño-López believes the debate is not as simple as casting AI as either the devil or the panacea. "As naive as it sounds, in the end the technology is going to be what we want," he says. What he is clear about is that regulation and control are needed starting now. As an example, he points to the case of deepfakes (or 'ultrafakes', per the translation used in the AI Act).

AI offers a range of techniques that can open up endless creative possibilities. However, Fandiño-López notes that these advances have also been used to manipulate pornographic videos, inserting women's faces in order to falsely attribute participation in the video to them. "These kinds of actions need to be sanctioned so that they do not become part of the landscape of daily life," he stresses.

In addition, artificial intelligence is already being tested in areas as sensitive as healthcare or justice. In the latter field, the PhD in Law recalls that a few months ago it made the news in Colombia that a judge had relied on ChatGPT to draft a ruling. And it has happened more than once. "The big problem is that ChatGPT was not designed to analyze legal norms and judicial precedents in order to reach a decision," Fandiño-López emphasizes. He points out that it is a program that generates text using predictive algorithms; that is, it tries to imitate existing texts regardless of whether or not they are in accordance with the law.
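The point about predictive text can be illustrated with a minimal sketch. This is a toy bigram model over an invented corpus, nothing like ChatGPT's actual architecture, but it shows the core idea Fandiño-López describes: the system only learns which word tends to follow which in its training text, with no notion of whether the output is legally correct.

```python
from collections import defaultdict, Counter

# Toy training corpus (hypothetical): the model never "understands" law,
# it only memorizes word-to-word frequencies.
corpus = "the judge cites the law the judge cites the precedent".split()

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))    # "judge" — the most frequent continuation
print(predict_next("judge"))  # "cites"
```

The model will happily emit a fluent-sounding sequence even when the resulting statement is false or contrary to a legal norm, because correctness is simply not part of what it optimizes.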

A case in point: a few years ago the US used software aimed at automating the administration of criminal justice. The outlet ProPublica published a report explaining that, as a result, the African-American population was sentenced more harshly than the white population. When analyzing the reasons, they found that the problem lay in the bias of the data, which was nothing more than a reproduction of biases in real life.
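How biased data reproduces real-world bias can be shown with a minimal sketch using invented numbers (not the actual software or data from the ProPublica report). A naive "risk score" that simply mirrors historical re-arrest rates per group inherits whatever skew, such as unequal policing, was baked into those records, even though the scoring rule itself is neutral arithmetic.

```python
# Hypothetical historical records: (group, was_rearrested).
# The imbalance lives in the data, not in the scoring rule.
historical_records = [
    ("A", 1), ("A", 0), ("A", 0), ("A", 0),  # group A: 25% re-arrest rate
    ("B", 1), ("B", 1), ("B", 1), ("B", 0),  # group B: 75% re-arrest rate
]

def risk_score(group, records):
    """Predicted risk = the re-arrest frequency observed for that group."""
    outcomes = [rearrested for g, rearrested in records if g == group]
    return sum(outcomes) / len(outcomes)

print(risk_score("A", historical_records))  # 0.25
print(risk_score("B", historical_records))  # 0.75
```

A real risk-assessment model is far more complex, but the mechanism is the same: if the training data encodes a disparity, the predictions reproduce it.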

The jurist clarifies that he is not against using AI to improve the work of the administration of justice. He believes there are already specialized data-processing tools that can help resolve complex cases and thus shorten proceedings. For the relevant justice official, AI can be useful for spotting patterns during a judicial process, for example, to find out whether a scammer prefers a certain type of victim.

On the other hand, when the consequences of a judicial decision affect people's lives and property, Fandiño-López stresses that "these artificial intelligence systems have to be designed so that it is the official (the human being behind them) who retains control at all times, and so that their design complies with the law".

"Bias seems to me to be part of the essence of the system," affirms the Doctor of Law. He argues that if AI systems are trained on pre-existing data from the real world, that data will steer the system's actions in one direction or another. "The question is not so much to remain in that illusion of neutrality and imagine an AI without bias, but rather to ask what kind of consequences we want an AI system to produce, and that is where the ethical, legal and social implications come in," he says.

For the Doctor of Law, it will be the large corporations providing the technology that will have "control over an impoverished population." He envisions worlds with sophisticated automation systems that "do not improve the quality of life for humans. The scenario of high tech, low life." For this reason, he thinks that AI "in the end will be what we want it to be." The fear of this technological paternalism is that those who design these tools "end up gaining some control over our daily activities. We'll see what happens."

AI is at too early a stage to predict how it will impact society. "It may also turn out to be a fad and people will stop using it, or the environmental problems caused by the use of AI may lead to its abandonment. Time will tell," Fandiño-López concludes.