Richard Benjamins holds a doctorate in cognitive science and is one of the leading voices promoting, from within a technology company, the use of artificial intelligence under ethical parameters. He heads the artificial intelligence and data strategy area at Telefónica, and sits on the advisory board of the Vatican's Center for Digital Culture and on the MIT Sloan panel on responsible AI. A few days ago he took part in Barcelona in the Congress of Digital Transformation of the Third Sector of Catalonia, organized by Fundación Telefónica and the Taula d'Entitats of the third sector.
How does Telefónica approach the use of AI?
In 2011 we began treating data as a strategic asset. There are four large areas. The first is business optimization: everything a company does on a day-to-day basis. Then there is the important part of the relationship with customers, with chatbots like Aura. The third is using AI for our business clients and public administrations, because many are undergoing a digital transformation. And finally we have information on movement, population, and activity. This is useful, for example, in Barcelona, where we work with anonymized mobility data that is used for metro planning.
And in the social sphere?
We have a small area called AI for Society and Environment, which works with other actors, such as the World Bank or UNESCO, on projects with a social impact. There is a great deal of research on this technology today because everyone sees the benefits, but there are big problems that still need to be solved.
Tell me about the risks.
There are two aspects. One is that this technology, because it is so powerful and used in so many places, reaches billions of people through big tech. There is a risk of bias, of discrimination, which is not a risk of the technology itself but of how it is used. If you use it only to optimize your own benefit, there may be negative side effects, and whether you rectify them depends on the company's ethics. The other aspect is that many groups, even entire countries, are being left behind: this technology generates a lot of wealth, but it also widens the gaps between rich and poor countries, and within countries, between rich and poor groups.
How does Telefónica view the regulation being prepared by the EU?
We see it positively. It regulates the use of artificial intelligence, not the technology itself. It defines unacceptable risks, which are prohibited, such as manipulating people to cause harm, taking advantage of a disability, or social scoring, which measures the behavior of citizens. Then there are high risks, such as hiring and firing people, access to essential services, finance, insurance, schools… This seems obvious now, but five years ago it was not. Facial recognition algorithms around the world are trained on a database, ImageNet, in which about 80% of the faces are of white people and the remaining 20% is divided among Black, Asian, and Latino people, with many more men than women. Even though these systems already work well, accuracy is not exactly the same for a Black woman as for a white man. We accept this because the overall success rate is very high, but it is important to keep in mind that if we use a technology, it should apply equally to everyone. It is also under discussion that if you generate a deepfake (a fabricated image), you must label it as such and not deceive people.
Does this affect Telefónica?
We don't make this technology, but we are working on detecting deepfakes. For example, if you are in an online meeting, a system can check whether a deepfake is speaking to you, because they can deceive you. There is a huge danger there. Today there is voice-cloning technology: someone places a supposedly wrong-number call, you speak for about 20 seconds, and that is enough to clone your voice.
With AI, instead of talking about the technology, we often talk about ethics. That hadn't happened since the atomic bomb.
I have promoted this entire journey at Telefónica with ethical principles that we are now implementing, so when the regulation comes into force in Europe, Telefónica will already be prepared. The fact is that AI is very powerful but not one hundred percent reliable. The ethical issue requires a change of mindset that many organizations still have to make. Of course, all this slows down the work, but it is justified.
Is there a fight then between money and principles?
Yes, it is often said that innovation must be balanced against ethics, but I ask: why does it have to be one or the other? Why can't you do both from the start?
With generative models, the red lines are drawn after the model has already been built and trained on data. That is where biases creep in.
They are a special category. They train on so much data that they learn from everything, because the content on the internet is practically infinite. What companies are now doing is training the models to detect bad content. The AI is very powerful, but it doesn't do this well enough on its own; human knowledge is still needed.
Will there be a day when an AI gains consciousness?
I wrote a book three years ago called The Myth of the Algorithm: Tales and Tallies of Artificial Intelligence, about exactly that. There are many "tallies", scientific facts about artificial intelligence that are true, but also many "tales" that are not true, that are fiction. The only reason to say consciousness is not impossible is ourselves, because where do we come from? From millions of years of evolution. I sincerely believe that intention does not simply emerge in a machine on its own; it would have to be built. The need to reproduce, how could that arise in a machine?
What do governments and companies have to do for AI to have a social benefit?
You have to invest in social benefit, and do it without fear, just as you invest in economic benefit. If the planet and humanity do not endure, there will be no economic benefit either.
Given this whole panorama, are you optimistic?
I am optimistic. The problem is not artificial intelligence; the problem is people. For example, a percentage of people are bad: that is the old part of our brain. It is true that this technology is very powerful and far more accessible than nuclear weapons, so there is a risk. But I am optimistic because we need this technology to solve our big problems, and there is already sufficient awareness of its possible negative impacts.