The flight connecting the airports of Munich and Madrid for her participation in the recent City and Science Biennial, held simultaneously in the Spanish capital and Barcelona, forces Alena Buyx (Osnabrück, Germany, 1977) to eat on a Spanish schedule, far from the customs rooted in the rest of Europe. But the “magnificent” views offered by the terrace of the Círculo de Bellas Artes, and the “amazing chocolate dessert from its rooftop restaurant” that she recommends to those attending her lecture, more than make up for it for the professor and director of the Institute for the History and Ethics of Medicine at the University of Munich.
Quick with words but measured in argument, Buyx, who also chairs the Ethics Council that has advised the German government since the start of the pandemic, spoke with La Vanguardia moments before her conversation with Javier Moscoso, Research Professor of History and Philosophy of Science at the Spanish National Research Council (CSIC), about what it means to “live on the planet” in a society whose changes have been accelerated by the pandemic and the spread of Artificial Intelligence.
What have we learned from the pandemic?
The German government’s Ethics Council released a report titled Vulnerability and Resilience in a Crisis that synthesizes how the pandemic has shown us that we are all vulnerable. And that, although crises such as the war in Ukraine, energy, inflation or climate change are a problem for everyone, they affect people differently. That is very unfair to the most vulnerable groups. In the pandemic it was older people first, but over time it hit the young with full force, denying them the chance to live the life that was theirs at their age.
Was a good job done?
If we want to avoid succumbing to future crises, we must detect very early who is vulnerable, why, and how we can support them. Obviously I am not going to talk about the war, which is the most extreme crisis in the entire world. But I do think about how to deal with climate change. And that’s where we need to ask how we can ensure that when we change our way of life, as we will have to, we won’t unfairly disadvantage certain groups.
And what lessons have been drawn from resilience?
If, as societies, we want to deal with the kind of uncertainty that crises produce, we have to address the information crisis. We have a huge problem with misinformation and all kinds of conspiracy theories that drive us apart. They lead some people to doubt the truth, to doubt that we share common ground. And for societies to be resilient in a crisis, we must act together; there is no other way to face these enormous challenges. We cannot face them if we fracture as societies. Informational self-determination, the ability to control at all times what others can know about our lives and our personal data, is of vital importance.
Moral dilemmas such as genetic engineering or stem cell research remain the subject of discussion decades after they were raised. Are we running the risk of ethical rulings lagging behind advances in Artificial Intelligence (AI)?
My answer is yes and no. When it came to stem cells, all the ethical questions were raised as soon as the technology was ready. Do we want to clone? Do we want to enter this field? Biomedicine has learned a lot from that. The scientists themselves, Jennifer Doudna and Emmanuelle Charpentier (Nobel Prize winners for developing a genome-editing method that is contributing to new cancer therapies and could make the dream of curing hereditary diseases come true), were among the first to warn of its ethical and social implications. That was a huge change from 20 years earlier, and an ethical debate was quickly taken up in which we were no longer starting from scratch.
But it’s not always like this.
Indeed. Take AI, for example, which is everywhere. The algorithms are out there and are already shaping our lives through our smartphones. Look at social networks: they are driven by algorithms that are changing our public debate. We talk about misinformation, but the networks are also polarized because, all too often, the algorithms are programmed to highlight the most radical content.
Two sides of the same coin
To a certain extent, yes. In Biomedicine we have been really good at a European level at recognizing that these technologies pose ethical challenges and that we have to shape them. They have great potential, they can do a lot of good, but they can also do things that we find very problematic. So let us shape them, making sure we capture the good and avoid the risks. (…) But with algorithms, we have simply let them into our lives, and it has taken several years to recognize that they too have enormous ethical and social implications. So yes, in this case we are running after them, trying to regulate them.
Microsoft recently announced a system that “helps people interact with robots more easily, without having to learn programming languages”, when it should be the other way around. Is this one of those examples where we’re late?
For technology to end up molding and shaping us would be the wrong path. If we want to enjoy the benefits of this technology, and it certainly has them, we need to stay in the saddle and stay in control.
Should the spread of AI be slowed down in the meantime?
The ideal model today would be the combined intelligence of the human being and technology. But always centered on the person, on the patient. And not the other way around.
Climate change or the pandemic… we are talking about global problems. Can universal governance cause even more inequalities?
There is no such universal governance. When China carried out the controversial genetic modification of babies, it was a great global scandal, and there were prison sentences. But scientists can go elsewhere if their own country forbids something; they can simply move. Not to Europe, of course, where we have strong continental governance such as data protection (GDPR) or the upcoming AI law, which will force transparency so that you can tell when you are talking to a human and when to a machine, regardless of the appearance it is given through a recreated human voice or face. But I don’t think that happens at a global level.
Are we talking about the Asian giant again?
China already has its “social credit” system in place, which uses algorithms to score users’ behavior and the trust they deserve, along with the monitoring of people. Two things we would never allow in Europe. The fact that they operate differently there does not mean that we should adopt the same system here.
Is the best solution to have several systems?
I am not going to answer with a resounding yes. But I insist that in Europe we must maintain our high ethical standards, which are based on human rights, on the protection of ethical principles and on the care of vulnerable groups. I don’t think we should abandon those standards just because things are done differently elsewhere. We should have a kind of ethics made in Europe and shield the system that we believe is right.
And how does that hold up against challenges like robot soldiers?
Certainly some countries could build the so-called AI Super Soldier. We must be able to monitor its development, understand it, and possibly have to protect ourselves from it. But that doesn’t mean we have to build unethical AI as a response.
A good part of the technological advances are in the hands of a handful of multinationals like Google or Facebook. How can a universal ethic break through against economic interest?
We cannot allow a few companies to own this infrastructure that shapes public debates and influences elections and the way our democratic debates happen. Data protection (GDPR) is not perfect; I myself have criticized some aspects. But there is no doubt that its function is very important, because it shields our right to know what happens when they obtain our data, and because it establishes that even large corporations have to follow certain rules. It is true that they do not always do so, that the GDPR is constantly violated even though fines are imposed, and some say the rules aren’t strict enough. But we must continue this tough battle to get those multinationals to comply with the rules that we set.
That balance may seem easy in theory, but it’s difficult in practice.
Let’s look at Ukraine. It is worrying that one person, like Elon Musk, is providing internet access in the middle of this war. It’s good that it is happening, because otherwise the country wouldn’t have internet, but a single individual shouldn’t hold that kind of infrastructural power.
Does Europe run the risk of becoming a digital colony with American multinationals on one side and Chinese ones on the other?
We should simplify the heavy regulation that governs Europe, because there are those who think, as I do, that we have too many rules. But by no means should we kill our own innovation in data analytics. We have built it on a set of ethical values that are important to us and from which we will not part. That is why we will not become a data colony.