CSIC artificial intelligence researcher Ramon López de Mántaras is one of the more than 1,300 experts and business leaders who published an open letter on Tuesday calling for a moratorium of at least six months on the training of AI systems more powerful than GPT-4. The signatories question whether it is worth letting advances in AI impact society without evaluating and controlling their risks, and without any regulation, as has happened until now. The text warns that if development is not paused, the social effects will be catastrophic.
The letter does not ask for the development of AI to be stopped, but for guarantees that its effects will be positive and its risks kept under control.
At most, the letter asks for something that has been recommended for years and that is already in the Barcelona Declaration on the proper use of AI, which called for prudence. They have deployed a large language model and made it available to hundreds of millions of people, who are training it, without any precautions. That is not acceptable. Since nothing forbids it, they have simply gone ahead. Ultimately, they are in a race over whose model can best fool people, to see who can make the bigger splash.
How did we get to this situation that alarms you?
OpenAI has carried out a huge experiment on people. That is exactly what this is: release the beast and let the users tell us. But it should have been done the other way around. You have to test with beta testers (expert testers) before launching on a massive scale. That is how a medicine is developed, in stages, until it finally reaches everyone. Something similar should be done with artificial intelligence, because the volume of falsehoods it can put out is endangering democracy.
Such as?
The fake images of people are even worse. The images of Donald Trump being arrested are an illustrative example. Fortunately, it was immediately reported that they were false. This kind of false information polarizes and generates hatred, because Trump counts among his followers the people who stormed the Capitol. I can hardly believe it. These models are stochastic parrots, as Dr. Emily Bender says. They understand nothing of the text they generate, yet they give you the feeling that the machine really is there.
Now red lines are being drawn when the technology is already out on the street. Was the process badly designed?
It should have been done beforehand; that is what is known as ethics by design. But let's not kid ourselves: the people at OpenAI are smart. It is obvious that they knew what consequences releasing ChatGPT would have. Without realizing it, users are training the system; this is reinforcement learning. I assume everything was planned. They did it on purpose so that people would help them improve it. They are running an experiment, but a very dangerous one. It should not be allowed. They should have to go before an ethics committee for approval, as with other scientific experiments. Instead, these large corporations run the experiment directly on a global scale, with millions of users, and do not care about the consequences. I agree with the philosopher Daniel Dennett: this should be legally prosecutable. Disinformation destroys society.
Are the authorities too late in regulating AI?
I am not aware that any government has done anything. OpenAI published a 98-page report on GPT-4, and there are no significant details in it: a lot of verbiage to say nothing. We have no data; everything is very closed. That is understandable, because they are immersed in a race with many competitors and cannot reveal anything. In the end, all these people care about is making money; they have no ethical considerations. It pains me that what is happening with artificial intelligence is happening. We have to slow it down as much as we can.
How should proper AI governance be built?
At the European level there may be a body, but the rules indicate that one has to be set up in each country. In Spain it has already been decided that the supervisory agency will be based in A Coruña, but it still has to be set up, and we have yet to see what real substance it is given. For a drug to be approved, it follows a very well-defined process, which can operate at the national or supranational level. Likewise, an AI should be certified before it is deployed, attesting that it meets a series of safety and ethical requirements. For that to happen, the push has to come from governments.
In any case, you are defenders of AI and call for harnessing its potential for a prosperous future.
There is also a somewhat apocalyptic element in the letter, but it is clear that if we do things right we will see great advances. This happens with every technology: we can put it to appropriate use, or to unacceptable or illegal use. It is the simplistic but clear example of the hammer, which can be used to drive nails or to kill. Artificial intelligence is a far more sophisticated tool, with far more potential to do harm. We have to be able to control it.