Geoffrey Hinton, one of the pioneers of today’s artificial intelligence systems, has left Google, where he worked, “to be able to talk about the dangers of AI”. The 75-year-old British computer scientist is one of the most important contributors to the development of deep neural networks and has received, among other honors, the Turing Award, the Princess of Asturias Award and the BBVA Foundation Frontiers of Knowledge Award. The New York Times reported the researcher’s resignation yesterday; Hinton said he regrets the ways in which this technology can be used to do harm.

Hinton spoke by phone with Sundar Pichai, CEO of Google and its parent company Alphabet, on Thursday after announcing that he was stepping down. The computer scientist insisted yesterday that he is not leaving in order to criticize the company. “Actually,” he commented on Twitter, “I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.”

Considered the godfather of artificial intelligence, Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network in 2012 capable of analyzing thousands of photos and learning to identify objects. Google bought the company they formed for $44 million. Their discoveries have been essential to the creation of today’s AI systems, such as ChatGPT.

In an interview with The New York Times, Hinton expressed concern about the corporate race under way since OpenAI launched ChatGPT in late November 2022. Two months later, Microsoft announced the integration of this AI into its Internet search engine, Bing, and Google responded to the challenge by announcing that it would use another, Bard, for its own. The scientist considers the projection of how the situation may evolve “scary”.

According to Hinton, until last year, without the pressure of a rival like Microsoft launching AI products, Google acted as an “adequate steward”. Now the computing veteran worries that mass adoption is causing an avalanche of fake photos and videos, known as deepfakes, and that most of the population “is no longer able to know what is true and what is not”. One problem he sees is that, while AI “eliminates the drudgery,” it “could take more than that,” leading to massive job cuts.

Hinton also fears that he misjudged the timeline for “these things becoming more intelligent than people”. Until recently, he believed that “there were 30 to 50 years or even more” before reaching this stage, but “obviously I don’t think that way anymore”.

The scientist suspects that AI will not only generate its own programming code but also decide to execute it, and that certain autonomous weapons, which he calls “robot soldiers”, will become a reality.

“It’s hard to see how you can prevent bad actors from using it for bad things,” said Hinton. His hope is that scientists themselves will halt the advances until they can make the technology safe: “I think they shouldn’t scale this up more until they have understood whether they can control it.” The computer scientist now feels pangs of conscience over his work. “I console myself with the normal excuse: if I hadn’t done it, somebody else would have,” he said.