A large group of executives and promoters of the artificial intelligence industry issued an SOS for humanity on Tuesday, warning of a danger they see in the offing.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” This is the only sentence in the brief statement released by the non-profit Center for AI Safety.
The list of some 350 signatories, made up of executives and artificial intelligence researchers, includes figures with a first-hand role in the development of this technology, such as Sam Altman, chief executive of OpenAI, the firm behind ChatGPT; Demis Hassabis, head of Google DeepMind; and Dario Amodei, head of Anthropic.
Also signing the letter are Geoffrey Hinton and Yoshua Bengio, two of the three winners of the Turing Award for their work on neural networks. The third winner, Yann LeCun, another of the “godfathers of AI” and a professor at New York University, disagreed with the letter’s alarmism. LeCun, who also works at Meta (Facebook’s parent company), considered these apocalyptic fears “exaggerated.”
“Prophecies of doom draw facepalms,” he tweeted. Other experts agreed that such fears are unrealistic and a distraction from more immediate problems, such as bias in existing systems.
But the statement from a very influential part of the sector comes at a time of growing concern about the potential harms of artificial intelligence. Recent advances in large language models, the kind used by ChatGPT and other chatbots, have raised fears that AI could soon be used to spread disinformation and propaganda on a massive scale, and that it threatens to eliminate millions of “white-collar” jobs, those held by well-educated employees. Within a few years, the technology could cause serious social disruption.