This text belongs to ‘Artificial’, the newsletter on AI that Delia Rodríguez sends out every Friday. If you want to receive it, sign up here.

Dear readers: this is not serious.

While 350 executives, researchers, and engineers in the AI industry sign a one-sentence mini-manifesto warning that if we don’t take it seriously we may go extinct, people are vandalizing works of art, album covers, memes, and vacation photos with the new Photoshop and lousy taste.

Then again, it is possible that there is really no reason to take it so seriously, and that the only sensible thing is, who knows, to learn to create your own Pixar character or to generate automatic meeting summaries. That even seems like a good way to entertain ourselves while we wait for the end of humanity.

In the first two editions of this newsletter we already discussed how suspicious it is for an industry to beg for its own regulation, and covered previous manifestos and the distressing warnings of some experts. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” says the new statement, published on the website of the Center for AI Safety, a private foundation about which not much is known. In a thread, its director gives some more context.

Signatories include the CEOs of OpenAI, Google DeepMind, and Anthropic: Sam Altman, Demis Hassabis, and Dario Amodei; along with two of the so-called fathers of AI, winners of a Turing Award for their research on neural networks: Geoffrey Hinton (who left Google in order to speak freely about the dangers of AI) and Yoshua Bengio. The CEOs of Google, Sundar Pichai, and of Microsoft, Satya Nadella, have not signed.

But let’s overanalyze the statement:

– Why is it so short? Apparently it was the idea of a Cambridge professor, and it was kept that way to achieve consensus among the signatories and avoid divergences like those around the March letter, in which a thousand researchers and technologists called for a six-month pause in research. But when nuance retreats, catastrophist marketing moves in.

– The moment of its publication says more than its 22 words: some of the signatories are devoting every effort to positioning themselves as the main interlocutors in a future global regulation of the field, and thus to influencing it. Sam Altman ended his European tour by backing down on his threats and acknowledging that OpenAI really has no intention of leaving the continent. Washington and Brussels are working on a voluntary code of conduct for large generative AI systems that will be ready in the coming weeks, because laws are slow.

– Is the threat really such a priority? “Do the signatories believe that current AI systems or their immediate successors can wipe us all off the map? If they do, then the industry leaders signing this statement should immediately shut down their data centers and turn everything over to governments,” write Seth Lazar, Jeremy Howard, and Arvind Narayanan, who recall that we face other urgent threats, such as the climate crisis, pandemics, and the nuclear threat from the war in Ukraine.

– The researcher Katja Grace, author of the famous survey which revealed that half the people working at a high level in AI believe there is a 10% chance that things will get dangerously out of control, goes into detail about why the arms-race metaphor is not the right one for AI. It is better to think that we are all standing on a thin layer of ice, she says.

– There is another father of neural networks and Turing Award winner, Yann LeCun, who currently works at Meta and does not support the manifesto. “How can you design a seatbelt for a car that doesn’t exist yet?” he has said on several occasions.

– You can’t even manage a press conference for a candidate for the presidency of the United States and you are going to extinguish humanity, wrote the critic Douglas Rushkoff, in other words. A couple of days earlier he had given an interview in The Guardian in which he talked about Silicon Valley’s millenarian hysteria: “Now they’re torturing themselves, which is fun to watch. They are afraid that their little AIs will come after them. They are apocalyptic, and so existential, because they have no connection to real life and how things work. They are afraid that the AIs will be as bad to them as they have been to us.”

– “The ‘AI extinction’ hype is a re-enactment of the Cold War’s nuclear hysteria (remember the missile crisis?), but with an interesting twist brought by the neoliberal faith in the redemptive power of law, bureaucracy, and risk governance,” says another great critic, Evgeny Morozov.

– The big question is: what risks are we talking about, exactly? If we look at the website of the Center for AI Safety, where the statement appeared, there are eight types: use as a weapon, disinformation, manipulation, the WALL-E-style weakening of our wills, the concentration of power, the misalignment between human and machine objectives, AIs deceiving us in order to achieve those objectives, and power struggles. “Except for its use as a weapon, it is not clear how the other risks, however horrible, could lead to the extinction of our species,” says Professor Nello Cristianini. “You have to name the risks and be specific.”

– The great criticism of the statement, one that has been brewing for some time, is that talking about controlling future damage distracts from the serious problems already occurring. Natasha Lomas sums it up at TechCrunch in a single paragraph: “This constant pace of hysterical headlines has, arguably, distracted attention from a deeper examination of existing harms: such as tools that use copyrighted data to train AI systems without permission or consent (or payment); or the systematic scraping of personal data in violation of people’s privacy; or the lack of transparency from the AI giants regarding the data used to train these tools. Or, indeed, flaws like misinformation (“hallucinations”) and risks like bias (automated discrimination). Not to mention AI-powered spam! And the environmental cost of the energy expended to train these AI monsters.” Right now, for example, Microsoft president Brad Smith’s biggest concern is deepfakes. We should not submit to a predetermined future, the authors of the famous paper on stochastic parrots wrote a few months ago, but rather adapt the machines to our needs. And in that conversation those most affected by AI should have a role: “immigrants subject to digital borders, women forced to wear certain clothes, workers traumatized by filtering the output of generative systems, artists whose work is stolen for corporate profit, and the precarious workers struggling to pay their bills.”

Having said all that, it would be tempting to sit quietly in a corner and watch the show, but nothing is that simple. Among the 350 signatories of the manifesto is the Spanish researcher Helena Matute, Professor of Psychology at the University of Deusto. “We must reach a global agreement on a minimum level of safety, which today nobody guarantees, and which will not be achieved overnight. We have to be preventive. Many things can go wrong. We must act, as has been done with the atomic bomb, human cloning, and other technologies that involve great risks,” she has declared, and that also sounds reasonable.

What else has happened this week

– The journalistic subgenre of “people caught cheating with ChatGPT” continues to bring us joy. This week its practitioners have bagged a New York lawyer (“find yourself a lawyer who will defend you from your own lawyer,” the judge all but told him) and a thesis student, who gracefully owns up to it on TikTok.

– Another, less funny, subgenre is that of “victims of doctored images”. Although perhaps we should just say “women”, because we ought to reflect on the coincidence that the victims usually are women. This week, Rosalía has been furious about the manipulated photographs of her body spread by a musician. It is hard to know whether they were created with AI or with a simple image-editing program, just like the fake video of Aitana singing the “Cara al Sol” (“Face to the Sun”) that has been circulating for a while and that I learned about through Carmela Ríos.

– NVIDIA is now one of the very few companies ever to reach a stock market valuation of one trillion dollars. Its AI chips cost $20,000 each. Manel Pérez puts into context the greed driving the “fourth industrial revolution”. We should remember the name of NVIDIA’s chief, Jensen Huang, who, by the way, gave an interesting, Steve Jobs-style speech in Taiwan.

– The first time the expression “artificial intelligence” appeared in La Vanguardia was in 1961, when José García Santesmases joined the Royal Academy of Sciences with a speech on “Automatics, cybernetics and automation”. In it, he spoke of “artificial intelligence systems, which constitute one of the most interesting fields of cybernetics” and warned of the possible “elimination of the human operator in the production process.”

– Two interesting interviews by Francesc Bracero in La Vanguardia. The first, with the philosopher Susan Schneider, who discusses the elusive subject of consciousness: “if an AI claims to be conscious, society has to deal with it.” In the second, the entrepreneur and popularizer Pau García-Milà explains that in his small business every employee has a ChatGPT account and experiments with it. “In the world of small businesses, hands are what is lacking, not what is in excess,” he says.

– Good news: AI is helping a person with a spinal cord injury to walk, and has helped discover a new antibiotic.

– Beware of the very near possibility of children having artificial friends.

– A designer has created Paragraphica, a lensless camera consisting of a box containing a Raspberry Pi connected to Stable Diffusion. It captures parameters such as geolocation, time, and weather, and the AI then creates the photo.
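The idea is simple: sensor readings go in, a text prompt comes out, and an image model does the rest. As a purely hypothetical illustration (Paragraphica’s actual prompt template and code are not described here, and the `build_prompt` helper below is my own invention), the prompt-building step might look like this:

```python
from datetime import datetime

def build_prompt(location: str, time: datetime, weather: str) -> str:
    """Compose a text-to-image prompt from the camera's captured parameters.

    Hypothetical sketch: the real device's prompt template is not public.
    """
    # e.g. "5 PM on a Friday in June"
    moment = time.strftime("%I %p on a %A in %B").lstrip("0")
    return (
        f"A photo taken at {location}, at {moment}, "
        f"with {weather} weather, photorealistic"
    )

# The resulting string would then be sent to a text-to-image model
# such as Stable Diffusion, running locally or through an API.
prompt = build_prompt(
    "Plaça de Catalunya, Barcelona",
    datetime(2023, 6, 2, 17, 30),
    "sunny",
)
print(prompt)
```

The “photo” is therefore not a capture of light at all, but the model’s guess at what that place, at that moment, in that weather, might look like.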

– English imperialism (in Spanish, in Wired).

– The ChatGPT app for iOS can now be downloaded in Spain.

– Google is starting to give some users access to its experimental AI features integrated into its search engine.

– Everyone in San Francisco works at an AI start-up, has founded one, or is funding one, says The New York Times.

– Two beautiful digital narratives, in English, to enjoy when you have time: ‘See why AIs like ChatGPT have gotten so much better so fast’, in The Washington Post, and ‘Eight questions about the future’, in The New York Times.

– I have taken a look at Perplexity’s assistant for Android, Gamma, an application for creating automatic presentations, Studyflow, a study assistant capable of generating exams, and the image-generation tool from the Malaga-based company Freepik. All very interesting, although I have not been able to test them in depth.

Weekly AI-nxiety levels: reduced, despite the word “extinction” being on the table. We have had quite a bit of real-world anxiety with some elections and the announcement of others.