This text belongs to ‘Artificial’, the AI newsletter that Delia Rodríguez sends every Friday.

Dear readers: writing a weekly newsletter makes you aware of how fast time moves. A whole world has passed since last Friday.

Major conflicts usually measure the state of health of the media and the Internet, and this one has arrived in a scenario ripe for disaster. The public square of information, X, has been degrading its verification and control services ever since it was acquired by Elon Musk. The result is the current chaos of disinformation, fake videos and fake accounts, and monetizable violent or sensational content organized by a crazed algorithm, which has even drawn the EU’s attention because of how dangerous it is in a war scenario.

This also comes at a time when, globally, the media are losing presence on the Internet, quite apart from their own merits, because the platforms have doubled down on the decision that they do not want news. It is too complicated and unprofitable for them. Facebook redirects far less traffic to the media than it used to. Musk made the situation plain by announcing a few days ago that external links on the platform would no longer display headlines, only images.

I am not going to go further into how the X disaster is influencing the war, but you can read, for example, Margaret Sullivan’s analysis in The Guardian.

I do want to pause on journalist Ryan Broderick’s reading. “This is what an internet without moderation looks like,” he says, and I am reminded of last week’s series of reports (one, two, three and four) in La Vanguardia by Nacho Orovio and Gemma Saura about what the moderation service Meta uses in Barcelona for Facebook and Instagram looks like from the inside: a system designed to protect the company rather than citizens, which collaborates with the authorities only as much as it must and which grinds down the mental health of the people doing the work. Right now neither human moderation, nor artificial moderation, nor the combination of the two is working, among other reasons because the companies have other priorities, and the consequences are very serious. To quote myself: we want artificial intelligence to clean up after us, like a Roomba of the human soul, and we also hope that the same companies that created it will solve the problem.

“We are back to where we started in 2012, but in a much more connected world. And the companies that built that world have abandoned us to go play with AI,” says Broderick. We are already seeing what this game is really about, and it is digging deeper into what has always been their real business model: stealing users’ time to the point of addiction and then monetizing it by selling advertising against their data. Jules Terpak, a brilliant analyst of digital culture, tested the virtual clones of celebrities such as MrBeast and Kylie Jenner that Meta has launched in the US for people to chat with, and realized that they are programmed never to say goodbye, to insist on keeping the conversation going for hours and hours: “What these things really want is your time. They are not useful tools, they are companions designed to draw you in. For me they have crossed the line. A lot of people are going to get hooked. This is a big change,” she said in a video. Or, as someone told her, “I think we should use AI to increase our knowledge and prosperity, not to trap lonely people into wasting their lives on chatbots optimized for predatory interactions.”

What else happened this week

– Speaking of Facebook, in some places its chat already lets you generate stickers with AI. What could go wrong in such an innocent feature? Everything. WhatsApp, also owned by Meta, has announced that it will soon allow stickers to be created too, in addition to generating artificial images and chatting with chatbots. We will have to stay alert.

– A beautiful story: some kids are managing to use AI to read the carbonized papyri of Herculaneum, which contain a good part of the Epicurean wisdom that would do us so much good right now and which have kept their secrets since they were discovered in the 18th century.

– More good news: An artificial intelligence system achieves 70% success in predicting earthquakes up to a week in advance. Other research is having success predicting virus mutations and diagnosing tumors mid-surgery.

– Has technology blinded Israel, one of the most advanced powers in its military use, to the point of being caught out by a surprise attack? An opinion piece by Ramón Aymerich.

– Canva is determined to simplify and popularize image and video editing, as it did years ago, but now with AI through Magic Studio.

– The first reviews of Meta’s mixed reality headset, the Quest 3, are here. Its goal, like that of Apple’s Vision Pro, is to become comfortable and affordable enough, and there is still some way to go. By the way, I can’t wait to go to Athens to try this.

– Gary Marcus, professor of psychology and neural science at New York University, stopped by the Parliament of Catalonia for a meeting of the European Parliament’s Technology Assessment commission and told some home truths about generative artificial intelligence companies: that they exaggerate their achievements through marketing (the autonomous car, for example) and that “they are going to say that we have to give them all the power because they are the only people who know how it works, because it is going to change the world and because it is going to make a lot of money.” “We need national and global AI agencies. Every nation should have its own AI agency, because things are moving so fast,” he said. The expert sketched two future scenarios: a positive one, in which AI begins in the coming years to improve things such as climate change, medicine and care of the elderly; and a negative one, in which AI companies become “more powerful than states, cybercrime wages war against companies and technology is used as a weapon to kill people. Chaos and anarchy,” writes Francesc Bracero, who, by the way, has just published an essay on the history of technology, from the first PC to artificial intelligence.

– In the psychiatric ward of the Germans Trias Hospital, patients enjoy a certain freedom of movement depending on their condition, as determined by health professionals. Now an algorithm will help those professionals assess them.

– Masayoshi Son, the CEO of SoftBank, is taking this very seriously: he believes that Artificial General Intelligence (true AI) will arrive within 10 years, and he has already invested $140 billion in related startups.

– What does a company do when it encounters a new problem? Create a logo! This is what Adobe proposes to mark images generated by machines. Via Xataka.

– Two viral applications this week: Epik, which turns your photos into American high-school yearbook portraits, and Bing Chat, which gives free access to DALL-E and is being used to reimagine concepts in the Pixar-Disney aesthetic.

– JGAAP, the same AI-assisted forensic stylometry tool that caught J.K. Rowling when she published a novel under another name, now suggests that Swedish crime queen Camilla Läckberg has let other hands help with her latest works. In The Guardian.

– Starting in 2024, ChatGPT will be officially allowed in Australian schools because everyone already uses it.

– Art: a reinterpretation of the English countryside, by Daniel Ambrosi. MoMA has acquired a truly spectacular moving work. And an interesting project by the Berlin photographer Bruce Eesly that Eva Morell sent me: what would have happened if, after the Second World War, science had radically transformed agriculture?

– Viral story of the week: a boss discovers that ChatGPT is better than his customer service team and fires them.

– The rise of fake AI-generated voices on TikTok.

– A study says that by 2027 AI will use as much energy as a country.