This text belongs to ‘Artificial’, the AI newsletter that Delia Rodríguez sends every Friday.

Dear readers: Of the many questions this war will leave hanging, one has to do with the use of technology and artificial intelligence.

The question is well summarized in La Vanguardia by José María Lasalle. Gaza was a laboratory where Israel experimented with its combat and surveillance AI designs, he says. “Through AI it controlled the famous Gaza wall. This meant that in May 2021 it won, as it officially acknowledged, the first war based on AI (…) Algorithms against terrorists, it was said at the time, and the results seemed just as overwhelming. It was even announced that, thanks to AI, no Hamas commando would be able to infiltrate or carry out terrorist operations in Israeli territory. What has happened, then, two years later? What has failed in the sensorized systems of the concrete wall that isolates the Gaza Strip, in the satellite surveillance that covers the border with a technological security perimeter, and in the aerial drone coverage that makes the wall impassable and allows instant monitoring of any movement around it? What is more, what has changed so that Israel, which boasted on July 17 through its Ministry of Defense that it was in a position to successfully face a total technological war, has failed so miserably?”

The answer is also summarized in the newspaper by Ami Ayalon, a reserve admiral and former head of the Shin Bet, Israel’s internal secret service. “Hamas knows how to plan attacks without phones,” he says. “Most of the intelligence we currently have from Gaza is what we call SIGINT. It is based on the interception of signals, whether phones, internet… The enemy knows it. Hamas leaders took part as children in the first and second intifadas and know how to plan actions without using phones or the internet. States cannot do that. Large armies struggle to. Terrorist organizations, especially when they have very clear hierarchies, like Hamas, can.”

As this Reuters report notes, the Israeli intelligence failure will be talked about for years. In May, Israeli Defense Ministry Director General Eyal Zamir said the country was on the verge of becoming an artificial intelligence “superpower,” using AI techniques to streamline decision-making and analysis.

The New York Times, after speaking with several intelligence officials, pointed in a first analysis to several factors: a lack of surveillance of key communication channels, excessive dependence on border surveillance equipment that the attackers easily disabled at the outset, the grouping of commanders in a single border base that was overrun in the initial phase, and taking at face value Palestinian statements made on channels the attackers knew were being monitored.

It was, Politico summarizes, “a massive, deadly 2G attack, in a 5G security state.”

I have tried to read up on this issue, because this may be the first major defeat of AI in its military application, but I do not think clear conclusions can be reached immediately, beyond a possible overconfidence on the part of the country of military intelligence, cybersecurity, autonomous weapons, Tel Aviv technology startups capable of exporting software like Pegasus, and the multimillion-dollar smart border. “The underground wall of sensors and reinforced concrete that had been built around the Strip was supposed to block the tunnels through which Hamas had attempted in the past to reach Israeli towns on the other side of the border. That wall has been of no use. The Hamas militias simply stormed the fences on the surface,” wrote former Israeli Foreign Minister Shlomo Ben Ami.

In my head, the issue gets mixed up with a headline from The Guardian that has nothing to do with it. Researchers had ChatGPT suggest treatments for depression patients in place of a general practitioner. It turned out that it recommended, without apparent bias, the standard textbook treatments, unlike doctors, who often deviate from them. Technology tends to be literal and logical; we are not.

What else happened this week

– The CCCB is opening a very interesting exhibition on artificial intelligence that you should visit in Barcelona. In the meantime, here are Teresa Sesé’s review and one of the texts from Marta Peirano’s catalogue.

– The usual scams are now fully operational with new tools. For example, the case of Tomás, whom scammers tried to blackmail with a fake porn video.

– A detailed analysis of the king of fake photos, the Pixel 8.

– We said last week that Gary Marcus, professor of psychology and neural science at New York University, stopped by to say that AI is being oversold and doesn’t work all that well. La Contra took the opportunity to interview him, and I highly recommend it: “What should worry Google and Microsoft is not that it does internet searches better than them (…) it is that ChatGPT fills the internet with errors and hallucinations and ends up making any search unreliable, to the point that people stop using search engines. This is what is called the echo chamber effect: the autophagy of AI. The great threat for search engines is that they end up poisoned by its lies. And, in the end, there is no way to know what is true or false when you search for something on the internet.” Another interesting Contra: this one with Douglas Rushkoff, a media critic famous here lately because Yolanda Díaz spoke about his thesis.

– Marc Andreessen has published a techno-optimist manifesto that has generated a lot of talk because, more than celebrating artificial intelligence, science, and progress, it champions the infinite growth of technological capitalism.

– An American think tank, the RAND Corporation, has published a report explaining that some appropriately tuned AIs can help not with manufacturing biological weapons but with planning an attack using them. Recall the concern of Dario Amodei, of Anthropic, about this matter.

– More about the fabulous history of the Herculaneum papyri.

– Google is helping 70 large cities around the world use AI to manage the traffic lights at their most infernal intersections: Google Green Light.

– No agreement in the lawsuit brought against Meta by its moderators in Kenya.

– Do you remember Luzía, the Spanish assistant for WhatsApp that we have talked about so much? Well, it has raised 9.5 million euros in a financing round in which Pau Gasol participated.

– On the use of AI in the film industry: jokes and concern this week over the artificial extras in this Disney movie. In the video game Cyberpunk 2077, a character has been given the voice of a deceased voice actor, with permission from his heirs.

– Josep Lluís Micó writes about predictive artificial intelligence.

– Ramón Peco explains what exactly Adobe’s verification tools, grouped under the name Content Credentials, consist of.

– The tremendous problem of being fired by the algorithm your company uses to manage its human resources.

– Another lawsuit against Meta, Microsoft, and Bloomberg, this time from a group of American Christian authors.

– Stanford has created a ranking of generative AI models, from most to least transparent.

– It seems that the restrictions applied to AIs before they are released can be disabled. “Companies are trying to put AIs to good uses and keep illegal uses behind a closed door,” Berkeley researcher Scott Emmons told the NYT. “But no one knows how to make a lock.”

– The foundation of Paul Allen, the co-founder of Microsoft, works on open-source models and datasets. But in the case of generative artificial intelligence, open source comes with many delicate nuances.

– The mayor of New York is making robocalls to his constituents in languages he does not speak, such as Spanish.

IAnxiety level this week: no news.