Seven decades have passed since two milestones of exploration: the conquest of Everest (tomorrow, Monday, marks the anniversary of Tenzing and Hillary’s feat) and, give or take, the first steps of what we now know as artificial intelligence (AI). To be precise, building on Alan Turing’s work of 1950, it was not until 1956 that the foundations of AI were laid at the Dartmouth summer research project on artificial intelligence.

Despite their disparity, these two lines of exploration answer the same human desire to push the mind beyond its limits. The first mountaineers longed to know whether it was possible to survive in the depressurized abode of the gods, while the inventors of AI ventured into uncharted territory beyond human intelligence.

The parallel also shows in a curious pattern of behavior that now accompanies both explorations. On Everest it has been evident for 20 or 25 years; in AI it is much more recent. The point is that, once these superhuman limits were discovered, an economic interest arose that allows anyone to take part. On Everest the result is terrifying. In the field of AI, the effect remains to be seen.

For an average of 50,000 dollars, any tourist of ordinary physique and almost no experience can climb Everest, because an industry exists to roll out a carpet all the way to the top. Carried almost on a tray like a child, sleeping in carpeted tents, doped with supplemental oxygen, clinging to a fixed rope, endangering the lives of poorly paid Sherpas, the fake mountaineer will set foot on the summit and then recount it epically on social media, omitting the names of those who got him there, even though they are the real mountaineers.

He may also panic in the death zone and set off a repeat of 1996. Or pressure the Sherpa (the client gives the orders) into forcing a reckless ascent. Unfortunately, there is an extensive literature on the subject. And a good dozen deaths since the start of this season.

Generative AI, which we can already use on our phones, can be read positively (advances that will improve our lives), negatively (destruction of jobs, the consecration of fake news) or, in a third reading, as a revelation of the heights human vanity can reach.

We have travel agencies selling elite-mountaineer credentials, and we also have the hustlers who offer you anything you want thanks to AI: from singing Chicken Teriyaki in Rosalía’s voice to writing the great American novel in an afternoon. It is just one example, but a very telling one, of the times marked by the emergence of OpenAI and its philosopher’s stone. The SudoWrite platform invites customers to become writers of long-form fiction with no experience required: “An AI that writes and puts the author in charge,” reads its advertising, and while it is not entirely clear what the italics are meant to tell us, one can guess.

Leaving aside the detail that this app must be trained on material borrowed from millions of actual authors, what is the point of pretending to be a novelist if you are not, especially when the publishing industry churns out so many new titles that you would need millennia to absorb them? Will your circle admire you more for a purchased photo of Everest, like the ones that catch you looking foolish on the roller coaster and are sold to you at the exit? Who are you trying to fool? (Here the answer is not so obvious.)

Luckily, there are ways to unmask the sorcerer’s apprentices. Like the tracks on the mountain. Or like those applications, available to anyone, into which texts of suspicious origin can be fed and which report what percentage is tainted by artificial authorship. The polygraph 3.0.