Any sufficiently advanced technology is indistinguishable from magic. The aphorism is Arthur C. Clarke's, and it is often used to describe situations where a technology exceeds common understanding. It is also used in science fiction to justify seemingly magical or supernatural phenomena within a scientific or technological framework. If our great-great-grandfather from the 19th century came across a modern smartphone, he would probably consider its capabilities magic, since the technology that makes them possible would far exceed his level of technological understanding. The underlying idea is that scientific and technological knowledge can explain phenomena that would otherwise seem inexplicable or mysterious.
Fast forward to the 21st century and swap the smartphone for artificial intelligence. The scientific and technological knowledge necessary, not just to understand how AI works but even to understand what we are talking about, is far beyond us. If we multiply that by the uncertainty of its expected economic and social impact, and raise it to the power of the ethical and philosophical debates that surround AI, we will understand why it feels like magic to us. When it comes to AI, we are all our own great-great-grandfather. And this worries us.
Kate Crawford, artist, AI researcher and author of the book Atlas of AI, said in 2016, in a talk at Sónar D in Barcelona, that AI was “a white man’s problem”, and cited Mark Zuckerberg and Elon Musk as the greatest exponents of a struggle for technological hegemony. At the time I took it as a boutade, but time has proved her right. It turns out that as long as AI (or computing, or automation in general) affected manual and repetitive jobs, which in the North American context are mostly held by Black people and women, it didn’t worry anyone too much.
It has been with the emergence of the latest wave of generative AI, capable of imitating work that is supposedly intellectual or cognitive, that we have begun to worry about its impact on society. Coincidentally, most of those jobs are held by white men, many of them with prostate problems. This does not mean the rest of us are unaffected. We don’t like being our own great-great-grandfather, and even less having our chair pulled out from under us.
This is well illustrated by the lawsuit between Roberto Mata and the airline Avianca over injuries allegedly caused by a serving cart on board a 2019 flight. His lawyer, Steven Schwartz, searched for precedents with rulings favorable to his client’s interests in similar cases. He found no fewer than six, which he attached to the filing. The judge was stunned to discover that all the precedents were fake. When notified, the lawyer rechecked the cases one by one and insisted they all existed. What had happened? It turns out that Schwartz, to do his research, had used ChatGPT, which had listed up to six favorable cases in great detail. And not only that: when he later asked ChatGPT itself, case by case, whether they existed, it had answered that of course they did.
Schwartz, with more than thirty years of practice, can hardly be considered a novice in the world of law, but he is one in the world of AI (as, for that matter, are you and I). His big mistake was mistaking ChatGPT for an advanced Google, a Google that instead of giving answers in the form of links gives them in “prose presentable to a judge”; a technology whose results we understand and value, but which far exceeds our level of technological understanding; a technology indistinguishable from magic.
As an engineer, I am incapable of enjoying any magic trick other than card tricks. When I start to see gadgets, artifacts and mechanisms in a magic show, my brain fixates on my ignorance of the technology that makes the illusion possible, and I cannot relax until I find a rational explanation for it. When the magician makes the Statue of Liberty disappear, walks through a wall or walks on water, we see only him, but behind him there is an army of people who make the collective illusion possible. If we look at those people, the illusion fades.
Something similar happens with AI: when we notice the army of people who make the collective illusion possible, the magic disappears. And I’m not just talking about scientists, mathematicians, data engineers, knowledge engineers and developers. I am referring to the mass of precarious AI workers in countries like Nigeria, Malaysia, Nepal or the Philippines who work for one or two dollars an hour labeling the content that will later be used to train the machines. They are what are known as “taskers”: anonymous workers who do not know whom they work for and who spend their days tagging bicycles in Tesla video frames, branded clothing in Instagram images, offensive or illegal ChatGPT responses, unintelligible audio clips from Siri or Alexa, or counting heads and arms in videos of demonstrations.
These are temporary, alienating jobs carried out on websites such as Remotasks.com, Taskup.ai and DataAnnotation.com. If you want to see AI’s backstage, you can sign up and try it yourself. For companies like OpenAI, Google, Tesla and Meta to do magic in the first world with their artificial intelligence, there are people in the third world who have to sell their natural intelligence at a discount.
The mathematician and philosopher Bertrand Russell, after finishing a lecture on astronomy, was challenged by a lady in the audience who denied that the Earth was suspended in space. According to her, the Earth rested on the shell of a giant turtle. Russell replied by asking: supposing that were so, what did the turtle that supported the Earth rest on? “Very clever, Mr. Russell, but there are turtles all the way down,” the lady replied, convinced. The anecdote is probably apocryphal, but it too helps us take the magic out of AI: when we understand it well, we realize that there are humans all the way down.