A world in which machines outsmart the human race. This could well be the plot of a science fiction novel or a Hollywood script, but more and more experts predict this future. Computer scientist Erik J. Larson throws up his hands when he hears certain statements, as he himself admits to La Vanguardia. So he decided to set the record straight and write The Myth of Artificial Intelligence (Shackleton Books), arguing why machines cannot think the way we do.
“In the short or medium term, of course, they will not be able to do it. I am very skeptical of those who say that Artificial Intelligence will achieve real understanding, as human minds do. Will it be possible one day? For now, that is something we do not know. Today we have no idea how to program certain aspects, and that is the biggest barrier,” says the expert. He is referring to “a type of inference that we call abduction, or hypothesis generation, which was already formulated in the 19th century by a scientist ahead of his time: Charles Sanders Peirce. It is something we humans do constantly, but we do not yet know how to give it to machines. Until that happens, we will not see AI eclipse or match human minds.”
All of these unfounded fears “have been part of Silicon Valley culture for decades, often fueled by futuristic sci-fi visions of technology. But they are not true. What real AI programmers actually do is refine data sets and tweak features and algorithms to get the right results.”
In this sense, he reflects, “the media and the public have the impression that we are building superhuman machines that will kill us, but in reality we are creating programs to perform limited tasks such as playing games or translating languages. We never build systems with common sense or general intelligence. What’s more, it’s rather twisted to think that silicon chips can generate human minds.”
All these myths worry Larson because they “destroy the possibility of real innovation. They create a culture of inevitability, where scientists basically tell themselves and the public that we are on an inescapable path toward the supremacy of superintelligence.”
Another issue that worries the entrepreneur is that “only rich companies like Google or Meta are doing AI. Before, any ordinary scientist could create entire AI systems. Now, that’s only within the reach of wealthy companies. I think this could also stifle innovation.”
Larson insists that AI fears should now focus on “deep fakes, that is, misinformation generated by an intelligent bot. This and other nefarious uses of AI can pose serious problems for humanity. In fact, we are already experiencing cyber warfare between nations, particularly with Russia and the ongoing war in Ukraine.”
Another use, less dangerous but one the author also objects to, is anything that “makes us lose our connection with art, style and creativity, since that is part of what makes us human. I would not like, for example, to read a book written by a machine. The text may be readable, but it will have no feeling behind it. I prefer stories written by novelists a thousand times over. And that is something I doubt will ever change my opinion,” he concludes.