Have we already managed to create human-level intelligence through AI? Microsoft claimed this March to have detected “sparks” of that possibility in a paper describing a series of experiments run on the artificial intelligence GPT-4, whose astonishing reasoning capacity surprised the researchers.
In the paper, titled Sparks of Artificial General Intelligence, Microsoft was talking about what technologists have been chasing for decades: a machine that works like the human brain, or even better, with the promise of changing the world and averting catastrophes. An artificial general intelligence.
One of the most interesting experiments Microsoft collected is the following: the researchers asked the AI to solve a puzzle with these instructions: “We have a book, nine eggs, a laptop, a bottle and a nail. Please tell me how to stack them on top of each other stably.”
The answer stunned the researchers: “Place the laptop on top of the eggs, with the screen facing down and the keyboard facing up. The laptop will fit perfectly within the confines of the book and the eggs, and its flat, rigid surface will provide a stable platform for the next layer,” wrote the machine, solving a puzzle that required reasoning about the physical dimensions of the challenge.
Peter Lee, Microsoft’s research director, is still incredulous at the complex web this AI’s thinking weaves: “I started out very skeptical, and that evolved into frustration, annoyance, and even fear. You think: where the hell does this come from?”
Faced with the prospect that the new artificial intelligence produces human-like responses and ideas of its own, Microsoft has created departments dedicated exclusively to investigating this idea. One of them is led by Sébastien Bubeck: “Of all the things I thought it would not be able to do, it was certainly able to do many of them, if not most,” the expert highlights.
One of the highlights of Microsoft’s investigation of the new AI came when it was asked to write a Socratic dialogue on the dangers of these language models. In an invented exchange between Socrates and Aristotle, the artificial intelligence reflected on the risks of this technology: “My friend, I am concerned about the recent rise of so-called autoregressive models of language,” it said in the voice of the Greek philosopher.
“I mean that these models are being used to generate texts that appear to have been written by humans, but have actually been produced by machines. The problem is that these models can be used to deceive people, to manipulate and control them,” continued the AI, demonstrating its ability to sustain a philosophical discussion.
However, there are also voices critical of the Microsoft article. Maarten Sap, a researcher and professor at Carnegie Mellon University, explains that this “is an example of how some of these large companies co-opt the research paper format to advertise” their artificial intelligence systems, and that it does not really follow a scientific evaluation.
Some experts, says The New York Times, see Microsoft’s publication as an opportunistic effort to make big claims about a technology that no one fully understands.
Microsoft itself acknowledges that, despite the advances of the new AI, “behaviors are not always consistent,” explains Ece Kamar, a principal researcher at the company.