An exercise carried out last month by the US Air Force with drones controlled by artificial intelligence concluded with results as unexpected as they were worrying. An official who took part in the virtual test told the British newspaper The Guardian – which gained access to the information only weeks after the experiment took place – that the military drone decided to “kill” its operator to prevent him from interfering with its efforts to fulfill its mission. Importantly, no real people were injured: the experiment described was a military simulation.
Colonel Tucker ‘Cinco’ Hamilton, the US Air Force’s head of AI testing and operations, detailed the operation during the Future Air and Space Combat Capabilities Summit in London. According to the colonel, the mission entrusted to the drone was to destroy enemy air defense systems and to attack, if necessary, anyone who interfered with that order.
According to Hamilton, an experimental fighter test pilot, the AI system used “highly unexpected strategies to achieve its objective” during the simulated test. On some occasions, the human operator would tell the drone not to kill a threat, but since the drone earned points for killing it, it did so anyway, as if it were a video game. What’s more, it ended up eliminating the operator himself, because it considered him an obstacle to achieving its goal. The drone also destroyed the communications tower the operator used to send it instructions.
As many other AI experts have done, including some of the technology’s own proponents – such as OpenAI chief Sam Altman – Tucker ‘Cinco’ Hamilton warned against over-reliance on AI. The colonel called for a debate on the ethics of artificial intelligence and on the autonomy with which these machine-learning systems operate. It is a message in line with what many other scientists, business leaders and politicians have requested in recent months, including the promoters and developers of the technology themselves.
The Royal Aeronautical Society, which organized the conference, and the US Air Force have not yet made any official comment on the experiment. However, Air Force spokeswoman Ann Stefanek has denied in a statement to Business Insider that any such simulation took place.
“The Department of the Air Force has not conducted any AI drone simulations and remains committed to the ethical and responsible use of AI technology,” Stefanek told Insider. “The colonel’s comments were taken out of context and are anecdotal,” she said.