War is undergoing a revolution, and Ukraine has become its laboratory. The first steps were taken in Syria and Yemen between 2016 and 2018, when what are now known as autonomous weapons began to be used. Their use then escalated during the 2020 conflict between Armenia and Azerbaijan for control of Nagorno-Karabakh, a war that went unnoticed amid the noise of the pandemic but ended in an overwhelming victory for Azerbaijan, built on the massive use of military drones of Iranian and Turkish origin.
Now the Russian invasion of Ukraine has consolidated this trend, although the lack of transparency prevents us from knowing to what extent the artificial intelligence (AI) provided by the United States has been decisive in enabling the Ukrainians not only to resist but to recover ground from the Russian invaders.
This is happening through the massive use of so-called lethal autonomous weapons systems (SALA, from the Spanish acronym), but also through the cross-cutting use of AI systems that automate war and produce revolutionary changes in it, changes that dwarf those once introduced by the stirrup, gunpowder, the airplane or atomic energy.
In this sense, AI modifies the entire theater of operations. First, through sensory monitoring of the combatants. Second, through autonomous weapons that increase the lethality of military actions. And third, by connecting all this information through a platform that offers integrated command and automates the famous OODA decision cycle (observe, orient, decide, act). This combination of factors makes war artificial and exploits the vulnerabilities that AI detects in the enemy's military deployment and capabilities, a shift of focus that upsets the balance of power once grounded in what Napoleon called logistics.
According to the International Committee of the Red Cross, SALAs encompass more than 130 types of weapons: autonomous combat aircraft, swarms of combat drones and unmanned submarines, among other robotic military devices that use AI systems.
But if we look at the census of systems compiled by the Future of Life Institute, the figure rises to nearly 300. Most operate under human supervision in some of their critical phases, though some act with complete independence; these belong to what are called killer robots. In any case, the known SALAs are still controlled by humans, at least for now, even though they are designed to think for themselves and to act free of ethical constraints.
Some are defensive in nature, such as the Phalanx system, which shoots down aircraft automatically, or the Locust, which saturates enemy defenses by creating large numbers of false targets. Others are offensive, like the Saudi SAQR1 drone, which carries missiles and laser-guided bombs, or the Russian T-14 main battle tank, which operates as an automated armored platform.
In all of them, the objective is to replace human fallibility with an algorithmic substitute that avoids the risk of error, or of hesitation, that can arise when a soldier uses a weapon against the enemy. This introduces a bias that maximizes lethal impact while reducing human control over the decision and over the harm it is intended to cause.
In doing so, they depersonalize war and increase its dehumanizing effects: not only because the moral tension that accompanies a combatant pointing a weapon at a target disappears, but because they neutralize the demand of proportionality in the means of combat, as well as respect for the Geneva Conventions on which international humanitarian law is founded.
All in all, the greatest contribution of the SALAs lies not in their lethal autonomy but in the interconnection of the information they manage, among themselves and with the thousands of soldiers sensorized through smart textiles, exoskeletons and satellite-guided weapons connected via Wi-Fi. In this way, AI monitors the war by digitizing it within a platform of cloud-connected services that guarantees real-time knowledge of the combat forces' situation, their status and how the environment in which they operate affects them.
War thus takes on a technological transversality in which centralized AI systems manage both the satellite coverage that allows front-line combatants to be sensorized and the cloud computing that provides calculation services to the laser guidance of missiles and howitzers, to the use of drones, and to the autonomous vehicles, ships and aircraft that carry out operations.
This integrated AI design makes no secret of its orientation toward robustness, understood as increasing lethal effects on the enemy. It starts from a nihilistic premise: the SALAs do not recognize a person in the human being, as international humanitarian law does when it sees a combatant in him, but rather an object devoid of dignity that must be eliminated. We would therefore be facing a process of total automation of war, one that underscores the weight of calculation as a military factor. A geopolitical calculation that may help explain the interest in prolonging the war in Ukraine if it is indeed seen as a laboratory, especially if the future showdown between the United States and China moves to the Taiwan Strait.