COMMENTARY | ROBOTICS AND AUTONOMOUS SYSTEMS
Ukraine: A Living Lab for AI Warfare
Since Russia’s invasion of Ukraine, much debate has centered on whether the conflict represents conventional warfare or some revolutionary type of contest.
An article in the New Yorker in March 2022 described the conflict as “the first TikTok War.” Ukraine’s minister of digital transformation, Mykhailo Fedorov, has called it a “technology war.” Alex Karp, CEO of data analytics company Palantir, has suggested that the technology being used is changing the competitive advantage of a small country versus a larger adversary. The Washington Post in December ran a front-page article about how Ukraine and Russia are fighting the “first full-scale drone war.”
There is also increasing talk about how this conflict accelerates the arrival of fully autonomous drones and other weapon systems to the battlefield. The role of artificial intelligence in warfare looms large in such commentaries.
A drone war, however, is not necessarily an AI war. To what extent is the Ukraine conflict also characterized by AI?
Kai-Fu Lee, CEO of Sinovation Ventures, has called AI weapon systems the “third revolution in warfare,” after gunpowder and nuclear weapons. Is that revolution unfolding before our eyes? Does Ukraine signal a change in the character of warfare?
Not yet. While still short of changing the character of war, we believe Ukraine is a laboratory in which the next form of warfare is being created. It is not a laboratory on the margins, but a center-stage, relentless and unprecedented effort to fine-tune, adapt and improve AI-enabled or AI-enhanced systems for immediate deployment. That effort is paving the way for AI warfare in the future.
It is a future that is expected. Over the past few years, visions of AI-enabled warfare have abounded and received different conceptual labels. Retired Marine Corps Gen. John Allen and SparkCognition founder Amir Husain have called it “hyperwar,” a form of AI-controlled warfare with little to no human decision making involved. Former Deputy Secretary of Defense Robert Work and others have termed this “algorithmic warfare,” in which autonomous systems and weapons independently start selecting their course of action based on the situation in which they find themselves.
The Defense Advanced Research Projects Agency has come up with the name “mosaic warfare,” which is a more tactical term that combines conventional platforms with uncrewed systems to achieve battlefield advantages.
Most recently, Nand Mulchandani, CIA chief technology officer, and retired Air Force Lt. Gen. Jack Shanahan — who was the inaugural director of the Joint Artificial Intelligence Center — have coined the term “software defined warfare” as part of a vision in which software will be the crucial part of the defense architecture needed for next-generation warfighting systems.
What all these concepts have in common is the vision of a truly networked battlefield in which data moves at the speed of light to connect not only sensors to shooters, but also the totality of deployed forces and platforms. It is a future scenario that is envisioned partly because of fast-paced technological developments, but also because of fears related to geopolitical competition and what near-peer competitors might be able to deploy in the near future.
The Russia-Ukraine war falls short of these future scenarios. Yet it clearly brings these futurist visions of warfare closer to reality. The conflict is an unprecedented testing ground for AI. In some areas, its use has been clear. For example, the now-ubiquitous employment of drones and loitering munitions by both sides offers AI-enhanced autonomous capabilities in flight, targeting and firing. The use of loitering munitions, also known as kamikaze or suicide drones or smart missiles, has received much attention in the international media, whether as an asset redefining the future of tactical warfare or as the source of ethical and legal concerns.
The drones used include military-grade UAVs but also commercial models such as the Mavic series from Chinese manufacturer DJI, which are much cheaper and easier to obtain.
In addition to aerial systems, autonomous ships, undersea drones for mine hunting and uncrewed ground vehicles have been deployed. The combined use of aerial and sea drones in the October attack on Russia’s Black Sea flagship vessel, the Admiral Makarov, was perceived by some analysts as perhaps a new type of warfare.
In general, AI is heavily used in systems that integrate target and object recognition with satellite imagery. In fact, AI’s most widespread use in the Ukraine war is in geospatial intelligence. AI is used to analyze satellite images, but also to geolocate and analyze open-source data such as social media photos in geopolitically sensitive locations. Neural networks are used, for example, to combine ground-level photos, drone video footage and satellite imagery to enhance intelligence in unique ways to produce strategic and tactical intelligence advantages.
This reflects a broader trend: the recruitment of AI for data analytics on the battlefield. AI is increasingly, and systematically, used in the conflict to analyze vast amounts of data and produce battlefield intelligence regarding the strategy and tactics of the parties to the conflict. This trend is reinforced by the convergence of other developments, including the growing availability of low-Earth orbit satellites and the unprecedented availability of big data from open sources.
In addition, AI itself has undergone dramatic technical improvements, with the growing accuracy of machine learning models and systems, as well as the increased capability of AI systems to integrate and cross-reference data from various sources.
What makes this conflict unique is the unprecedented willingness of foreign geospatial intelligence companies to assist Ukraine by using AI-enhanced systems to convert satellite imagery into intelligence, surveillance, and reconnaissance advantages. U.S. companies play a leading role in this. The company Palantir Technologies, for one, has provided its AI software to analyze how the war has been unfolding, to understand troop movements and conduct battlefield damage assessments. Other companies such as Planet Labs, BlackSky Technology and Maxar Technologies are also constantly producing satellite imagery about the conflict. Based on requests by Ukraine, some of this data is shared almost instantly with the Ukrainian government and defense forces.
The Russia-Ukraine war can also be considered the first conflict in which AI-enhanced facial recognition software has been used on a substantial scale. In March 2022, Ukraine’s defense ministry began using facial recognition software produced by the U.S. company Clearview AI. This allows Ukraine to identify dead soldiers, uncover Russian assailants and combat misinformation.
What’s more, AI is playing an important role in electronic warfare and encryption. For example, the U.S. company Primer has deployed its AI tools to analyze unencrypted Russian radio communications. This illustrates how AI systems are constantly retrained and adapted in the field, for example to handle idiosyncrasies such as colloquial terms for weaponry.
A less visible but important use of AI in and around the Ukraine conflict is in cyber warfare, especially in support of defensive capabilities. Early analysis conducted by Microsoft for a June report shows that cyber defenses may have proven relatively successful, in part because of advances in AI-enhanced threat intelligence and the quick distribution of protective software to cloud services and other computer networks.
The flip side to this is the more visible use of AI surrounding the conflict: the spread of misinformation and the use of deep fakes as part of information warfare. AI has, for example, been used to create face images for fake social media accounts used in propaganda campaigns. While the spread of disinformation is not new, AI offers unprecedented opportunities for scaling and targeting such campaigns, especially in combination with the broad range of social media platforms.
Again, there are converging trends: the use of recommendation algorithms to push content directly to targeted users is growing, and the AI systems that can autonomously create and spread messages are becoming more sophisticated. It is the perfect storm for future cyber warfare, with AI at the center.
These examples illustrate that the current conflict in Ukraine is a testing ground for AI technology. Of course, there are limits. While modern AI-enhanced systems are being tested, countries are still reluctant to offer Ukraine access to their latest and most advanced systems, in part because of fears that these might end up in the wrong hands.
Nevertheless, as an AI laboratory, the conflict is unique: unprecedented funding, international engagement, and technological support from across the public and private sectors in a setting that may continue for several more years. The longevity of the conflict allows companies to fine-tune, adapt and improve their AI systems on the go. This is where AI-enhanced weapons and systems are markedly different from conventional ones: the longer they are deployed, the more data can be collected to improve them directly. As such, this conflict is a major stepping stone toward the networked battlefield and the AI wars of the future.
The media headlines about AI-enhanced weapon systems are just the tip of the iceberg. Most AI is and will be deployed in systems far removed from the battlefield, in cloud computing and data analysis systems related to areas such as planning, logistics and preventive maintenance. It is an often-hidden side of the AI-driven revolution in warfare that has now been set in motion and will not stop.
While the character of the war may not yet be determined by AI, the Russia-Ukraine war is akin to a laboratory setting in which many companies and governments are able to constantly train and test AI systems for a wide range of capabilities, functionalities and applications.
This is the tragic paradox. Each day that the conflict continues, and human beings are losing their lives in horrible ways, AI systems are being trained with real data from a real battleground — not to stop the suffering and end the war, but to become more effective in fighting the next one: the AI war.
Retired Army Maj. Gen. Robin Fontes most recently served as the deputy commanding general of operations at Army Cyber Command. She is an advisor to RAIN, a global knowledge platform on the intersection of Defense and AI. Dr. Jorrit Kamminga is director of RAIN Ethics, a division of RAIN Defense + AI.