ROBOTICS AND AUTONOMOUS SYSTEMS

COMMENTARY: Opening the Replicator Program’s Pandora’s Box

12/27/2023
By Dr. James Giordano


China’s ongoing dedication to developing unmanned vehicles for military applications has prompted the United States to unveil the Replicator program, an initiative described by Deputy Defense Secretary Kathleen Hicks as an effort to “master the technology of tomorrow” by creating and employing “attritable, autonomous systems in all domains.”

While Hicks was quick to assure that these technologies will be “in line with our responsible approach to AI and autonomous systems,” the use of such systems can be seen as a Pandora’s box: initial perceptions suggest these technologies may decrease the human costs of engagement, but they also prompt questions of whether, and to what extent, such systems may affect thresholds and tolerances for engaging in “drone warfare.”

The use of drones of any sort, whether non-autonomous, semi-autonomous or autonomous, may reduce national governments’ and societies’ tolerance for conflict and war for several reasons. These include tactical and/or strategic considerations of such systems’ battlefield effectiveness and efficiency; the risks of human casualties incurred by their use; and net incentives, through which a society’s tolerance for conflict can be seen as a relative calculus of perceived necessity, the expected value of gains from success, losses from failure and the material and human costs of potential outcomes.
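One way to make this calculus concrete, offered strictly as an illustrative formalization rather than the author's own model, is as an expected-net-value condition: a society's tolerance for engagement persists only while the perceived expected value remains positive.

```latex
% Illustrative formalization; all symbols are assumptions for exposition.
% p        = perceived probability of success
% G        = expected gains from success
% L        = expected losses from failure
% C_m, C_h = material and human costs of engagement
\[
  V_{\text{engage}} = p\,G - (1 - p)\,L - C_m - C_h,
  \qquad \text{tolerance persists while } V_{\text{engage}} > 0.
\]
```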

Axiomatically, Hicks’ identification of these systems as “attritable” means that these technologies are designed to be replaceable, and thus the cost of their production in relation to possible losses is constituent to their overall value proposition.

Similarly, if there is some sense that the use of these technologies could reduce the risk of harm to the combatants or populations involved, the lower relative human costs might promote conflict, at least initially.

However, if the destruction of machines or structural targets is not a sufficient deterrent, machine-to-machine engagement will become disadvantageous, and it is likely that more tangible or advantageous human assets, including populated command, control and domiciliary centers, will be targeted. Indeed, machine systems may enable the group doing the targeting to strike enemy assets more effectively and at lower human cost.

If such technological weapon systems are relatively common knowledge, there may be an iteratively decreasing tolerance for human battlefield engagements, and the availability, viability and/or cost of these technologies would be largely irrelevant to public reaction to loss, as these sentiments tend to be emotional rather than technical or financial — at least initially.

In this light, the potential for technological loss, versus human loss, would tend to reduce a population’s tolerance for human harm and, either implicitly or explicitly, increase tolerance for machine-versus-machine engagements. If all parties possessing these technologies adopted a posture of machines targeting only other machines, and not humans, thresholds for conflict could be lowered given the perceived reduction in human costs. But as previously noted, sustaining machine-to-machine, or structural, targeting is a tenuous, if not unrealistic, notion.

As is true of most advanced weapons systems, the costs of hardware and programming required to develop and maintain such autonomous systems are significant. And although such expense tends to diminish over time, this is nascent technology, and it is difficult to predict how long costs will remain high. The high initial costs could therefore decrease tolerance and raise thresholds for engaging in conflict.

For example, the high cost of these technologies could affect how military commanders use such resources in certain circumstances and, in this way, may alter tactical and strategic decisions, contributing, in part, to new determinations of acceptable losses.

In this way, the introduction of unmanned, autonomous military technology may change tolerance for engagement much as the Cold War threat of nuclear weapons did, such that one state may be able to deter the actions of another through sheer possession of such armament, given that deterrence aims to modify or prevent behavior by leveraging the high costs of an action against its relative gains.
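Stated as a simple inequality, again as an illustrative formalization rather than doctrine, deterrence holds when the cost an actor expects to incur from an action exceeds the gain it expects from taking it.

```latex
% Illustrative deterrence condition; symbols are assumptions for exposition.
% C(a) = an actor's expected cost of taking action a
% G(a) = that actor's expected gain from action a
\[
  \text{action } a \text{ is deterred} \iff C(a) > G(a).
\]
```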

To be sure, the development and planned use of such technology may alter international relations by empowering the states that possess it to deter competitors’ and adversaries’ hostile intent and to foster non-bellicose approaches to settling differences, particularly when such technology is limited in availability or exists in greater quantity or more effective iterations on one side, thereby producing asymmetrical capability.

As well, it may be that the use of autonomous systems with increasing levels of decisional capacity will enable more specific mission targeting and, in doing so, reduce large-scale conflict. But this rests upon the presumption that the current model of military engagement, such as the counterterrorism and counterinsurgency operations employed by Western nations, especially the United States, will remain the norm. This is unlikely, as the shifting architectonics of science and technology, socioeconomics and geopolitics are evoking very differently scaled conflicts, as evidenced by the current wars in Ukraine and Gaza.

Hence, it is highly probable that the use of advanced, autonomous, unmanned technologies in military operations will change tolerances and thresholds for engagement, based upon the relative costs that nations — and populations — ascribe to technological and human resources.

For example, certain nation states — or non-state actors — might place a relatively higher value upon technological resources than human assets — due in part to factors such as a large national population and/or certain ideological constructs — and in such circumstances, the threshold for warfare in which there is greater potential for technological versus human loss would be elevated, and tolerance for such loss would be low.

More difficult to forecast is how a conflict pitting humans directly against decisionally capable autonomous machines would affect thresholds and tolerances for warfare, given that differing values would influence perspectives and actions relative to the probabilities of human and technological losses that would be incurred.

Predictions of this sort require analysis of, and preparation for, the multi-domain and multidimensional effects of emerging science and technology through ongoing research and the evaluation of current and future tools and processes for ethico-legal, social and economic guidance.

Priorities must focus upon: realistic appraisal of the science and technology and its capabilities and limitations; surveillance of which state and/or non-state actors possess such science and technology, and of how cultural and political factors tend to influence those actors’ decisions and behaviors; acknowledgement that autonomous, unmanned technologies can affect the conduct of, and amenability to, conflict; recognition of the potential trajectories and valences of such effects; and development of valid models that can forecast end-state scenarios and thereby inform and plan current and future policy governing research, development and use in practice.

Such technology forecasting and mapping could use exploratory and normative methods to identify and assess capabilities and gaps in technology, information and socioeconomic and political structures relevant to military technology applications. Crucial to these efforts would be acknowledging and addressing, including proposing responses to, the changing nature of warfare and the shifting thresholds and tolerances for war.

Methods for predicting technology futures include group idea building and program planning, informed forecasting and multi-criteria decision-making. Each has value when used singly, yet each, when employed alone, is somewhat limited in scope and application.
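As a minimal sketch of the last of these methods, the following weighted-sum model ranks candidate technology scenarios against several criteria. The criteria names, weights and scenario ratings below are assumptions chosen for illustration, not values drawn from this commentary.

```python
# A minimal weighted-sum multi-criteria decision model for ranking candidate
# technology scenarios. All criteria, weights and ratings are illustrative
# assumptions, not values drawn from this commentary.

CRITERIA_WEIGHTS = {
    "technical_feasibility": 0.30,
    "cost_trajectory": 0.25,     # higher score = more favorable cost curve
    "geopolitical_risk": 0.25,   # higher score = lower risk
    "ethico_legal_fit": 0.20,
}

def score_scenario(ratings: dict[str, float]) -> float:
    """Combine per-criterion ratings (0 to 1) into one weighted score."""
    return sum(weight * ratings[criterion]
               for criterion, weight in CRITERIA_WEIGHTS.items())

scenarios = {
    "machine-vs-machine engagement": {
        "technical_feasibility": 0.8, "cost_trajectory": 0.6,
        "geopolitical_risk": 0.4, "ethico_legal_fit": 0.5,
    },
    "human-supervised swarms": {
        "technical_feasibility": 0.7, "cost_trajectory": 0.5,
        "geopolitical_risk": 0.6, "ethico_legal_fit": 0.7,
    },
}

# Rank scenarios from highest to lowest composite score.
for name, ratings in sorted(scenarios.items(),
                            key=lambda item: score_scenario(item[1]),
                            reverse=True):
    print(f"{name}: {score_scenario(ratings):.2f}")
```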

Instead, the “advanced integrative scientific convergence approach” enables a combinatory utility. This entails three overlapping practices.

The first is foresight: identification and realistic definition of possible developments and uses of science and technology — in this case, autonomous, unmanned technologies.

Second is the assessment of the possible and most probable effects of the technology upon particular domains and dimensions — in this case military operations and tolerances and thresholds for conflict.

The third is “prediction,” which describes the features, performance and manifestations of projected trajectories of the technology at defined points in the future.
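Read together, the three practices suggest a staged analysis. The toy pipeline below shows one way such a chain might be organized in code; every type, field and projection rule here is a hypothetical placeholder, not a published specification of the approach.

```python
from dataclasses import dataclass, field

@dataclass
class TechnologySignal:
    """A candidate development identified during the foresight stage."""
    name: str
    maturity: float                      # 0 (concept) to 1 (fielded)
    domains: list[str] = field(default_factory=list)

def foresight(raw_reports: list[str]) -> list[TechnologySignal]:
    """Stage 1: identify and realistically define possible developments."""
    # Placeholder logic; real foresight would draw on horizon-scanning sources.
    return [TechnologySignal(name=r, maturity=0.3, domains=["military"])
            for r in raw_reports]

def assess(signals: list[TechnologySignal],
           domain: str) -> list[tuple[TechnologySignal, float]]:
    """Stage 2: estimate each signal's probable effect on a given domain."""
    return [(s, s.maturity if domain in s.domains else 0.0) for s in signals]

def predict(assessed: list[tuple[TechnologySignal, float]],
            horizon_years: int) -> dict[str, float]:
    """Stage 3: project each trajectory to a defined point in the future."""
    # Naive linear growth, purely for illustration.
    return {s.name: min(1.0, effect + 0.05 * horizon_years)
            for s, effect in assessed}

signals = foresight(["autonomous swarms", "loitering munitions"])
print(predict(assess(signals, "military"), horizon_years=5))
```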

The advanced integrative scientific convergence approach could be used to pulse current and emerging international developments and trends in autonomous, unmanned technology and to relate these both to other domains of science and technology and to present and projected socio-political factors, in order to model near- and intermediate-range future uses and effects in specific settings on local, regional and international scales.

At a minimum, this may enable more detailed descriptions of both the current state of swarming technology and the socioeconomic and geopolitical variables that exert pushing and/or pulling forces upon its use.

Still, it is important to recognize that the advanced integrative scientific convergence approach, like the advanced autonomous technologies upon which its analyses focus, remains incipient and should therefore be regarded as a work in progress, requiring ongoing assessment and revision to keep pace with global progress in technology and with the shifting geopolitical and military postures that shape its employment.

But the proverbial “cat is out of the bag,” or Pandora’s box is open: these technologies are now a current reality and a clear and present issue. Hence, the time is now to consider, and to foster discourse and guidance focusing upon, how the development and proposed use of such advanced technologies will change political, socioeconomic and public postures toward, and engagement in, warfare.

Dr. James Giordano is Pellegrino Center professor at Georgetown University; Stockdale distinguished fellow of science, technology and ethics at the U.S. Naval Academy; and is currently serving as a non-resident senior fellow of the Simon Center for the Professional Military Ethic at West Point.

The views and opinions expressed in this essay are those of the author and do not necessarily reflect those of the Defense Department and/or those institutions and organizations supporting his work.
