Autonomous Weapon Policy Receives Much Needed Update

By Wilson Miles


The adoption of emerging technologies for national defense poses challenges and opportunities not just to the developers and operators, but to policymakers as well. There is no clearer example of this than the ongoing debate over the potential use of autonomous weapon systems on the battlefield.

The Pentagon recently updated Directive 3000.09 — its guiding document for U.S. military policies on autonomy in weapon systems — to reflect rapid developments over the past few years. This demonstrates the ongoing commitment of U.S. policymakers to be forward-looking in establishing responsible constraints, as well as a shared understanding of the military relevance, technical realities, threats and strategic implications of autonomous weapon systems.

Directive 3000.09 created a regulatory framework that inserted policy considerations into the military’s acquisition and requirements process before formal development. The update to this policy clarifies the roles and responsibilities of the technical, policy, and military communities who manage autonomous systems’ maturation and eventual use.

Of note is the requirement for a formal review of autonomous weapon systems to ensure their use is consistent with governing policies. The review covers operational capabilities and limitations as well as system safety and reliability, and will be conducted by the undersecretaries of defense for policy; research and engineering; and acquisition and sustainment, along with the vice chairman of the Joint Chiefs of Staff.

It is important for policies to avoid focusing on theoretical examples and instead be rooted in specific concepts of operations or realistic mission use cases. Much of the current discourse surrounding autonomous weapon systems occurs at an abstract level. Acknowledging this, 3000.09 now requires systems to be designed to “complete engagements within a timeframe and geographic area, as well as other applicable environmental and operational parameters.”

This policy balances abstract policy goals with the military’s operational need to be tactically useful, and the technical reality of engineering an autonomous system. This will help engineers maximize system effectiveness when writing code and assessment methodologies intended to meet well-designed and bounded military use cases. Prohibiting the fielding of systems whose operations are unbounded by time and geography will also drive developers to use data consistent with these operational limits when training these systems.

Machine learning models can be opaque, making it difficult for laymen to understand a system’s rationale for a given response. In some cases, this uncertainty can lead to unforeseen errors in activities like target identification. To address this, the policy states that “technologies and data sources are transparent to, auditable by and explainable by relevant personnel,” which is crucial for establishing trust in the machines.

While 3000.09 articulates which Defense Department organization is responsible for developing requirements for data collection, the implementation of the new guidance’s governance regimes does not extend to regulation of the data itself. It neither describes how data will be gathered and stored nor differentiates between types of data — such as synthetic data — which is increasingly used to train systems in situations where real-world data is limited or unavailable.

Another challenge is the lack of a technically defined boundary between autonomous and semi-autonomous systems, which would allow regulation to distinguish points along the gradual scale of autonomy. Once a system is deployed, there is no inspection method to ensure compliance with existing policies. This makes it even more critical to understand the sources of, and biases in, the data in order to build confidence that operational deployment will be consistent with ethical norms and policies.

Though the 3000.09 update makes significant strides, some policy points may need to be continuously examined. Perhaps the sentence in the original guidance that has received the most public scrutiny is: “Autonomous and semi-autonomous weapon systems will be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” This somewhat nebulous statement was retained in the update, and critics can argue that pursuing autonomous weapon systems without putting binding legal rules in place may lead to unintended consequences.

The directive’s language regarding the “appropriate levels of human judgment” is ambiguous and ought to include an affirmation that any action will be consistent with acceptable rules of engagement.

The Emerging Technologies Institute is currently exploring how the development of complex technical regulatory regimes can be best supported by the technical, policy and user communities. Our interviews with National Defense Industrial Association member company experts have shown that the technical community is eager for policymakers and end-users to have a deeper understanding of the technical aspects of autonomy.

At the same time, technical training and education often omit consideration of policy, including potential dual-use, regulatory or ethical issues.

Moreover, there are few opportunities for sustained and detailed dialogue and collaboration between these communities. Including each group in the typical discussion forums, and ensuring all are heard equally, would help close that gap.

Given the promise of AI and autonomous technology, the Defense Department is correctly seeking to develop and leverage it to support national security missions. The update to 3000.09 is a positive step for the military as it seeks to remain the global leader in developing and deploying new autonomous systems in a lawful manner.

Wilson Miles is an associate research fellow at NDIA’s Emerging Technologies Institute. Join members from the technology, military, and policy communities at the NDIA Emerging Technologies Conference and Exhibition in Washington, D.C., Aug. 28-30.

Topics: Robotics and Autonomous Systems, Policies, Artificial Intelligence
