ROBOTICS AND AUTONOMOUS SYSTEMS
Roboticist: Lethal Autonomy 'Inevitable'
Despite continued controversy regarding robots that can apply lethal force autonomously, militaries are continuing to develop these capabilities at a rapid rate, said a roboticist at the International Neuroethics Society meeting Nov. 13.
“Unless regulated by international treaties, lethal autonomy is inevitable,” said Ronald Arkin, associate dean for research and space planning at the School of Interactive Computing at Georgia Tech.
“It’s already here,” he added.
While most weaponized unmanned robots remain under direct human control, many nations are moving toward fuller autonomy — much to the dismay of humanitarian groups and nongovernmental organizations, he said.
Nearly 60 such groups have called for an outright worldwide ban on these autonomous technologies, he said.
It is an issue that has been, and continues to be, debated heavily at the United Nations. However, banning these systems without further understanding of their capabilities is premature, Arkin said.
Despite continued disparagement from human rights groups, Arkin said the data suggests that there is significantly less collateral damage with the use of certain unmanned systems.
“Atrocities [are] an inherent part of warfare,” Arkin said. Autonomous systems could be used to reduce casualties, he added. The goal now is to make these systems more humane. “We don’t want terminators,” said Arkin.
However, he added, the definitions of autonomy and human control both must be established before governing agencies make laws regarding the development and deployment of these systems.
“Philosophers… have been figuring out the right ways [for men] to kill each other for thousands of years, codifying that for the last several hundred years and actually having international agreements on what is the right way… for us to slaughter each other in the battlefield,” he said.
Now they want robots to conform to those standards as well, he added.
The hope among roboticists in this field is that the technology — being less prone to human, emotional error — will one day eliminate some of the unnecessary casualties in warfare, he said.
It is Arkin's hope that these systems will have the capacity not only to decide when to fire, but also when not to fire. They could not only more effectively seek out targets, but also eliminate the use of blunt force, he said.
“There is a weak link in the kill chain,” he said. “Human beings, unfortunately — not all — stray from what should be done” in combat situations.
One of the biggest questions roboticists need to answer is: “Can we find out ways that can make them outperform human warfighters with respect to ethical performance?” Arkin said.
For the military, being able to rely more heavily on robots would allow it to “expand the battlefield,” and influence larger areas for longer periods of time, he added.
“The overall goal, from the military’s point of view, is to reduce… casualties,” Arkin said.
While he admitted these systems would not be perfect, and that they would still kill civilians, he thinks it likely that they will kill far fewer than human warfighters. These systems may one day be made “more humane, which translates into better compliance with international humanitarian law,” he added.