Viewpoint: AI Objectors Can Still Contribute to Nation’s Defense
The Defense Department is investing heavily in artificial intelligence. But some of America’s finest AI innovators are raising concerns about the military applications of their work.
Elon Musk, founder of SpaceX and Tesla, warns that machine intelligence may outstrip humans’ ability to control it. Employees of Google loudly and successfully protested the company’s involvement in the Defense Department’s Project Maven, an initiative that develops computer vision and machine learning techniques to search video for targets and threats.
Objections to military applications of new technology are principled and they have precedent, so we must take them seriously. Consider the anxiety created by military use of drones. Similar concerns produced international treaties that ban certain biological and chemical materials in warfare, though the Syrian regime and Russian assassins still put them to horrific use.
Innovators who object to offensive applications of AI can still contribute to the military defense of the nation. AI can serve warfighters and the civilians they protect even when it is not used offensively to find, fix and destroy adversaries.
Several challenges the Defense Department is facing make the point well. Personnel operate in conditions that are often extreme and that can compromise their ability to perform and survive. Soldiers push themselves to the limit to become physically fit, yet often injure themselves in the process. U.S. warfighters wield the most advanced military technology, yet some of our weapons cause traumatic brain injury and related effects in the troops who use them.
The solutions to these cases are similar: the use of technologies such as sensors, analytics, AI and machine learning to help prevent or preempt injury, or to better understand the problems and assess solutions that can make a difference.
Consider the work of aircraft maintainers who spend thousands of hours each year inside the dark, cramped quarters of aircraft wings and fuel tanks. The air can be toxic, and the risk of entrapment, asphyxiation and other injury is high. Current safety regimens require attendants to be stationed nearby to check workers' status, but a worker can be rendered unconscious by fumes before an attendant is aware. It is a high-risk environment with high potential for failure. This spurred the Air Force Sustainment Center and the Air Force Research Laboratory to invest in a system of wearable sensors and analytics that streams information about mechanics' physiological state, atmospheric conditions and exact position to a central monitoring station.
There, algorithms will continually assess their health and safety status in real time, allowing a single supervisor to monitor multiple workers simultaneously to improve safety.
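The kind of real-time check such a monitoring station might run can be sketched in a few lines. Everything below (the field names, thresholds and choice of signals) is illustrative, not the AFRL design:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One wearable-sensor snapshot for one worker (fields are hypothetical)."""
    worker_id: str
    heart_rate_bpm: float        # physiological state
    o2_pct: float                # atmospheric oxygen inside the tank
    voc_ppm: float               # fuel fumes (volatile organic compounds)
    minutes_since_motion: float  # proxy for unconsciousness or entrapment

def assess(r: Reading) -> list[str]:
    """Return alerts for one worker; an empty list means nominal.
    Thresholds are placeholders, not certified safety limits."""
    alerts = []
    if r.o2_pct < 19.5:
        alerts.append("low oxygen")
    if r.voc_ppm > 100:
        alerts.append("toxic fumes")
    if r.heart_rate_bpm < 40 or r.heart_rate_bpm > 180:
        alerts.append("abnormal heart rate")
    if r.minutes_since_motion > 2:
        alerts.append("no movement")
    return alerts

def monitor(stream: list[Reading]) -> dict[str, list[str]]:
    """One supervisor's view of many workers: alerts keyed by worker ID."""
    return {r.worker_id: a for r in stream if (a := assess(r))}
```

In practice the assessment would be a learned model over many more signals; the point of the sketch is the shape of the system, one stream in and a per-worker alert map out.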
This innovation was inspired by an unrelated project at AFRL's human universal measurement and assessment network laboratory. Subjects there were wired to a variety of sensors that monitored brain activity, respiration, heart rate and other responses as they performed tasks. AI algorithms then assessed their neuro-physiological signals, performance, workload and stress. The findings could be looped back into a simulation environment to improve training and learning, or applied operationally to augment systems so that, for example, automation might kick in to assist or offload tasks when the warfighter was overloaded, or before performance degraded.
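The sense-assess-augment cycle can be sketched as a simple loop: sense signals, score workload, and hand tasks to automation when the score crosses a threshold. The workload model and threshold below are toy placeholders, not AFRL's:

```python
def assess_workload(heart_rate: float, respiration: float,
                    task_queue_len: int) -> float:
    """Toy workload score in [0, 1] from sensed signals (placeholder model)."""
    hr_load = min(max((heart_rate - 60) / 80, 0.0), 1.0)
    resp_load = min(max((respiration - 12) / 18, 0.0), 1.0)
    queue_load = min(task_queue_len / 10, 1.0)
    return (hr_load + resp_load + queue_load) / 3

def augment(tasks: list[str], workload: float,
            threshold: float = 0.6) -> tuple[list[str], list[str]]:
    """Augment step: offload the tail of the task queue to automation
    when the assessed workload exceeds the (illustrative) threshold."""
    if workload <= threshold or len(tasks) <= 1:
        return tasks, []          # operator keeps everything
    keep = max(1, len(tasks) // 2)
    return tasks[:keep], tasks[keep:]  # (human tasks, automated tasks)
```

A real system would replace the hand-tuned score with a model trained on the lab's neuro-physiological data, but the control flow is the same.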
This transition of the lab’s “sense-assess-augment” framework and technology to the confined space problem demonstrates how the cross-pollination of technology will benefit Air Force maintainers and potentially others, too.
"Sensor-driven analytics and AI make important contributions to the well-being and protection of warfighters, and not just to their lethality."
Risk also exists outside confined spaces. Service members often operate in environments where heat, exertion and dehydration can have catastrophic effects on their performance if not managed properly. To address these challenges, the Army's Research Institute of Environmental Medicine is working on a system of sensors to monitor body temperature, hydration levels and physiological stress, so unit leaders can maximize the performance of their personnel and intervene before injuries from heat or cold occur.
Another example: Marines are one of the world's finest fighting forces, but not if they are injured and unavailable for duty. For many Marines, injuries come well before combat, during the rigors of fitness training. To reduce the frequency of these inadvertent injuries and thereby improve force readiness, the Office of Naval Research is developing the FitForce Planner. This system of analytics, combined with a mobile app, will help Marines plan optimized fitness regimens that build the balanced mix of mobility, muscular strength, endurance and flexibility they need, while reducing the risk of injury. FitForce instructors will be able to track the overall fitness of a unit or an individual Marine to better support readiness. If this sounds familiar to those who use consumer gadgets, consider it a Fitbit on steroids, minus the GPS tracking.
In fact, non-battle injuries, most from sports and physical training, led to more evacuations from Afghanistan and Iraq than battle injuries did, according to the Army Public Health Center. To better understand injury risks from sports and related activities, the Army Aeromedical Research Laboratory is using sensors to monitor and track impact, acceleration and motion. The data will inform leaders so they can prevent unnecessary injuries and, again, improve readiness.
Meanwhile, evidence from a recent study by the Center for a New American Security think tank, “Protecting Warfighters from Blast Injury,” indicates that weapons operators are prone to traumatic brain injury. TBI, the signature wound of recent wars, is associated with exposure to roadside bombs. However, CNAS concluded that service members who fire recoilless rifles, shoulder-fired rockets and similar weapons experience high levels of blast overpressure, the pressure wave that comes from explosions, which can cause brain injury even during training.
Because there are competing theories for how blast pressure causes brain injury, the Army has begun to gather data concerning blast exposure using blast gauges. The CNAS report recommended brain imaging studies and the use of advanced analytics to identify the relationships between specific blast conditions and injury.
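One simple form such analytics could take is a running log of blast-gauge readings with a flag for personnel whose exposure exceeds a limit. The per-event overpressure threshold and daily cap below are hypothetical placeholders, not Army or CNAS figures:

```python
from collections import defaultdict

# Placeholder limits for illustration only; real exposure guidance
# would come from the brain-imaging and analytics studies CNAS recommends.
PER_EVENT_PSI_LIMIT = 4.0   # peak overpressure that counts as a notable event
DAILY_EVENT_CAP = 6         # over-limit events allowed per training day

def flag_exposures(events: list[tuple[str, float]]) -> set[str]:
    """events: (soldier_id, peak_overpressure_psi) per weapon firing,
    as recorded by a worn blast gauge. Returns the IDs of soldiers
    who exceeded the daily cap of over-limit events."""
    counts: dict[str, int] = defaultdict(int)
    for soldier, psi in events:
        if psi > PER_EVENT_PSI_LIMIT:
            counts[soldier] += 1
    return {s for s, n in counts.items() if n > DAILY_EVENT_CAP}
```

The harder scientific work, relating specific blast conditions to injury, sits behind the thresholds; a tracker like this only becomes useful once those studies supply defensible limits.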
In the near term, computer models and experiments have suggested that the modest protection of existing helmets could be improved with new designs, such as adding a modular face shield that could reduce blast pressure in the brain by up to 80 percent. In the future, soldiers might offload firing of such weapons to AI-enabled, robotic teammates.
In all of these cases, sensor-driven analytics and AI make important contributions to the well-being and protection of warfighters, and not just to their lethality. This widens the opportunities for scientists and technologists who draw hard lines at their ethical boundaries. Ironically, those boundaries may begin to fade in time, as human-machine systems become truly symbiotic. Such “symbiotic” systems engage humans and AI in a continuous cycle of decisions and learning. Symbiosis may produce battlefield decisions that are better than the decisions of unaided warfighters, decisions that better protect both them and civilians, and that deter the enemy when the enemy need not be destroyed.
Until then, the computer scientists and applied mathematicians who design new AI have ample opportunity to contribute to Defense Department challenges of protection and defense.
Members of the Human Systems Division of NDIA hope that these innovators will take on that mission.
Jared Freeman, Ph.D., is chair of the Human Systems Division of the National Defense Industrial Association, and chief scientist of Aptima Inc.