INTELLIGENCE AND SURVEILLANCE

Artificial Intelligence to Sort Through ISR Data Glut

1/16/2018
By Jon Harper
The Pentagon plans to deliver AI-based algorithms for Predator and Reaper drones by the end of 2018.

Photo-Illustration: Defense Dept., Getty

Inundated with more data than humans can analyze, the U.S. military and intelligence community are banking on machine learning and advanced computing technologies to separate the wheat from the chaff.

The Defense Department operates more than 11,000 drones that collect hundreds of thousands of hours of video footage every year. The intelligence community captures more than three NFL seasons' worth of high-definition imagery data each day from a single sensor in a single combat theater, according to officials.

“When it comes to intelligence, surveillance and reconnaissance, or ISR, we have more platforms and sensors than at any time in Department of Defense history,” said Air Force Lt. Gen. John N.T. “Jack” Shanahan, director for defense intelligence (warfighter support) in the office of the undersecretary of defense for intelligence.

“It’s an avalanche of data that we are not capable of fully exploiting,” he said at a technology conference in Washington, D.C., hosted by Nvidia, a Santa Clara, California-based artificial intelligence computing company.

For example, the Pentagon has deployed a wide-area motion imagery sensor that can look at an entire city. But it takes about 20 analysts working around the clock to exploit just 6 to 12 percent of the data collected, Shanahan said. “The rest disappears, maybe goes into some forensics database somewhere like the Indiana Jones vault.”

As the ISR flow continues to increase, the challenge is becoming more and more acute. “We do not have more people to keep throwing at the problem,” he said.

Even if the intelligence community had many more analysts on its payroll, it would still be physically impossible for humans alone to handle the volume of content available, according to a report by the National Geospatial-Intelligence Agency.

The CIA is facing a similar predicament, said Matthijs Broer, chief technology officer at the spy agency’s science and technology directorate.

“The analytical capabilities … required to analyze that data for us really has been challenged significantly” in recent years, he said at a technology summit in Washington, D.C., hosted by Defense One.

Not having enough analysts to examine all of the video footage and other incoming data isn’t the only problem that officials are grappling with. Being able to exploit the intelligence information in a timely manner is also a challenge.

The Pentagon has analysts staring at full-motion video for up to 12 hours at a time as they digest what they are seeing. Sometimes they cannot brief decision-makers until many hours after a significant event has occurred, Shanahan said.

The NGA in its “GEOINT CONOPS 2022” report noted the drawbacks of having limited automation. “Relying on human processes for information generation, characterization, sourcing strategy and source orchestration dictates the speed of execution and is inherently limited to the speed individuals can perform tasks,” it said.

The CIA sees artificial intelligence, specifically machine learning and deep learning, as a potential solution to these problems. “That may actually help to flip … the paradigm,” Broer said. “With the overabundance of data coming at us we have no choice” but to pursue it, he added.

The Defense Department has established an algorithmic warfare cross-functional team to spearhead Project Maven, an effort to deploy AI to help exploit the full-motion video footage captured by drones. The project was launched in April 2017.

Six companies are already on contract and developing algorithms, Shanahan said, though he did not identify them. The first batch of algorithms was slated to be delivered by the end of December for use with unmanned aerial systems. The project is on track and additional technology “sprints” are expected in 2018, he noted.

“The first sprint is small numbers of [UAS] classes, a limited number of platforms and limited numbers of objective and threshold requirements,” Shanahan said. Those numbers will continue to increase in sprints two and three, he said.

The near-term goal is to deliver AI-based algorithms for tactical medium-altitude MQ-1 Predator and MQ-9 Reaper drones, and for wide-area motion imagery processing and exploitation systems, by the end of 2018. Once proven, the technology is expected to spread throughout the Defense Department, Shanahan said.

“We’re not talking yet about replacing analysts,” he said. “What I want are analysts to do more analytical work … [by] giving them time back to think — to think contrarian thoughts, to do red team analysis, to actually put things in context rather than staring at full-motion video screens.”

The Johns Hopkins University Applied Physics Laboratory, which receives funding from the Defense Department, is working on a number of AI and computer vision projects, including image classification, detection and segmentation.

With algorithms assisting them, intelligence analysts “should be able just to sit there and get these alerts” in real time without having to watch video feeds and wait for an event to occur, said Pedro Rodriguez, a senior research scientist at the lab.
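The alerting workflow Rodriguez describes can be sketched in miniature. The snippet below is a hypothetical illustration, not any actual Project Maven or APL code: a stream of per-frame object detections (as a model might produce) is reduced to first-sighting alerts for watch-listed object classes, so an analyst is cued rather than watching the whole feed. All names (`Detection`, `alerts_from_detections`, the 0.8 threshold) are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    frame: int         # video frame index
    label: str         # object class predicted by the algorithm
    confidence: float  # model confidence, 0.0 to 1.0

def alerts_from_detections(detections, watchlist, threshold=0.8):
    """Emit an alert the first time a watch-listed object class is
    detected above the confidence threshold."""
    seen = set()
    alerts = []
    for d in sorted(detections, key=lambda d: d.frame):
        if (d.label in watchlist
                and d.confidence >= threshold
                and d.label not in seen):
            seen.add(d.label)
            alerts.append((d.frame, d.label))
    return alerts
```

A real system would of course run a detection model over each frame; the point here is only the shift from continuous viewing to event-driven cueing.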

By 2022, the National Geospatial-Intelligence Agency hopes to automate as much of the ISR enterprise as possible, leaving only the highest cognitive tasks to humans.

Automated processes provide an increasing ability to integrate disparate sources of intelligence as the volume and variety of sources grow, the agency said in its report. “Machine intelligence can sift through the vast number of sources to sense shifts in activity patterns and automatically adjust dozens of collections to rapidly characterize the data, inform models and present the human analyst with new information and knowledge of potential future outcomes,” the report said.

Will Rorrer, a program manager at Harris Corp. who leads deep learning technology efforts at the company, expects trends to move in the direction envisioned by the NGA as algorithms are more widely employed.

“We’re going to switch from drowning in data to drowning in the detections and … observations” made by machines, he said. “How we make sense of that and move towards that automated activity-based intelligence is going to be the next step. And I think that’s where deep learning is going to move in this space after we start mass producing these algorithms.”

However, several challenges lie ahead, Shanahan and others involved in these efforts noted. One is the critical dependency on labeled training data to build the algorithms.

“For computer vision … we have to go out and label hundreds of thousands of images” so that systems can recognize what they’re seeing, Shanahan said. “We’re building a DoD data labeling enterprise. [We’re] a little bit slow on the pickup but we’re getting there.”

Government, academia and industry are looking for better ways to enable machines to understand new data and take advantage of the vast amounts of unlabeled data that are out there.

“There are some really interesting research-and-development efforts underway right now to suggest that maybe … you won’t need 150,000 labeled images” to have an effective algorithm, Shanahan said.

The JHU Applied Physics Lab is working on applying transfer learning techniques to image classification, detection and segmentation. Through the use of convolutional neural networks, machine learning from a previous data set geared toward one problem can be applied to a different data set geared toward another problem, Rodriguez explained.
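The transfer learning idea Rodriguez describes can be shown in a stripped-down sketch. The example below is hypothetical and framework-free: a fixed feature function stands in for a frozen convolutional backbone trained on a previous data set, and only a small linear head is trained on the new problem's data. Every name and number here is invented for illustration.

```python
import math

def pretrained_features(x):
    # Stand-in for a frozen backbone: maps raw input to a fixed
    # feature vector. In real transfer learning this would be a CNN
    # trained on a large source data set, with its weights frozen.
    return [x[0] + x[1], x[0] - x[1], x[0] * x[1]]

def train_head(data, labels, lr=0.5, epochs=200):
    # Train only a small logistic "head" on top of the frozen
    # features -- the backbone itself is never updated.
    feats = [pretrained_features(x) for x in data]
    w = [0.0] * len(feats[0])
    b = 0.0
    for _ in range(epochs):
        for f, y in zip(feats, labels):
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log-loss
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    f = pretrained_features(x)
    z = sum(wi * fi for wi, fi in zip(w, f)) + b
    return 1 if z > 0 else 0
```

The design choice mirrors the technique: because the backbone's representations were learned on a different problem, only a small amount of labeled data from the new problem is needed to fit the head.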

Rorrer sees advantages in using transfer learning and unsupervised machine learning to improve algorithms.

The intelligence community and industry need to explore “what other corralled information can I bring to the table and tie that into the neural net, increasing its accuracy,” he said.

They should find ways to “pre-train” the algorithms to process new information and employ other techniques that enable machines to learn with little or no human supervision, he added.

Computing capability is another critical dependency, Shanahan noted.

Algorithmic warfare is not going to be effective without cloud computing, but much of today’s cloud computing technologies are not optimized for artificial intelligence and machine learning, he said.

Advanced computing power is also required. Project Maven has made a significant investment in graphics processing unit, or GPU, computing. However, there may be better options available in the future, he said.

“The real experts in this business say quantum computing may be that game-changing technology,” Shanahan said. “The power … that quantum computer is going to bring — it’s going to change our business.”

Left: Aerial view of an Afghan village  •  Right: An intelligence analyst reviews imagery. (Defense Dept.)
For now, the Defense Department is focused on using algorithms at processing and exploitation workstations. But a goal is to have them embedded at the tactical edge, inside platforms and sensors, Shanahan said.

Broer said AI and advanced sensors hold promise but also peril for the CIA. This is especially true when it comes to facial recognition and other high-tech ways of rapidly locating and identifying individuals.

Studies conducted by universities have demonstrated the power of this technology, he said.

“All I have to do … [is take] a few measurements on your digital dust and with a modest amount of computational horsepower I completely nail who you are,” Broer said, referring to the potential use of AI-assisted sensors for surveillance and monitoring.

That has major offensive and defensive implications for the CIA, he noted.

“For us the ability to move people around anywhere in the world without getting caught and without being recognized is huge. So … defensively it’s a nightmare for us,” he said. “Offensively [it’s] a great opportunity” because adversaries’ spies are similarly vulnerable, he added.

From an acquisition perspective, defense and intelligence officials are looking to leverage the R&D work of the commercial sector to help with their AI needs.

“We don’t have the internal capabilities or the resources to do that on the scale that the private sector does,” Broer said. “We interact a lot with leading actors in that area and modify [their technologies] slightly, because sometimes you get maybe 90 percent of the solution from a commercial platform and then you have to add a little tweak to that to make it work in the clandestine” world, he added.

Similarly, the Defense Department has niche requirements that might not be fully met by commercially available technology. That’s why the Pentagon needs to help guide private sector investment in AI with initiatives like the Defense Innovation Unit-Experimental, also known as DIUx, Shanahan said.

The unit, headquartered in Silicon Valley, was established in 2015 to help bridge the cultural divide between the Pentagon’s acquisition communities and the commercial tech sector.

“The DIUx piece is incredibly important from now on,” Shanahan said. “There’s a really interesting dialogue that is happening with everything from the small companies to the largest companies that are out there.”

The intelligence community has its own version of DIUx known as In-Q-Tel.

The government-sponsored nonprofit operates like a venture capitalist and invests in startups that are developing technologies of interest to intelligence agencies, Broer noted. The organization is well funded, he said.

Shanahan said the Pentagon is open to all comers as it looks to buy AI technologies. “My expectation is everybody from a one-person company to the biggest data internet company in the world is a contender for what we’re trying to find solutions to.”

Harris sees major business opportunities flowing from the Defense Department’s pursuit of artificial intelligence.

“There’s a big urgency to bring this into the DoD,” Rorrer said. “As a large defense contractor … we are taking all the actions we can to prepare for this flood of AI adoption.”

The company has been investing in deep learning technologies for over five years, he noted. “Defense contractors … have toolsets out there. We need to prepare for this wave of deep learning algorithms and we’re pushing that on all fronts,” he added.

Shanahan aims to eventually establish programs of record for algorithms.

“We’ll have selection criteria,” he said. “What I really want to get to is … indefinite quantity/indefinite delivery where we can just slide people in and out of contracts that already exist.”

