ROBOTICS AND AUTONOMOUS SYSTEMS

DARPA Spending $2 Billion to Advance AI Technologies

9/7/2018
By Stew Magnuson
DARPA Director Steven Walker at the D60 conference on Sept. 5 (Photo: DARPA)

NATIONAL HARBOR, Md. — The Defense Advanced Research Projects Agency will spend $2 billion over the next five years on the “AI Next” program to advance the science of artificial intelligence, its director announced Sept. 7, the final day of the organization’s 60th anniversary celebration. 

DARPA has been involved in AI research since the 1960s and already runs 80 programs devoted to the field, but it wants to take the technology to the next level by adding contextual reasoning that will “create more trusting, collaborative partnerships between humans and machines,” the agency’s director, Steven Walker, said at the D60 conference in National Harbor, Maryland.

Today’s AI systems depend on large amounts of high-quality training data, do not adapt to changing conditions, offer limited performance guarantees, and are unable to provide users with explanations of their results, a statement announcing the programs said.

An AI system is “trained on large data sets. If it gets outside those data sets, it tends to fail. And it tends to fail in a bad way. … What we would like to do is get into this third wave where not as much data is needed,” Walker said. “You give the machine the ability to reason in a contextual way.”
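A minimal sketch of that failure mode is below; the data and model are entirely hypothetical and are not drawn from any DARPA program. It shows how a model fit only on a narrow training range can give reasonable answers inside that range and wildly wrong ones outside it.

```python
# Hypothetical illustration of the "second wave" limitation Walker describes:
# a model trained on a narrow data set tends to fail outside that set.
import numpy as np

rng = np.random.default_rng(0)

# Training data covers only x in [0, 1]; the true relationship is sin(2*pi*x).
x_train = rng.uniform(0.0, 1.0, 50)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.05, 50)

# Fit a high-degree polynomial -- it tracks the training range closely.
coeffs = np.polyfit(x_train, y_train, deg=9)

# Query inside the training range, then increasingly far outside it.
for x in (0.5, 1.5, 3.0):
    pred = np.polyval(coeffs, x)
    true = np.sin(2 * np.pi * x)
    print(f"x={x:<4} prediction={pred:>12.2f}  true value={true:>6.2f}")
# Inside the training range the error is small; outside it, predictions blow up.
```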

Reports that China is spending big on its own AI initiatives had little to do with the DARPA effort, Walker told reporters afterward. There have been first and second waves of AI technologies, and now it’s time to take the field to a third, he said.

“The Chinese are making a big play in this area, but this really was the next natural step for DARPA to take in our investment in AI,” Walker said.

Broad agency announcements for companies wanting to take part in the program are expected over the course of the next year, with an agency goal of issuing contract awards 90 days after accepting a proposal.

Some of the initiatives are: automating critical Defense Department business processes, such as security clearance vetting in a week or accrediting software systems in one day for operational deployment; improving the robustness and reliability of AI systems; enhancing the security and resiliency of machine learning and AI technologies; reducing power, data and performance inefficiencies; and pioneering the next generation of AI algorithms and applications, such as “explainability” and common sense reasoning.

“Explainability” is a feature that simply isn’t available in AI systems now, Walker noted. It addresses one of the problems facing the technology: trust in the systems, he said.

“Today, AI can’t tell you why it came up with the answer it came up with,” Walker said. “What we are trying to do with explainable AI is have the machine tell the human: ‘Here is the answer. Here is why I think this is the right answer,’ and explain to the human being how it got to that answer.”
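A minimal sketch of that idea follows; it is not DARPA’s system, and the task, feature names and weights are all hypothetical. The point is only that the model reports its answer together with which inputs drove that answer.

```python
# Hypothetical illustration of "explainable AI": the model returns an answer
# plus the per-feature contributions that led to it.
import numpy as np

FEATURES = ["speed", "altitude", "heading_change", "radio_silence"]
WEIGHTS = np.array([0.8, -0.3, 1.2, 0.9])   # would be learned in a real system
BIAS = -1.0

def predict_with_explanation(x: np.ndarray) -> None:
    contributions = WEIGHTS * x               # each feature's share of the score
    score = contributions.sum() + BIAS
    answer = "flag for review" if score > 0 else "no action"

    print(f"Answer: {answer}  (score={score:+.2f})")
    print("Why: the largest contributions to this answer were")
    for name, c in sorted(zip(FEATURES, contributions), key=lambda p: -abs(p[1])):
        print(f"  {name:>15}: {c:+.2f}")

predict_with_explanation(np.array([1.5, 0.2, 1.0, 1.0]))
```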

Ethics will be part of the project, Walker said. Google’s withdrawal from the Defense Department’s Project Maven, on the grounds that its employees did not want to work on military drone programs, has renewed focus on how such technology will be employed.

“As you start to think about third-wave AI and contextual reasoning by machines, there does need to be a discussion about the ethics and values,” he said. Google still works with DARPA on other projects, he noted, and many other companies remain willing to work with the agency.



Topics: Infotech, Information Technology, Robotics, Robotics and Autonomous Systems, Science and Engineering Technology, Cyber
