INFOTECH
ALGORITHMIC WARFARE: DARPA Hosts Workshops To Develop 'Trustworthy AI'
5/25/2023
By Josh Luckenbaugh

The Defense Advanced Research Projects Agency is seeking help understanding the best ways to use artificial intelligence for national security.

In June, DARPA will hold the first of two workshops with industry, academia and other government entities as part of its AI Forward initiative, during which the agency hopes to bridge the “fundamental gap” between the AI innovation going on in commercial industry and the Defense Department, said Dr. Matt Turek, deputy director of DARPA’s Information Innovation Office.
“Commercial industry has … been making significant investments and developing highly capable — or seemingly capable — systems,” Turek said in an interview. “But those systems might not be well aligned for DoD use cases.”
Current commercial AI systems can likely handle “low-risk decision-making,” Turek said, “but if you think about mission-critical decision-making to the DoD, in those cases we can’t suffer failures, and we need to be able to predict and understand perhaps in detail how systems might respond.”
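To make that distinction concrete, here is a minimal sketch of what such a risk gate might look like in code. It is purely illustrative: the Prediction type, the decide function and the confidence threshold are invented for this example and do not come from any DARPA or Defense Department system.

```python
# Illustrative sketch only: a toy "risk gate" in the spirit of Turek's point.
# Low-risk decisions accept the model's output as-is; mission-critical ones
# require very high confidence or are deferred to a human reviewer.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def decide(pred: Prediction, mission_critical: bool,
           threshold: float = 0.99) -> str:
    """Return an action: act on the prediction or defer to a human."""
    if not mission_critical:
        return f"act: {pred.label}"          # low-risk: accept the output
    if pred.confidence >= threshold:
        return f"act: {pred.label}"          # high confidence on a critical task
    return "defer: route to human analyst"   # cannot suffer a failure here

print(decide(Prediction("benign", 0.90), mission_critical=False))  # act
print(decide(Prediction("threat", 0.90), mission_critical=True))   # defer
```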
For example, large language models such as ChatGPT are “very compelling” for generating text or creating documents — tasks that are “relatively low risk,” he said. However, “if you think about applying it to critical domains” such as “looking at a large body of intel reporting and summarizing it” for the Defense Department or an intelligence agency, “even there these models start to break down.
“There’s evidence of them hallucinating information that wasn’t necessarily there, or making up citations to scientific publications that were never written,” he continued. “Those sort of things would be fraud in the context of an intelligence analysis process,” highlighting “the dichotomy between what might be appropriate for commercial use cases, and then how that might not currently meet DoD needs.”
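One way to picture the failure mode Turek describes is an automated audit that flags citations in a generated summary that do not appear in a trusted index. The sketch below is hypothetical: the index contents and the audit_citations helper are stand-ins invented for illustration, not a real intelligence-community tool.

```python
# Illustrative sketch only: flag citations a language model may have
# fabricated by checking them against a trusted index of known publications.
KNOWN_PUBLICATIONS = {  # stand-in for an authoritative citation database
    "smith2021sensors",
    "lee2020fusion",
}

def audit_citations(cited_keys: list[str]) -> list[str]:
    """Return any cited keys that are absent from the trusted index."""
    return [key for key in cited_keys if key not in KNOWN_PUBLICATIONS]

# A generated summary cites two real papers and one the model invented.
generated = ["smith2021sensors", "lee2020fusion", "doe2022quantum"]
fabricated = audit_citations(generated)
if fabricated:
    print("flag for human review, unverifiable citations:", fabricated)
```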
AI Forward will serve as DARPA’s “engagement mechanism” with the community, Turek said. The program will kick off with a virtual workshop from June 13-16, followed by an in-person event in Boston July 31-Aug. 2, during which participants will have an opportunity to brainstorm new directions toward trustworthy AI with applications for national security, a DARPA release said.
Turek declined to say how many applications DARPA received for AI Forward, but he anticipates an acceptance rate in the 25 to 30 percent range, with participants coming from academia, industry and government and representing a variety of AI-related disciplines, such as theory, human-centered AI, philosophy and ethics, computer vision and natural language processing. The goal is to bring together a diversity of ideas and backgrounds “to take a holistic look at AI,” he said.
While DARPA doesn’t have specific use cases it hopes the AI Forward events will solve, there are “three core areas that need to be advanced in order to get us to trustworthy AI and the sort of AI that we will ultimately want for national security purposes,” Turek said: foundational AI science, AI engineering and human-machine teaming.
For AI science, the community must establish “an understanding of scientific principles that will allow us to design an AI system, decompose it into pieces, be able to make measurements [and] understand when you recompose that system how it’s going to behave,” which will then inform that second pillar of AI engineering, he said.
Turek used the analogy of building a bridge: a bridge isn’t built by trial and error, whereas current machine learning models are built “a lot by trial and error,” he said.
The way civil engineers are “able to break that very large problem down into many smaller problems, solve those and then put it all back together and know that the entire bridge is going to work” is how AI engineering should be done as well, he said. “We need to be able to do that decomposition, be able to make those measurements on pieces of an AI system, recompose it together and understand how it’s going to perform when fully assembled.”
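A minimal sketch of that decompose, measure and recompose idea, assuming a hypothetical two-stage pipeline in which a detector feeds a classifier. The functions, labels and checks are invented for illustration; the point is that each piece is measured on its own before the assembled system is measured end to end.

```python
# Illustrative sketch only: measure each component of a toy two-stage
# pipeline in isolation, then measure the recomposed system end to end.
def detector(scene: str) -> bool:
    """Stage 1: is there an object of interest in the scene?"""
    return "object" in scene

def classifier(scene: str) -> str:
    """Stage 2: label the detected object."""
    return "vehicle" if "vehicle" in scene else "unknown"

def pipeline(scene: str) -> str:
    """Recomposed system: detect, then classify."""
    return classifier(scene) if detector(scene) else "no detection"

# Measure each piece against its own checks...
assert detector("object: vehicle") is True
assert detector("empty scene") is False
assert classifier("object: vehicle") == "vehicle"

# ...then measure the fully assembled system.
assert pipeline("object: vehicle") == "vehicle"
assert pipeline("empty scene") == "no detection"
print("component and end-to-end checks passed")
```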
The third pillar, human-machine teaming, is something DARPA has been discussing since the 1960s, he said. “How do AI systems build an understanding of humans to have that interaction? How do they model human values and reflect those appropriately?”
Concerns on this topic include not only the teaming of AI and humans, but also the amount of computing and energy resources it would take to build an effective, large AI model, he said. “The compute resources are significant, but that means that the energy utilization is significant as well,” he said. Figuring out the “appropriate use of resources” for future AI systems will be a challenge, he said.
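As a rough illustration of why those resource questions loom so large, the back-of-envelope sketch below uses the commonly cited approximation that training a transformer takes about 6 × parameters × tokens floating-point operations. Every figure in it is an assumed round number, not a measurement of any real model or cluster.

```python
# Illustrative sketch only: back-of-envelope compute and energy for a large
# training run, using the common ~6 * params * tokens FLOP approximation.
params = 70e9          # model parameters (assumed)
tokens = 1.4e12        # training tokens (assumed)
flops = 6 * params * tokens

n_gpus = 1024          # accelerators in the cluster (assumed)
peak_flops = 3e14      # peak FLOP/s per accelerator (assumed)
utilization = 0.4      # fraction of peak actually sustained (assumed)
power_watts = 700      # power draw per accelerator (assumed)

seconds = flops / (n_gpus * peak_flops * utilization)
energy_mwh = seconds * n_gpus * power_watts / 3.6e9  # joules to MWh

print(f"total training compute: {flops:.2e} FLOPs")
print(f"wall-clock time: {seconds / 86400:.1f} days on {n_gpus} accelerators")
print(f"energy: {energy_mwh:.0f} MWh")
```

Under those assumptions the run works out to roughly 55 days and on the order of 950 megawatt-hours, which is why Turek flags energy utilization alongside compute.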
Following the workshops, DARPA plans to fund some of the efforts that come out of AI Forward, Turek said. “The ultimate output of the workshops will be the identification of approximately 40 promising areas for future research,” according to the AI Forward web page.
“We’re looking for the best and most compelling ideas, and depending on what we see, there may be ability to scale or adapt the funding,” Turek said. “What are those compelling ideas that we can start funding that might take AI in a new direction?”
Topics: Defense Department, Artificial Intelligence, Infotech