Defense Logistics Agency Embraces AI

By Yasmin Tadjdeh


The Defense Logistics Agency — which is tasked with managing the global supply chain for the armed services — is gung-ho about using artificial intelligence across its enterprise.

“There’s unlimited applications at DLA,” said Jesse Rowlands, a data scientist at the agency’s Analytics Center of Excellence.

The DLA enterprise is sprawling — it facilitates more than $37 billion in goods and services annually, employs 26,000 civilian and military personnel, operates in most U.S. states and 28 countries and supports more than 2,400 weapon systems, according to its website.

For many organizations, when they think about applying artificial intelligence to their mission or business plan, they focus on the big picture, Rowlands said.

They think, “‘let’s create a Siri or an Alexa or let’s create a whole procurement system,’” he said. “But really, 90 percent of the projects are going to be individual to an office.”

For example, AI could be deployed in human resources offices, or fraud, waste and abuse departments, he noted.

Much of the value of artificial intelligence “is going to be small iterations and small models,” he said. “You can deploy a model that affects 10 people but ... you’re helping that office quadruple the workload it can handle. That’s not a groundbreaking newspaper story, but you do a bunch of those and you’re having a big effect on the bottom line of the agency that you’re working at.”

For those involved in data science, which is Rowlands’ job, AI is viewed as a tool in a toolbox, he noted.

“What we are really concerned about at the data scientist level is, one, … [having a] framework where I can work on any project; and two, what’s the business problem?” he said. “That’s really going to determine what we can do.”

Rowlands described DLA as being in the “growth phase” of deploying AI. It has several grassroots projects across the agency, including in its research-and-development department and analytics center.

“We have in-house projects going, but we also have ... healthy R&D efforts going,” he said. Those include initiatives to predict demand, lead time and spare parts requirements.

DLA is currently contracting those programs to industry, Rowlands said. He did not disclose which companies are participating.

The next step is creating a comprehensive AI strategy for the agency, he said.

“How do we organize this enterprise-wide with … a structure so we’re not all doing a bunch of individualistic things?” he said.

Rowlands said there is no projected release date for the strategy yet.

So far, DLA has not worked closely with the Joint Artificial Intelligence Center — the Pentagon office stood up last year to consolidate AI efforts across the department — Rowlands said.

“We’ve had some interface with them, [but] not too much collaboration yet,” he said. That’s “understandable because they’re just kind of starting out.”

But, as the center bulks up its staff, it will be interesting to see how the JAIC’s role will evolve, he noted.

“I’m hoping they will lay out a really good governance structure for DoD agencies to follow — that will make my life a lot easier,” Rowlands said. “I’m hoping they will be a really big proponent of enabling AI [in the] agencies, because sometimes it can be hard to do it from the bottom up.”

As the Defense Department and DLA work to get their hands around artificial intelligence, Rowlands sees several issues that must be considered.

One is data. Data is the raw material for any model, he noted.

“The quality of data, the extent of your data … [will] determine the success of any project,” he said. “That’s something we’re going to be working on at DLA for a long time.”

And it’s not just the agency’s own data that DLA has to worry about. It also has to consider how it will work with information taken from external customers and suppliers.

“That can be very difficult,” he said. “That’s going to take a lot of time, a lot of effort.”

Another issue is governance.

“That is such a hard topic right now. It’s a really ‘unsexy’ topic,” Rowlands said.

Governance for traditional information technology is much different than it is for artificial intelligence, he noted. In traditional IT, if a former employee wrote code or a model and didn’t leave any documentation behind, another engineer can later reverse engineer the work, Rowlands said.

However, with an AI model, “a lot of times you have no idea how the model works, no idea of what work went into it, what data they looked at,” he said. “If you don’t have the right governance in place, you can get stuck on first base.”

There must be a governance body in place to keep track of various projects, ensuring that every effort is documented in a transparent and repeatable way, Rowlands said.

Governance is also important when it comes to ethical and legal oversight.

“Every AI model is going to be biased and they’re all going to have legal and ethical impacts no matter how simple they look,” he said. “If you don’t have the documentation, I can’t reverse engineer it. And now there’s a legal issue. I can’t tell you what the model does. I don’t know how it does it. That’s not a very good position to be in.”

DLA also faces two other challenges: acquiring the expertise and the infrastructure needed to deploy AI models properly. Those, however, are merely resourcing issues, he added.

“Those two are easily solvable for me, but the first two — the data and the governance — are a lot more difficult and a lot longer lasting and can’t just be solved by throwing money at the problem,” he said. 

