ROBOTICS

JUST IN: Progress on Autonomy, AI Requires More Risk Taking

9/28/2023
By Laura Heckmann


HUNTSVILLE, Alabama — When it comes to the future of integrating autonomy onto robotic platforms, industry experts have a message for the government: reduce specificity and increase risk.

Discussing challenges facing payload integration onto autonomous robotic platforms at the National Defense Industrial Association’s FUZE/FFC/DEMIL conference Sept. 27, Adam Robertson, co-founder and chief technology officer at Fortem Technologies, Inc., said stringent requirements could be stifling innovation.

Government requirements for autonomy are “too specific instead of objective,” he said. In other words, requirements should focus on what the technology needs to accomplish rather than a specific list of capabilities, and then let industry figure out how, he said.

Robertson said he sees the need for restrictions to an extent, but requirements developers are often “missing some really, really good approaches by saying ‘This is what I want’ versus ‘what I’m trying to accomplish.’”

Industry is full of novel ideas, he said, “and I think … they’re excluding some really great ideas by being a little more restrictive.”

Leo Volfson, founder and president of Torrey Pines Logic, echoed the sentiment.

“The government comes out and says, ‘It has to work like this’ and ‘test them out on this platform.’ That's way too narrow. It really should be focused on ‘what's the problem we're trying to solve?’ And then there will be [an] out-of-box solution that would be proposed,” he said.

Nine out of ten times, proposals are not even going to be considered because there is “something wrong in the way that the program managers are looking at the problem,” Volfson added. “They are looking for very specific solutions.”

Volfson said “absolutely amazing stuff” gets proposed that never moves forward, adding that AFWERX, a technology directorate of the Air Force Research Laboratory and the innovation arm of the Air Force, has “something like 600” proposals that are “amazing” but fall by the wayside.

Risk aversion is another hindrance to innovation, Robertson said.

“There's a huge risk aversion — it's put us into a place where you're at a severe disadvantage to the world,” he said. Adversaries with less risk aversion “move really fast,” he said.

Robertson said one reason development can’t move fast is that the government contracting process “takes forever,” from publishing and planning to awarding and the protest process.

“So I wish that we did have … a little bit more risk tolerance on multiple fronts” — not just contracting, he said, but innovation and even testing. Restrictions on how and where testing can be conducted are also limiting, he said.

“There’s [just] so much risk aversion due to the regulatory environment,” he said, suggesting a pathway be opened to allow innovation: testbeds free from regulations. “Allow me to break the rules in a limited testbed.”

Risk aversion is particularly acute with machines and artificial intelligence, Robertson said. Drawing a comparison to the medical field, he said doctors inevitably make mistakes. Most do a “great job,” but “we have to accept that there are going to be some mistakes,” because we’re human, he said.

When it comes to AI, we hold it to a different standard, he said.

“Let's talk about humans on the kill chain. Sometimes there's going to be mistakes. You have to accept there's going to be errors,” he said. “When it comes to AI, our expectations of the machine's ability to not make mistakes is extremely high. We hold them to a different level. We expect them to be far better than humans.”

Robertson guaranteed that AI will make mistakes, but said there needs to be a certain level of tolerance for them, because if the machine’s mistake rate is lower than humans’, the risk is still worth taking.
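His closing point reduces to a simple comparison of error rates. As a rough illustration only, not anything described in the article, the acceptance criterion he sketches could be expressed in Python as follows (the function name and all figures are hypothetical):

```python
# Illustrative sketch only: the function and numbers below are invented
# for this example and are not from the article.

def worth_the_risk(ai_errors: int, ai_trials: int,
                   human_errors: int, human_trials: int) -> bool:
    """Accept the automated system if its observed mistake rate is
    lower than the human baseline, per Robertson's tolerance argument."""
    ai_rate = ai_errors / ai_trials
    human_rate = human_errors / human_trials
    return ai_rate < human_rate

# Hypothetical figures: 3 AI mistakes in 1,000 trials (0.3 percent)
# against a 1.2 percent human baseline.
print(worth_the_risk(ai_errors=3, ai_trials=1000,
                     human_errors=12, human_trials=1000))  # True
```

In practice such a comparison would demand far more trials and statistical confidence bounds, but the threshold logic, tolerating machine error when it undercuts the human rate, is the same.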

 

Topics: Cyber, Emerging Technologies
