ALGORITHMIC WARFARE
AI Project to Link Military, Silicon Valley
By Yasmin Tadjdeh

Nearly a year ago, Google employees made waves from Silicon Valley to Washington, D.C., by signing a letter objecting to the company’s work on the Defense Department’s Project Maven.
The effort — to develop AI systems capable of analyzing reams of full-motion video collected by drones and tipping off human analysts when people or events of interest appear — was viewed by the employees as putting Google in the business of war. Eventually, the company chose not to pursue another Project Maven contract.
But the brouhaha might have been mitigated had the Pentagon and Silicon Valley known how to communicate with each other better, said Paul Scharre, director of the Center for a New American Security’s technology and national security program.
“There’s not a lot of crosstalk and crosspollination between these communities — between policymakers and those in the AI community who are concerned,” said Scharre, who is also the author of the book Army of None: Autonomous Weapons and the Future of War.
To try to bridge the gap, CNAS is spearheading a new effort — known as the Project on Artificial Intelligence and International Stability — to create more dialogue among policymakers, the developers of AI platforms and national security experts working outside of government.
“You need perspectives from all three to really grapple with … [these issues] effectively,” he said.
The project will focus not only on military applications of AI, but also on how other countries are employing the technology, he said.
“Countries around the globe [are] making very clear their intent to harness artificial intelligence to make their countries … stronger, to increase national and economic competitiveness,” he said.
“We’ve seen well over a dozen countries now launch some form of national strategy for artificial intelligence.”
China and Russia have trumpeted their intent to make AI a cornerstone of their future scientific pursuits and to become world leaders in the technology.
One of the project’s main purposes is to better understand the risks that arise as nations begin investing in AI technology, and the security dilemmas the United States should be concerned about. It will bring together three communities — policymakers, AI researchers and security studies scholars — to better understand the problem, he said.
Scharre, who has researched artificial intelligence for years, said that in the course of his work he has seen a lack of communication among the different stakeholders.
“I’ll talk to policymakers working on AI issues. I’ll talk to AI researchers who are concerned about these issues surrounding competition. I’ll talk to people in academia or the security studies field who are interested in artificial intelligence,” he said. “But I’m not sure that these communities are talking to each other enough. … To us the big value here is bringing these communities together and convening them.”
Michael Horowitz, an adjunct senior fellow at CNAS and a political science professor at the University of Pennsylvania focusing on military innovation, said rapid advances in artificial intelligence make it an important area of study.
“If you think about AI as something akin to electricity or the combustion engine, then understanding the way that it shapes international politics … becomes really complicated and it takes a lot of work … from a lot of different perspectives,” he said. “[We want] to convene some genuine discussions even among people that might disagree about some things and use that as a basis to … generate intellectual progress.”
The heart of the effort will be four workshops bringing stakeholders together, Scharre said. The first meeting will take place this spring in Washington, D.C.; others will be held in San Francisco, Philadelphia and London.
“We’re going to try and get out of Washington to engage these different communities because that’s a convenient place for policymakers, but it’s not necessarily a hub for these other communities,” he said.
These will be closed-door events operating under the Chatham House rule, he noted. There will also be a number of smaller, public events, he added.
Horowitz said: “Open events are great for trying to get the word out and expose the broader community to some of … the issues and challenges surrounding AI. Closed sessions can have value as well because sometimes people are more willing to be open and honest in a more private discussion.”
While the agenda is not yet finalized, the meetings will focus on a variety of topics, such as AI safety, he said. There is potential for accidents to occur with the technology, he noted, including hacking, spoofing and even the risk of a human applying an algorithm outside the context for which it was designed.
“All of those generate safety and reliability challenges that … [need to be] understood analytically and managed practically,” he said.
CNAS plans to capture some of the insights gleaned from the workshops and distribute them via more “native venues” for some of the communities the think tank is targeting, Scharre said.
“If we want to talk to security studies scholars and tech folks and policymakers, they read different things and they have different kinds of conversations,” he said. “We will do some think tank reports that dig in deep into these topics, but also try to catalyze the conversation in some of these other kinds of venues [like op-eds] to engage these audiences in this broader discussion.”
CNAS has grown its team to staff the project, including a number of full-time and adjunct positions, Scharre said. The effort is being supported by a grant from the Carnegie Corp. of New York. The think tank announced the project in December, and it will run through October 2020. ND