SOFIC NEWS: Microsoft, SOCOM Highlight Need for Ethical Use of Artificial Intelligence

By Meredith Roaten


Tech giant Microsoft is doing a “significant amount” of research and development work to ensure that its artificial intelligence products are in line with its ethical principles, according to the company's CEO.

Incorporating ethical principles into the corporation’s engineering practices is a priority, said Satya Nadella during prerecorded remarks that aired May 20 at the virtual Special Operations Forces Industry Conference, which is managed by the National Defense Industrial Association.

Nadella said the research into machine learning operations will help the corporation find ways to ensure ethical behavior in every aspect of artificial intelligence, from data provenance to creating unbiased models.

“There is a significant amount of R&D we are doing ... so that people who create AI have the best process practice tools in order to create AI that really conforms to the ethics that we have all defined for ourselves in our institution,” he said.

The company has published principles that include fairness, transparency, accountability, privacy and security, he noted.

Last year, the Defense Department rolled out its own list of five AI ethics principles that will underpin the military's development and employment of the technology. The list was based on recommendations from the Defense Innovation Board and other experts.

Military personnel must be responsible and exercise appropriate levels of judgment and care in the development, deployment and use of AI capabilities. The technology should be "equitable," with steps taken to minimize unintended bias. It must be traceable, with transparent and auditable methodologies, data sources, and design procedures and documentation. And systems must be reliable and governable, according to the list.

Nadella said that while Microsoft and the Defense Department both have published ethics principles, the next step is ensuring they are followed.

“I feel like we're making real progress,” he said.

While Microsoft supplies technology to the U.S. military, some in the commercial tech world are leery of working with the Pentagon. Backlash from Google employees in 2018 was strong enough to prompt the company to pull out of the Project Maven initiative, which helped the Defense Department analyze drone footage from places like Afghanistan using AI and machine learning. Employees objected to the technology’s support of military operations and gathered thousands of signatures for a petition asking the corporation to back out of the collaboration.

U.S. Special Operations Command Commander Gen. Richard Clarke, who participated in the discussion with Nadella at the conference, echoed the importance of ethics in employing artificial intelligence.

“We put emphasis on this from the beginning because we have to lead with our values up front,” he said.

Clarke said SOCOM needs AI to help catalog and analyze the vast trove of video, documents and other data that it has collected and will continue to collect. While humans can perform these tasks, artificial intelligence would reduce the burden on human operators.

“There is a ton of information that could help us against the future enemies of our country,” he said. “Because it's not just what we've collected, but it's about what it could tell us in the future.”

Topics: Ethics, Intelligence and Surveillance, Artificial Intelligence
