Partnering to Balance AI Innovation, Responsible Use

By Michael Seeds


The year 2023 was notable for artificial intelligence. Scientists and engineers have been making progress on AI since the 1950s, but as ChatGPT celebrated its first birthday and “generative AI” became a mainstream term, 2023 showed the world how far the technology has come and how rapidly it is reshaping the global landscape.

Over the past year, the AI whirlwind also highlighted the need for government and industry to work together domestically and internationally as the technology advances.

AI is a powerful tool that offers tremendous benefits to society and national security, and it is no longer just a topic of speculative discussion around the office water cooler. C-suite executives across all major industries are utilizing AI to improve their product and service offerings to consumers.

Within the defense industrial base, innovators are partnering with the government to equip warfighters with AI-enabled systems to improve the speed, quality and accuracy of decisions in the field, which can provide the decisive advantage needed to deter or win a fight.

However, there is also a growing awareness of AI’s potential risks and the negative impacts of its misuse. Together with the technology’s rapid advancement, these concerns are increasing the demand for policymakers to turn their attention to AI. Policymakers responded throughout the past year with several initiatives focused on AI development, deployment and policies across the federal government.

On Oct. 30, the Biden administration released Executive Order 14110 on “Safe, Secure and Trustworthy Development and Use of Artificial Intelligence,” which includes more than 100 specific actions across more than 50 government entities focused on the development and deployment of AI through federal agency actions, regulation of industry and engagement with international partners. Many of these regulatory actions will occur over the next two years, including at least 11 directly involving the Defense Department.

Quickly following the release of EO 14110, the Office of Management and Budget announced a new draft policy to govern the use of AI by federal agencies. This guidance builds on previous work by the National Institute of Standards and Technology, which released a voluntary “AI Risk Management Framework” in January 2023 as a guide for industry.

That guidance also builds on work by the White House Office of Science and Technology Policy, which released the “Blueprint for an AI Bill of Rights” in late 2022, focusing on protecting individuals and communities from the potential risks of AI.

The Pentagon underscored AI’s critical role in national security with its Nov. 2 release of the “2023 Data, Analytics and Artificial Intelligence Adoption Strategy,” which builds upon and supersedes the 2018 AI Strategy and the 2020 Data Strategy. This 2023 strategy prioritizes speed of delivery, continuous learning and responsible development to leverage AI’s potential for safeguarding our national security interests globally.

Congress is also investing time and energy in exploring AI issues. In June, Senate Majority Leader Chuck Schumer, D-N.Y., announced the “SAFE Innovation Framework,” which aims to ensure AI is developed and deployed in a responsible and transparent manner while promoting innovation and supporting the potential societal benefits of the technology. In addition to several congressional hearings and numerous legislative proposals introduced in both chambers, Schumer has hosted a series of large public forums with experts to discuss various AI topics, including intellectual property, national security, workforce and privacy.

Undoubtedly, the United States is home to some of the best and brightest innovators in the world, and private industry is committed to managing the potential risks of AI as it further advances the technology. As the federal government continues to pursue proposals to both promote and regulate AI, it is essential that the administration and Congress partner with industry to ensure the proper balance of innovation and responsible use. This is especially true in the face of competitive adversaries like China, whose AI advancements underscore the imperative for the nation to sustain its technological edge.

Just as international cooperation is essential to addressing many of today’s pressing national security imperatives, it is encouraging to see the United States collaborating with allies and partners to advance a globally coordinated approach to AI.

In May 2023, AI governance and interoperability were a central focus of the annual G7 Summit discussions, which led to the creation of the “Hiroshima AI Process Comprehensive Policy Framework.” Following the U.K. AI Safety Summit in early November, the United Kingdom and United States also unveiled the jointly developed “Guidelines for Secure AI System Development,” which 23 domestic and international cybersecurity organizations co-sealed.

It may be too soon to label 2023 as the “Year of AI,” as we will continue to see progress in 2024 and beyond.

However, in addition to the development of domestic and international regulatory frameworks, investments in AI research and development, talent cultivation and infrastructure will provide further opportunities for government and industry to partner together to advance AI technologies and help ensure the United States maintains its technological leadership to benefit our society and safeguard national security. ND

Michael Seeds is the National Defense Industrial Association’s senior director for strategy and policy.
