RAF air controllers aboard an RAF Boeing E-3D Sentry, aka AWACS, conducting a mission in support of NATO. (British Ministry of Defence)

WASHINGTON — As part of the Biden administration’s global push for “Responsible Artificial Intelligence,” the Defense Department is building a growing library of interactive online guides to help program managers and other officials develop safe and ethical AI. (AI-aided counter-drone defense? Sure. Algorithms to help draft contracting language? Maybe, with guardrails. Self-aware nuclear launch systems that don’t wait for human input? No.)

Versions have already been published or are in the works for defense officials, the intelligence community, civilian agencies, and even foreign allies, Pentagon RAI director Matthew K. Johnson said this week, with updates addressing generative artificial intelligence and President Joe Biden’s recently signed National Security Memorandum on AI.

The original version of the Responsible AI Toolkit, published last November, is essentially an interactive digital checklist. It’s meant to walk Pentagon program managers and other defense officials through the laws, regulations, ethical principles, and best practices for developing AI.

But it was also made available to the public online from the very beginning. And even before that official rollout, RAI director Johnson made clear he saw the toolkit not just as a handy guide for bureaucrats, but as part of a much broader soft-power push to promote American values on AI — and to contrast them with the less scrupulous approaches of adversaries like China.

RELATED: Clear guardrails mean faster progress on AI: Biden signs sweeping guidance for DoD & IC

Since then, Johnson and his RAI team have been busy, he told a Responsible AI in Defense conference convened this week by the Pentagon’s Chief Digital & AI Office (CDAO).

“We co-developed a version of the toolkit with NATO,” he said. “That’s based on our version, [but] we mapped it to NATO’s principles of responsible use, NATO’s data life cycle, and some other things.” (The NATO version isn’t publicly available, he told Breaking Defense after his remarks). The RAI team is also working with “other allies and partners” and the US intelligence community, though he didn’t give any details.

The long-term ambition is that allied nations will gain confidence in each other’s systems and feel safe connecting them for combined operations, Johnson said.

“If you think about things like CJADC2, Combined Joint All-Domain Command & Control, our ability to share information and allow us to interconnect systems at speed is going to require us [to have] streamlined assurance processes,” he said. “It can also facilitate things like reciprocity. … If you certify a model through your country’s process and our processes are aligned, we may not have to run it through our process,” saving considerable time.

The Pentagon RAI team is also developing versions of the toolkit for other US agencies.

“We were appointed by the Executive Office of the President to create the recommended process for government agencies to meet the AI risk requirements under the [2023] Executive Order,” Johnson said. “We also have integrated into it the ability to show how you meet the requirements under the National Security Memorandum issued last week.”

Johnson said his team has seen “a lot of interest” from across the interagency.

“We’ve gotten a lot of feedback from OSTP [the White House Office of Science & Technology Policy] and NIST [the National Institute of Standards & Technology], and we’re working on getting that into a version two that’ll be released to the interagency in the next couple of weeks.”

Finally, Johnson and his team aim to expand the toolkits to include evolving guidance on one of the most rapidly advancing and controversial forms of AI: generative artificial intelligence, especially the powerful but hallucination-prone Large Language Models.

“We’re currently… developing versions of the toolkit to support approvals and reviews for various use cases,” Johnson said. In particular, he went on, “we’re really focused on developing [a] version of the toolkit right now for generative AI and Large Language Models. Our division wrote the Department’s policy on generative AI.”