White House National Security Advisor Jake Sullivan speaks during the annual meeting of the Arms Control Association at the National Press Club on June 2, 2023 in Washington, DC. (Photo by Drew Angerer/Getty Images)

WASHINGTON — President Joe Biden has just signed a sweeping National Security Memorandum (NSM) to guide military and intelligence use of artificial intelligence, National Security Advisor Jake Sullivan said this morning. The new plan’s guiding principle is a big bet that adhering to American values — like accountability, legality and individual rights — won’t slow the US down in the AI arms race with China, but will instead help it move faster.

The memorandum will also boost cybersecurity defenses against Chinese theft of American AI secrets and strengthen government collaboration with the private sector and foreign allies, Sullivan and other officials said. And it will be accompanied by an extensive “Risk Management Framework” to help officials distinguish proper from improper use of AI.

The essence of the administration’s argument is that government bureaucrats will move slowly and cautiously to adopt AI as long as they’re uncertain about what’s permissible and what’s prohibited, what’s safe to try and what’s too dangerous. Only by establishing comprehensive guidelines, rigorous testing and strict prohibitions on unacceptable uses of AI — notably, that humans, not algorithms, must make the decision whether or not to launch nuclear weapons — can the US government empower its agencies to experiment boldly and advance rapidly within those clearly defined guardrails.

RELATED: ‘AI gold mine’: NGA aims to exploit archive of satellite images, expert analysis

“It’s a little bit counterintuitive, [but] ensuring security and trustworthiness will actually enable us to move faster, not slow us down,” Sullivan told an audience of military officers and civil servants at National Defense University. “Uncertainty breeds caution. When we lack confidence about safety and reliability, we’re slower to experiment, to adopt, to use new capabilities — and we just can’t afford to do that in today’s strategic landscape.”

“Preventing misuse and ensuring high standards of accountability will not slow us down,” Sullivan emphasized, comparing it to how trains were able to travel faster once government regulation ensured the rails would be safe. “It will actually do the opposite.”

Slow Down To Speed Up

Senior administration officials stressed this apparent paradox in a briefing for reporters Wednesday evening.

“We are directing that the agencies gain access to the most powerful AI systems and put them to use, which often involves substantial efforts on procurement,” one of the briefers said. But in that context, they continued, “one of the paradoxical outcomes we’ve seen is, with a lack of policy clarity and a lack of legal clarity about what can and cannot be done, we are likely to see less experimentation and less adoption than with a clear path for use — which is what the NSM and the framework [try] to provide.”

The NSM includes “very specific requirements” for agencies to conduct “classified testing” of AI systems, while the accompanying Risk Management Framework lists specific applications of AI that either warrant special caution (“high impact” cases) or are outright prohibited.

RELATED: Empowered edge versus the centralization trap: Who will wield AI better, the US or China?

The prohibitions are meant to protect against a wide array of abuses, from a WarGames- or Skynet-style nightmare scenario in which a computer orders the launch of nuclear weapons, to Chinese-style use of AI to repress dissidents and minorities.

“There are clear prohibitions on use of AI with intent or purpose, for instance, to unlawfully suppress or burden the right to free speech or the right to legal counsel,” one official said. “There’s also prohibited use cases around, for instance, removing a human in the loop for actions critical to informing and executing decisions by the President to initiate or terminate nuclear weapons employment.”

“We actually view these restrictions … as being important in clarifying what the agencies can and cannot do,” one briefer explained. “That will actually accelerate experimentation and adoption.”

All this guidance builds on prior policy, officials emphasized, citing the Pentagon’s extensive revisions of DoD Directive 3000.09 on autonomous weapons and the State Department-led Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, now signed by over 50 nations.

“We have built, over the course of the last couple of years, I think, a pretty good set of markers, right and left, rules of the road, that try to get at these questions of fundamental values,” Sullivan said at NDU. “Then what the National Security Memorandum tries to do is say, ‘here’s our guidelines now, here’s how they have to be implemented with special care in the national security space.’”

Get Real, Real Fast

Biden took a personal interest in ensuring the memorandum would have real impact, not just lofty aspirations, Sullivan emphasized.

“We had multiple meetings with him about this, because he didn’t just take it and say, ‘Okay, fine,’” Sullivan said. “We went back and forth on this over the course of two or three months, [and] this was one of his questions: ‘How do we really make sure on a rigorous basis, not just that we’ve set forth standards that make sense to us, but how can we have confidence through thick and through thin, through crisis and contingency, they’re actually going to be enforced?’

“That is a big part of what this NSM is trying to achieve,” Sullivan said.

One key point the officials made was that the guidelines need to be not only clear but also flexible, able to adapt rapidly to new problems discovered in testing, new technologies from the private sector, or new challenges from China.

RELATED: New Pentagon AI & data chief plans big initiatives for fall, from back office to battlefield (EXCLUSIVE)

The detailed Risk Management Framework in particular will “have to be continuously updated,” one official emphasized, so it’s been explicitly designed to allow rapid changes as experiments and pilot projects discover new technical complications or ethical dilemmas.

Adaptability is even more important with artificial intelligence than it was in past technological revolutions like nuclear weapons or the Internet, Sullivan said, because AI is moving much faster and its future is more uncertain.

“A specific AI application that we’re trying to solve for today … could look fundamentally different six weeks from now, let alone six months,” he argued. “The speed of change in this area is breathtaking.”

“The good news is that … America is continuing to build a meaningful AI advantage,” Sullivan said. “But here’s the bad news: Our lead is not guaranteed, it is not preordained, and it is not enough to just guard the progress we’ve made, as historic as it’s been. We have to be faster in deploying AI in our national security enterprise than America’s rivals are in theirs.”