The Australian Signals Directorate cyber and foreign intelligence facility in Canberra. (Australian Defence Force)

CANBERRA — As Australia’s national security establishment seeks to implement the US-led Responsible Military Use of AI and Autonomy Agreement, which requires that a human being make the final decision to fire a weapon, a top government official on cyber policy is being up front that his government is still trying to figure out how to make it all work.

At the core of the challenge, said Peter Anstee, first assistant secretary of the Department of Home Affairs’ cyber and technology security policy division, is whether human-in-the-loop decision making can ever be fast enough to keep up with a cyber attack guided by artificial intelligence, or whether that requirement will effectively hamstring Canberra’s ability to counter and respond to such a threat.

“It’s a good question, and one we’re currently grappling with,” Anstee said, “and I don’t think there’s a straightforward answer.”

“And then there’s the question around, where do we draw these guardrails in terms of military applications or use,” Anstee said. “At the moment, the principled approach is that you always have to have a given decision maker sitting within the kill chain.”

American autonomy and artificial intelligence experts have long pointed to the unique character of cyber warfare, where response times can be measured in nanoseconds, raising the question of whether humans could respond quickly enough to a significant threat to be effective.

But the Australian official said he didn’t “think we’ve arrived at the point yet in the cyber domain where you’d have real time offense/defense activity happening in cyberspace, where it would be impossible to have a human decision maker insert themselves — short of a threat-to-life scenario, where we haven’t defined risk situations yet in a satisfactory way.”

When Breaking Defense asked if that was because Australia has not engaged in that type of warfare yet, he said it was because “those special thresholds haven’t been significantly defined.”

Anstee appeared on a panel here today at the Australia and Pacific Security conference sponsored by the German Konrad Adenauer Foundation. Breaking Defense accepted travel accommodations from the foundation to attend.

Shiri Krebs, an expert on cyber warfare and the laws of war from Australia’s Deakin University, said during the panel that the nature of cyber means that “it’s not like a yes or no, if we have human control or not; [rather] we have a scale or a spectrum of human guidance to these machines.”

Krebs, who is also a legal fellow advising Australia’s Department of Foreign Affairs and Trade, said that, in the end, “the question is, what can this human do?”

And that question extends to AI and autonomous systems such as large drone swarms.

“We have a human in the loop, but how can a human respond to targets generated through 300 drones at the very same time, right? And so I think this is all not about whether we have the human on the loop that needs to verify a target, but rather, what can that human do with his or her cognitive abilities to add effects, which I think is one of the examples we celebrate,” Krebs said.

Unlike a human, an algorithm, especially one that is AI-based, evolves “on its own path,” and “we simply cannot trace it back. So we can have an algorithm making life and death decisions, and ultimately, it’s completely unaccountable because we can’t trace the reasoning behind that,” she noted.

A secondary effect of relying on AI, Krebs said she had learned in interviews with intelligence analysts, is a decline in their skills.

“It became apparent that one of their concerns is this killing off [of] our intelligence forces, mainly because we allow AI-based technologies to do a lot of the collection and analysis of data. When we don’t allow ourselves the type of intelligence work that we used to devote to each and every target or threat, that means some of our skill and capabilities in this area are being eroded.”