FARNBOROUGH 2024 — Air Force Secretary Frank Kendall is a strong proponent of the Air Force’s rapidly accelerating drone wingman effort, but the service’s top civilian also believes that more work is needed to establish a clear standard for liability if unmanned systems violate the laws of armed conflict.
“It’s obviously something we’re very concerned about,” Kendall said of fusing autonomy with lethal weapon systems in a wide-ranging interview with Breaking Defense over the weekend, an exchange frequently interrupted by the roar of jet engines at the Royal International Air Tattoo (RIAT).
Kendall, who previously worked as a human rights lawyer, is acutely aware of the ethical conundrums wrapped up in the Air Force’s Collaborative Combat Aircraft (CCA) program. And as the head of the service, Kendall is arguably at the forefront of the Pentagon’s efforts to confront, and ultimately incorporate, more autonomy and artificial intelligence into weapons as the CCA program rapidly progresses.
“Whatever weapon systems we employ have to be consistent with the laws of armed conflict. The problem isn’t that. We know what those rules are and I think we know how to impose them on our systems,” he said.
The more vexing issue, Kendall said, is how to seek accountability when things go awry.
“It’s who do you hold accountable,” he continued. “And I think we’ve got to think through: Is it the person who used the weapon? Is it the designer? Is it the tester? Is it somebody in the chain of command? I think there needs to be a discussion about the mechanism by which people are held responsible for whatever weapons do when they do something that’s not allowed.”
The Air Force currently has two vendors under contract — General Atomics and Anduril — to provide the physical platform for the CCA program’s first round of drone production, or “increment.” The service is also working with several autonomy vendors that will plug into the drones, though Air Force spokesperson Ann Stefanek told Breaking Defense that the autonomy vendor pool is classified. CCA are expected to be operational before the end of the decade.
“Our policy is to have meaningful human control of the application of force, and we’re gonna keep that. But that leaves a lot of gray space in terms of how certain are you, what’s the degree of certainty you have that that’s a threat before you commit a weapon, and what degree of competency you want to have that you’re not going to impose collateral damage and kill civilians unnecessarily,” he observed.
That “gray space” is wider than commentators often assume. The Pentagon’s official policy on autonomous weapons, DoD Directive 3000.09 [PDF], was revised in 2023 to streamline the approval process for fielding more highly automated weapons. Even before that change, anti-aircraft and missile defense systems like Patriot and Aegis had long offered fully automated modes for threats too fast or too numerous for humans to counter in time. In fact, DoD policy has never required a “human in the loop” for the decision to use lethal force.
Kendall himself has worried aloud for years that a human operator might make decisions too slowly to survive against a fully computer-controlled threat, and has personally witnessed the capabilities of an AI-controlled jet. At the Reagan Forum last December, he declared: “If the human being is in the loop, you will lose. You can have human supervision, you can watch over what the AI is doing. If you try to intervene, you’re going to lose.”
In this latest interview with Breaking Defense, Kendall noted, “I think there are a lot of details to be worked out, but I think the principles are there, and I think we’re going to be compliant.”
Beyond the accountability problem, Kendall also raised a concern he has voiced repeatedly: that America’s adversaries, namely China, won’t abide by the same ethical constraints.
The Biden Administration has recently gotten Beijing to agree to broad, non-binding discussions of “AI risk.” But the administration’s focus has been on building international norms for “responsible” military AI with US allies, while putting little faith or effort into binding AI arms control with adversaries.
“The risk we’re running is that our adversaries won’t be bothered by this at all,” Kendall said. “They will field systems which are clearly about their operational effectiveness, without regard to collateral damage or inappropriate engagements. And the more stressing the operational situation is, the more inclined they’ll be to relax their constraints.”
Sydney J. Freedberg in Washington and Valerie Insinna in London contributed to this report.