Pentagon grapples with growth of artificial intelligence. (Graphic by Breaking Defense, original brain graphic via Getty)

TECHNET AUGUSTA 2024 — The Army’s technology acquisition office today announced two new initiatives under its 500-day artificial intelligence implementation plan, focused on testing the cutting-edge tech for soldiers’ use and, if necessary, defending against an adversary’s use of AI.

Young Bang, principal deputy to the assistant secretary of the Army for Acquisition, Logistics & Technology (ASA(ALT)), introduced the two new initiatives here under new hashtags: #BreakAI and #CounterAI.

Bang said #BreakAI is geared toward testing algorithms through traditional government “testing and evaluation” and “verification and validation” processes, ensuring the AI is eventually fully operable and error-free when it gets to the warfighter.

“It’s really about as we move towards AGI [artificial general intelligence], how do we actually test something that we don’t know what the outcome or the behaviors are going to be?” Bang asked.

For all its potential, experts have long warned of the difficulty in divining how AI comes up with its answers to questions, likening it to a “black box.” As such, Bang said that his office needs industry’s help in implementing this initiative, particularly with developing tools to test its AI, adding that he’ll be reliant on industry for feedback.

“#BreakAI is about how are we going to make sure that our AI models work, and when we put them in the hands of soldiers, we are totally confident that they’re going to give the right answer and the right outcome,” Jennifer Swanson, deputy assistant secretary of the Army for data, engineering and software, later stated.  

Meanwhile, #CounterAI is more of a defensive initiative. It’s aimed at making “sure our platforms and our algorithms and our capabilities are secure from attack and from threat,” Swanson explained. 

“We know we’re not the only ones investing in this, there’s lots of investment [in counter attacks] happening in countries that are big adversarial threats to the United States,” she added.

The #BreakAI and #CounterAI initiatives join the #DefendAI initiative announced in June as part of the Army’s 500-day AI plan — a plan that piggybacked off its 100-day plan, which launched in March and ended in June.

Related: DefendAI: Army wants industry help with safety testing for artificial intelligence

The #DefendAI initiative is designed to create a layered AI defense framework to help mitigate the risks that come with third-party algorithms, while working to operationalize industry AI. Swanson said #DefendAI and #CounterAI are similar, but the latter focuses more on adversarial threats while the former is focused on weeding out issues that may naturally come with using an outsourced version of AI.

“I think that there will potentially be some linkage and the things that we learned from #DefendAI that we can somewhat apply to #CounterAI,” Swanson said. 

And though the Army is asking industry for help with its initiatives, that doesn’t mean it plans to share all of its findings publicly, as rival militaries are certainly chasing their own AI best practices.

“We’re not going to talk about all of that publicly, right? Because there has to be some ops [operations] that are involved in this as we start to learn and figure out what we’re going to do. There’s going to be things we’re going to share in forums like these, and there’s going to be stuff we’re not going to share in forums like this, because we’re not going to go tell our adversaries exactly how we’re going to counter what they’re doing,” Swanson said.