SYDNEY — Australia and the United States, two members of the Five Eyes intelligence-sharing partnership, must quickly grapple with the double-edged benefits of artificial intelligence, which, with human help, promises to “revolutionize” the intelligence gathering and analysis crucial to maintaining peace and waging war.

So-called human-machine teaming (HMT), “could revolutionize the efficiency, scale, depth, and speed at which analytic insights are generated,” finds a new report by the US-based Special Competitive Studies Project (SCSP) and the Australian Strategic Policy Institute (ASPI).

“Time is of the essence. If the U.S. Intelligence Community and its partners do not begin integrating generative AI tools into their workflow, we will always be vulnerable to our adversaries,” SCSP President Ylli Bajraktari said in a statement.

The SCSP was founded by Eric Schmidt, former CEO of Google, and counts some of the most innovative thinkers in American defense among its advisors, including former House Armed Services Committee chair Mac Thornberry and former Deputy Defense Secretary Robert Work. ASPI is a national security think tank in Canberra largely funded by the Australian, American and other partner governments.

The new report’s authors say that current large language model (LLM) AIs, such as ChatGPT, are receiving enormous amounts of investment, “with some experts predicting we will see the advent of artificial general intelligence (AGI) — a type of AI that achieves, or surpasses, human-level capacity for learning, perception, and cognitive flexibility — by the end of this decade.” That, of course, is what the general public typically pictures when it thinks of AI.

The authors predict that even narrower, more specialized AIs in the near term “will probably far surpass the capabilities of systems we use today and will be able to solve complex problems, take action to collect and sort data, and deliver well-reasoned assessments at scale and at speed.”

RELATED: 3 ways intel analysts are using artificial intelligence right now

But the report returns again and again to the importance of humans working with and among the AIs.

“AI human-machine teaming will enable human intelligence analysts to focus on where they can best apply their expertise to maintain a competitive edge — clearly a vital priority in an increasingly contested strategic environment,” ASPI Executive Director Justin Bassi said in the statement.

The challenges are extensive and intricate, raising fundamental questions such as how to verify information and its sources, let alone judge their reliability, all core functions of the intelligence community.

“AI’s ability to identify patterns that cannot be manually verified through non-AI analysis poses a dilemma: to deploy AI and risk poor decision-making based on analysis not subject to human verification or to risk a potential intelligence failure by not deploying the AI,” the report says. Experts have long warned that many AI models are essentially “black boxes”: a question goes in and an answer comes out, but the system cannot explain exactly how it arrived at it.

“This dilemma raises the central question of the extent to which transparency should be sacrificed for decision advantage. In such cases, it may be necessary to treat AI as a source of intelligence, similar to human sources, and assess its reliability and credibility based on its past performance and the context in which it operates,” the report says. “It might also require human spot-checking of randomly selected inputs or using other sources to corroborate the findings and insights that AI provides. This approach would require the development of new frameworks and methodologies for evaluating AI systems as intelligence sources, considering factors such as their track record, the quality and relevance of their outputs, and their potential biases or limitations.”
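The report stops short of prescribing how such an evaluation framework would be built. As a purely illustrative sketch, the approach it describes, random human spot-checks feeding a track record for each AI system, could look something like the following Python. Every name here (AISourceRecord, record_check and so on) is hypothetical, not drawn from the report.

```python
import random

class AISourceRecord:
    """Treats an AI system like a recruited intelligence source whose
    reliability is assessed from the track record of human spot-checks."""

    def __init__(self, name: str, spot_check_rate: float = 0.1):
        self.name = name
        self.spot_check_rate = spot_check_rate  # fraction of outputs sampled for review
        self.checks = 0      # outputs a human analyst has reviewed
        self.confirmed = 0   # reviewed outputs that were corroborated

    def should_spot_check(self) -> bool:
        # Randomly select outputs for human verification, echoing the
        # report's "spot-checking of randomly selected inputs."
        return random.random() < self.spot_check_rate

    def record_check(self, corroborated: bool) -> None:
        # Log the outcome of one human review, e.g. corroboration
        # of the AI's finding against other intelligence sources.
        self.checks += 1
        if corroborated:
            self.confirmed += 1

    def reliability(self):
        # Track record: share of spot-checked outputs that held up.
        # Returns None when there is no basis yet for an assessment.
        return self.confirmed / self.checks if self.checks else None


# Example run with simulated outputs: sample 1 in 5 outputs from a
# hypothetical analytic model whose findings are corroborated 90% of the time.
source = AISourceRecord("analytic-model-A", spot_check_rate=0.2)
for _ in range(1000):
    if source.should_spot_check():
        source.record_check(random.random() < 0.9)  # stand-in analyst verdict
print(f"{source.name}: reliability {source.reliability():.2f} over {source.checks} checks")
```

In practice such a score would be only one input to the “new frameworks and methodologies” the report calls for, alongside factors like the relevance of the AI’s outputs and its known biases or limitations.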

RELATED: Air Force studying ‘military applications’ for ChatGPT-like AI, Kendall says

Meanwhile, the next wave of so-called GenAI must be planned for now, even as the two intelligence communities scramble to make sense of, and effectively use, the large language models of today.

“To avoid remaining perpetually behind the curve on the pace of AI technological development, analytic managers should shift their focus away from what GenAI can do today and instead make reasoned bets on what GenAI will be able to deliver within the next three to five years,” the report’s authors write.