MAG-24 Airfield Seizure Training Exercise

U.S. Marine Corps Lt. Col. Eric R. Olsen, operations officer with Marine Aircraft Group (MAG) 24, participates in a simulated Command Operations Center (COC) exercise at Marine Corps Base (MCB) Hawaii, Kaneohe Bay, Oct. 25, 2013. (U.S. Marine Corps photo by Lance Cpl. Aaron S. Patterson, MCBH Combat Camera/Released)

WASHINGTON — As the US military moves ever so cautiously to adopt generative AI, the Marine Corps has issued its marching orders on how to safely implement the new technology, including a plan to stand up GenAI task forces at major commands.

Formally issued on Dec. 4, instruction NAVMC 5239.1 aims to split the difference between the AI hype squad and the doomsayers. On the cautious side, it reminds Marines that GenAI models may “hallucinate” — a term of art for “make stuff up” — and mandates, “System users should distrust and verify all outputs prior to use.” On the optimistic side, however, the policy urges leaders to let their Marines try out the technology: “Commands are discouraged from banning the use of GenAI capabilities.”

“Instead,” the policy continues, “commands should develop comprehensive governance processes that thoughtfully balance the benefits of GenAI tools and capabilities with potential risks, ensuring their use supports broader organizational objectives while maintaining operational security and integrity.”

Specifically, the new policy gives commanders a new to-do list for 2025 on how to safely and responsibly employ GenAI. Some highlights from the memo’s four pages of fine print:

  • “Commands are responsible for identifying their GenAI developers, system owners, and system users to mitigate residual risk when adopting GenAI tools into their workflows.” In other words, know who’s using AI, and know whose AI they’re really using. Though the guidance doesn’t specifically reference it, the hype cycle ignited by ChatGPT saw many middleman companies offer GenAI capabilities that, under the hood, simply forwarded users’ queries to publicly available services such as OpenAI’s ChatGPT and repackaged the answers — a significant security risk.
  • “Commands are responsible for ensuring developers, system owners and system users use appropriate risk assessment frameworks for GenAI systems. These include the DoD Responsible AI Toolkit (reference le), the National Institute of Standards and Technology’s Risk Management Framework (reference la), and the Defense Innovation Unit’s Responsible AI Guidance (reference ld).” In short, don’t just try whatever: have a plan to reduce the risks that follows established best practices.
  • “Commands will track and manage AI tools, articulate what AI tools are being developed, and how the AI tools will be utilized in accordance with the five DoD AI Ethical Principles (reference i).” Similarly, don’t just let everyone try whatever looks cool; keep track of the tools in use and obey the five principles of responsible, equitable, traceable, reliable, and governable AI.

The most open-ended item is the mention of task forces, whose full assignment will be detailed in forthcoming policy. What this document does say is that they must take a broad, interdisciplinary approach to examining the GenAI tools available and assessing which are actually useful for which specific purposes, or “use cases.”

“Commands will establish an AI Task Forces/Cells consisting of various data, knowledge management, AI and digital operations subject matter experts to assess existing and in-development GenAI offerings,” the policy says. Those task forces then “will generate a list of forthcoming preferred GenAI capabilities aligned with common use cases as a reference for USMC organizations seeking to apply GenAI solutions to their mission needs and, as applicable, endorsement.”

Further detail “will be captured in an upcoming memorandum,” the December memo says.