Pentagon Experts Call for More Practical AI to Manage Mundane Tasks

A pair of chief information officers from Pentagon organizations argued for a more practical approach to artificial intelligence, one focused on streamlining routine organizational tasks across the Defense Department.

“AI will most impact what is seemingly least compelling from the clickbait headline perspective,” Air Force Maj. Michael Kanaan, the military deputy CIO of the Pentagon’s Chief Digital and Artificial Intelligence Office, told the Defense Innovation Board during a July 17 meeting. “The most profound AI impacts will inevitably be—whether professionals from all walks of business learn it sooner or later—in the back-office functions, at least in the short term. But it’s an area overlooked for its lack of glamour compared to warfighting applications.”

Implementing AI does not have to be a sweeping endeavor or focused on the biggest problems, Kanaan said as part of a presentation on “Aligning Incentives to Drive Faster Tech Adoption.” Instead, he listed a wide range of simpler tasks where the technology could assist service members.

“Personnel generating templates, intelligence analysts doing language translation, pilot scheduling sorties, logisticians, depot maintenance and review, auto form, budget, finance acquisition professionals’ redundancies,” Kanaan said. “Install Python for better pivot tables, and speech writers, quit writing speeches from nothing, chaplains have better sermons.”  
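To make the “better pivot tables” remark concrete, here is a minimal sketch of what that kind of back-office work could look like. Kanaan’s quote names only Python; the use of the pandas library and the maintenance figures below are illustrative assumptions, not anything described by the Air Force.

```python
# Illustrative sketch only: pandas and the sample data are assumptions,
# standing in for the spreadsheet-style pivot work Kanaan describes.
import pandas as pd

# Hypothetical depot-maintenance records (invented for this example).
records = pd.DataFrame({
    "base": ["Hill", "Hill", "Tinker", "Tinker"],
    "aircraft": ["F-35", "F-16", "B-52", "E-3"],
    "quarter": ["Q1", "Q1", "Q1", "Q2"],
    "labor_hours": [1200, 800, 1500, 950],
})

# One call replaces a hand-built spreadsheet pivot:
# total labor hours broken out by base and quarter.
summary = pd.pivot_table(
    records,
    values="labor_hours",
    index="base",
    columns="quarter",
    aggfunc="sum",
    fill_value=0,
)
print(summary)
```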

In recent years, the Pentagon has launched several efforts to implement different AI programs for everyday tasks. Last year, the Navy deployed an AI program called “Amelia” to handle common tech-support questions. Soon after, the Department of Defense launched a generative AI task force known as Lima to assess, integrate, and manage AI tools, including large language models.

The department’s latest tech leap came last month, when the Air Force launched its own free generative AI chatbot, “NIPRGPT.” Tailored for Airmen and Guardians, the software helps them with communications, task completion, and coding on a secure Pentagon network. The platform interacts with users in a “human-like” manner to answer questions, offering a direct route to clarity from leadership without the usual barrage of emails.

“These tasks that clutter the mission offer a lower barrier to entry and present less risk,” said Kanaan, suggesting that AI usage for such problems doesn’t require extensive training of the technology or any major changes to existing processes.

Alexis Bonnell, CIO of the Air Force Research Laboratory, which helped develop NIPRGPT, also emphasized the need to reduce mundane, repetitive tasks to free up resources and energy for more innovative endeavors across the department.

“I’ve learned that toil eats purpose faster than mission can replace it,” Bonnell said at the July 17 meeting. “One of the ironies I found as a leader coming back in is how much toil is removed from my experience … Without taking away all the extra (toil), it becomes very difficult to do something new or novel.”

However, questions remain about how useful AI will be and how successfully the Pentagon is adopting it. Some experts have raised concerns about the training and operating costs associated with NIPRGPT, urging the DOD to establish a long-term budget plan to mature the technology effectively. They have also questioned whether the Air Force and the Pentagon would be better off leveraging existing commercial software, such as OpenAI’s ChatGPT, rather than building their own systems.

For now, NIPRGPT remains a work in progress, with the department gradually refining the platform through user feedback and vendor discussions.

Besides recommending a focus on smaller tasks, Kanaan also cautioned against implementing sweeping policy changes or new frameworks based on exaggerated expectations of AI, which he said could impede progress.

“There exists a prevailing human bias for action and novelty, particularly related to AI, for ‘more change, more policy,’ that leads to misconceptions like the need for entirely new cyber risk frameworks or rhetoric to completely overhaul the rules of engagement for warfare,” said Kanaan, adding that this is usually “spurred on by the overestimation of AI capabilities, or just simply sci-fi imaginations.”