Sponsored Story

How to Scale AI: The Key to Crossing from Pathfinder to Success

Military leaders see applications for artificial intelligence in everything from autonomous aircraft to logistics and cybersecurity. But scaling up from pilot programs to operational use is proving to be a major hurdle.

Scaling AI “is very much about building a scaffold or a framework,” said Jay Meil, chief data scientist at SAIC. Narrowing down to “what problem are we actually going to solve” is the first step, he said. “Once we identify that problem, we need to come up with a defined quantitative outcome, and we also need to identify applicable data.”

Good foundational work helps break the problem down into components, so teams can approach those smaller challenges with the idea that the solutions can be combined later on.

“You can build a small pilot to solve one of those small problems,” Meil said, and then the pilots can be combined, each constructed with future scalability in mind. “You want to build the framework in such a way that it’s extensible and scalable.”

The architecture should easily accommodate more computing capacity, more storage capacity, increased functionality, and expanded data sets. “You want to have very robust processing pipelines and compute pipelines in order to be able to scale it organically over time,” Meil said. Anticipating the potential for additional data or alternative uses of that data can be crucial to creating a path for growth.
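As a rough illustration of that idea, here is a minimal sketch of an extensible pipeline scaffold in Python. The `Pipeline` class and its stages are illustrative assumptions, not SAIC’s actual framework; the point is that new stages, data sources, or processing steps can be registered later without rewriting what is already in place.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# Illustrative sketch only: each stage is a self-contained step, and new
# stages can be registered later without changing the ones already in place.
Stage = Callable[[Any], Any]

@dataclass
class Pipeline:
    stages: list = field(default_factory=list)

    def add_stage(self, name: str, fn: Stage) -> "Pipeline":
        """Register a new processing step; returns self so calls chain."""
        self.stages.append((name, fn))
        return self

    def run(self, data: Any) -> Any:
        """Pass the data through every registered stage in order."""
        for _name, fn in self.stages:
            data = fn(data)
        return data

# A small pilot starts with a few stages...
pilot = (
    Pipeline()
    .add_stage("ingest", lambda raw: raw.strip().splitlines())
    .add_stage("clean", lambda rows: [r for r in rows if r])
)

# ...and scales by appending stages, not by rewriting the framework.
pilot.add_stage("count", len)
print(pilot.run("alpha\nbravo\n\ncharlie\n"))  # -> 3
```

Because each stage is self-contained, adding more data sources or heavier processing later is an append to the scaffold rather than a rewrite of it.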

Meil is working on a pilot effort for an Intelligence Community customer with exactly that in mind. “We’re building those frameworks and pipelines out so that when they’re ready, they can slowly add more scope, more data, and more scale to the program,” he said.

The mindset is to focus three steps ahead—to envision possible full-scale applications as they mature. And that means starting out with a question: Is AI really the right solution for a given problem?

Meil said he looks for several key markers in addressing the issue. Will AI make the operator’s work easier? Will AI accelerate the speed of decision? Can AI be leveraged in a repeatable way? Does using AI create a force multiplier? And is the relevant data needed to build an AI model available? 

If the answers are yes, Meil said, then AI can indeed be “the answer.”
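Those markers amount to a go/no-go rubric. A hypothetical sketch of how a team might record that assessment follows; the field names simply paraphrase Meil’s questions and are not an SAIC tool.

```python
from dataclasses import dataclass, fields

# Hypothetical rubric; each field paraphrases one of Meil's questions.
@dataclass
class AIFitAssessment:
    eases_operator_workload: bool   # Will AI make the operator's work easier?
    accelerates_decisions: bool     # Will AI accelerate the speed of decision?
    repeatable: bool                # Can AI be leveraged in a repeatable way?
    force_multiplier: bool          # Does using AI create a force multiplier?
    data_available: bool            # Is the data needed to build a model available?

    def ai_is_the_answer(self) -> bool:
        # AI is "the answer" only if every marker checks out.
        return all(getattr(self, f.name) for f in fields(self))

print(AIFitAssessment(True, True, True, True, False).ai_is_the_answer())  # False
```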

For organizations new to AI, a partner like SAIC can provide invaluable experience and insight into the challenge. “Our focus is to bring these orchestration tools, these workflows and these scaffolds or frameworks, to make this process easier—in a repeatable manner,” he said.

Sometimes the hardest part is a lack of historical data. “Especially when we’re dealing with mission data, we are going to have sparse data sets,” Meil said. “We’re not going to have a lot of information on particular EW signatures or cyber information or information about adversaries.”

But that doesn’t mean AI can’t help. Synthetically generated data can fill the gaps, and AI can help with that. “With generative AI, you might see a new ship that the model has never seen before, and it can generate an answer based on everything that it has learned in the past about previous ships or previous samples,” he said.
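As a rough illustration of filling sparse data sets, the sketch below expands a handful of known samples into a larger training set. A real program would use a trained generative model; simple jittering stands in for that idea here, and the emitter parameters are invented for illustration.

```python
import random

# Sketch of filling a sparse mission data set with synthetic samples. A real
# program would use a trained generative model; jittering the few known
# samples stands in for that idea. The emitter parameters are invented.
known_signatures = [
    {"freq_mhz": 9375.0, "pulse_width_us": 1.2},
    {"freq_mhz": 9410.0, "pulse_width_us": 1.1},
]

def synthesize(samples, n, jitter=0.02):
    """Generate n synthetic samples by perturbing known ones."""
    out = []
    for _ in range(n):
        base = random.choice(samples)
        out.append({k: v * (1 + random.uniform(-jitter, jitter))
                    for k, v in base.items()})
    return out

training_set = known_signatures + synthesize(known_signatures, n=100)
print(len(training_set))  # 102 training samples from 2 real observations
```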

The next step is weaving data together: combining, for example, intelligence data with command and control data. With data available in a single place, “machines can make decisions and help the warfighter, recommending courses of action,” Meil said.
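A minimal sketch of that weaving step, assuming two hypothetical feeds keyed by a shared track ID; the trivial rule at the end stands in for a trained model recommending a course of action.

```python
# Illustrative sketch: intelligence reports and command-and-control tracks,
# keyed by a shared (hypothetical) track ID, merged into one fused view.
intel = {"track-7": {"classification": "hostile", "confidence": 0.9}}
c2 = {"track-7": {"range_km": 42.0, "heading_deg": 270}}

fused = {tid: {**intel.get(tid, {}), **c2.get(tid, {})}
         for tid in intel.keys() | c2.keys()}

def recommend(entity: dict) -> str:
    """Stand-in for a trained model recommending a course of action."""
    if entity.get("classification") == "hostile" and entity.get("range_km", 1e9) < 50:
        return "recommend intercept"
    return "continue monitoring"

print(recommend(fused["track-7"]))  # -> recommend intercept
```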

Some applications may require data to be isolated, such as in combined operations overseas, when some data sources may be shared with one partner but can’t be shared with others. Understanding that requirement ahead of time is key, Meil said. “All of the data can be physically co-located…and logically separated,” he said. “If you and I are searching for the same things, but we have different access levels, we’re both able to access the information that we need.”

With appropriate tagging, that approach can also apply to applications and users with different levels of access. By building that in from the start, the AI application will be readily scalable, and the focus can stay on the mission, where existing doctrine and decision-making guidance are already well established.
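A simple sketch of “physically co-located…and logically separated” data, assuming tag-based filtering; the releasability tags and partner names are illustrative.

```python
# Sketch of co-located but logically separated data: every record carries a
# releasability tag, and each query is filtered by the caller's access set.
RECORDS = [
    {"id": 1, "payload": "shared logistics feed", "releasable_to": {"US", "ALLY-A", "ALLY-B"}},
    {"id": 2, "payload": "partner-restricted intel", "releasable_to": {"US", "ALLY-A"}},
    {"id": 3, "payload": "national-only source", "releasable_to": {"US"}},
]

def query(accesses: set) -> list:
    """Same store, same query path; results filtered by access level."""
    return [r for r in RECORDS if r["releasable_to"] & accesses]

print(len(query({"US"})))      # 3: the full view
print(len(query({"ALLY-B"})))  # 1: only what is releasable to that partner
```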

Building on established doctrine helps ensure AI is providing viable courses of action, and that the humans in the loop, the ultimate decision makers, are always in charge. “There’s no need to rewrite [the rules] around artificial intelligence,” Meil said. “We train the models on the doctrine that is already in place, that people are comfortable with, to make decisions in similar ways. And we always keep that human on the loop.”
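A closing sketch of that arrangement, with hypothetical function names: the model proposes doctrine-based courses of action, but nothing executes without an explicit human decision.

```python
# Sketch of keeping the human in charge, with hypothetical function names:
# the model proposes courses of action, but nothing executes without an
# explicit human decision.
def propose_courses_of_action(situation: dict) -> list:
    """Stand-in for a model trained on existing doctrine."""
    return ["reposition assets", "increase ISR coverage"]

def execute(coa: str, human_approved: bool) -> str:
    if not human_approved:
        return f"HELD: '{coa}' awaits human decision"
    return f"EXECUTING: {coa}"

for coa in propose_courses_of_action({"threat": "emerging"}):
    print(execute(coa, human_approved=False))
```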