Increasing reliance on artificial intelligence to augment human decision-making raises the risk of attacks targeting critical data and AI algorithms, the Air Force’s cyber policy chief warned at AFA’s Air, Space, & Cyber Conference.
“If our adversary is able to inject uncertainty into any part of that process, we’re kind of dead in the water,” said Lt. Gen. Mary F. O’Brien, deputy Air Force chief of staff for intelligence, surveillance, reconnaissance, and cyber. Speaking on a panel on information warfare along with 16th Air Force boss Lt. Gen. Timothy D. Haugh and Air Force Chief Information Officer Lauren Barrett Knausenberger, O’Brien said AI is like any other new weapon system: Getting it is only half the battle. Defending it is just as critical.
“Once we do get the AI, what are we doing to defend the algorithm, to defend the training data, and to remove any uncertainty?” she asked. To be effective, AI must be reliable, and warfighters must trust its insights and recommendations.
But if hackers can corrupt that data, warfighters’ confidence could evaporate in an instant.
AI will be essential to accelerating the decision cycle so that targets can be identified and cued rapidly in the heat of battle, said Yvette S. Weber, Department of the Air Force associate deputy assistant secretary for science, technology, and engineering, speaking in a separate session on autonomy.
“Advancements in [AI and autonomous systems] are critical to accomplishing the core missions of a high-end fight,” she said.
In “highly contested environments, human-machine teaming enables Airmen to process massive amounts of data and more rapidly assist in human decision-making to arrive at targeting decisions,” Weber said.
O’Brien, however, sees risk in the midst of those potential rewards. “There’s an assumption that once we have the AI, we develop the algorithm, we’ve got the training data, [and] it’s giving us whatever it is we want it to, that there’s no risk, that there’s no threat,” she said.
O’Brien mentioned Maj. Rena DeHenre, a young officer who advocated for a Defense Department AI Red Team in a recent post on the Over the Horizon blog. Citing the research paper “Adversarial Machine Learning at Scale,” posted to Cornell University’s arXiv repository, DeHenre argued that establishing Red Teams to hunt for vulnerabilities in military AI implementations is essential.
“With a dedicated AI Red Team, DOD would have a central team to address and assess AI and ML vulnerabilities,” she wrote.
DeHenre is precisely the kind of maverick that O’Brien says she’s been encouraged to “protect and promote.”
In her post, DeHenre lays out the ways in which an enemy could seek to twist U.S. reliance on AI to poison its decision-making processes.
“Adversarial machine learning (AML) is the purposeful manipulation of data or code to cause a machine learning algorithm to malfunction or present false predictions,” she wrote, citing the final report of the National Security Commission on Artificial Intelligence (NSCAI).
The NSCAI report notes that “even small manipulations of these data sets or algorithms can lead to consequential changes for how AI systems operate.” Indeed, the commission wrote that “the threat is not hypothetical: Adversarial attacks are happening and already impacting commercial [machine learning] systems.”
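In concrete terms, such an attack can be remarkably small. The sketch below is purely illustrative, written against an invented toy classifier rather than any fielded system; the model, the weights, and the data are assumptions, and the technique shown is the fast-gradient-sign evasion method studied in the paper DeHenre cites.

```python
# A minimal sketch of an FGSM-style evasion attack against a toy linear
# classifier. The model, weights, and data are invented for illustration;
# they stand in for whatever fielded system an adversary might target.
import numpy as np

rng = np.random.default_rng(0)
d = 784                     # e.g. a flattened 28x28 sensor image

# Toy "trained" model: logistic regression with random weights.
w = rng.normal(size=d)
b = 0.0

def predict(x):
    """Probability the model assigns to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A legitimate input, shifted so the model is confident it is class 1.
x_clean = rng.normal(size=d)
x_clean += (3.0 - x_clean @ w) / (w @ w) * w         # logit is now exactly 3.0
print(f"clean score:       {predict(x_clean):.3f}")  # ~0.95

# FGSM: move every feature a tiny step in the direction that lowers the
# correct-class score. For a linear model that direction is sign(w).
epsilon = 0.01                                       # per-feature budget
x_adv = x_clean - epsilon * np.sign(w)
print(f"adversarial score: {predict(x_adv):.3f}")    # collapses toward 0
```

Spread across hundreds of features, a per-feature nudge far too small to notice is enough to flip the model’s answer, which is what makes these manipulations so hard to spot.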
Worryingly, the commission notes that “with rare exceptions, the idea of protecting AI systems has been an afterthought in engineering and fielding AI systems, with inadequate investment in research and development.”
As with any other software, security will never be as good as it could be if it isn’t built in from the start.
“There has not yet been a uniform effort to integrate AI assurance across the entire U.S. national security enterprise,” the commission concludes.
Manipulations do not even have to be intentional: AI must also be flexible enough to handle anomalous data, both in its training sets and in the real-world data it encounters.
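One way to build in that flexibility, sketched below under illustrative assumptions (the eight-feature data, the z-score test, and the threshold are all placeholders), is to screen incoming data against the statistics of the training set and set aside anything far outside that envelope for review rather than feeding it straight to the model.

```python
# A minimal sketch of screening incoming data for anomalies before inference.
# The training data, feature count, and z-score threshold are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
training_data = rng.normal(loc=0.0, scale=1.0, size=(10_000, 8))

mu = training_data.mean(axis=0)
sigma = training_data.std(axis=0)

def screen(x, max_z=6.0):
    """Return True if the input resembles the training distribution."""
    z = np.abs((x - mu) / sigma)
    return bool(np.all(z < max_z))

print(screen(rng.normal(size=8)))   # True  -> pass to the model
print(screen(np.full(8, 50.0)))     # False -> flag for human review
```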
Hacking AI systems can be even easier than hacking conventional IT systems, some experts maintain.
“Machine learning vulnerabilities often cannot be patched the way traditional software can, leaving enduring holes for attackers to exploit,” notes a research paper from Georgetown University’s Center for Security and Emerging Technology. The paper goes on to point out that some hacks don’t even require insider access to the victim’s networks, since they can be accomplished by poisoning the data the system is collecting.
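Data poisoning lends itself to an equally small sketch. The one below is purely illustrative, with fabricated data and a deliberately simple nearest-centroid classifier standing in for a real model, but it shows how a handful of injected points in the data a system ingests can change what it flags, without any access to the victim’s network:

```python
# A minimal sketch of training-data poisoning against a toy nearest-centroid
# classifier. The data, labels, and injected points are fabricated examples.
import numpy as np

rng = np.random.default_rng(1)

# Clean training data: two well-separated classes in 2-D.
benign = rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(200, 2))   # label 0
threat = rng.normal(loc=[+2.0, 0.0], scale=0.5, size=(200, 2))   # label 1

def fit_centroids(class0, class1):
    return class0.mean(axis=0), class1.mean(axis=0)

def classify(x, c0, c1):
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

# Before poisoning: a clearly threat-like input is flagged as class 1.
probe = np.array([1.5, 0.0])
c0, c1 = fit_centroids(benign, threat)
print("clean model flags threat:", classify(probe, c0, c1))        # 1

# Poisoning: the adversary slips a few far-away points labeled "threat"
# into the ingested data, dragging that class's centroid off target.
poison = np.full((20, 2), 40.0)
c0_p, c1_p = fit_centroids(benign, np.vstack([threat, poison]))
print("poisoned model flags threat:", classify(probe, c0_p, c1_p))  # 0
```

Because the bad behavior is baked into the trained model rather than into a discrete piece of vulnerable code, there is no single flaw to patch afterward.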
Defending AI, the paper argues, requires both building resilient systems and making them transparent and subject to human oversight so that the way they reached their outcomes can be understood. “Policymakers should pursue approaches for providing increased robustness, including the use of redundant components and ensuring opportunities for human oversight and intervention when possible,” the paper states.
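What “redundant components” and “human oversight and intervention” can look like in practice is sketched below, under assumed thresholds and with placeholder model scores: several independently trained models score the same input, and if they disagree beyond a set margin, the call is routed to a person instead of being automated.

```python
# A minimal sketch of redundancy plus human oversight: independent models vote,
# and disagreement escalates the decision. Thresholds and scores are placeholders.
from statistics import mean

def route_decision(scores, agree_margin=0.15, decision_threshold=0.5):
    """scores: class-1 probabilities from independent, redundant models."""
    spread = max(scores) - min(scores)
    if spread > agree_margin:
        return "ESCALATE_TO_HUMAN"        # models disagree: a person decides
    return "ENGAGE" if mean(scores) >= decision_threshold else "HOLD"

# Redundant models agree -> automated recommendation stands.
print(route_decision([0.91, 0.88, 0.93]))   # ENGAGE
# One model diverges (possibly under attack) -> human in the loop.
print(route_decision([0.91, 0.45, 0.93]))   # ESCALATE_TO_HUMAN
```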
Ed Vasko, director of Boise State University’s Institute of Pervasive Cybersecurity, expressed similar concerns during a session on 5G networking and cyber operations at AFA’s conference. “Every single technology transformation platform that I’ve ever seen and experienced” has become a target because of the data it collects, he said.
“Every time that we take the data elements and expand them out and find even more and more telemetry data to make use of, the challenge that we end up with is that we create more and more data environments and more information environments for our adversaries to potentially attack.”
The risks go beyond vulnerabilities created by cloud architectures or application programming interfaces, Vasko said, because the sheer volume of data being collected and processed makes up the biggest attack surface.
“The amount of data is going to explode beyond anybody’s expectations at this point,” he said. “I’m not talking about access, I’m not talking about API platform connectivity. I’m actually talking about just the sheer collection of that data, and what that enables our adversaries to do and to think about.”
Vasko said the key difference between these new technologies and the processes they replace is that they effectively require Airmen and Guardians to relinquish their own judgment and instead trust the algorithm to interpret the data correctly and reach a conclusion. Joint all-domain command and control creates the opportunity “to actually change up how our fighters and our Guardians are thinking about leveraging their own senses,” Vasko said.
On the flip side, however, adversaries gain the potential to interfere in battlefield decision making at the same machine speeds that these decisions can be made. Just as misconstrued intelligence might have informed—or misinformed—a decision in the past, altering the data that underlies a machine decision in the future could have disastrous consequences.
“If our adversaries are able to achieve any of that, and impact … the JADC2 elements that are engaged to support our fighters, it’s game over,” he said.