Artificial intelligence could bring about “biological conflict,” said former Google chief executive Eric Schmidt, who chaired the National Security Commission on Artificial Intelligence.
Schmidt spoke with defense reporters Sept. 12 as he helped release a new paper from his tech-oriented nonprofit think tank, the Special Competitive Studies Project, which he launched with staff from the commission in order to continue its work.
AI’s applicability to biological warfare is “something which we don’t talk about very much,” Schmidt said, but it poses grave risks. “It’s going to be possible for bad actors to take the large databases of how biology works and use it to generate things which hurt human beings,” Schmidt said, calling that risk “a very near-term concern.”
Schmidt cited viruses as one example: “The database of viruses can be expanded greatly by using AI techniques, which will generate new chemistry, which can generate new viruses.”
The new paper, “Mid-Decade Challenges to National Competitiveness,” says advances in biology could empower individuals to formulate pathogens and, therefore, “increase uncertainty about which actions are taken by a state, by those acting on behalf of a state, or those acting on their own.”
Schmidt declined to elaborate further, saying he had recently been appointed to a new commission on bioterrorism that had not yet met.
His warning echoes the results of a recent experiment by Collaborations Pharmaceuticals, a drug-discovery company, which repurposed an AI model built to screen new drug candidates for toxicity so that it instead generated formulas for toxic substances.
Only “vaguely aware of security concerns around work with pathogens or toxic chemicals” at the outset, according to the paper they later published about the experiment, the researchers tried it after being invited to take part in a conference on chemical and biological weapons. They concluded that they had been “naive in thinking about the potential misuse.”
“Even our projects on Ebola and neurotoxins … had not set our alarm bells ringing,” they wrote.
For the experiment, they trained a commercial AI model of their own design on data from a publicly available database of molecules. They chose to “drive the generative model towards compounds such as the nerve agent VX, one of the most toxic chemical warfare agents developed during the twentieth century.”
It worked: “In less than 6 hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold.”
Among those were not only VX itself “but also many other known chemical warfare agents,” along with new molecules predicted to be even more toxic.
The potential misuse of biological databases remains front of mind within DOD’s own in-house project to digitize decades of medical slides for AI-enabled research. The department’s Joint Pathology Center houses the world’s most extensive repository of diseased tissue samples. Its leaders envision AI algorithms learning to predict a patient’s prognosis: whether a cancer patient, for example, could get by with monitoring alone or would need aggressive treatment.
Its director, pathologist Army Col. Joel T. Moncur, said in a past interview that the center had prioritized “privacy, security, and ethics” in designing the project.
The Defense Innovation Board, which Schmidt chaired at the time, recommended “enhancements” to make the center’s repository even more suitable for AI research, going beyond its ongoing effort to create high-resolution digital images of physical slides, in part by linking the slides to each patient’s medical records. The records would undergo “de-identification.”
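“De-identification” in this context generally means stripping direct identifiers from records before they are linked to research data. The center has not published its method; as a purely illustrative sketch in the spirit of HIPAA’s “Safe Harbor” approach, with hypothetical field names, the core idea might look like this:

```python
# Illustrative sketch only: the record format and field names are hypothetical,
# and this is not the Joint Pathology Center's actual pipeline.
import hashlib

# A subset of the direct-identifier fields a pipeline might drop outright.
DIRECT_IDENTIFIERS = {
    "name", "address", "phone", "email", "ssn",
    "medical_record_number", "birth_date",
}

def deidentify(record: dict, salt: str) -> dict:
    """Copy a record without its direct identifiers, substituting a salted
    one-way pseudonym so de-identified records can still be linked to the
    matching digitized slide."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # The same patient always maps to the same pseudonym, but the raw
    # identifier is never stored alongside the research data.
    clean["patient_pseudonym"] = hashlib.sha256(
        (salt + record["medical_record_number"]).encode()
    ).hexdigest()[:16]
    return clean

record = {
    "name": "Jane Doe",
    "medical_record_number": "MRN-0012345",
    "birth_date": "1950-01-01",
    "diagnosis": "invasive ductal carcinoma",
    "outcome": "responded to monitoring alone",
}
print(deidentify(record, salt="per-project-secret"))
```

The salted hash keeps records linkable without exposing the raw identifier; real de-identification pipelines must also scrub identifiers from free-text clinical notes, which simple field-dropping does not catch.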