The United Nations’ secretary-general advocated for new restrictions on autonomous weapons as a U.N. group that negotiates weapons protocols started a week of meetings, in part, to discuss the matter.
Secretary-General Antonio Guterres addressed the Review Conference of the U.N.’s Convention on Certain Conventional Weapons, which takes place every five years in Geneva, Switzerland. Guterres delivered a Dec. 13 message at the start of the weeklong meeting encouraging conference members “to agree on an ambitious plan for the future to establish restrictions on the use of certain types of autonomous weapons.”
He described autonomous weapons as those “that can choose targets and kill people without human interference.” The conference has identified artificial intelligence, for one, as an “increasingly autonomous” technology.
The Air Force has experimented with autonomous weapons such as the Air Force Research Laboratory’s Golden Horde, which did not become a program of record but did succeed in getting Small Diameter Bombs to collaborate with each other after receiving and interpreting commands mid-flight. The experimental Perdix micro-drones, under the DOD’s Strategic Capabilities Office, rely on AI. And although the Defense Advanced Research Projects Agency’s Gremlins drones don’t rely on AI yet, they’re designed to accommodate that level of computing.
Some countries and international rights groups want the convention to negotiate a treaty that would ban what the U.N. calls lethal autonomous weapons systems—and what others call “killer robots”—but diplomats told Reuters that’s not likely to happen this week. It would require a consensus, and the U.S., for one, has already rejected the idea. Russia was expected to do the same.
The U.N. group began convening experts on lethal autonomous weapons systems in 2014. It could agree on other guidelines short of a treaty, a diplomat told Reuters.
Speaking during an American Enterprise Institute webinar about AI on Dec. 7, NATO’s David van Weel articulated why countries such as the U.S. might oppose a treaty on autonomous weapons.
Van Weel put the issue in terms of a hypothetical attack by a swarm of drones. “How do we defend against them? Well, we can’t, frankly, because you need AI in that case in order to be able to counter AI,” he said.
Van Weel represented a minority on the panel. His counterparts—an Oxford scholar and a tech attorney—supported a treaty to “de-weaponize” AI. The University of Oxford’s Xiaolan Fu, professor of technology and international development, said that even starting a dialogue would amount to progress.
Arguing that AI carries “the risk to be as toxic as a nuclear weapon, if not more,” Jonathan Kewley, co-head of the Tech Group at the firm Clifford Chance, said AI-enabled weapons need people in the loop the same way nuclear weapons do.
AI “doesn’t have a conscience. It doesn’t have a moral fiber unless it’s programmed in,” Kewley said. “AI has the risk to be as toxic as a nuclear weapon, if not more, and if we don’t have the equivalent of that moral compass, the finger on the button designed in through a treaty—because we’re not going to design the technology to prevent this unless there is a treaty involving China, the U.S., and others—we’re going to have a similar issue to nuclear risk.”
NATO’s van Weel suggested he might stop short of endorsing even the AI-assisted activities that the U.S. has already acknowledged, at least in experiments.
Air Force Secretary Frank Kendall revealed in September that the Air Force’s chief architect’s office “deployed AI algorithms for the first time to a live operational kill chain … for automated target recognition,” though neither he nor an Air Force spokesperson at the time provided details about the target. Meanwhile in October, the Army-led, joint-service Project Convergence exercise relied on AI analysis of satellite images for targeting and reportedly shortened the decision-making from what might have taken as long as five hours to just one hour while also improving accuracy.
Van Weel, however, described current AI as “very preliminary” and “very rudimentary” and “by no means capable of making such important decisions,” even when it comes to “using AI to enhance the decision-making process.”
In its Ethical Principles for Artificial Intelligence, the DOD says its AI will possess “the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended consequences,” but the document stops short of specifying humans’ role in its principle covering governability.