United Nations member states couldn’t agree on limiting lethal autonomous weapons, but those seeking a treaty may have made headway nonetheless.
In December, parties to the U.N.’s Convention on Certain Conventional Weapons concluded its Sixth Review Conference, a meeting held once every five years, without moving ahead on treaty negotiations.
But the fact “that the conversation is happening at all” may itself have amounted to progress, Elisabeth Braw, senior fellow at the American Enterprise Institute and author of the policy paper “Artificial Intelligence: The Risks Posed by the Current Lack of Standards,” said in an interview about the conference’s outcome.
U.N. Secretary-General António Guterres had said before the meeting that the conference “must swiftly advance its work on autonomous weapons that can choose targets and kill people without human interference.” But officials told Reuters after the meeting that India, Russia, and the U.S. were among the countries that, unsurprisingly, objected to the negotiations.
Parties to the Convention on Certain Conventional Weapons began addressing lethal autonomous weapons systems in 2013. An informal meeting of experts followed in 2014, a group of governmental experts was created in 2016, and 11 “guiding principles” relating to such systems were adopted in 2019.
Countries including Austria, Belgium, Brazil, and New Zealand, along with nongovernmental organizations such as the International Committee for Robot Arms Control, the Campaign to Stop Killer Robots, and the International Committee of the Red Cross, have taken their arguments to the U.N. or stated positions on the issue.
Speaking in a webinar hosted by Braw, NATO’s David van Weel articulated why countries such as the U.S. might broadly oppose limits on autonomous weapons, framing the issue in terms of a hypothetical attack by a swarm of drones. “How do we defend against them? Well, we can’t, frankly, because you need AI in that case in order to be able to counter AI,” he said.
Countries probably all realize that rules restricting lethal autonomous weapons are inevitable, Braw said. But the underlying technology, the artificial intelligence that enables the autonomy, could be more difficult to regulate than, say, nuclear weapons, which fewer countries could conceivably build.
Braw speculated that, for a mishap to generate enough public pressure on U.S. politicians to bring the government into treaty negotiations, the cost might have to be as serious as “the loss of life on our own side.” She suggested that the European Union, even though it is not a military alliance, could be a body other than the U.N. to tackle the issue, perhaps starting with a rudimentary agreement that addresses only the “most egregious uses” of autonomous weapons.
“It is so complicated—and at the same time as we should worry about huge dangers posed by AI, we should realize that it has many useful applications,” Braw said. “It’s a force for good as well.”