NYT: A.I.-controlled killer drones become reality

10:48 26.11.2023

An experimental unmanned aircraft at Eglin Air Force Base in Florida. The drone uses artificial intelligence and has the capability to carry weapons, although it has not yet been used in combat.
Photo: The New York Times

Worried about the risks of robot warfare, some countries want new legal constraints, but the U.S. and other major powers are resistant, writes ‘The New York Times’.

It seems like something out of science fiction: swarms of killer robots that hunt down targets on their own and are capable of flying in for the kill without any human signing off.

But it is approaching reality as the United States, China and a handful of other nations make rapid progress in developing and deploying new technology that has the potential to reshape the nature of warfare by turning life-and-death decisions over to autonomous drones equipped with artificial intelligence programs.

That prospect is so worrying to many other governments that they are trying to focus attention on it with proposals at the United Nations to impose legally binding rules on the use of what militaries call lethal autonomous weapons.

“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, said in an interview. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue and an ethical issue.”

But while the U.N. is providing a platform for governments to express their concerns, the process seems unlikely to yield substantive new legally binding restrictions.

The debate over the risks of artificial intelligence has drawn new attention in recent days with the battle over control of OpenAI, perhaps the world’s leading A.I. company, whose leaders appeared split over whether the firm is taking sufficient account of the dangers of the technology. And last week, officials from China and the United States discussed a related issue: potential limits on the use of A.I. in decisions about deploying nuclear weapons.

“Complacency does not seem to be an option anymore,” Ambassador Khalil Hashmi of Pakistan said during a meeting at U.N. headquarters. “The window of opportunity to act is rapidly diminishing as we prepare for a technological breakout.”

Rapid advances in artificial intelligence and the intense use of drones in conflicts in Ukraine and the Middle East have combined to make the issue that much more urgent. So far, drones generally rely on human operators to carry out lethal missions, but software is being developed that soon will allow them to find and select targets more on their own.

“This isn’t the plot of a dystopian novel, but a looming reality,” Gaston Browne, the prime minister of Antigua and Barbuda, told officials at a recent U.N. meeting.

Pentagon officials have made it clear that they are preparing to deploy autonomous weapons in a big way.

Deputy Defense Secretary Kathleen Hicks announced this summer that the U.S. military would “field attritable, autonomous systems at scale of multiple thousands” in the coming two years, saying that the push to compete with China’s own investment in advanced weapons necessitated that the United States “leverage platforms that are small, smart, cheap and many.”

The concept of an autonomous weapon is not entirely new. Land mines — which detonate automatically — have been used since the Civil War. The United States has missile systems that rely on radar sensors to autonomously lock on to and hit targets.

What is changing is the introduction of artificial intelligence that could give weapons systems the capability to make decisions themselves after taking in and processing information.

Some arms control advocates and diplomats disagree with that push, arguing that A.I.-controlled lethal weapons that do not have humans authorizing individual strikes will transform the nature of warfighting by eliminating the direct moral role that humans play in decisions about taking a life.

These A.I. weapons will sometimes act in unpredictable ways and are likely to make mistakes in identifying targets, much as driverless cars have accidents, these critics say.

The new weapons may also make the use of lethal force more likely during wartime, opponents argue, since the military launching them would not be putting its own soldiers at immediate risk, and they could lead to faster escalation.

Arms control groups like the International Committee of the Red Cross and Stop Killer Robots, along with national delegations including Austria, Argentina, New Zealand, Switzerland and Costa Rica, have proposed a variety of limits.

Some would seek a global ban on lethal autonomous weapons that explicitly target humans. Others would require that these weapons remain under “meaningful human control” and that they be used only in limited areas for specific amounts of time.

Last week, the Geneva-based committee that has been debating the issue agreed, at the urging of Russia and other major powers, to give itself until the end of 2025 to keep studying the topic, one diplomat who participated in the debate said.

“If we wait too long, we are really going to regret it,” Mr. Kmentt said. “Soon enough, it will be cheap, easily available, and it will be everywhere. And people are going to be asking: Why didn’t we act fast enough to try to put limits on it when we had a chance to?”

 
