Lethal autonomous weapons — robots that can kill people without human intervention — aren't yet on our battlefields, but the technology is close at hand.
As you can imagine, the killer-robot issue raises concerns about wartime strategy, morality, and philosophy. The debate is probably best summarised by this soundbite from The Washington Post: “Who is responsible when a fully autonomous robot kills an innocent? How can we allow a world where decisions over life and death are entirely mechanised?”
These are questions the United Nations is taking quite seriously; it discussed them in depth at a meeting last month. Nobel Peace Prize laureates Jody Williams, Archbishop Desmond Tutu, and former South African President F.W. de Klerk are among a group calling for an outright ban on such technology, but others are sceptical that a ban would work, citing historical precedent that weapons bans are difficult to enforce:
While some experts want an outright ban, Ronald Arkin of the Georgia Institute of Technology pointed out that Pope Innocent II tried to ban the crossbow in 1139, and argued that it would be almost impossible to enforce such a ban. Much better, he argued, to develop these technologies in ways that might make war zones safer for non-combatants.
Arkin suggests that “if these robots are used illegally, the policymakers, soldiers, industrialists and, yes, scientists involved should be held accountable.” In other words, if a robot kills a person outside its rules or boundaries, the people involved in that robot’s creation are responsible. But here’s his hedge, from a 2007 book called “Killer Robots”:
“It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield. But I am convinced that they can perform more ethically than human soldiers.”
This is one of several issues we’ll have to resolve as technology races ahead.