The idea of robots replacing combat troops has prompted some concern, leading to questions about the automation of killing on the battlefield.
“Should a robot decide when to kill?” was Adrianne Jeffries’ apocalyptic headline today in The Verge.
“We’re part of the Defence Department,” DARPA’s director, Arati Prabhakar, acknowledges. “Why do we make these investments? We make them because we think that they’re going to be important for national security.” One recent report from the US Air Force notes that “by 2030 machine capabilities will have increased to the point that humans will have become the weakest component in a wide array of systems and processes.”
The idea that humans could become the weakest component in the military process could have frightening ramifications. The International Committee for Robot Arms Control (ICRAC), an organisation founded in 2009 by experts in ethics, law, and robotics, warns that the proliferation of autonomous robots could lead to nuclear-armed killing machines, or to dictators using robots to mow down civilians.
Though Terminator-like autonomy is still a long way — and a few legal loopholes — off, Jeffries’ concerns are not entirely unfounded. The military asserts that there is always a soldier “in the loop” when it comes to pulling the trigger, but its effort to push more of the burden onto bots is nonetheless apparent.
DARPA’s recent Robotics Challenge is at the very least the military’s endorsement of the idea that machines will eventually be able to take over for humans, even if at first only for such dangerous activities as defusing roadside bombs or assisting in disaster areas like the Fukushima nuclear meltdown.
There’s no denying, though, that fully automated robots could eventually be put to more violent military use — the Obama administration’s continued justification for its drone program in Pakistan is that the mountainous region poses too much of a burden for the bots’ human counterparts.
The Department of Defence has attempted to assuage these worries by releasing directive 3000.09, “Autonomy in Weapons Systems.” The directive, due to expire in 2022, establishes guidelines for the proper use of autonomous weapons systems — for example, that a machine must always follow an operator’s intent.
The UN has also recently weighed in, releasing a draft report calling for a halt to autonomous weapons development and insisting that autonomous killer robots “should not have the power of life and death over human beings.”
Still, as long as robots offer countries a means to wage war without sacrificing their own soldiers, interest in developing robotic soldiers will likely continue unabated.