On July 27, over a thousand artificial intelligence researchers, including Google director of research Peter Norvig and Microsoft managing director Eric Horvitz, co-signed an open letter urging the United Nations (UN) to ban the development and use of autonomous weapons.
The letter, presented at the 2015 International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, now has 16,000 signatures, according to The Guardian. Signatories also include Tesla and SpaceX CEO Elon Musk and physicist Stephen Hawking.
The letter’s worries about ‘smart’ weapons that can kill on their own sound like a science fiction trope — humans cowering in fear of killer robots they have unknowingly created.
In reality, these killer robots are being knowingly created, under the guise of being “semi-autonomous” — after picking out a target and aiming, they need a human to pull the trigger. To be fully autonomous, they wouldn’t need a human to OK the kill shot.
There is a case to be made for these killer robots. According to The Conversation, autonomous and semi-autonomous weapons “are potentially more accurate, more precise” and “preferable to human war fighters.”
But the development of semi-autonomous weapons is secretive, and it’s unclear what part humans play in choosing and firing on targets, Heather Roff, an assistant professor at the University of Denver’s Josef Korbel School of International Studies and a contributor to the letter, told Tech Insider.
Without guidance from the UN, Roff believes, the current secrecy may set a dangerous precedent for how truly autonomous weapons are developed under a misleading moniker, and the consequences could be disastrous.
Here are some weapons systems that are so advanced they are worrying researchers.
The Samsung SGR-1
The Samsung SGR-1 patrols the border between North and South Korea, called the Demilitarized Zone. South Korea installed the stationary robots, developed by Samsung Techwin and Korea University, according to NBC News.
Roff said the SGR-1 was initially built with the capability to detect, target, and shoot intruders from two miles away.
“In that sense, it’s a really sophisticated landmine, it can sense a certain thing and can automatically fire,” she said.
But Peter Asaro, the co-founder of the International Committee for Robot Arms Control, told NBC News that South Korea received “a lot of bad press about having autonomous killer robots on their border.”
Now the SGR-1 can only detect and target intruders; it requires a human operator to approve the kill shot.
The Long Range Anti-Ship Missile
The long range anti-ship missile, or LRASM, is currently being developed by Lockheed Martin and recently aced its third flight test. The LRASM can be fired from a ship or plane and can autonomously travel to a specified area, avoiding obstacles it might encounter outside the target area, said Roff. The missile will then choose one ship out of several possible options within its destination based on a pre-programmed algorithm.
“The missile does not have an organic ability to choose and prosecute targets on its own,” Lockheed Martin said in an email to Tech Insider. “Targets are chosen, modelled and programmed prior to launch by human operators. There are multiple subsystems that ensure the missile prosecutes the intended targets, minimising collateral damage. While the missile does employ some intelligent navigation and routing capabilities, avoiding enemy defences, it does not meet the definition of an autonomous weapon. The LRASM missile navigates to a pre-programmed geographical area, searches for its pre-designated targets, and after positive identification, engages the target.”
A second email from Lockheed Martin said that “the specifics on how the weapon identifies and acquires the intended target are classified and not releasable to the public.”
The vagueness with which the LRASM locks on to its target may leave too much room for error, Roff said.
“Is it the heat? Is it the shape? Is it the radar signature? Is it a weighting of all of these things, that the one in the middle with the most signatures on it is the best target?” she said. “The decision process of how that gets made isn’t clear. We also don’t know if it’s always going to be a military object when it has all of those things.”
The IAI Harpy
The Israel Aerospace Industries’ (IAI) Harpy is a “fire-and-forget” autonomous drone system mounted on a vehicle that can detect, attack and destroy radar emitters, according to the Times of Israel. The Harpy can “loiter” in the air for a long period of time as it searches for enemy radar emitters before it fires “with a very high hit accuracy,” according to the IAI’s website.
But Roff said the system may not have any safeguards regarding where the radar is located.
“If you have radar emitters, it’s mobile, so you can put it anywhere you want,” she said. “The Israeli Harpy is going to fly around for several hours and it’s going to try to locate that signature. It’s not making a calculation about that signature, if it’s next to a school. It’s just finding a signal … to detonate.”
The BAE Systems Taranis
BAE Systems’ war drone Taranis was named after the Celtic god of thunder. According to BAE’s website, the stealthy Taranis is capable of “undertaking sustained surveillance, marking targets, gathering intelligence, deterring adversaries, and carrying out strikes in hostile territory,” with the guiding hand of a human operator.
BAE Systems also wrote that it is collaborating with other companies to develop “full autonomy elements,” though the autonomous functions are unclear.
In a 2013 UN report, Christof Heyns, a UN Special Rapporteur, wrote that the Taranis is one of the robotic systems whose “development is shrouded in secrecy.”
A human in the loop
That secrecy is a driving force for the open letter, Roff said.
In 2012, the Department of Defense established a five-year directive requiring that all weapons in operation and development have a human in the loop.
Jared Adams, the director of media relations at DARPA, told Tech Insider in an email that the DOD “explicitly precludes the use of lethal autonomous systems,” as stated by the 2012 directive.
“The agency is not pursuing any research right now that would permit that kind of autonomy,” Adams said. “Right now we’re making what are essentially semi-autonomous systems because there is a human in the loop.”
But Roff said it’s unclear exactly what is autonomous and where the human is. Most weapons systems will have had a human in control at some point — whether a human pre-programs the weapon to look for a target that fits specific characteristics or if it’s a human pressing the button to fire. To Roff, it’s important to determine where humans should come into play early on, before the technology becomes too advanced.
“What does meaningful human control mean? What does select and engage mean and when does that occur?” Roff said. “These are serious questions. How far removed does the human being have to be?”