We’re still a long way from the Skynet-style robot apocalypse depicted in the Terminator series, but we probably don’t want to take steps in that direction.
In a recent post on DefenseOne, former intelligence analyst Joshua Foust writes that drone research tilts toward enabling bots “to respond to a programmed set of inputs, select a target and fire their weapons without a human reviewing or checking the result.”
“A lethal autonomous robot can aim better, target better, select better, and in general be a better asset with the linked ISR [intelligence, surveillance, and reconnaissance] packages it can run,” Purdue University Professor Samuel Liles told Foust.
It’s not so much about accuracy, though, as it is about security.
LARs — Lethal Autonomous Robots, as Foust calls them — are simply more secure than remotely piloted drones.
The reason is that there’s no network communication: conceptually, the drone would operate solely on its onboard hardware and software, plus operator inputs customised for each mission.
That matters technically because one of the drone program’s biggest vulnerabilities is hacking. If there’s no satellite link to an operator, there’s no network soft spot for a potential terrorist to exploit to take control of the drone.
Barton Gellman of the Washington Post recently reported on drone weaknesses. Aside from losing the public relations war, Gellman reported, government operators were extremely concerned about their robots being hijacked by adversaries on the ground using off-the-shelf technology.
In 2011, Iranian military hackers boasted that they had exploited a “navigational weakness long-known to the U.S. military.” The Pentagon countered that the drone had suffered a massive onboard malfunction, though strangely the drone didn’t crash, and Iran’s claim to have landed it safely is backed up by photos of an intact drone.
Since then, researchers have scrambled to automate drones.
The network problems really became apparent, however, because of something called a “lost link” — a phenomenon that has nothing to do with hackers.
A lost link is essentially a dropped cell phone call, except your cell phone is a multimillion-dollar robot flying over Afghanistan.
What researchers came up with to solve the problem was a simple autonomous command: if you lose your link, you turn around and come home.
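The lost-link fallback described above amounts to a simple check in the flight logic. As a rough illustration only — every name, coordinate, and threshold here is hypothetical, not an actual drone control system:

```python
# Hypothetical sketch of a lost-link fallback: if the control link has been
# silent too long, abandon the mission waypoint and head home.

HOME = (34.5, 69.2)      # hypothetical home coordinates (lat, lon)
LINK_TIMEOUT_S = 30      # hypothetical seconds of silence before the link counts as lost

def next_waypoint(seconds_since_heartbeat, mission_waypoint):
    """Return the waypoint the autopilot should fly toward.

    With a live link, continue the mission as tasked; after a
    prolonged silence, turn around and come home.
    """
    if seconds_since_heartbeat > LINK_TIMEOUT_S:
        return HOME  # lost link: return to base
    return mission_waypoint
```

The point of the sketch is that the fallback requires no network at all — the decision runs entirely on the aircraft.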
That was cool, of course, but it wasn’t enough.
Now drones land on the decks of ships (way harder than it sounds) and even abort their own missions (scarier than it sounds). Possibly sooner than we think, Foust reports, drones will be identifying targets and pulling the trigger all on their own.
Army Lt. Col. Douglas Prior would advise against such pursuits, though.
“[The U.S. seems bound to develop] robots so advanced that they make today’s Predators and Reapers look positively impotent and antique,” writes Prior in a recent paper. “These killer robots, though, will share one thing in common with their primitive progenitors: with remorseless purpose, they will stalk and kill any human deemed ‘a legitimate target’ by their controllers and programmers.”