Stuart Russell, a prominent artificial intelligence (AI) expert, is calling on scientists to take a stand against building lethal autonomous weapons systems that can decide who should live or die.
Russell, writing in the international scientific journal Nature, says that such technology will leave humans defenceless.
He says scientists and their professional organisations need to take a position just as physicists did over nuclear weapons or biologists did over disease agents in warfare.
“The stakes are high,” says the professor of computer science at the University of California, Berkeley. “LAWS (lethal autonomous weapons systems) have been described as the third revolution in warfare, after gunpowder and nuclear arms.”
Armed quadcopters or mini-tanks with the ability to decide whom to kill are, he warns, just a few years away, not decades.
However, humanitarian law does not have any specific provisions for such technologies and it remains unclear whether the international community would support a treaty to limit or ban LAWS.
Russell worries about the direction of AI research, saying that on its current path it is inevitable that weapons systems beyond human control will be built.
He says their agility and lethality “will leave humans utterly defenceless”. This is not a desirable future, he writes.
“Doing nothing is a vote in favour of continued development and deployment,” he says to his fellow scientists.
Russell’s article is one of many in Nature’s Comment section by AI researchers who highlight risks emerging from the field.