Mustafa Suleyman, a cofounder of Google’s artificial-intelligence research company Google DeepMind, emphasised in a talk that AI research isn’t going to destroy humanity.
Speaking at Bloomberg’s AI2015 conference, Suleyman stressed that Google DeepMind’s technology is being developed to improve Google’s existing services.
“The narrative has changed from ‘isn’t it terrible that AI is such a failure?’ to ‘isn’t it terrible that AI is such a success?’” Suleyman said. “We are building it to empower humanity, not destroy us.”
Here’s a still taken from a Periscope stream by Rodolfo Rosini. It shows one of Suleyman’s slides about the ethics of AI:
The slide recognises that AI is “hugely powerful,” but emphasises that humanity is able to control it. Suleyman explained in his talk that AI is built with limits and controls that stop it from doing harm.
Entrepreneurs like Elon Musk and Bill Gates have expressed concern about the rapid development of artificial intelligence. Musk, who invested in DeepMind before its acquisition by Google, previously warned in a now-deleted internet comment that robots could start killing us within five years.
But Suleyman isn’t as concerned as his investor. His presentation made it clear that AI is there to help humanity, not destroy it. Besides, he said, there are more urgent concerns out there. He spent several minutes showing slides depicting impending natural disasters. That’s what we should be worried about, he said.
Here, via The Wall Street Journal, is what Suleyman said about the potential threat from AI:
“On existential risk, our perspective is that it’s become a real distraction from the core ethics and safety issues, and it’s completely overshadowed the debate,” Suleyman said. “The way we think about AI is that it’s going to be a hugely powerful tool that we control and that we direct, whose capabilities we limit, just as you do with any other tool that we have in the world around us, whether they’re washing machines or tractors. We’re building them to empower humanity and not to destroy us.”
One of the questions Suleyman received after his talk was about DeepMind’s ethics board. The Wall Street Journal reports that Google created a board of people who make sure that DeepMind’s AI development remains safe and legal. But Google has never released the names of its members, and Suleyman again declined to do so at the Bloomberg conference. All he would say about DeepMind’s ethics board was that “We will make it public in due course.”