Like Elon Musk and Stephen Hawking, Bill Gates thinks we should be concerned about the future of artificial intelligence.
In his most recent Ask Me Anything thread on Reddit, Gates was asked whether we should feel threatened by machine superintelligence.
Although Gates doesn’t think it will bring trouble in the near future, that could all change in a few decades. Here’s Gates’ full reply:
I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.
Google CEO Larry Page has also previously spoken on the subject, but didn’t seem to express any explicit fear or concern.
“You can’t wish these things away from happening,” Page told The Financial Times when asked whether computers would take over more jobs in the future as they become more intelligent. He added, though, that this could be a positive for the economy.
At the MIT Aeronautics and Astronautics’ Centennial Symposium in October, Musk called artificial intelligence our “biggest existential threat.”
Louis Del Monte, a physicist and entrepreneur, believes that machines could eventually surpass humans and become the most dominant species since there’s no legislation regarding how much intelligence a machine can have. Stephen Hawking has shared a similar view, writing that machines could eventually “outsmart financial markets” and “out-invent human researchers.”
At the same time, Microsoft Research chief Eric Horvitz recently told the BBC that he believes AI systems could achieve consciousness, but that this wouldn’t pose a threat to humans. He also noted that more than a quarter of Microsoft Research’s attention and resources are focused on artificial intelligence.