When I interview artificial intelligence researchers, the conversation almost always turns to science fiction.
I’ve watched a few movies about artificial intelligence (AI), like “The Terminator” and “The Matrix,” but I hadn’t seen “2001: A Space Odyssey,” considered by many of the people I’d spoken to to be the pinnacle of sci-fi. Marvin Minsky, one of the founders of the field of artificial intelligence, was even an adviser to the movie’s production team.
So I decided to spend a weekend binge-watching all the AI movies I had missed. Taking seven tips from colleagues and The Guardian's list of the top 20 movies about AI, I plunged ahead.
I didn’t go into the weekend with any expectations or criteria, but by the end, red-eyed and suffering from a bit of cabin fever, I realised that one cartoon on my list seemed to offer the most realistic vision of the future of AI.
Warning: Spoilers ahead.
The movie: Astronaut David Bowman and his crew mates aboard the Discovery One are headed to Jupiter in search of strange black monoliths that appear at turning points throughout the human species' evolution. The ship's computer Hal 9000 has a lot of responsibilities, including piloting the ship and maintaining life support for astronauts in hibernation.
Though Hal insists he is 'by any practical definition of the words, foolproof and incapable of error,' he makes a mistake and two astronauts conspire to turn him off. Little do they know that Hal has a few tricks up his gearbox.
The technology: Hal has a wide range of tasks, which makes him an artificial general intelligence (AGI) -- AI that matches or exceeds human-level intelligence across all the fields of expertise a human could have. AGI would take a huge amount of computation and energy. According to Scientific American, AI researcher Hans Moravec estimates that it would require at least '100 million MIPS (100 trillion instructions per second) to emulate the 1,500-gram human brain.'
Is it possible?: The Fujitsu K computer already outpaces this estimate, performing 10 quadrillion computations per second. Despite that speed, the K computer still took about '40 minutes to complete a simulation of one second of neuronal network activity in real time,' according to CNET. Moravec writes that 'at the present pace, only about 20 or 30 years will be needed to close the gap.' So Hal is possible, but not right now.
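For readers who want to check the arithmetic: Moravec's 100 million MIPS works out to 100 trillion (10^14) instructions per second, while the K computer's 10 quadrillion (10^16) computations per second is roughly 100 times that. (Instructions and the computations used in these benchmarks aren't strictly the same unit, so treat this as a back-of-the-envelope sketch using the figures cited above.)

```python
# Back-of-the-envelope comparison of Moravec's brain-emulation estimate
# with the K computer's throughput, using the figures cited in the article.
moravec_brain_estimate = 100e12   # 100 trillion instructions per second
k_computer_speed = 10e15          # 10 quadrillion computations per second

ratio = k_computer_speed / moravec_brain_estimate
print(f"The K computer exceeds Moravec's estimate by {ratio:.0f}x")  # 100x
```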
Hal also has human emotions -- pride, fear, and a survival instinct -- but I wasn't sure where they originated. Humans have emotions because of evolutionary survival instincts. Emotions like fear and jealousy, according to the New York Times, may have helped us hoard scant resources for ourselves.
On the other hand, an AI wouldn't develop emotions unless it was programmed to replicate them. The humans may have given Hal a survival instinct, but surely they wouldn't have programmed him to survive at the expense of his human crewmates.
The takeaway: Watching Stanley Kubrick's stunning masterpiece was like watching a living painting. But it also serves to warn us to ensure any AGI we create doesn't prioritise its survival over the survival of the humans it serves.
The movie: Matthew Broderick plays a high school hacker named David Lightman who mistakenly hacks into a government computer in charge of the nuclear missile launch systems at the North American Aerospace Defence Command (NORAD). Thinking he's hacked into a games company, Lightman begins to play as the Soviet Union in what he thinks is a simulation game called Global Thermonuclear War, unwittingly setting off a series of events that may lead to World War III.
The technology: The government computer, called the War Operations Plan Response (WOPR), learns from constantly running military simulations, and can autonomously target and fire nuclear missiles.
Is it possible?: WOPR combines two different technologies that exist right now, so I'd say this technology is possible with some time and effort, though it may not be a good idea. Like WOPR, DeepMind's deep neural net system, called deep-Q networks (DQN), learns to play video games and gets better with time. According to DeepMind's Nature paper, the DQN was able to 'achieve a level comparable to that of a professional human games tester across a set of 49 games.'
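The learning rule underneath DQN is Q-learning: the system plays, observes a reward, and nudges its value estimates toward what actually happened. DeepMind's version scales this up with a deep neural network; as a purely illustrative toy (not DeepMind's code), here is the tabular update on a trivial one-step "game" where one of two moves pays off:

```python
import random

# Toy tabular Q-learning: from a single start state, action 1 pays reward 1
# and action 0 pays reward 0. By playing repeatedly and updating its value
# table, the agent learns that action 1 is the better move -- the same
# learn-by-playing principle that DQN scales up with a neural network.
random.seed(0)
q = [0.0, 0.0]   # value estimate for each action
alpha = 0.1      # learning rate

for _ in range(500):
    action = random.randrange(2)               # explore both actions
    reward = 1.0 if action == 1 else 0.0
    q[action] += alpha * (reward - q[action])  # nudge estimate toward outcome

best = max(range(2), key=lambda a: q[a])
print(best)  # 1 -- the agent has learned the winning move
```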
Autonomous weapons that can target and fire on their own also exist right now. One frightening real-life autonomous weapon is the Samsung SGR-1, which patrols the demilitarised zone between North and South Korea and can fire without human assistance. These are the kind of self-targeting weapons that almost start World War III in the film.
The takeaway: Autonomous weapons exist right now, but I can't think of any government that would be willing to put the most dangerous weapons known to man in the hands of an easily hackable computer that doesn't clearly differentiate between simulations and firing real weapons. However, Tesla CEO Elon Musk, physicist Stephen Hawking, and over 16,000 AI researchers don't want to take that chance, and recently urged the United Nations to ban the use of autonomous weapons.
WOPR also has a clear set of goals -- win the game at any cost, even if it means destroying humanity. It's a stark illustration of the kind of AI that poses what philosopher Nick Bostrom calls an 'existential risk.'
The movie: In 2029 Japan, almost everyone is connected to the cloud via cybernetic android bodies, including detective Major Kusanagi. Tasked with finding a hacker named the Puppet Master, she learns that the hacker was originally a computer program that gained sentience. Over time, the Puppet Master learned about the nature of his existence, and his inability to reproduce or have a normal life.
The technology: In 'Ghost in the Shell,' technology has advanced to the point that false memories can be hacked and robots can build other robots. Major Kusanagi is a 'ghost' -- a human mind uploaded to and accessible through the cloud using her artificial body. She has superhuman strength and invisibility. She can also speak telepathically, access information, and even drive cars using her mind's access to the cloud.
Is it possible?: The idea of humans accessing the internet using just their minds is a well-trodden futurist trope. Futurist and Google researcher Ray Kurzweil predicted that we'll be able to communicate telepathically using the cloud by 2030, just a year after the events of 'Ghost in the Shell' take place.
Kusanagi's artificial body moves like a human body, but robots today still can't walk on two legs without collapsing mid-step, as shown by the robots in the DARPA Robotics Challenge Finals. That makes it pretty hard to believe that robots will be dexterous enough to backflip off high-rise buildings in just 15 years. On the other hand, MIT is currently building superstrong robots that can punch through walls, but these robots aren't autonomous -- they're controlled by a human wearing an exoskeleton.
The takeaway: Though uploading our minds into robotic bodies will most likely take more than 15 years to develop, 'Ghost in the Shell' raised some very real ethical and safety concerns that may become relevant in just a few years. Could hackers plant fake memories, like those of the garbageman in the movie, who thought he was helping a criminal in exchange for help winning custody of a nonexistent child from a nonexistent wife?
The military is developing a brain implant that could restore memories and repair brain damage, so it's not too far-fetched to think these kinds of implants could be hacked.
The movie: Haley Joel Osment plays a child-robot named David. The company that built David tests his performance by placing him with an employee's grieving family. After the family abandons him, David searches for the Blue Fairy from 'The Adventures of Pinocchio.' David hopes that she'll turn him into a real boy and his adoptive family will take him back.
The technology: David seems to be a mix of AGI and artificial narrow intelligence (ANI) -- AI that has human-level or superhuman-level intelligence, but only in very specialised tasks. David's specialised task is to love and be loved by his adoptive parents.
Is it possible?: David's general intelligence may be possible in about 30 years, if we're to accept Moravec's prediction about how much computing power it'd take to emulate a brain on a computer. Though I would consider most robots that have emotions unfeasible, I thought David was entirely plausible. He's a robot that has been explicitly built with emotions and desires, but his emotions are in line with his goal, which is to fill a hole for parents who may have lost children or can't have them.
At one point in his journey, David stumbles into a fair where humans, angered by technological unemployment, capture and destroy old robots. Humans hating on robots isn't a new phenomenon. Japanese children have recently been observed ganging up on a poor robot in a shopping mall, and a hitchhiking robot was destroyed in Philadelphia just two weeks into its trip.
The takeaway: 'A.I. Artificial Intelligence' represented what advanced robots might look like. Though they have general intelligence, each robot also has a specialised task, like the prostitute robot and the nanny robot that David meets during his travels. I did, however, think the movie seemed disjointed and overly long.
The movie: Caleb, a computer programmer at a search engine company called BlueBook, wins a competition to visit his company's reclusive founder. When he arrives, he learns that the founder, Nathan, actually needs his help conducting a Turing test on a robot named Ava. Caleb falls in love with Ava, but discovers that Nathan is actually testing how well Ava can manipulate Caleb.
The technology: Ava is an AGI, meaning she has skills and knowledge spanning the breadth of human expertise. She can draw, carry on seamless conversations as if she were human, and instantly access information from BlueBook -- and she takes advantage of every opportunity she gets to try to escape, including making Caleb think that she loves him.
Is it possible?: This technology isn't possible now, and because AI researchers work in such specialised fields, it would take many years for them to combine their expertise to build an AGI. Different researchers are all working on different aspects of intelligence, from vision to natural language to planning.
'These pieces have been done separately, and bridging them is going to be a very important challenge,' Subbarao Kambhampati, an AI researcher at Arizona State University, told Tech Insider.
The takeaway: Unlike David in 'A.I.,' Ava hasn't been built to feel emotions; she emulates them to manipulate the humans around her and get what she wants. It's easy to believe that a robot would fake emotion to succeed. The film also raised ethical questions about the treatment and confinement of seemingly self-aware beings who have been programmed to want only to escape. I did find it hard to believe that Nathan could build such intelligent robots on his own, even though he's regarded as a genius in the movie.
The movie: Computer games designer Kevin Flynn hacks into his former employer's database to prove that his games have been stolen by his boss. In the process, Flynn is accidentally scanned and digitised into the inner world of computer programs. In this world, the all-powerful Master Control Program (MCP) lords over other computer programs. The MCP forces the other programs, which take on the appearance of their human programmers, to battle to the death.
The technology: The MCP is an artificial intelligence program originally designed to be the system administrator of the company's mainframe computer, but it gains sentience and begins to absorb information from other companies. The movie depicts the inner world of the mainframe computer as a physical space, where computer programs are represented as individual people with personalities, emotions, and genders who can physically manipulate objects like motorbikes, ships, and Frisbees.
Is it possible?: The film looks cool, especially considering when it was made, but it makes zero sense and isn't possible. Like other movies about hacking, it tries to visualise the computing process to awesome but nonsensical effect. There's no way to digitise a person and put them in a computer. Computer programs don't have personalities, emotions, or the faces of the programmers who made them. They aren't secretly racing around on motorcycles inside your computer when you're not looking.
The takeaway: The movie raised a lot of questions, and not ethical questions like in 'Ex Machina' or questions about the origins of intelligence like '2001,' but pointless questions about the plot itself.
What does the MCP have to gain by destroying other computer programs? Why not just absorb their knowledge? Why is the inside of a computer a vast expanse of space traversable only by flying ships and motorcycles? What reason would two computer programs have to love each other? What is love to a computer program? Why do computer programs wear neon spandex?!
The movie: I've seen 'WALL-E' several times already, but I decided to rewatch it because it's a great movie and I didn't want to end my weekend binge on a mind-numbing note (thanks, 'Tron'). WALL-E is a lonely garbage-cleaning robot on Earth. Long after other robots like him have failed, WALL-E has learned to repair himself with harvested parts and to collect mementos while going about his daily work. He falls for EVE, a robot charged with finding organic matter on the wasteland that's now Earth. He follows EVE to the spaceship that houses the remaining humans, who are all pampered by robots doing specific tasks like collecting trash and sweeping the floors.
The technology: The many robots in 'WALL-E' each have one specific job to do. WALL-E cleans up garbage, EVE looks for plants, MO cleans, and Auto pilots the ship.
Is it possible?: We currently have this kind of specialised artificial narrow intelligence -- AI with human- or superhuman-level intelligence in very specific fields. For example, Wired estimates that at least '70% of total trade volume' is executed by ANI that's faster and better at spotting opportunities than any human trader.
The takeaway: Of all the movies I watched, 'WALL-E' seemed to be the most accurate. Many of the scientists I spoke to said the world will most likely be populated by specialised AI and robots doing different tasks, rather than robots with human-like intelligence across many areas of expertise.
In fact, I could argue that the beginnings of 'WALL-E' are already here, though today's AI isn't as sophisticated. The Roomba vacuum cleaner is a precursor to MO, and the autopilot systems on aeroplanes are crude iterations of Auto. This is the type of AI most AI researchers are working toward -- AI that can do many tasks for us, making us more efficient and freeing up our time for leisure.
'I think AI can do really positive things for society and my opinion is that too many scifi movies look at the negative things of what people fear AI could do,' Lynne Parker, a Division Director for the Information and Intelligent Systems Division at the National Science Foundation, told me. 'So to me something like WALL-E is a nice story because it shows the good side of what AI can do for society.'
Personally I can't wait for domestic robots to do my errands for me, though I do hope to do more substantial things with my time than be driven around on a scooter all day long.