Hardly a day goes by without a robot beating a human at something once thought impossible to automate.
This year especially, artificial intelligence (AI) has had a renaissance — Tesla pushed its self-driving Autopilot out to all eligible cars, and Google and Facebook have both announced large investments in AI research.
Where will these technologies take us next? To find out, it helps to know what the best of the best looks like today. Tech Insider asked 18 AI researchers, roboticists, and computer scientists which real-life AI impresses them the most.
Scroll down to see their lightly edited responses.
Autonomous driving is the most impressive to me. It first started in the Nevada desert, and it's harder to drive on urban streets than on the rough, almost nonexistent roads of the Nevada desert, because the hardest thing is reasoning about the intentions, to some extent, of other drivers on the road.
That has been quite impressive — that we went that far that quickly. I'm pretty sure that some years down the line, none of us will actually have to drive.
Commentary from Subbarao Kambhampati, a computer scientist at Arizona State University.
It took me a long time to really understand what the implications or impact of self-driving cars would be on our society. I don't like to drive now, so this is kind of a commodity for me.
The recent results that we're seeing with things such as self-driving cars, like the ability to significantly decrease traffic accidents — I think that's really exciting to think about.
A world with no cars would be exciting to me, but think about a world with automated vehicles and the impact that will have on society. That's really exciting.
Commentary from Carlos Guestrin, the CEO and cofounder of Dato, a company that builds artificially intelligent systems to analyse data.
One of my favorite systems is Andrew Ng's system that learned to pilot a model helicopter from a few hours of observation, and was able to perform tricks at the level of world-champion pilots.
This was before the introduction of super-stable quadcopters — the copter used in this experiment was extremely challenging to control.
Commentary from Peter Norvig, director of research at Google.
The most impressive AI I've seen is a project by Tuomas Sandholm out of Carnegie Mellon that matches kidney donors with patients using AI techniques.
Those are very complex decisions that affect human lives. It's one practical system, and very impressive.
Commentary from Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence.
I've been quite impressed by Watson and the ability to use Watson for things in the biomedical field.
For example, to discover new drugs and new treatment avenues. So Watson is definitely high on my list of AI that I've been impressed by.
Commentary from Sabine Hauert, a roboticist at Bristol University.
I've got colleagues here at National Information and Communications Technology Australia working on the bionic eye. They're trying to help people with macular degeneration, people who are losing their eyesight, using AI algorithms and computer vision algorithms.
The ultimate aim of this project, which I'm confident will succeed in the next decade, is to do what the bionic ear has done for people with hearing loss. They will actually be able to implant electrodes onto the back of the eyeball and restore vision to people who have lost their eyesight.
That's an amazing achievement, an amazing change, and quality of life improvement to those people.
Commentary from Toby Walsh, a professor in AI at the National Information and Communications Technology Australia.
When it's not saving lives, AI is simplifying them. For Samy Bengio, Google products make daily life easier.
I'm impressed by some of our own Google products, like the Google app, which now recognises my broken English as well as my French almost all the time without me trying to speak in a clean way.
I'm also impressed by Google Now when it automatically suggests useful things like the exchange rate when I travel, or where my car was last parked. I'm also impressed when Google Search seems to understand my queries even when I completely misspell them.
Commentary from Samy Bengio, a researcher at Google.
Kiva robots in Amazon warehouses work together to make sure you get the right package, cheaply and right on time, said Peter Stone.
An example I often use in my classes, just because it's my area of expertise, is the Kiva system — the multirobot system doing fulfillment processing in warehouses.
Amazon.com uses these robots to bring shelves to the people who pack the boxes after you order something from Amazon.com. It's certainly one of the most impressive examples of multirobot systems, and their videos are very, very impressive.
Commentary from Peter Stone, a computer scientist at the University of Texas at Austin.
Basically now, whatever question you ask me, if I don't know the answer, I can Google it and I'll know.
If you ask me when Einstein died, or what the last paper Einstein wrote was — anything you want to know — you just go to a keyboard or some kind of device, talk to it or type it, and you have the answer.
That's something that we got used to, but it's really remarkable that all that knowledge is represented, searchable, and available without us even noticing anymore.
Commentary from Manuela Veloso, a computer scientist at Carnegie Mellon University.
With Nest, now you have these devices that are being deployed in people's homes that are doing something useful. They're able to make changes to the environment by raising and lowering the temperature and knowing when people are home.
It's working even though the developers and designers of Nest can't possibly know everything that's going on in your home. So there are lots of unknowns that they're able to account for and still get the job done really well.
Commentary from Matthew Taylor, a computer scientist at Washington State University.
Now that I understand how these things work, I'm really impressed by little, subtle things.
I show my students the Watson demo — IBM's Watson playing Jeopardy. Now they notice the things that it's showing off, like the ability to generalize or collect a bunch of ideas into a single concept. We do it without thinking, but when you know what's hard about it, then you can recognize that.
Commentary from Joanna Bryson, researcher at Princeton University.
Speaking of confusing a robot with a human, Bart Selman says YouTube's captions could have been done by a human.
Roger Ebert lost his ability to speak, but his voice was recreated from videos of him speaking, said Lynne Parker.
A lot of impressive work was done to take recordings of Roger Ebert's voice from his years of giving movie reviews and create a voice synthesizer that sounds like him. I thought that was pretty good. A lot of signal processing and understanding of human language had to work together to create that voice.
I thought that was a pretty cool application that had a very nice effect on his life. People in his life were able to hear a voice that really sounded like him, as opposed to a synthetic machine.
Commentary from Lynne Parker, the division director for the Information and Intelligent Systems Division at the National Science Foundation.
There are international competitions where teams of robots play soccer against each other in all kinds of different leagues like wheeled robots, legged robots, humanoid robots, and also just in simulation on a computer.
If you compare the performance of those robots 10 years ago to today, the improvement is really staggering. It's really amazing what they can do. They're really fast and they're really good. The day when humanoid robots can beat humans at soccer is not here yet, but it might not be that far away.
Commentary from Shimon Whiteson, an associate professor at the Informatics Institute at the University of Amsterdam.
In fact, the Atari-playing AI from DeepMind is a favourite among computer scientists, including Pieter Abbeel.
The DeepMind results on learning to play Atari games while only having access to raw pixels and the game score have been very inspiring.
I have been very excited about our own recent results on the same benchmark, as well as learning to walk in simulation — with a single algorithm able to learn those two very different types of tasks.
Commentary from Pieter Abbeel, a computer scientist at the University of California at Berkeley.
The DeepMind system starts completely from scratch, so it is essentially just waking up, seeing the screen of a video game and then it works out how to play the video game to a superhuman level, and it does that for about 30 different video games.
That's both impressive and scary in the sense that if a human baby was born and by the evening of its first day was already beating human beings at video games, you'd be terrified.
Commentary from Stuart Russell, a computer scientist at the University of California at Berkeley.