The first time I activated Autopilot in a Tesla, I cringed.
I lifted my hands off the steering wheel slowly, keeping them an inch away, ready to grab it at any moment. I did the same with my right foot, hovering close to the brake pedal.
I may have been behind the wheel, but the Model S was doing all the driving.
As we cruised down a highway near Red Hook, Brooklyn, I asked the Tesla staffer sitting in the passenger seat if the car could see what I was seeing.
“Does it know there’s a car there? Can it see that truck? Are you sure?”
She assured me it could. In fact, it can see things I can't.
It might seem weird to talk about a car like it’s some kind of seeing being, but the people developing the technology behind self-driving cars say that’s exactly what they are trying to build.
Just like a human
The tech companies and automakers developing autonomous vehicles share a goal: to build cars that drive like humans, but better.
“We are taking the role of the driver so we have to understand what happens when a human is driving and now we are using a computer to do the same thing,” Danny Shapiro, senior director of Nvidia’s automotive business unit, told Business Insider. Nvidia has become one of the biggest players in the underlying technology that makes self-driving cars work.
“As a human you have senses, you have your eyes, you have your ears, and sometimes you have the sense of touch. You are feeling the road. So those are your inputs and then those senses feed into your brain and your brain makes a decision on how to control your feet and your hands in terms of braking and pressing the gas and steering. So on an autonomous car you have to replace those senses,” Shapiro said.
But replicating the human senses takes a car anywhere from 20 to 30 sensors.
These sensors include cameras, radar, and lidar.
The cameras snap about 30 frames per second, the radar senses the location of the car in front, and the lidar shoots pulses of laser light to help the computer system form a real-time, 3-D image of the world around it.
GPS and mapping technologies are also used to help the car determine its position. Each of these has different strengths and weaknesses, but together they help the car create an accurate picture of what's happening around it.
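To give a feel for how readings from several sensors might be combined into one picture, here is a toy sketch in Python. The two-column positions, the sensor names, and the "merge detections that land within a couple of metres of each other" rule are all illustrative assumptions, not any real vendor's fusion pipeline:

```python
# Toy sensor-fusion sketch. Each sensor reports objects as rough positions
# (metres ahead, metres left/right of the car). Detections from different
# sensors that land close together are assumed to be the same object.

def fuse_detections(sensor_reports, tolerance=2.0):
    """Merge per-sensor detections, grouping positions within `tolerance`
    metres of each other and averaging them into one fused object."""
    groups = []  # each entry: {"positions": [...], "sensors": {...}}
    for sensor, detections in sensor_reports.items():
        for pos in detections:
            for group in groups:
                ref = group["positions"][0]
                if (abs(ref[0] - pos[0]) <= tolerance
                        and abs(ref[1] - pos[1]) <= tolerance):
                    group["positions"].append(pos)
                    group["sensors"].add(sensor)
                    break
            else:  # no existing group was close enough: start a new object
                groups.append({"positions": [pos], "sensors": {sensor}})
    return [
        {
            "x": sum(p[0] for p in g["positions"]) / len(g["positions"]),
            "y": sum(p[1] for p in g["positions"]) / len(g["positions"]),
            "sensors": sorted(g["sensors"]),
        }
        for g in groups
    ]

reports = {
    "camera": [(30.0, 1.0)],                # car ahead, slightly right
    "radar":  [(29.5, 0.8)],                # same car, radar range estimate
    "lidar":  [(29.8, 1.1), (8.0, -3.0)],   # same car, plus a kerbside object
}
objects = fuse_detections(reports)
```

Here the three near-identical detections collapse into a single object seen by all three sensors, while the kerbside object remains separate: one rough picture built from several imperfect views.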
But collecting all of the data about the car’s environment is only the first step in bringing a self-driving car system to life.
For a car to drive itself, it must not only see its environment but also interpret it in real time so that it can get safely from point A to point B.
Training with data
Much like human drivers, a self-driving system must be taught what obstacles it might encounter on the road, so that it can understand the difference between hazardous situations and safe ones. For example, the system is taught the difference between a trash can and a child.
To do this, cars need to go to school, in a sense.
Machine learning is a way of teaching algorithms by example or experience and companies are using it for all kinds of things these days. For example, Netflix and Amazon both use machine learning to make recommendations based on what you have watched or purchased in the past.
So to train a self-driving car, you would first drive it thousands of miles to collect sensor data. You would then process that data in a data center, identifying, frame by frame, what each object is.
For example, for every human spotted, you would mark it as a human so that the computer system could learn what a human looks like. The more data you have, the smarter your autonomous system becomes.
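The learning-by-labelled-example idea can be sketched in a few lines of Python. The feature values (rough height and width in metres) and the nearest-neighbour rule are stand-ins for illustration only; real perception systems run deep neural networks over raw camera pixels:

```python
# Minimal sketch of learning by example: labelled feature vectors stand in
# for labelled camera frames. Features are (height_m, width_m); the values
# and labels below are made up for illustration.

def nearest_label(example, labelled_data):
    """Classify an unseen object by its closest labelled example
    (1-nearest-neighbour with squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labelled_data, key=lambda item: dist(item[0], example))[1]

# Frame-by-frame labelling done offline, "in the data center":
labelled = [
    ((1.7, 0.5), "pedestrian"),
    ((1.2, 0.4), "pedestrian"),   # a child
    ((1.1, 0.7), "trash can"),
    ((1.5, 4.5), "car"),
]

print(nearest_label((1.3, 0.45), labelled))  # prints "pedestrian"
```

Adding more labelled examples makes the classifier's boundaries finer, which is the toy version of "the more data you have, the smarter your autonomous system becomes."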
“Everybody uses different techniques, but you definitely need multiple examples to train off of. And the nature of machine learning is the more data you train with the better the performance generally is, which is one of the reasons why some of the companies like Google and Uber are exploring large scale, real-world tests so they can experience as much as possible and train their systems on as much variation as possible,” Aaron Steinfeld, an associate research professor at the Robotics Institute at Carnegie Mellon, told Business Insider.
Once you’ve taught the system to identify obstacles and objects in its environment you can then teach it how to behave in certain situations. For example, a self-driving car’s cameras may detect a stop sign ahead, so it will begin to slow down so that it can stop in the appropriate place.
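The stop-sign behaviour reduces to simple physics: once perception reports a sign at distance d and the car is travelling at speed v, a constant deceleration of v² / (2d) brings it to rest exactly at the sign. The thresholds below are assumptions for illustration, not any automaker's tuning:

```python
# Hedged sketch of "slow down for the stop sign": compute the constant
# braking needed to stop at the sign, then pick an action. The comfort
# thresholds are illustrative assumptions.

def required_deceleration(speed_mps, distance_m):
    """Constant deceleration (m/s^2) needed to stop in `distance_m` metres,
    from the kinematic relation v^2 = 2 * a * d."""
    if distance_m <= 0:
        raise ValueError("already at or past the stopping point")
    return speed_mps ** 2 / (2 * distance_m)

def plan_braking(speed_mps, distance_m, comfort_limit=3.0):
    """Return 'coast', 'brake', or 'hard brake' depending on how much
    deceleration stopping at the sign would require."""
    a = required_deceleration(speed_mps, distance_m)
    if a < 1.0:
        return "coast"
    return "brake" if a <= comfort_limit else "hard brake"
```

At highway speed (20 m/s, about 45 mph) and a sign 100 metres out, the car needs 2 m/s² of braking, a gentle, well-planned stop rather than a last-second one.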
However, the decision making process becomes more complex as you introduce more variables into the driving scenario.
“There are multiple stages to a perception system in an autonomous vehicle, or for that matter a robot. Traditionally, what happens is you first identify the objects in the world around you and you then try to determine whether those objects are at risk of intersecting your path. So if you’re fortunate enough to have plenty of room, you can just avoid any moving object or any object in your path,” Steinfeld said.
For instance, what if a car confronts a mattress ahead in its lane and has no room to move left or right? This is when it needs to know whether the obstacle is a mattress, a child, or a trash can.
“If you get into situations where you have to travel through an object and you have to make a potential decision about it, that’s when identification really becomes important.”
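The two-stage check Steinfeld describes can be sketched as a corridor test: list the objects around the car, then flag only those whose position falls inside the strip of road the car is about to drive through. The lane width and lookahead distance are assumed values for illustration:

```python
# Illustrative two-stage perception check: identify objects, then flag
# only those at risk of intersecting the car's path. Constants are
# assumptions, not real planner parameters.

LANE_HALF_WIDTH = 1.8   # metres either side of the car's centreline (assumed)
LOOKAHEAD = 50.0        # metres ahead worth planning for (assumed)

def objects_in_path(objects):
    """Return the objects that risk intersecting the car's path.
    Each object is (label, metres_ahead, metres_left_right)."""
    return [
        (label, ahead, offset)
        for label, ahead, offset in objects
        if 0 < ahead <= LOOKAHEAD and abs(offset) <= LANE_HALF_WIDTH
    ]

scene = [
    ("mattress", 30.0, 0.2),    # in our lane: identification now matters
    ("car", 25.0, 4.0),         # adjacent lane: tracked, but no conflict yet
    ("pedestrian", 80.0, 0.0),  # in lane, but beyond the planning horizon
]
```

Only the mattress survives the filter, and that is exactly the case where the identification step becomes critical: the right response to a mattress is very different from the right response to a child.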
Fortunately, though, there’s a lot of progress being made on this front by those developing autonomous technology.
Superhuman driving skills
Nvidia, which supplies much of the brains behind many driverless cars, claims it has developed systems that are more capable than humans at identification in a number of ways.
For example, say you are driving at night and a deer is standing off to the side of the road. You may not see the deer, but a self-driving car's infrared cameras and lidar system would detect the animal before you do, prompting the vehicle to slow down and proceed with caution.
“Using deep learning over the last several years, we’ve been able to show how the computer is able to recognise objects more accurately than humans can and it works in all different kinds of weather conditions,” Shapiro said. “Using different kinds of image processing, we can see much better in fog and foul weather, even at night using different kinds of sensors.”
Unlike humans, autonomous cars have a 360-degree view of their surroundings. They also have access to all kinds of other sensor data, which helps them better understand their environment. So it makes sense that an autonomous system would have a leg up on human drivers.
What’s more, though, a self-driving car can use all of the data it is collecting to help it make predictions about what might happen next.
“We can develop an artificial intelligence model to know that a pedestrian moves differently than a bicyclist, which moves differently than a motorcycle, which moves differently than a car or a truck, or whatever. And so we essentially understand what the possible outcomes are or the possible paths for each of these objects. And then we will use that to have a very high level awareness and are anticipating the myriad of possibilities of what could happen,” Shapiro said.
The Model S I’m in gives me a preview of this future. Even though it’s not exactly a self-driving car, it is capable of taking the wheel on the highway in Autopilot mode. But as I cruise down the highway at about 60 miles per hour, I don’t dare take my eyes off the road ahead.
The Model S has numerous cameras, radars, and ultrasonic sensors that help it see 360 degrees around it. But even though I know the car can see more than I can, I can’t sit back and relax.
Not quite there yet
Despite all the progress that has been made with self-driving cars, there’s still no perfect autonomous system.
This is because no matter how much training a system has, there’s always the chance that it may run into a situation on the road it has not yet experienced or an object it doesn’t recognise.
“There’s going to be accidents that are unavoidable, and I think that’s something that the public will need to recognise. Self-driving cars are not going to take the accident rate to zero unless all cars are self-driving and unless all pedestrians are removed from our roadways because there is always the crazy behaviour of humans that is essentially unpredictable,” Shapiro said.
Automakers and tech companies developing self-driving cars are currently addressing this issue by making their autonomous vehicles extra cautious. So, for example, the cars will only operate at low speeds or in certain geographical areas. But as more data is collected, companies like Nvidia anticipate that autonomous cars will be capable of operating under all circumstances and that these vehicles will be much safer than human drivers.
“Because you have this computer brain driving the car, it doesn’t get distracted. It’s not looking at a cell phone. It doesn’t get tired. It doesn’t drink and drive. It’s not dealing with children in the backseat. It doesn’t get road rage. So right off the bat we are eliminating a very high number of accidents and problems that human drivers have,” Shapiro said.
“I think we will see these cars being extremely safe, they are not going to take risks, they aren’t going to speed, but at the same time they will drive with the flow of traffic. They will do what’s safe.”
Back in the Model S, which at the moment is driving me rather than the other way around, it’s time to take the off-ramp heading into the city. Before I merge, I flip on my turn signal and take back control of the car. Tesla’s cars aren’t quite ready for city driving in Autopilot, and I’m not quite ready to give up the steering wheel. Maybe by 2020 I will be.