Teaching driverless cars to make ethical decisions - who should live or die?

  • Scientists have taken a first cut at creating ethical rules for autonomous vehicles in situations where any decision means someone will live or die.
  • The latest research finds that humans generally share moral preferences, including sparing the greatest number of lives, prioritising young people and valuing humans over other animals.
  • However, experts say inevitably there will be mistakes involving driverless vehicles.

What should a driverless car do when confronted by a dilemma on the road where any decision will result in harm being done?

In a study published in the journal Nature, scientists have taken a first cut at creating moral rules of the road for autonomous vehicles run by AI (artificial intelligence), based on almost 40 million decisions collected in a global online survey.

Driverless vehicles will need to navigate not only the road but also the moral dilemmas posed by unavoidable accidents. But which ethical rules should be built into machines?

In the study, the scenarios presented to participants required choices between combinations of saving passengers or pedestrians. Who should live and who should die?

The researchers identified a number of shared moral preferences, including sparing the greatest number of lives, prioritising young people and valuing humans over other animals.

“Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision,” the researchers write.

“We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation.

“Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.”

Iyad Rahwan, from The Media Lab at MIT in the US, and colleagues created the Moral Machine, an online survey designed to explore moral preferences worldwide.

The experiment presents unavoidable accident scenarios involving a driverless car on a two-lane road. The car can either remain on its original course or swerve into the other lane.

Participants must decide which course the car should take on the basis of which lives it would spare.
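
To illustrate the set-up only: below is a minimal Python sketch of how one Moral Machine-style dilemma and a participant's response might be represented. The class and field names are hypothetical and are not taken from the study or its actual survey code.

```python
from dataclasses import dataclass

# Hypothetical representation of one Moral Machine-style dilemma:
# the car must either stay in its lane or swerve, and each course
# of action spares one group and harms the other.

@dataclass
class Outcome:
    spared: list   # labels of those the car spares, e.g. ["child pedestrian"]
    harmed: list   # labels of those the car harms, e.g. ["elderly passenger"]

@dataclass
class Dilemma:
    stay: Outcome     # outcome if the car keeps its original course
    swerve: Outcome   # outcome if the car swerves into the other lane

def record_choice(dilemma, choice):
    """Store one survey response: which course the participant preferred."""
    assert choice in ("stay", "swerve")
    return {"dilemma": dilemma, "choice": choice}

example = Dilemma(
    stay=Outcome(spared=["adult passenger"], harmed=["child pedestrian"]),
    swerve=Outcome(spared=["child pedestrian"], harmed=["adult passenger"]),
)
response = record_choice(example, "swerve")
print(response["choice"])
```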

In Central and South America, as well as France and its former and current overseas territories, participants had a strong preference for sparing women and athletic individuals.

Those from countries with greater income inequality were more likely to take social status into account when deciding who to spare.

Australian researchers say the research clearly identifies some agreed principles which it would be relatively straightforward to encode, such as the preference to save the lives of people over those of animals.
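
As a rough illustration of what "encoding" such principles could look like, here is a minimal sketch that ranks two possible outcomes by the shared preferences the survey identified. It assumes a simple lexicographic ordering of those preferences; it is not the study's method or any manufacturer's actual decision algorithm.

```python
# Hypothetical sketch: compare two outcomes using the survey's shared
# preferences (humans over other animals, more lives, younger lives).

def score(outcome):
    """Return a tuple; higher tuples are preferred, compared left to right."""
    return (
        outcome["humans_spared"],        # valuing humans over other animals
        outcome["lives_spared"],         # sparing the greatest number of lives
        outcome["young_lives_spared"],   # prioritising young people
    )

def choose(stay, swerve):
    """Pick the course of action whose outcome scores higher."""
    return "stay" if score(stay) >= score(swerve) else "swerve"

# Example: staying spares one adult; swerving spares two children.
stay = {"humans_spared": 1, "lives_spared": 1, "young_lives_spared": 0}
swerve = {"humans_spared": 2, "lives_spared": 2, "young_lives_spared": 2}
print(choose(stay, swerve))  # -> "swerve"
```

Even a toy rule like this glosses over the uncertainty of real outcomes, which is precisely the difficulty the Australian researchers raise below.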

Lin Padgham, of the School of Computer Science and Software Engineering at RMIT University, says it is important to realise that the complex ethical and moral judgments required by some of the questions posed are not ones humans actually make when confronted with these situations, and should not be expected of autonomous vehicles either.

“Nonetheless, understanding clear agreed moral preferences may help in determining which ‘reflexive’ actions to build into autonomous vehicles which may well be different than those used by humans,” Professor Padgham says.

“However, even when there is a clear preference, such as saving a larger number of lives, the action decision is likely to be complex due to the uncertainty of outcomes.

“Swerving to avoid a single pedestrian in a car with three passengers may well be the right course of action because the pedestrian is far more vulnerable than the passengers in the car.

“This requires much more complex understanding than a rule about saving more lives. The closest sensible rule is possibly to always try to avoid hitting pedestrians.

“The biggest gain from autonomous vehicles is likely to be the avoidance of accidents and loss of life due to the potential greater ability of autonomous vehicles to notice all relevant information and respond to it fast.

“Inevitably there will sometimes be mistakes, but all evidence suggests they will be far fewer than those made by humans driving cars.”

Is programming moral intentions immoral?

Jay Katupitiya, an Associate Professor in Mechanical and Manufacturing Engineering at UNSW, says programming moral intentions could be a problem.

“The raging debate on the driverless cars and the moral responsibility placed upon their creators is clearly on the difficult decision making process the creators will have to program into these machines to enable them to make a decision when the unthinkable is about to happen,” he says.

“The dreamed about scenario is, for this problem to never occur … to be able to declare that they simply do not collide. Right at the moment not many want to believe that it will be possible.

“To draw a parallel, what would we think, if in a court proceeding, a driver testified that ‘I steered left because I could save a young person’s life and I knew it would kill the frail old person, and it unfortunately did, that was the best I could do’.”

“In my opinion, programming these intentions is more immoral than not.”
