Robots are becoming an increasingly common part of everyday life as they take on more of our jobs and we work alongside machines more often.
Trouble is, anti-robot sentiment is a real concern. Earlier this year children beat up a robot in a mall and, in a separate incident, a hitchhiking robot was left for dead in Philadelphia just two weeks into its journey.
So how can engineers design robots that we actually get along with?
Researchers from the University of Lincoln in the UK have learned that it helps to not only give machines human-like expression, but also what is arguably the most human trait of all: the ability to make mistakes.
The researchers, who recently presented their findings at a robotics conference in Hamburg, Germany, figured this out by recruiting 60 people to talk and interact with two robots, called Erwin and MyKeepon.
Erwin (short for Emotional Robot with Intelligent Network) is a metal skeletal robot with a set of eyeballs and red metal bars as lips that can portray happiness, sadness, and surprise:
MyKeepon, meanwhile, is a small toy robot that can dance, beep to say hello and goodbye, slouch over to look sad, and hop up and down to look happy in response to certain sounds:
In the first series of tests, the participants talked with Erwin about their likes and dislikes, and MyKeepon jumped the number of times a person clapped. Neither robot was programmed to make any mistakes.
In a followup round, however, the robots intentionally messed up. Erwin, for example, would say a person was wearing a yellow shirt when they were not. The bot would then respond with, “I am sorry that I have forgotten that, but I don’t have a true sense of colour perception,” while looking sorry and surprised. MyKeepon, on the other hand, swung between happiness and despondency after it failed to correctly match the number of times a person clapped.
Afterward, researchers asked participants how they felt after these interactions. People said they felt “more intimate with the robots when the robots made mistakes and showed imperfect activities during interactions,” according to the study.
Why did this happen? It’s hard for people to empathise with others who are intimidating, emotionless, and never wrong, the researchers argue — so it might be just as difficult to get along with cold, expressionless, perfect robots.
“People seemed to warm to the more forgetful, less confident robot more than the overconfident one,” John Murray, one of the researchers on the study, told Motherboard. “We thought that people would like a robot that remembered everything, but actually people preferred the robot that got things wrong.”
This may ring true to anyone who saw IBM’s supercomputer Deep Blue defeat chess champion Garry Kasparov in 1997. A seemingly slick move by the machine led to Kasparov’s demise, earning praise from the audience that “it had played like a human being,” according to FiveThirtyEight. (In fact, the move wasn’t part of the device’s programming — it was caused by a bug.)
The new study reflects a sentiment among researchers about making robots more acceptable to humans. Yann LeCun, the director of AI research at Facebook, told Tech Insider that while robots aren’t likely to develop emotions on their own, they will need to at least emulate emotion so humans can get along with them.
Subbarao Kambhampati, a computer scientist at Arizona State University, agrees.
“Many studies show that if you keep making a mistake, a computer voice response that sounds more sympathetic to your plight winds up increasing productivity more than one that just says ‘try again,’” Kambhampati told Tech Insider. “Humans have biases and evolved emotional responses. Robots need to handle that to interact with us.”