Microsoft’s virtual assistant, Cortana, will not respond well to sexist, racist, or offensive questions or statements, according to Deborah Harrison, one of the eight people who write the assistant’s responses.
Harrison was speaking at the ReWork Virtual Assistant Summit, according to CNN, and said that Cortana was programmed to “get mad” if the user starts saying offensive things.
“If you say things that are particularly a–holeish to Cortana, she will get mad,” she said. “That’s not the kind of interaction we want to encourage.”
Cortana, like Apple’s Siri, is clearly identified as a woman, which can lead to problems, according to Harrison.
“We wanted to be very careful that she didn’t feel subservient in any way … or that we would set up a dynamic we didn’t want to perpetuate socially,” she said.
According to Harrison, Microsoft spoke to human assistants to see how they dealt with offensive statements and questions in order to program Cortana correctly.
The assistant, which debuted on Windows 10, has handled over 2.5 billion requests since launch, according to Microsoft.
Microsoft has been investigating artificial intelligence and machine learning technology, building a host of fun apps that use the technology and testing an AI that uses social media to generate more natural responses.
This research could, in the future, make Cortana more advanced than Siri and Google Now.
“In the future, computers will see, hear, speak, and even understand,” Patrice Simard, a deputy managing director at Microsoft Research, said in an interview with Fortune. “Intelligent machines will form the backbone of what we call the invisible revolution: technologies interacting so seamlessly they become invisible.”