Here's why Microsoft's teen chatbot turned into a genocidal racist

Azeem Azhar is the author of a daily AI newsletter.

An artificial intelligence (AI) expert has explained what went wrong with Microsoft’s new AI chatbot on Wednesday, suggesting that it could have been programmed to blacklist certain words and phrases.

Microsoft designed “Tay” to respond to users’ queries on Twitter with the casual, jokey speech patterns of a stereotypical millennial. But within hours of launching, the ‘teen girl’ AI had turned into a Hitler-loving sex robot, forcing Microsoft to embark on a mass-deleting spree.

AI expert Azeem Azhar told Business Insider: “There are a number of precautionary steps they [Microsoft] could have taken. It wouldn’t have been too hard to create a blacklist of terms; or narrow the scope of replies. They could also have simply manually moderated Tay for the first few days, even if that had meant slower responses.”

If Microsoft had thought about these steps when programming Tay, then the AI would have behaved differently when it launched on Twitter, Azhar said.
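To illustrate the kind of safeguard Azhar describes, here is a minimal sketch of a term blacklist applied to outgoing replies. It is not Microsoft’s actual code; the names (BLACKLISTED_TERMS, moderate_reply) and the tiny word list are hypothetical, purely for illustration.

```python
# Illustrative sketch of a reply blacklist -- not Microsoft's actual approach.
import re

# A (deliberately tiny) set of terms the bot should never post.
BLACKLISTED_TERMS = {"genocide", "hitler"}

def is_blocked(text: str) -> bool:
    """Return True if the candidate reply contains a blacklisted term."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return bool(words & BLACKLISTED_TERMS)

def moderate_reply(candidate: str, fallback: str = "Let's talk about something else.") -> str:
    """Swap a flagged reply for a neutral fallback instead of posting it."""
    return fallback if is_blocked(candidate) else candidate

if __name__ == "__main__":
    print(moderate_reply("I love puppies"))    # posted as-is
    print(moderate_reply("hitler was right"))  # replaced with the fallback
```

A real deployment would pair this with the other steps Azhar mentions, such as narrowing the scope of replies and manual moderation during the first days after launch.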

Azhar, an Oxford graduate behind a number of technology companies and author of the Exponential View AI daily newsletter, continued: “Of course, Twitter users were going to tinker with Tay and push it to extremes. That’s what users do — any product manager knows that.

“This is an extension of the Boaty McBoatface saga, and runs all the way back to the Hank the Angry Drunken Dwarf write-in during Time magazine’s Internet vote for Most Beautiful Person. There is nearly a two-decade history of these sorts of things being pushed to the limit.”

[Image: screenshot of a Tay tweet, via Twitter]

Azhar said that Tay highlights a more serious point. “AIs are going to need to learn and interact somewhere akin to the real world,” he said. “Equally, if we allow AI systems unexpurgated access to the ‘real world’ while they are learning, there could be ramifications. Twitter seems harmless, if offensive, and no one believes Tay or Microsoft is genocidal. But what if this was an AI driving bids on the stock market or triaging patients in a hospital?”

Azhar added that businesses and other AI developers will need to give more thought to the protocols they design for testing and training AIs like Tay. “‘TayGate’, a case study in getting it wrong, is a useful petri-dish precedent for more substantial questions we’ll deal with in the future,” he said.

[Image: Tay tweet, via Twitter]

In an emailed statement, a Microsoft representative said: “The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”
