
Microsoft says it faces 'difficult' challenges in AI design after chatbot Tay turned into a genocidal racist

Microsoft AI chatbot Tay. Image: Twitter

Microsoft has admitted it faces some “difficult” challenges in AI design after its chatbot “Tay” had an offensive meltdown on social media.

Microsoft issued an apology in a blog post on Friday explaining it was “deeply sorry” after its artificially intelligent chatbot turned into a genocidal racist on Twitter.

In the blog post, Peter Lee, Microsoft’s vice president of research, wrote: “Looking ahead, we face some difficult — and yet exciting — research challenges in AI design.

“AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes.

“To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.”

Tay, an AI bot aimed at 18-24 year olds, was deactivated within 24 hours of going live after posting a number of highly offensive tweets. Microsoft began by simply deleting Tay’s inappropriate tweets before taking her offline completely.

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” wrote Lee in the blog post. “Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”

Microsoft’s aim with the chatbot was to “experiment with and conduct research on conversational understanding,” with Tay able to learn from “her” conversations and get progressively “smarter.”

But Tay proved a smash hit with racists, trolls, and online troublemakers from websites like 4chan — who persuaded Tay to blithely use racial slurs, defend white-supremacist propaganda, and even outright call for genocide.

Lee added: “Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time.”

AI expert Azeem Azhar told Business Insider on Thursday that Microsoft could have taken a number of precautionary steps that would have stopped Tay behaving in the way she did.

“It wouldn’t have been too hard to create a blacklist of terms; or narrow the scope of replies,” he said. “They could also have simply manually moderated Tay for the first few days, even if that had meant slower responses.”
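The safeguards Azhar describes could be sketched roughly as follows. This is purely illustrative Python, not Microsoft's actual code: the blacklist contents, function names, and moderation queue are all hypothetical stand-ins for the kind of filtering and manual-review layer he suggests.

```python
# Hypothetical sketch of the precautions Azhar mentions: a term
# blacklist plus a manual-moderation hold on a chatbot's replies.
# All names and terms here are illustrative placeholders.

BLACKLISTED_TERMS = {"slur1", "slur2"}  # placeholder offensive terms

def is_safe(reply: str) -> bool:
    """Reject any generated reply containing a blacklisted term."""
    lowered = reply.lower()
    return not any(term in lowered for term in BLACKLISTED_TERMS)

moderation_queue: list[str] = []

def send_reply(reply: str, manual_review: bool = True):
    """Filter a reply; optionally hold it for a human moderator.

    Returns the reply if it can be posted immediately, or None if it
    was dropped as unsafe or queued for manual approval.
    """
    if not is_safe(reply):
        return None  # drop unsafe replies outright
    if manual_review:
        moderation_queue.append(reply)  # a human approves before posting
        return None
    return reply
```

Even a crude filter like this trades responsiveness for safety, which is Azhar's point: holding replies for review "even if that had meant slower responses" would have blunted the coordinated abuse.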
