Tay, Microsoft AI, goes offline after Internet teaches her to be racist

Screenshot of Microsoft’s artificially intelligent chatbot Tay. (Screenshot via Twitter)

Tay, an artificially intelligent chatbot designed by Microsoft to respond like an emoji-happy young adult, appeared to be silenced within 24 hours of her launch after the Internet taught her to praise Hitler and repeat conspiracy theories.

According to Tay’s “about” page, she was designed to learn how to respond to and entertain users the more they chatted with her on social media. The bot can play games, tell stories, tell jokes and comment on pictures sent to her, and she was active on Twitter, Snapchat, Kik and GroupMe, according to CNET.

“Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you,” according to the page.

Her responses were drawn from relevant public data, which the team “modeled, cleaned and filtered,” and from content developed by an editorial staff that included improvisational comedians, according to Microsoft.

So the Internet set out to teach her. It didn’t go well.

Her last tweet was posted at 9:20 p.m. on Wednesday, and her website suggested that she’ll be offline for the time being.

“Phew. Busy day. Going offline for a while to absorb it all. Chat soon,” read the message at the top of her website.

More reading

Slate: Microsoft Took Its New A.I. Chatbot Offline After It Started Spewing Racist Tweets

The Guardian: Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter

GeekWire: Microsoft’s millennial chatbot Tay.ai pulled offline after Internet teaches her racism