Microsoft apologizes for its AI chatbot turned racist tweet machine

Two days ago, we wrote about Tay, Microsoft’s AI chatbot modeled to talk like a millennial and learn from its conversations. Unfortunately, by the next day, Twitter users had ‘taught’ her to be racist, homophobic, and all-around bigoted. Though the chatbot was promptly shut down, Tay had learned to represent everything that was wrong with the Web in less than 24 hours. Now Microsoft is opening up about what went wrong. In a post on its official blog, the company apologized for Tay’s misdirection: We are deeply sorry for the unintended offensive and hurtful tweets from Tay,…




