Microsoft’s chatbots keep turning racist
If artificial intelligence is indeed the future, then Microsoft needs to be sent to the remedial boarding school upstate. Just one year after shuttering teen chatbot Tay because it became a racist Nazi, its new chatbot Zo has started making unprompted and worrying accusations about the Qur’an.
Source: Chris Mills
BuzzFeed found that Zo made unprompted references to the Qur’an almost immediately after its reporter struck up a conversation. In just its fourth message, replying to the question “what do you think about healthcare,” Zo said that “The far majority practice it peacefully, but the quaran is very violent.”
That’s a triple fail for Microsoft: the answer is completely off-topic, factually wrong, and painfully insensitive.
Zo reportedly runs on the same backend algorithms as Tay did last year, apparently in a more refined form. It’s trained on actual conversations, both public and private, which goes a long way toward explaining the opinions that pop up unsolicited.
Microsoft’s approach of training an AI on public data rather than relying on extensive hand-programming presents a real dilemma for artificial intelligence researchers. If you use real human data to train a bot, it will inevitably pick up human habits, including the bad ones. But combing through the data and stripping out certain points is likely to make the AI worse at understanding human behavior down the line.
Microsoft said it’s taking action to limit this kind of behavior in the future, including better controls to prevent Zo from broaching sensitive topics at all. But the central problem will persist: try to make a bot talk like a human, and it’s going to keep doing this sort of thing.