Microsoft releases new AI chatbot

14 December 2016


Zo comes with protection against learning racism


After its earlier AI chatbot, Tay, turned into a racist, Microsoft has had another go with a social bot called Zo.

Zo is built on the same technology that powers Microsoft's other chatbots, Xiaoice in China and Rinna in Japan. She is meant to learn from her interactions with humans and can respond to conversations with her own personality.

However, unlike Tay, she comes protected from learning the wrong things, such as racism, rude words, conspiracy theories and voting for Donald Trump. It is not clear how Vole has managed this, but it claims the bot should be kiddie-friendly from now on.

Microsoft says it has plans to bring the chatbot to Skype and Facebook Messenger as well. If you use Kik, you can strike up a conversation with Zo, or you can head to Zo.ai to request an invite to chat with the bot on Messenger when it becomes available.
