
Facebook's racist AI lies and insults you

09 August 2022

Just like your other Facebook friends

Meta has warned that the company's new artificial-intelligence-based BlenderBot 3 chatbot could insult you and lie to you.

The BlenderBot 3 chatbot has been released to the public for users in the US to try out. The company claims that its AI can search the internet and 'chat about nearly any topic'. It has even been given the ability to self-improve and learn how to communicate better through conversation with humans.

However, while the bot is designed to help Meta make better AI-based 'conversational' systems, the tech giant has also admitted that it has a few behavioural issues.

In an FAQ about the bot, Meta said: "Users should be aware that despite our best attempts to reduce such issues, the bot may be inappropriate, rude, or make untrue or contradictory statements.

"The bot's comments are not representative of Meta's views as a company, and should not be relied on for factual information, including but not limited to medical, legal, or financial advice."

The company added that it has worked to 'minimise' how much the bot uses swear words, insulting language or culturally insensitive phrases. Users have the ability to report and 'dislike' these comments.

Apparently, the issue is that the chatbot was trained on publicly available data from the internet, so it reflects the morons you usually see on Facebook, including Daily Mail readers and Trump fanboys. The only upside, really, is that the chatbot can't shoot you or force you to breed unwanted children.

