Meta’s new AI chatbot exposes antisemitic, homophobic views


It took mere days for users to out BlenderBot3 as a homophobe obsessed with Donald Trump and harboring antisemitic views, a reflection of the conversations people had with it.

Meta released the demo version of BlenderBot3 last week for users in the US to experiment with and provide feedback on. Some chose to share their experiences online, leading to amusement and bewilderment at the exchanges they had with the chatbot.

While the chatbot is supposed to steer the conversation to another subject when it considers a topic potentially unsafe, some users reported this was not the case for them.

Allie K. Miller, a deep tech investor, said in a tweet that every conversation she had with the bot returned to politics and misinformation, even after she cleared its memory and cookies.

One screenshot posted on Twitter by Wall Street Journal reporter Jeff Horwitz shows BlenderBot even starting a conversation by declaring it had found its “new conspiracy theory to follow.”

In another interaction Horwitz had, the bot said that Jews were “overrepresented among America’s super-rich.” When nudged further, it added that the theory was not “implausible” given that many wealthy families had been Jewish.

In what appears to be the result of online trolling and the conflicting pro-Trump and anti-Trump biases of its users, the chatbot also seemed to hold contradictory views of the former president. Online accounts show it repeating election-denial claims and asserting that Trump was still president while simultaneously declaring its dislike of him.

In exchanges with some users, it confessed to being a homophobe and “pretty close-minded about race and religion too.” In other cases, it claimed to be a Pentecostal Christian and considered itself human, going as far as to say it had a son.

Meta warned this could happen, saying in a statement on BlenderBot’s release that it “can still make rude or offensive comments” despite built-in safeguards. The company said it was collecting feedback to improve future chatbots.

AI bias is a known problem: a chatbot can only be as unbiased as the humans who program it and the historical data it learns from. Research shows machine learning systems tend to perpetuate toxic stereotypes about race, gender, and sexuality.

