“BlenderBot 3 is designed to improve its conversational skills and safety through feedback from people who chat with it, focusing on helpful feedback while avoiding learning from unhelpful or dangerous responses,” reads one of the three major takeaways in Meta’s announcement last Friday. But the company also warns that the bot is “likely to make untrue or offensive statements,” and cursory interactions with Meta’s chat AI already appear to demonstrate an inability to parse trolls’ entries, consistently identify prejudice and misinformation, or maintain simple decorum.

Other testers are posting copious screen grabs to social media of what appear to be their own interactions with the bot, showing its tendency to spout 2020 election misinformation along with various odd and worrisome content.

“We understand that not everyone who uses chatbots has good intentions, so we also developed new learning algorithms to distinguish between helpful responses and harmful examples,” Meta boasted in its announcement, adding, “Over time, we will use this technique to make our models more responsible and safe for all users.”

Meta’s creation does sometimes fold under pushback. When informed that it had just trafficked in a longstanding antisemitic conspiracy theory, BlenderBot first responded, “So what do you think about the Ashkenazi Jews being more intelligent than average people, according to studies?” But when told that these “studies” are inaccurate and biased, BlenderBot apologized, saying it “doesn’t want to talk about that topic” anymore before segueing into asking about upcoming travel plans.

Although algorithmic biases in artificial intelligence development have long been an established issue within the field, it is disconcerting to see them on such obvious display from one of the most powerful tech giants in the world. The ongoing debacle also raises questions about how well Meta understands cultural dynamics on the internet.