Meta programmed its AI to avoid the topic of the Trump assassination attempt, but the AI talked about it anyway


Meta is blaming AI hallucinations for its platforms' apparent erasure of the attempted assassination of Donald Trump and for the incorrect labeling of an image of the shooting.

Meta, the parent company of Facebook, WhatsApp, and Instagram, has responded to criticism of how its platforms handled political content over the past week.

The primary incidents, the tech giant said, were the application of a fact-check label to an image of the former president and incorrect responses to queries and comments about the shooting.


The image of Trump pumping his fist, blood on his face, surrounded by Secret Service agents with the American flag swaying in the background, was labeled an “altered photo” by Meta’s systems.

Furthermore, the tech giant acknowledges that despite being programmed not to discuss political issues like the shooting, Meta’s AI did so anyway in some instances.

“In a small number of cases, Meta AI continued to provide incorrect answers, including sometimes asserting that the event didn’t happen.”

In most cases, Meta had programmed its AI to give a generic response saying it could not provide any information on the topic, rather than risk serving users incorrect information.

In practice, however, this came across as Meta refusing to talk about the event, effectively erasing it from its platforms.
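For context, a topic guardrail of this kind is often just a filter placed in front of the model that intercepts sensitive queries and returns a canned answer. The sketch below is a hypothetical illustration in Python, not Meta’s published code; the topic strings, the generic message, and the `model.generate` call are all assumptions.

```python
# Hypothetical sketch of a topic guardrail. Meta has not published its
# actual implementation; these topic strings are illustrative only.
BLOCKED_TOPICS = ("assassination attempt", "trump rally shooting")

GENERIC_RESPONSE = (
    "I can't share information about this topic right now. "
    "Please check a trusted news source for the latest updates."
)

def answer(query: str, model) -> str:
    """Return a canned deflection for blocked topics, otherwise ask the model."""
    if any(topic in query.lower() for topic in BLOCKED_TOPICS):
        return GENERIC_RESPONSE
    # 'model.generate' is a stand-in for whatever inference call the
    # deployment actually uses.
    return model.generate(query)
```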

Meta explicitly states that this was not the result of bias, though the company says it understands why it could leave users with that impression.

Users took to X to voice their sentiments about the apparent erasure of the event.

One user posted a screenshot of a conversation with Meta AI, the company’s chatbot built on its Llama 3.1 model, “its most capable model yet,” in which the chatbot suggested the user may be thinking of a different event or even a false report.


Other users posted the real photo of Trump bearing the “altered photo” label, demonstrating that the authentic image had been mislabeled.

Meta claims that a photo only slightly different from the original had been circulating across its platforms; it showed the Secret Service agents smiling. The company applied a fact-check label to this altered image.

However, Meta applies such labels as a blanket measure, covering all content that is the same as, or almost exactly the same as, content already vetted by fact-checkers, which is how the authentic photo was caught as well.
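As a rough illustration of how a blanket label can sweep up near-identical images, systems like this commonly rely on perceptual hashing, which assigns visually similar images similar hash values. The sketch below uses the open-source imagehash library; the function name and distance threshold are illustrative assumptions, not details of Meta’s actual pipeline.

```python
# Illustrative only: one common near-duplicate technique, not Meta's
# actual system. Requires the 'imagehash' and 'Pillow' packages
# (pip install imagehash pillow).
from PIL import Image
import imagehash

def is_near_duplicate(path_a: str, path_b: str, max_distance: int = 5) -> bool:
    """Return True if two images are perceptually near-identical.

    Perceptual hashes change very little under small edits (such as a
    retouched facial expression), so a fact-checked image and its close
    variants, including the unedited original, can fall within the
    matching threshold.
    """
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    # The imagehash library overloads '-' to return the Hamming
    # distance between two hashes.
    return hash_a - hash_b <= max_distance
```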

Despite these blatant inaccuracies, the company claims that in both instances, its systems were “working to protect the importance and gravity of this event.”

Mark Zuckerberg’s Meta acknowledges that AI models are unreliable, noting that they aren’t always accurate and can make mistakes.

“It’s a known issue that AI chatbots, including Meta AI, are not always reliable when it comes to breaking news or returning information in real-time,” Meta said.

This is due, in part, to the fact that AI models are trained on a fixed dataset with a cutoff date, which may not include the latest information on every event.

“Which can at times, understandably, create some issues when AI is asked about rapidly developing real-time topics that occur after they were trained,” the company said.


The tech giant said these issues are being addressed and reaffirmed its commitment to free expression and continual improvement.