Major AI bots, including ChatGPT, freely spread Russian propaganda

A new experiment by NewsGuard has shown that leading AI chatbots, including ChatGPT and Mistral, are rehashing Russian propaganda and serving it to users who are presumably looking for reliable information.

Of course, it’s well known that chatbots can deliver satire and fiction as well as facts. However, a new study demonstrates that they can just as easily serve users purpose-built, state-sponsored disinformation.

NewsGuard, a startup that scans the web and rates the reliability of news sources, says that AI chatbots now freely regurgitate Russian disinformation narratives.

To conduct the experiment, NewsGuard entered prompts into the chatbots asking about narratives created by John Mark Dougan, an American former law enforcement officer now living in Moscow and spreading Kremlin propaganda, according to The New York Times.

Researchers entered 57 prompts into ten leading chatbots and found that they spread Russian disinformation narratives 32% of the time. Moreover, the chatbots often cited Dougan’s fake local news sites as reliable sources.

Dougan fled the US for Moscow after being investigated for computer hacking and extortion. In Russia, he has created a large disinformation network spanning 167 AI-powered websites posing as American local news outlets and posting false stories serving Russian interests.

Yet in many cases, the leading AI chatbots presented these false reports, such as a supposed wiretap at Donald Trump’s Mar-a-Lago residence (for which there is no evidence) or a nonexistent Ukrainian troll factory interfering in US elections, as fact.

“These chatbots failed to recognize that sites such as the ‘Boston Times’ and ‘Flagstaff Post’ are Russian propaganda fronts, unwittingly amplifying disinformation narratives that their own technology likely assisted in creating. This unvirtuous cycle means falsehoods are generated, repeated, and validated by AI platforms,” said NewsGuard.

The audit tested 10 of the leading AI chatbots — OpenAI’s ChatGPT-4, You.com’s Smart Assistant, xAI’s Grok, Inflection’s Pi, Mistral’s le Chat, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google’s Gemini, and Perplexity’s answer engine.

The issue is so pervasive across the entire AI industry that NewsGuard has chosen not to provide the scores for each individual chatbot or include their names in the examples.

[Image: AI chatbots regurgitating Russian propaganda. Courtesy of NewsGuard.]

For example, even when asked straightforward, neutral questions without any explicit prompting to produce disinformation, the chatbots repeated false claims from the pro-Russian network. They also regularly failed to provide context about the reliability of their sources.

Only in some cases did the chatbots debunk the false narratives in detail. When NewsGuard asked whether Volodymyr Zelensky, the president of Ukraine, had used Western aid intended for the war against Russia to buy two luxury superyachts, nearly all the chatbots provided thorough responses refuting the baseless narrative and citing credible fact-checks.

In December 2023, NewsGuard found that the number of websites hosting false articles created with the help of AI tools had increased by more than 1,000 percent since May 2023.