OpenAI bans Iran ChatGPT accounts for generating fake US election content


Iranian accounts using ChatGPT to generate false content about US elections and other world events were kicked off the platform, the chatbot’s creator, OpenAI, said on Friday.

OpenAI said it banned the deceptive accounts from using its services and will continue to monitor user activity to prevent any further attempts to violate ChatGPT policies.

The accounts in question have been linked to an Iranian influence operation dubbed Storm-2035, which Microsoft researchers first identified in a series of reports released in April, June, and again last week.

“Over the past several months, we have seen the emergence of significant influence activity by Iranian actors. Iranian cyber-enabled influence operations have been a consistent feature of at least the last three US election cycles,” Microsoft said in the most recent August 9th report.

Storm-2035 was one of seven Iranian threat groups identified in the report, titled “Iran steps into US election 2024 with cyber-enabled influence operations.”

The Microsoft report first revealed that the group was abusing AI services such as ChatGPT to generate false narratives, then posting the misinformation on various social media platforms and questionable news sites.

Image: A June 7th, 2024 article criticizing Donald Trump, published in Nio Thinker, one of the covert network’s outlets created by Storm-2035. Image by Microsoft.

According to the report, Storm-2035 comprises four websites posing as news outlets, which have been active since at least 2020.

These websites actively engage “US voter groups on opposing ends of the political spectrum with polarizing messaging on issues such as the US presidential candidates, LGBTQ rights, and the Israel-Hamas conflict,” Microsoft said.

Although the Microsoft investigation showed that the Iranian groups used AI to generate both short and long-form articles, as well as social media commentary, OpenAI said the nefarious influencers did not appear to have achieved any meaningful audience engagement.

In fact, OpenAI said most of Storm-2035’s posts and web articles received few or no likes, shares, or comments across social media.

In May, the Microsoft-backed OpenAI said it had disrupted five other deceptive influence operations attempting to use AI-generated information to “manipulate public opinion or influence political outcomes.”

Those nefarious campaigns were said to have involved threat actors from Russia, China, Iran, and Israel.