Since May 2023, the number of websites hosting fake and false articles created with artificial intelligence (AI) has increased by more than 1,000 percent, NewsGuard, an organization tracking misinformation, has found.
NewsGuard said it had so far identified 603 AI-generated news and information sites operating with little to no human oversight. That’s up from 49 sites identified in May 2023.
The organization says this proves that the rollout of generative AI tools has been a boon to “content farms and misinformation purveyors alike.” In other words, it’s now easier than ever to spread pure propaganda or at least false narratives about things like elections, wars, and natural disasters.
Not so long ago, propaganda campaigns relied on armies of low-paid workers at so-called troll farms. But AI has made it possible for basically anyone – whether an intelligence agency or just a nerdy teenager – to create such outlets, NewsGuard said.
These developments seem quite dangerous because most people are not especially sophisticated news consumers and do not follow professional advice on how to tell fake or false content from real news.
These websites typically have generic names such as iBusiness Day, Ireland Top News, and Daily Time Update, which, to a consumer, appear to be established news sites.
“This obscures that the sites operate with little to no human oversight and publish articles written largely or entirely by bots – rather than presenting traditionally created and edited journalism, with human oversight,” said NewsGuard.
For instance, one AI-generated article told a made-up story about Benjamin Netanyahu’s psychiatrist, saying he had died and left behind a note that suggested that the Israeli prime minister was involved. The psychiatrist was fictitious, but the claim was actually featured on an Iranian TV show and circulated on various other authentic media sites.
Analysts also identified a Chinese-government-run website using AI-generated text to lend authority to the false claim that the United States operates a bioweapons lab in Kazakhstan that infects camels to endanger people in China.
NewsGuard said that brands willing to advertise almost anywhere are at least partly to blame for allowing these websites to thrive.
“In many cases, the revenue model for these websites is programmatic advertising under which the ad-tech industry delivers ads without regard to the nature or quality of the website. As a result, top brands are unintentionally supporting these sites,” said NewsGuard.
“Unless brands take steps to exclude untrustworthy sites, their ads will continue to appear on these types of sites, creating an economic incentive for their creation at scale.”
If you want to be sure you’re reading real news, watch out for certain giveaways, NewsGuard said. For example, numerous articles might contain error messages or other language specific to chatbot responses. This would indicate that the content was produced by AI tools without adequate editing.
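The giveaway NewsGuard describes – unedited chatbot error messages and boilerplate left inside published articles – can be illustrated with a minimal sketch. The phrase list and function name below are hypothetical examples, not an official or exhaustive detection method:

```python
# Minimal sketch: flag obvious chatbot residue left in article text.
# The phrase list is illustrative, reflecting the kinds of giveaways
# NewsGuard describes (error messages and chatbot-specific language).

AI_RESIDUE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill this request",
    "my knowledge cutoff",
    "regenerate response",
]

def looks_ai_generated(article_text: str) -> bool:
    """Return True if the text contains unedited chatbot boilerplate."""
    lowered = article_text.lower()
    return any(phrase in lowered for phrase in AI_RESIDUE_PHRASES)

# Example usage
print(looks_ai_generated(
    "As an AI language model, I cannot report on future events."
))  # → True
print(looks_ai_generated("Local council approves new budget."))  # → False
```

A check like this only catches the sloppiest output, of course – carefully edited AI content leaves no such fingerprints, which is why NewsGuard also weighs factors like volume and human oversight.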
Sure, it’s likely that many proper news sites will use AI tools in the near future. But they will also deploy effective human oversight and, of course, will not generate hundreds if not thousands of articles a day, said NewsGuard.
The growth of websites churning out fake content is particularly concerning in the run-up to the 2024 US presidential election, because the flood of propaganda could easily sway voters toward one candidate or another.
Social media is already packed with misinformation, although Meta, the parent company of Facebook, Instagram, and WhatsApp, has banned political campaigns from using its generative AI advertising products.