
After spending nearly five years at Google, where he worked on generative AI and factual accuracy in Google Search, Alexios Mantzarlis now says he has little confidence that the online information space is going to improve.
At Google, Mantzarlis created an adversarial red team focused on content risks from generative AI and managed a team responsible for Google Search’s content policies on factual accuracy and high-promise answers.
But he left. In an interview with Cybernews, Mantzarlis – who also calls himself a “recovering fact-checker” because he previously worked on international fact-checking initiatives for years – doesn’t mince his words.
“I enjoyed Google a lot and then hated it a lot. So much has changed in terms of the philosophy, and now it’s just gotten worse. When the big pivot to AI came, I stopped having any kind of confidence that information quality was the true priority,” said Mantzarlis.
He’s now at Cornell Tech, where he’s the inaugural director of the Security, Trust, and Safety Initiative, which strives to mitigate and prevent digital harm through graduate education, active communities of practice, and research.
Is it better now? It’s much harder than Mantzarlis had hoped: because of the Donald Trump administration’s actions, funding is a problem, and “the general atmosphere right now is a disaster.”
Google had to react to the rise of ChatGPT
Disaster is a suitable word for whatever Google has been trying to do with generative AI. Since last year, Google’s AI Overviews feature and its knowledge panels have been slowly taking over Search.
It hasn’t been smooth. Yes, it’s probably neat that Google’s AI now summarizes search results drawn from multiple sources at the top of the page – you don’t have to click on any additional links to find out more.
However, these extra clicks might be worth your while because AI can – and does – hallucinate and push hogwash. In 2024, users were told to change their blinker fluid – which doesn’t exist – or to chill out and have a smoke during pregnancy, for example.
Those in the know weren’t surprised. Since generative AI is trained on whatever is out there on the web, it’s also reliant on unreliable sources – pranks, jokes, and conspiracy theories.
But the average person has been taught to trust Google Search’s results almost blindly. So when the AI Overviews feature somehow told the world that a famous Cambridge physicist had died when he actually hadn’t, the world believed it.

“Misleading answers can be more serious if users take the answer given as gospel – such as with health advice or topics related to purchasing decisions,” says Georgia Coggan, a freelance writer.
“Google's laissez-faire attitude to presenting AI summaries of topics that are wrong is negligent at best and dangerous at worst. To roll out this technology while it is unable to provide correct answers will fuel the spread of false facts – and Google has intentionally made it so appealing to bypass the source with these handily digestible chunks of information.”
Publishers and journalists – at least those striving to report the truth as best they can – are already suffering enormously.
As AI-powered search assistants proliferate, businesses are witnessing dramatic dips in organic traffic, and roughly 60% of searches now yield no clicks at all, according to recent studies.
Mantzarlis agrees that Google’s rollout of AI-powered technology has been “somewhat haphazard.” The reason was, of course, the sudden rise of OpenAI and its viral bot ChatGPT.
“The rollout of this tool was primarily motivated by the intention to show the markets and the shareholders that Google could do it rather than an idea to really differentiate Google as the place to go and find high-quality, useful information,” Mantzarlis told Cybernews.
He thinks ChatGPT’s effect on Google Search could truly be existential – even though a study in March found speculation that AI tools are eroding Google’s dominance to be unfounded.
“ChatGPT is definitely having an effect on search behavior, so yes, this is existential for search. But I think Google chose to address this existential challenge in a way that actually increased the risk rather than decreased it,” said Mantzarlis.
“We deployed a confident bullshitter”
As a tech writer and expert, Mantzarlis has noticed that so-called AI slop – a term for low-quality media churned out with generative AI – “is making us increasingly unable to even get to evidence, truth, and facts that are important.”
And Google has become part of the problem simply because it’s so mind-bogglingly huge. Even if the error rate were just 0.1%, that would still mean some 15 million errors across 15 billion search results – and that’s a lot.
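For the technically inclined, the scale argument is easy to check with back-of-the-envelope arithmetic – a minimal sketch in Python, taking the article’s 15-billion figure as given and using purely illustrative error rates:

```python
# Back-of-the-envelope: wrong answers produced at Google-Search scale,
# even at tiny error rates. The 15-billion figure comes from the article;
# the error rates below are illustrative assumptions, not measured values.

RESULTS_SERVED = 15_000_000_000  # search results, per the article

for error_rate in (0.001, 0.0001, 0.00001):  # 0.1%, 0.01%, 0.001%
    bad_answers = RESULTS_SERVED * error_rate
    print(f"{error_rate:.3%} error rate -> {bad_answers:,.0f} bad answers")
```

Even the smallest rate here still leaves 150,000 bad answers; at 0.1%, it’s 15 million.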
“What’s also frustrating is that priority was put on using AI as a generalist tool rather than a specialist tool that could be trained to be extremely precise on certain topics. Instead, we’ve deployed this confident bullshitter,” Mantzarlis thinks.
He hopes that people will see how AI-powered search actually works and then sour on it. Already, people around the world are suing OpenAI because ChatGPT has falsely claimed they killed their children or committed other crimes.

“But I’m also a little fatalistic because these are the largest, best-endowed, and most advanced companies in the world – and this is the path they’ve taken,” said Mantzarlis.
“Generative AI is being jammed down our collective throats whether we want it or not. And, of course, the current atmosphere is making advocating for safety and information quality even harder.”
As the former founding director of the International Fact-Checking Network, a global coalition of fact-checking projects, Mantzarlis spent countless hours negotiating with Google, Meta, and the news industry. He’s not filled with hope that the future is bright.
“It’s high time for publishers to disintermediate and begin rebuilding direct relationships with their audiences. That’s the reason so many journalists have chosen the Substack path because there’s no funny business around the content’s visibility,” said Mantzarlis.
We shouldn’t expect too much from big tech firms, he thinks. At the moment, they’re very cozy with the Trump administration: most of the tech billionaires attended Trump’s inauguration or visited him at Mar-a-Lago, and Mark Zuckerberg abruptly ended professional fact-checking efforts on Meta’s platforms.
Still, the one thing that makes Mantzarlis believe that the situation can change is polling. Pew Research Center’s data still shows that the majority of Americans think that platforms should be doing more to moderate misinformation, even though that number has decreased.
“Plus, the decisive question is how the European Union’s Digital Services Act gets enforced,” said Mantzarlis, pointing out that this particular EU regulation, addressing illegal content, transparent advertising, and disinformation, can influence policy globally.
“If the EU decides to go after large platforms for violations of systemic risks, of which misinformation is one, and fine them, this will make a difference,” Mantzarlis told Cybernews.
“But if it all becomes part of larger discussions around Ukraine or tariffs, for example, then that’s the end of it because these platforms will not have the incentive to change their ways. And so, we could be facing a few long, long years.”