ChatGPT just made you a criminal – with zero evidence


What happens when a chatbot makes up a crime – and pins it on you?

Consider the shock of discovering that when you ask ChatGPT about yourself – much as we once Googled ourselves – it falsely describes you as a child murderer.

As data protection NGO noyb has revealed, when Norwegian user Arve Hjalmar Holmen asked ChatGPT about himself, the chatbot returned a disturbing response: a fabricated story that he had murdered his own children.


At the rate technology is advancing, you’d expect better than such wild hallucinations.

This kind of hallucination is far from a purely technical issue – it puts someone’s reputation and livelihood on the line.

If a screenshot like this were to go viral, the negative consequences could be severe.

A ChatGPT misdemeanour.
Screenshot from noyb

Machine-made scandals

There may be heavy legal consequences for OpenAI too. If found in breach of EU data protection rules, the company could be fined up to 4% of its annual global turnover – or €20 million, whichever is greater.

Or a temporary ban may follow, as in Italy in 2023, when regulators blocked ChatGPT for weeks over how it was found to be processing personal data.

If this kind of hallucination were to happen more often – and at this stage, it’s rare – cases of defamation or emotional distress could pile up, placing a growing financial burden on the company.


Previous calamities include:

  • An Australian mayor who sued OpenAI for defamation in 2023 after ChatGPT falsely claimed he had been imprisoned for bribery while working for a subsidiary of the national bank.
  • A German journalist wrongly labeled by Microsoft Copilot in 2024 as a child molester. The chatbot also described him as an escapee from a psychiatric institution, a con man who preyed on widows, a drug dealer, and a violent criminal.

Is it enough for AI to include a disclaimer that it may be wrong? No. That’s like a Hollywood movie claiming that events are fictional and not based on real life. The difference is that ChatGPT is supposed to reflect real life – not be in the entertainment business.

The issue is caught between a rock and a hard place. OpenAI clearly can’t self-govern: doing so would mean mass censorship, which would cripple ChatGPT, an AI that’s essentially “learning on the job.” These hallucinations aren’t intentional – they’re the result of the model conflating snippets of information.

And for governments, it’s difficult to apply swift regulation to an industry moving at breakneck speed. ChatGPT toggles between "search" and "reason" modes, and the line between the two is blurry – which makes its failures hard to pin down in law.

Right now, these look like isolated incidents. But if such failures were to multiply, societal mistrust in AI would skyrocket.
