OpenAI accused of violations as ChatGPT lies about people


An Austrian watchdog has filed a complaint against OpenAI for violating GDPR as ChatGPT provides false information about individuals.

The non-profit organization noyb filed a complaint against OpenAI with the Austrian data protection authority on April 29. The complaint cites the company’s inability to tackle misinformation that ChatGPT generates about individuals.

According to the organization, while hallucination is a well-known issue with AI models, errors involving personal information can be especially damaging. Chatbots like ChatGPT are also failing to comply with EU law when processing data about individuals.

“If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around,” says Maartje de Graaf, a data protection lawyer at noyb.

The watchdog claims that OpenAI refused the complainant's request to rectify or erase false personal data – in this case, an incorrect date of birth – arguing that it was impossible to correct it.

In the complaint, noyb asks authorities to investigate OpenAI’s data processing and the measures it takes to ensure the accuracy of personal data processed by the company’s large language models, and to impose fines to ensure future compliance with the General Data Protection Regulation (GDPR).

GDPR mandates that personal information must be accurate, and individuals should have complete access to their stored data, including details about its origin.

Individuals also have the right to request the deletion of false information. Failure to comply with GDPR regulations can result in penalties of up to 4% of a company's annual revenue.

In OpenAI’s case, the company is unable to disclose the origin of the data or provide details about the specific personal information ChatGPT stores about individuals. The company also claims that improving factual accuracy is still an “area of active research.”

“The obligation to comply with access requests applies to all companies. It is clearly possible to keep records of training data that was used and at least have an idea about the sources of information. It seems that with each ‘innovation,’ another group of companies thinks that its products don’t have to comply with the law,” said de Graaf.

This is not the first time OpenAI has received backlash from digital privacy watchdogs. Last September, the Polish data protection authority initiated an investigation into ChatGPT following a complaint from a privacy and security researcher.

The researcher claimed that OpenAI was unable to correct inaccurate information about him, and the complaint also alleges that the AI company failed to meet the transparency requirements set out in the regulations.

Additionally, the Italian data protection authority has an ongoing investigation into ChatGPT. Last year, the authority banned ChatGPT, citing privacy concerns. The popular AI platform was later reinstated after the company met the authority’s initial demands.

In January this year, the Italian authority issued a preliminary decision stating its belief that OpenAI violated GDPR in various ways, including through the chatbot’s generation of misinformation about individuals. A final decision is still pending.
