Expecting privacy from ChatGPT is like asking the NSA to stop spying on citizens


OpenAI is rolling out the ability for its chatbot to memorize conversations with users. Privacy researcher Davi Ottenheimer thinks it’s “Orwellian.”

OpenAI introduced a controversial new feature on ChatGPT last week, which the company calls ‘Memory.’ By default, the feature allows the chatbot to remember the conversations a user has with it and to train itself on them.

While the company states that this new feature will enable the chatbot to be more accurate and instantly on the same page as the user, there are concerns about how it will affect data privacy.


The company has come under scrutiny for its handling of the data fed to the chatbot. In 2023, Italy temporarily banned ChatGPT for using personal data to train its models without user consent, in violation of the GDPR. The same year, Samsung raised concerns after employees in its semiconductor division leaked confidential information while using the chatbot at work.

Not only does every piece of information put into ChatGPT become part of the AI platform’s knowledge, but the company itself also has access to user data. ChatGPT’s policy clearly states: “Your conversations may be reviewed by our AI trainers to improve our systems.”

Cybernews interviewed Davi Ottenheimer, a privacy researcher, published author, and Vice President of Trust and Digital Ethics at Inrupt. Ottenheimer argues that OpenAI's overarching vision is fundamentally flawed and destined to fail at safeguarding data privacy. He criticizes the company's decision to label private data storage as ‘Memory,’ likening it to an Orwellian concept.

Long-term storage, not memory

Ottenheimer believes that the new ‘Memory’ feature carries Orwellian undertones: presenting something as drastically different from what it really is can catch people off guard and leave them ill-prepared for the resulting consequences.

He insists on calling the new feature storage, not memory. “To begin with, it's not memory. This is propaganda. They call it memory because it implies something to people to have a memory very different than what they're actually presenting, which is long-term storage.”

While human memory is ephemeral and a computer’s memory is erasable, feeding information into machine learning algorithms can have unpredictable consequences for privacy.

OpenAI has stated that users can delete their conversations from ChatGPT’s “memory,” yet Ottenheimer questions that claim.


“If you drop ink into water. Can you remove the ink? It’s very, very difficult to remove ink from water. Where they say it's deleted, and we don't know what they mean specifically and provably.”

OpenAI chose the wrong model

“I believe OpenAI started with the wrong model of AI, and they've invested so much now they may be stuck on it, but they need to rethink their architecture completely,” claims Ottenheimer.

The OpenAI model is a huge repository of data that continuously undergoes learning processes. “Don't get me wrong, for the first 3 to 6 months, it's very impressive. An AI learning system that takes everything in the world. The whole corpus of all information is very impressive for the first six months.”

He continues to explain that after six months, the AI model starts to exhibit rogue behavior or signs of bias, including racism. “Then you say, how do I delete that? And you can't because they built a learn-everything machine and then try to go and get rid of things. It's impossible. Again, the ink in the water.”

“The other model flipping it around is if they [OpenAI] had designed an architecture that implies destroying everything, you know, start over. Every time they want to learn, they start from scratch. They would be in a better place now.”

A different approach to AI would be an architecture in which the model learns from the current input and is completely wiped after the interaction.

“What OpenAI tried to do is create something more like the NSA, which is – everything comes in, and then we'll decide what's right and wrong. And you live in our world. I think the deletion concept that they're presenting is inherently flawed. It's like asking the NSA if they can stop knowing something about a citizen.”
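As a rough illustration of the session-scoped alternative Ottenheimer describes, the sketch below keeps conversation context only in local process memory and discards it when the session ends. It assumes the `openai` Python package (v1+), an `OPENAI_API_KEY` environment variable, and a placeholder model name; the function name is hypothetical. Note that prompts still reach OpenAI's servers, so this only demonstrates client-side statelessness, not what the provider itself retains.

```python
# Sketch: a chat session whose context lives only in process memory and is
# discarded when the session ends -- "start from scratch every time" rather
# than a server-side Memory feature.
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.

from openai import OpenAI


def ephemeral_session(prompts: list[str], model: str = "gpt-4o-mini") -> list[str]:
    """Run a multi-turn chat whose history exists only for this call."""
    client = OpenAI()
    history: list[dict] = []  # context lives here, and only here
    replies: list[str] = []
    for prompt in prompts:
        history.append({"role": "user", "content": prompt})
        response = client.chat.completions.create(model=model, messages=history)
        answer = response.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        replies.append(answer)
    return replies  # `history` goes out of scope: nothing is kept for the next session


if __name__ == "__main__":
    print(ephemeral_session(["Summarise the GDPR in one sentence."]))
```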

No free choice

Ottenheimer emphasizes that using AI services means individuals become locked into systems without alternatives, echoing concerns articulated by philosophers like David Hume regarding the loss of personal autonomy.


“The fatal flaw in all of these AI companies that are trying to build a centralized system is all the power resides with them. And not with the data owners. The best way to run AI is to flip it. So that the data owners maintain the power and control over their own lives,” he said.

“You don't want to collect everything about everyone in a central repository and then try to dispense knowledge back. Again, the architecture needs to be the opposite,” Ottenheimer explains.

He advocates for a system where individuals own and manage their data, sharing happens only with informed consent, and information is deleted when it's no longer needed.

“The technology is still nascent in protecting us from abuse by large repositories of data, and OpenAI shows no signs of safety to me, only signs of danger. They've made some huge mistakes already,” he said.

OpenAI is not open

“I'm sensitive to the propaganda. I studied intelligence, I studied disinformation early on in the 80s and 90s. And when I see a company called Open AI that is proprietary closed, it's inherently misrepresentative.”

He believes the level of transparency OpenAI provides about how data is used is insufficient. “I have no idea where my data is really going. I can't inspect it, I can't find out, and I don't know if they delete it when I say delete.”

A lack of control over one's digital footprint poses a significant challenge to personal integrity and could have serious repercussions: chat histories could be used to fabricate data or to incriminate the user. ChatGPT's built-in safety measures are also frequently bypassed, with users reporting dangerous malfunctions.

“What if ChatGPT was forced to not accumulate everything and try to curate everything and monitor everything, but instead was simply a service that you could train for your own purposes under the laws and provisions that you live.”

“You can actually have a local store. Again, not memory. A local store, a personal data store. Using protocols that exist today and then you wouldn't be subject to integrity attacks that you have no way of defending yourself against.”
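As a simple illustration of that "local store" idea, here is a minimal sketch using Python's built-in sqlite3 module: conversations stay in a file on the user's own machine, and deletion removes the rows and compacts the file. This is only an illustration of the concept under those assumptions, not Inrupt's Solid protocol or any actual OpenAI feature; the class and file names are hypothetical.

```python
# Sketch of a local "personal data store" for chat history: everything stays
# in a SQLite file controlled by the user, and deletion is local and verifiable.

import sqlite3


class LocalChatStore:
    def __init__(self, path: str = "my_chats.db") -> None:
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS messages ("
            "id INTEGER PRIMARY KEY, conversation TEXT, role TEXT, content TEXT)"
        )
        self.conn.commit()

    def add(self, conversation: str, role: str, content: str) -> None:
        # The owner decides what gets written; nothing leaves the machine.
        self.conn.execute(
            "INSERT INTO messages (conversation, role, content) VALUES (?, ?, ?)",
            (conversation, role, content),
        )
        self.conn.commit()

    def export(self, conversation: str) -> list[tuple[str, str]]:
        # Sharing happens only when the owner explicitly exports a conversation.
        rows = self.conn.execute(
            "SELECT role, content FROM messages WHERE conversation = ? ORDER BY id",
            (conversation,),
        )
        return rows.fetchall()

    def delete_conversation(self, conversation: str) -> None:
        # Deletion the owner can check: rows are removed and the file is compacted.
        self.conn.execute("DELETE FROM messages WHERE conversation = ?", (conversation,))
        self.conn.commit()
        self.conn.execute("VACUUM")  # reclaim space so deleted data isn't left on disk


if __name__ == "__main__":
    store = LocalChatStore()
    store.add("trip-planning", "user", "Plan a weekend in Vilnius.")
    print(store.export("trip-planning"))
    store.delete_conversation("trip-planning")
```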


AI companies do not perceive harm properly

AI regulatory mechanisms are still nascent. However, Ottenheimer doesn’t think that trying to regulate AI is a futile exercise. He highlights both the direct and indirect impacts that regulations have on companies.

Directly, regulations impose costs on companies through fines, which incentivize compliance. “You think about it in business terms. This is a cost we didn't have a minute ago. Even if it's a dollar fine, it's still a cost. And so it does have an effect.”

Indirectly, violations damage a company's reputation, affecting consumer perception and market competitiveness. “If the public sees that you're consistently violating something. It starts to have an effect on the perception of the brand as somebody who is in gross violation of the safety of the general public.”

The researcher also highlights the importance of industry self-regulation, achieved by shifting from the concept of privacy breaches to that of integrity breaches.

“It's like being told to stop smoking versus making a decision to live a healthy life. In some ways, what's missing is the self-regulation. And it's interesting to me that they don't perceive the harm properly,” he explains.

Privacy breaches are still perceived by companies as an external threat, affecting data owners rather than the company itself. Shifting to the concept of integrity breaches would mean that breaches of user privacy are harmful to the company as well.

“We saw this with Grok recently. It called Elon Musk a pedophile, right? The thing he created called him a pedophile. That's an integrity breach. That hurts the company, as well as hurting him. It hurts the individual by saying something that's false, but it also hurts the integrity, the data, and the processing. It can't be trusted.”

So, let’s say the concept of integrity breaches becomes mainstream, and companies begin to feel the heat. What comes next?

“I think regulations are going to come around. Hopefully sooner rather than later to the idea of these breaches of integrity needing stricter rules and regulations,” he concludes.
