One in ten AI prompts puts sensitive data at risk


Almost half of the sensitive data employees enter into tools like ChatGPT or Perplexity is customer information, a new study has found.

The analysis of tens of thousands of prompts showed that nearly one in ten prompts from business users potentially disclosed sensitive data, according to the study carried out by cybersecurity firm Harmonic Security.

The study monitored generative AI tools, including Microsoft's Copilot, OpenAI's ChatGPT, Google's Gemini, Anthropic's Claude, and Perplexity. It found that most employees used free versions of these tools, which lack proper security controls.


“Most GenAI use is mundane, but the 8.5% of prompts we analyzed potentially put sensitive personal and company information at risk,” said Alastair Paterson, chief executive and co-founder of Harmonic Security.

In most cases, organizations can manage data leakage by blocking the request or warning the user about possible implications of their actions, he said. However, not all companies are capable of this yet.

“The high number of free subscriptions is also a concern. The saying that ‘if the product is free, then you are the product’ applies here and despite the best efforts of the companies behind GenAI tools there is a risk of data disclosure,” Paterson said.

No such thing as a free lunch

Many free-tier tools explicitly state that they train on customer data, meaning that sensitive information could also be used to improve their models.

Of the prompts that may have exposed sensitive information, roughly 46% potentially disclosed customer data, such as billing information and authentication data, according to the researchers.

More than a quarter contained information on employees, including payroll data, personally identifiable information (PII), and employment records. Some prompts asked AI tools to conduct employee performance reviews, the study found.

Legal and finance data, including sales, investment portfolios, and mergers and acquisitions activity, accounted for almost 15% of the potentially exposed sensitive information, followed by security-related information that could be exploited by threat actors.


Sensitive code, including access keys and proprietary source code, made up the remainder of the sensitive data that was potentially disclosed.

However, the vast majority of employees used AI tools without putting their employers or customers at risk, with the most common tasks including text summaries, edits, and code documentation, the study showed.

Ensuring that employees use paid plans is one way to mitigate security risks, according to experts. Other recommendations include real-time monitoring systems that track and manage the data entered into AI tools, as well as employee training.
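
As a rough illustration of what such real-time screening might look like, the sketch below checks a prompt against a few regular-expression patterns for common kinds of sensitive data before it is handed to an AI tool. It is a minimal, hypothetical Python example: the pattern list, the screen_prompt function, and the warning behaviour are assumptions made for illustration, not details taken from the study or from any vendor's product.

    import re

    # Hypothetical illustration only: these patterns and names are assumptions
    # for the sketch, not part of the Harmonic study or any specific product.
    SENSITIVE_PATTERNS = {
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    }

    def screen_prompt(prompt: str) -> list[str]:
        """Return the names of any sensitive-data patterns found in the prompt."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]

    if __name__ == "__main__":
        prompt = "Summarise this ticket: customer jane@example.com, card 4111 1111 1111 1111"
        findings = screen_prompt(prompt)
        if findings:
            # A real system would block the request or warn the user at this
            # point, in line with the mitigations described above.
            print("Warning: prompt may contain " + ", ".join(findings))
        else:
            print("Prompt passed screening")

In practice, commercial monitoring tools go well beyond simple pattern matching, but the basic flow is the same: inspect the prompt before it leaves the organization, then block it or warn the user.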