Workers regularly post sensitive data into ChatGPT


A new study found that 15% of employees regularly paste company data into ChatGPT, and over a quarter of that data is considered sensitive, putting their employers at risk of a security breach.

The June research report, Revealing the True genAI Data Exposure Risk, analyzed the behavior of over 10,000 employees, examining how they use ChatGPT and other generative AI apps in the workplace.

The report revealed that at least 15% of workers are using ChatGPT and other generative AI tools at work, and nearly 25% of those visits include a data paste.

Workers were found inputting data into GenAI tools an average of 36 times per day, a number expected to rise as AI becomes a more popular productivity tool.

Image: Sensitive data into GenAI tools (Source: Revealing the True genAI Data Exposure Risk, LayerX)

Such behavior is recurring, with many employees pasting sensitive data on a weekly or even daily basis, the report found.

“Soon, we predict, employees will be using GenAI as part of their daily workflow, just like they use email, chats (Slack), video conferencing (Zoom, Teams), project management, and other productivity tools,” LayerX stated in the 10-page report.

Unfortunately, while GenAI opens up a whole new horizon of opportunities, it also poses significant risks to organizations, particularly concerning the security and privacy of sensitive data, the report said.

What’s more, the top categories of confidential information being input into GenAI tools are internal business data at 43%, source code at 31%, and personally identifiable information (PII) at 12%.

Image: Types of sensitive data pasted into GenAI tools (Source: Revealing the True genAI Data Exposure Risk, LayerX)

Source code and internal business data are the highest exposure risks, according to the report.

“Organizations might be unknowingly sharing their plans, product, and customer data with competitors and attackers,” LayerX stated.

Since GenAI platforms operate in the browser, existing security solutions cannot address risks like pasting of sensitive data, the study said.

Key findings showed 4% of employees paste sensitive data into GenAI on a weekly basis, increasing the chances of sensitive data exfiltration.

The data also showed that 50% of the workers engaging most heavily with GenAI are from Research and Development (R&D), followed by Sales & Marketing at over 23% and Finance at over 14%.

“For example, a Sales manager using GenAI to produce an executive summary of their quarterly performance would have to provide the GenAI tool with the actual sales results data,” the report explained.

Image: Sensitive data pasted into GenAI tools, incidents in April 2023 (Source: Revealing the True genAI Data Exposure Risk, LayerX)

It seems that a significant portion of GenAI users don’t rely on prompt instructions alone but also paste in data in an attempt to generate the desired text, the study found.

This exposes sensitive company data to GenAI tools, even though it is most likely done innocently to save time, LayerX said.

ChatGPT quickly accumulated over 100 million active users by January 2023, barely two months after its release.

And by April, ChatGPT’s active user base had reportedly grown to more than 800 million users per month.

The report found that 44% of workers had used GenAI tools over the past three months, with a small portion of them visiting AI sites and apps over 50 times per month.

“This assumption might be reinforced by the fact that even in the last month GenAI users were still less than 20% of the entire workforce,” LayerX stated.

