Samsung employees have reportedly shared confidential data with ChatGPT, local media say, potentially exposing the information to OpenAI and other users of the service.
Employees interacting with ChatGPT, the chatbot created by US startup OpenAI, allegedly leaked Samsung’s sensitive data on three separate occasions, according to a South Korean business news outlet.
The Economist Korea writes that the alleged leak came only 20 days after the South Korean conglomerate lifted its ban on ChatGPT. Ironically, the ban had been put in place precisely to prevent confidential data from leaking.
The information employees shared with the chatbot reportedly included the source code of software used to measure semiconductor equipment. A Samsung worker allegedly discovered an error in the code and asked ChatGPT for a fix.
OpenAI explicitly tells users not to share “any sensitive information in your conversations” in the company’s frequently asked questions (FAQ) section, since information that users provide to the chatbot may be used to train the AI models behind it.
Samsung reportedly discovered three incidents in which confidential data was revealed. Workers shared restricted equipment data with the chatbot on two separate occasions and, in a third case, sent it an excerpt from a corporate meeting.
Privacy concerns over ChatGPT’s security have been mounting since OpenAI disclosed that a flaw in its bot had exposed parts of users’ conversations and, in some cases, their payment details.
As a result, the Italian Data Protection Authority has temporarily banned ChatGPT, while German lawmakers have said they could follow in Italy’s footsteps.
The release of ChatGPT has set off a race in the tech sector to release intelligent chatbots. Google has launched its ChatGPT rival Bard, while Chinese tech giant Baidu has unveiled its own chatbot, Ernie Bot. Both have met with mixed reviews from early adopters.