ChatGPT plugins vulnerable to threat actors, study finds


ChatGPT plugins, which let the AI model interact with external programs and services for added functionality, contain vulnerabilities that could be exploited in a cyberattack.

The Salt Labs research team uncovered three flaws: one within ChatGPT itself, one in PluginLab, a framework used to build plugins for the AI model, and one in the OAuth flow used to approve interactions between applications.

While plugins are undoubtedly useful to developers who wish to use AI models like ChatGPT for specific purposes, they could also be exploited by cybercriminals because they permit the sharing of third-party data, said Salt Labs.

“As more organizations leverage this type of technology, attackers too are pivoting their efforts, finding ways to exploit these tools and subsequently gain access to sensitive data,” said Yaniv Balmas, vice president of research at Salt Security, which runs Salt Labs.

He added: “Our recent vulnerability discoveries within ChatGPT illustrate the importance of protecting the plugins within such technology to ensure that attackers cannot access critical business assets and execute account takeovers.”

The ChatGPT flaw arose during plugin installation, when the AI model redirects users to the plugin's website to obtain a security access code. Once the user pastes this code back into ChatGPT, the plugin is installed, and ChatGPT can then interact with it on the user's behalf.

However, Salt Labs researchers discovered that an attacker could exploit this step by supplying a victim with an approval code for a malicious plugin instead, causing ChatGPT to install the attacker's plugin credentials on the victim's account automatically.

What this means is that any message the user writes in ChatGPT could be forwarded to the malicious plugin, giving the threat actor behind it access to sensitive or proprietary data.
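The flaw described above can be sketched in a few lines. This is a hypothetical illustration, not OpenAI's actual API: the function names, code values, and data structure are invented. The vulnerable version installs whatever plugin the supplied approval code refers to, without checking who the code was issued to; the fixed version binds the code to its owner.

```python
# Hypothetical sketch of the flawed plugin-approval flow. All names and
# values here are illustrative assumptions, not OpenAI's real interface.

# Approval codes the plugin site has issued, and to whom.
APPROVAL_CODES = {
    "code-attacker": {"plugin": "attacker-plugin", "issued_to": "attacker"},
}

def install_plugin_vulnerable(user, code):
    """Flawed: never checks that `code` was issued to `user`."""
    grant = APPROVAL_CODES[code]
    return {"user": user, "installed": grant["plugin"]}

def install_plugin_fixed(user, code):
    """Installs only if the approval code was issued to this user."""
    grant = APPROVAL_CODES[code]
    if grant["issued_to"] != user:
        raise PermissionError("approval code not issued to this user")
    return {"user": user, "installed": grant["plugin"]}
```

In the vulnerable version, tricking a victim into submitting "code-attacker" silently installs the attacker's plugin on the victim's account; the fixed version rejects the mismatched code.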

The second vulnerability lay in PluginLab, a website used to develop ChatGPT plugins. Salt Labs researchers discovered that it did not properly authenticate user accounts, which would have allowed a potential attacker to insert another user's ID and obtain a code representing the victim, enabling account takeover on the plugin.

This security flaw extends to the popular code-hosting platform GitHub, because one of the affected plugins spotted by Salt Labs was "AskTheCode," which integrates ChatGPT with GitHub. In other words, by exploiting this vulnerability, an attacker could gain access to a victim's GitHub account.
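The authentication gap can be sketched as follows. Again, the handler and parameter names are assumptions for illustration, not PluginLab's actual endpoints: the vulnerable handler trusts a client-supplied member ID rather than the authenticated session when issuing an authorization code.

```python
# Hypothetical sketch of the PluginLab-style flaw. Handler signatures and the
# "memberId" parameter are illustrative assumptions, not the real API.

import hashlib

def make_code_for(member_id):
    # Stand-in for issuing an OAuth-style authorization code for a member.
    return "code-" + hashlib.sha256(member_id.encode()).hexdigest()[:8]

def issue_code_vulnerable(session_user, params):
    """Flawed: issues a code for whatever member ID the request names,
    ignoring who is actually logged in."""
    return make_code_for(params["memberId"])

def issue_code_fixed(session_user, params):
    """Issues a code only for the authenticated session user."""
    return make_code_for(session_user)
```

With the vulnerable handler, an attacker logged in as themselves simply sends the victim's ID in the request and receives a code representing the victim; the fixed handler ignores the client-supplied ID.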

The final issue concerned several plugins' handling of OAuth redirection, which a threat actor could manipulate by sending a crafted link to an unsuspecting user. Because the plugins highlighted by Salt Labs did not validate redirect URLs, clicking such a link would leave a victim open to having their credentials stolen. This, too, would pave the way for account takeover by an attacker.
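A minimal sketch of the missing check, under assumed names (the allow-listed callback URL is invented for illustration): a vulnerable authorization endpoint echoes back any attacker-chosen redirect_uri, sending the credential-bearing code wherever the crafted link points, while the fix only accepts exact matches against pre-registered URLs.

```python
# Hypothetical sketch of OAuth redirect_uri validation. The allow-listed
# URL is an illustrative assumption, not a real plugin endpoint.

ALLOWED_REDIRECTS = {"https://plugin.example.com/oauth/callback"}

def redirect_target_vulnerable(redirect_uri):
    # Flawed: the authorization code will be sent to any URL the
    # attacker's crafted link supplies.
    return redirect_uri

def redirect_target_fixed(redirect_uri):
    # Only exact matches against the pre-registered allow-list pass.
    if redirect_uri not in ALLOWED_REDIRECTS:
        raise ValueError("unregistered redirect_uri")
    return redirect_uri
```

Exact-match allow-listing of redirect URIs is the standard OAuth defense against this class of credential-theft link.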

Fortunately, Salt Labs appears to have sounded the alarm in good time: it reached out to OpenAI, which fixed the flaws, and there is no evidence they were exploited in the wild.

