Finding a vulnerability in ChatGPT could net you up to $20,000 under OpenAI’s new bug-bounty program, but the company said rewards won’t extend to ChatGPT misfires such as prompt misuse or hallucinations.
Microsoft-backed OpenAI announced the new program Tuesday in a blog post on its company website and on Twitter.
“At OpenAI, we recognize the critical importance of security and view it as a collaborative effort,” the company stated.
The bug-bounty program will allow ethical researchers to test and analyze several areas of the company’s artificial intelligence systems, including its much-revered large-language-model chatbot, ChatGPT.
“This initiative is essential to our commitment to develop safe and advanced AI. As we create technology and services that are secure, reliable, and trustworthy, we need your help,” OpenAI said.
“The program is open to the global community of security researchers, ethical hackers, and technology enthusiasts to help us identify and address vulnerabilities in our systems,” the company said.
The cash rewards for finding a vulnerability, and presenting it to OpenAI, range anywhere from “$200 for low-severity findings to up to $20,000 for exceptional discoveries,” the post explained.
To put those amounts into perspective for readers unfamiliar with bug-bounty programs, Microsoft’s payouts range from a low of $500 up to $250,000 for extraordinary discoveries.
The leading bug-bounty platform Bugcrowd will be tasked with managing the submission and reward process for OpenAI, as well as outlining the long list of rules and parameters for those wanting to participate in the program.
According to the Bugcrowd platform, the program’s scope will include certain ChatGPT functionality, the ways OpenAI systems communicate with one another, and data shared with third-party applications.
- OpenAI APIs, including public cloud resources or infrastructure
- ChatGPT, including ChatGPT Plus, logins, subscriptions, and other functionality
- Third-party corporate targets exposing confidential OpenAI corporate information
- OpenAI API Keys
- OpenAI research organization’s websites, services, subdomains and APIs
- Other targets, including OpenAI.com, developer documentation and playground
Notably absent from the bug-bounty program are any safety issues related to the content of model prompts.
“Model safety issues do not fit well within a bug-bounty program, as they are not individual, discrete bugs that can be directly fixed,” the program states. “Addressing these issues often involves substantial research and a broader approach.”
This means that finding new ways to prompt ChatGPT to say bad things, explain how to do bad things, or write malicious code, or even getting it to hallucinate, will get you no closer to a cash reward.
OpenAI offers its own model feedback form on its website for those sorts of issues.
In March, OpenAI had to take ChatGPT offline to fix a bug in an open-source library, and the unplanned outage caught users around the globe off guard.
The bug had allowed some ChatGPT users to see other users’ chat history, and for some ChatGPT Plus monthly subscribers, payment card details were also leaked to other users.
Just last week, the Italian government banned ChatGPT from operating within its borders, citing the March data leak. Italy became the first Western nation to ban the chatbot, giving OpenAI twenty days to amend its data-collection practices.
ChatGPT, whose name derives from “generative pre-trained transformer,” was also named as an accomplice in a recent Samsung data leak. Samsung employees in South Korea reportedly shared confidential source code with the chatbot, potentially exposing the data to OpenAI and its other users.
Released in November 2022 to rave reviews worldwide, ChatGPT, along with large-language-model competitors such as Google’s Bard and China’s Ernie Bot, has already been incorporated into many business tools and applications and is expected to transform life as we know it.
OpenAI said ChatGPT’s user count was estimated to be over 100 million at the beginning of this year.