
OpenAI hopes to entice security researchers by raising the maximum payout for its Bug Bounty program from $20K to a cool $100K. This initiative, along with a slew of others, is part of a multi-pronged effort to prioritize AI cybersecurity.
The company announced the expansion of its Security Bug Bounty Program on Wednesday, along with several cybersecurity agendas, including widening the scope of its Cybersecurity Grant program, a new red team partnership, and the development of new tools for protecting emerging AI agents from malicious threats.
Bug Bounty expansion
The Bug Bounty program, launched last April, will now pay out $100,000 to researchers who submit “exceptional and differentiated critical findings.”
The $80,000 increase “reflects our commitment to rewarding meaningful, high-impact security research that helps us protect users and maintain trust in our systems,” OpenAI said.
The AI startup is also launching a limited-time “bonus promotion” period, during which additional bounty bonuses will be awarded to those who “submit qualifying reports within specific promotional categories.”
The current bonus promotion, which explicitly covers “priority 1-3 IDOR access control vulnerabilities on any in-scope target,” began Wednesday, March 26th, and will run through April 30th.
The promotion also increases the original IDOR bounty range from a minimum of $200 to $400 and a maximum of $6500 to $13,000.

Cybersecurity grant program evolves
OpenAI’s Cybersecurity Grant Program is evolving to include new priority areas like agentic security, model privacy, and AI-powered software patching, the Microsoft-backed company said.
“Since launching two years ago, we've reviewed over a thousand applications and funded 28 research initiatives, gaining critical insights into areas like prompt injection, secure code generation, and autonomous cybersecurity defenses,” it said.
The program is further introducing the “micro-grant” in the form of API credits to help researchers quickly prototype new ideas.
Expanding its focus, the grant program is encouraging new submissions encompassing a broader range of topics and new innovative angles. Applications are currently being accepted oniine in priority focus areas such as:
- Software patching: Leveraging AI to detect and patch vulnerabilities.
- Model privacy: Enhancing robustness against unintended exposure of private training data.
- Detection and response: Improving detection and response capabilities against advanced persistent threats.
- Security integration: Boosting accuracy and reliability of AI integration with security tools.
- Agentic security: Increasing resilience in AI agents against sophisticated attacks.
Updates on our Cybersecurity Grant Program, Bug Bounties, and Security Initiatives
undefined OpenAI Newsroom (@OpenAINewsroom) March 26, 2025
We’re sharing developments that reflect our progress, momentum, and forward-looking commitment to security excellence on our ambitious path toward AGI.https://t.co/eR8ZzDrwQU
Furthermore, the company says it has been collaborating with “experts across academic, government, and commercial labs” to engage in open source security research to uncover vulnerabilities in open source software code – all to help improve a model's ability to find and patch vulnerabilities in code.
The company says it will be releasing its security disclosures to relevant open source parties as they are identified and scaled.
Strengthening AI security
To top it off, OpenAI said it will expand the use of its own AI models to bolster real-time threat detection and response.
Announcing a new red team partnership with Seattle-based cybersecurity firm SpecterOps, OpenAI said it will be “rigorously testing our security defenses through realistic simulated attacks across our infrastructure.”
Continuous adversarial testing, which will cover its corporate, cloud, and production environments, will help the Sam Altman run start-up proactively build comprehensive security measures directly into its infrastructure and models, it said.
OpenAI explains that as it continues to introduce more advanced AI agents, such as Operator and Deep Research, there is more to understand about the unique security and resilience challenges that arise with such technology.
Strategies for advancing emerging AI security will include defending against prompt injection attacks, implementing advanced access controls, comprehensive security monitoring, cryptographic protections, and defense in depth.
Your email address will not be published. Required fields are markedmarked