Protect AI, an AI and machine learning (ML) security company, has launched a platform dedicated to reporting AI and ML vulnerabilities.
Protect AI has acquired Huntr.dev, a platform that pays security researchers for discovering vulnerabilities in open-source software. As a result, it is relaunching it as Huntr, a platform focused exclusively on AI/ML threat research.
"The vast artificial intelligence and machine learning supply chain is a leading area of risk for enterprises deploying AI capabilities. Yet, the intersection of security and AI remains underinvested," said Ian Swanson, CEO of Protect AI.
According to him, the new platform's goal is to "foster an active community of security researchers, to meet the demand for discovering vulnerabilities within these models and systems."
Swanson promised "the highest paying AI/ML bounties available to the hacking community." The first contest for researchers will focus on Hugging Face Transformers – a widely used open-source machine learning library – with a $50,000 reward.
"By actively participating in Huntr's AI/ML open-source-focused bug bounty platform, security researchers can build new expertise in AI/ML security, create new professional opportunities, and receive well-deserved financial rewards," Protect AI said in a press release.
At the end of July, Protect AI raised $35 million, bringing its total funding to $48.5 million, to "protect ML systems and AI applications from unique security vulnerabilities."
This April, OpenAI announced a bug bounty program "to recognize and reward the valuable insights of security researchers who contribute to keeping our technology and company secure," with rewards ranging from $200 for low-severity findings to up to $20,000 for exceptional discoveries.