In a sign of growing concern about artificial intelligence (AI) being leveraged in cyberattacks, Google has announced that it will reward cybersecurity researchers who uncover flaws specific to the technology.
The tech giant made the announcement on October 26th, saying that it would expand its existing bug bounty scheme, the Vulnerability Rewards Program (VRP), to reward independent researchers who uncover AI-related flaws.
“Today, we’re expanding our VRP to reward for attack scenarios specific to generative AI,” it said. “We believe this will incentivize research around AI safety and security and bring potential issues to light that will ultimately make AI safer for everyone.”
Google will also expand its open source security work to make information about AI supply chain security “universally discoverable and verifiable.”
“Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or misinterpretations of data (hallucinations),” it added.
The announcement follows on from Google’s partnership with the US federal government in August, which green-lighted thousands of third-party researchers to probe its systems for weaknesses. Previously, such experts, sometimes referred to as white-hat or gray-hat hackers, would have risked criminal prosecution for such hacking-related activities.
“As we continue to integrate generative AI into more products and features, our Trust and Safety teams are leveraging decades of experience and taking a comprehensive approach to better anticipate and test for these potential risks,” said Google. “But we understand that outside security researchers can help us find and address novel vulnerabilities that will, in turn, make our generative AI products even safer and more secure.”
Henrik Plate of cybersecurity firm Endor Labs welcomed the move, describing it as “a great opportunity to develop secure systems from the ground up.”