
Most cybersecurity experts and ethical hackers, known as whitehats, have used ChatGPT in their web security work. Despite its limitations, most recommend keeping ChatGPT in their toolkits, new research by web3 bug bounty platform Immunefi shows.
Cybernews has reported that scammers are already using the new tool WormGPT, which is based on a large language model. And it’s becoming clear that cybersecurity researchers have used the more advanced ChatGPT for quite some time already.
According to the survey, 76.4% of whitehats have used ChatGPT for web security work, while the remaining 23.6% have yet to try the technology.
When asked about use cases, whitehats said ChatGPT is most useful for education (73.9%), followed by smart contract auditing (60.6%) and vulnerability discovery (46.7%).
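To make the auditing use case concrete, here is a minimal sketch of how a whitehat might hand a contract to ChatGPT for a first pass. It assumes the OpenAI Python SDK (v1+) with an OPENAI_API_KEY set in the environment; the deliberately vulnerable Vault contract is a textbook reentrancy example written for this illustration and does not come from the Immunefi report.

# First-pass smart contract triage with ChatGPT via the OpenAI Python SDK (v1+).
# The Vault contract below is a deliberately vulnerable textbook example:
# it sends Ether before zeroing the caller's balance, a classic reentrancy bug.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CONTRACT = """
pragma solidity ^0.8.0;

contract Vault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() external {
        uint256 amount = balances[msg.sender];
        (bool ok, ) = msg.sender.call{value: amount}("");  // external call first...
        require(ok, "transfer failed");
        balances[msg.sender] = 0;                          // ...state zeroed last
    }
}
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a smart contract security auditor. "
                    "Report concrete vulnerabilities with line references."},
        {"role": "user", "content": f"Audit this Solidity contract:\n{CONTRACT}"},
    ],
)
print(response.choices[0].message.content)

On a well-worn pattern like this, the chatbot typically flags the reentrancy correctly, which is consistent with respondents ranking it highly for education; the caveats below apply once an audit grows beyond a single short file.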
However, some have also seized on the chatbot as a time-saving shortcut that has proved anything but productive: submitting AI-generated bug reports.
“After ChatGPT's release, Immunefi received a flood of bug reports that seemed legitimate at first glance, but upon further examination, it became evident they weren’t. In order to stop the flow of spam, and protect its quality standards, Immunefi instituted a new rule to permanently ban any account detected to be submitting ChatGPT-generated reports,” the company reports.
Not a single real vulnerability has been discovered through a ChatGPT-generated bug report to date, and among the accounts Immunefi has banned, 21% were banned for submitting AI-generated reports.
“Whitehats clarified that the technology cannot be considered a substitute for manual code review. The chatbot may not be able to detect new or emerging threats that have not yet been identified, and not only doesn’t support bigger code bases, but it often relies on outdated libraries which lead to constant errors,” the report notes.
Cybersecurity researchers agree that ChatGPT is limited. Most respondents highlighted limited accuracy in identifying security vulnerabilities (64.2%), followed by a lack of domain-specific knowledge and difficulty handling large-scale audits, each at 61.2%.
Accuracy of results and ease of use are the two factors that most influence whether whitehats use ChatGPT at all.
A thorough review of AI-generated bug reports showed they are far from perfect. The language and descriptions of bugs and vulnerabilities are vague, the recommendations generic, specifics missing, technical details scrambled, and the logic incorrect. The model tends to hedge claimed vulnerabilities as merely “potential,” fails to reference or interact with any code from the project’s codebase, and instead offers generic best practices, boilerplate issues, or theoretical observations about the code.

And yet, even though confidence in the tool is hardly resounding, that isn’t stopping whitehats from using ChatGPT. More than a third of experts (36.7%) use it daily, and 29.1% of respondents use it weekly. Half of users report a neutral satisfaction level, 35.3% are satisfied, and 16.2% are dissatisfied.
More than half of whitehats would recommend ChatGPT as a tool for web3 security research to colleagues and peers.
Most ethical hackers agree that ChatGPT itself poses security concerns: it can be used for phishing, scams, social engineering, ransomware and malware development, cybercrime training, and jailbreaking.
Moreover, whitehats highlighted the capacity to "enable another level of sophistication for script kiddies" and how it could "help them write a somewhat working program."