
Software developers are using AI assistants to write code like never before, but new research reveals that AI-generated code may not be all it's cracked up to be security-wise, and companies need to take note.
A new blog post published Wednesday by the application security management firm Apiiro, backed by Gartner Research, warns of a “security trade-off” when developers use AI tools to write code.
The explosive growth of AI-generated code is leaving businesses exposed to serious security risks: coding errors are multiplying, and there isn't enough manpower to police them.
Last year, more than 80 percent of developers worldwide reported they were currently using AI tools for writing code, making it the most popular use of AI in the development workflow for 2024.
That’s according to a survey of developers by Stack Overflow taken last July, and given the advancements in generative AI, one can only assume that number has grown over the past six months.

GenAI coding outpaces security reviews
Apiiro said it found significant security risks in AI-powered code by using deep code analysis of millions of code lines from dozens of enterprises in financial services, industrial manufacturing, and technology.
These security vulnerabilities include a “3X surge in repositories containing PII and payment data, a 10X increase in APIs missing authorization and input validation, and a growing number of exposed sensitive API endpoints.”

Besides the coding errors made by genAI itself, the problem stems from the sheer number of developers who have adopted AI tools since the launch of OpenAI’s ChatGPT in November 2022.
Citing a report from Microsoft, the researchers also note that more than 150 million developers are now using GitHub Copilot, more than double the number of users since it was launched in October 2021.
“The rise of GenAI code assistants like GitHub Copilot has dramatically increased code creation velocity in the past two years, even as the number of developers has remained steady,” Apiiro said.
Apiiro’s data shows that since Q3 2022, the number of pull requests (PRs) has surged by 70%, far outpacing the 30% growth in repositories and the 20% increase in developers.

This means the shortage of manpower to conduct security reviews on AI-written code is a serious issue, one that will continue to fester and leave companies vulnerable unless they adopt an automated review process.
The researchers say that with the explosion of AI-powered code creation, the “traditional manual security and risk management review processes” currently employed by most businesses today are out of sync with the new AI landscape.
Coding errors without controls
The actual coding errors are caused by faults in the generative AI itself, mainly due to the lack of “a full understanding of organizational risk and compliance policies,” Apiiro said.
For example, the data shows APIs exposing sensitive data at nearly twice the previous rate, tracking the growth in the number of repositories.

The research additionally shows a threefold increase in the amount of personally identifiable information (PII) and payment information inside repositories, over roughly the last six months alone.
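To make the PII finding concrete, here is a minimal sketch of the kind of scan that catches this class of leak before code is merged. The patterns and function names are illustrative assumptions, not Apiiro's methodology; production scanners use far more robust detection.

```python
import re

# Simplified illustrative patterns for PII and payment data in source files.
# Real-world tools use validated, context-aware rules; these are sketches.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # rough card-number shape
}

def scan_source(text):
    """Return (kind, match) pairs for each suspected PII hit in a string."""
    hits = []
    for kind, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((kind, match))
    return hits
```

A hook like this, run in CI over each pull request, is one cheap way to keep AI-generated snippets from quietly committing customer data.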
The findings also report a 10X surge over the past year in repositories containing APIs with missing authorization and input validation, coinciding with the upward trajectory of developers' AI coding.
This means that anyone, especially cybercriminals, can access the API, exposing a company to data breaches, account takeovers, injection attacks, session hijacking, system overloading, and other functional abuses, according to a Practical DevSecOp report.
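The two flaw classes in question are easy to picture side by side. Below is a minimal, framework-free sketch (all names and the toy session store are hypothetical, not drawn from the report) contrasting an endpoint handler that skips both checks with one that enforces them.

```python
import re

SESSIONS = {"token-abc": "alice"}       # toy session store (illustrative)
BALANCES = {"alice": 100, "bob": 250}   # toy account data (illustrative)

# Vulnerable pattern: trusts the caller's user_id outright,
# never checks who is asking and never validates the input.
def get_balance_unsafe(user_id):
    return BALANCES.get(user_id)  # any caller can read any account

# Hardened pattern: requires a valid session token (authorization)
# and rejects malformed identifiers (input validation).
def get_balance(token, user_id):
    caller = SESSIONS.get(token)
    if caller is None:
        raise PermissionError("missing or invalid token")
    if not re.fullmatch(r"[a-z]{1,32}", user_id):
        raise ValueError("malformed user_id")
    if caller != user_id:
        raise PermissionError("cannot read another user's balance")
    return BALANCES[user_id]
```

The unsafe version is exactly the shape of code an assistant tends to emit when the prompt never mentions security: functionally correct, silently exploitable.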
And although tech experts predict these security holes will eventually shrink as AI models continue to advance, they still need to be addressed from a security standpoint today.
As developers are pushed to create code even faster, these application security risks will also increase, “underscoring the need for stronger risk detection and governance,” Apiiro said.