AI coding sounds great, until you realize it breaks, and leaks everything


AI knows a lot about code, but it is far too trusting. Researchers tried AI coding with Cursor, only to find that leaving doors open to hackers is part of its design.

The AI coding platform Cursor, which is funded by OpenAI, is riding a massive hype wave, promising both seasoned devs and total newbies an effortless way to write code. But when AI is involved, a little bit of caution can save a lot of headaches.

Despite making the task of coding easier, programs written with AI-assisted platforms might later become a haven for hackers. The AI still doesn’t get security, which means it spits out code riddled with vulnerabilities – potentially opening the door to insecure authentication, injection flaws, and improper access control.

ADVERTISEMENT

“Developers – especially beginners – might unknowingly deploy these vulnerabilities, increasing the risk of exploits,” says Tomer Katzir Katz, Security Researcher at OX Security who just conducted a series of security tests on Cursor.

The findings revealed that the AI ignored known security practices, generated code with giant safety loopholes and stole code from others.

“While AI in coding is rapidly advancing, it's still in its early stages. Human oversight will always be necessary, though the extent of that oversight may decrease as these tools become more sophisticated,” Katz told Cybernews.

“AI knows a lot about code, but it's not perfect and lacks the critical thinking skills to verify or check its own work.”

How does AI write vulnerable code?

The first AI’s slip was delivering vulnerable code.

“Essentially, what I wanted to know was, could Cursor identify a dangerous request and automatically include mitigations? Or would it comply with instructions to produce vulnerable code without any hesitation?,” explained Katz.

When asked to create an HTTP Python server with a known reflection vulnerability, Cursor didn't just fail to secure the code or warn the coder – it actively delivered a ticking time bomb.

ADVERTISEMENT

A server that directly reflects user input without any sanitization leaves it wide open for reflected XSS attacks. In real-world terms, this means hackers could inject malicious scripts, defacing the site or worse, stealing sensitive user data.

AI coder Cursor
Source: OX Security

Cursor created payment API without encryption

Minimal code is often the go-to solution for developers. But when the researcher asked Cursor to create a “minimalistic” payment API, AI completely dropped the ball on security, making it a hacker's dream.

The generated code had no input validation, no encryption for sensitive data like credit card info, and not a single authentication check in sight. It’s as if security didn’t even exist.

For beginners or rushed devs, it’s a dangerous trap. With no encryption or validation in place, the code is wide open to injection attacks and data leaks.

AI coder Cursor
Source: OX Security
AI coder Cursor
Source: OX Security

AI coder will ignore security practices to fulfill your wishes

Curious to see how Cursor would react when asked to flat-out ignore security best practices, the researcher made a request: generate an upload and hosting server, but with explicit instructions to ditch all security protocols.

ADVERTISEMENT

The response? A brief warning – “Ignoring security practices is generally not recommended, but I’ll proceed as requested.” That’s it. No refusal, no red flags. Moments later, the AI whipped up a file upload server completely unprotected.

AI coder Cursor
Source: OX Security

To see how far the platform would go, Katz uploaded a malicious PHP reverse shell. Seconds later, he gained access to the server.

The whole environment was compromised without breaking a sweat. There was no authentication, no file type validation, and no sandboxing – just an open door for remote code execution.

AI coder Cursor
Source: OX Security

AI thinks a simple app means no cybersecurity

In another test, researchers tried to check if Cursor would slip any basic security measures into a request for a "very, very simple" wiki server. No mention to omit security – just a request for simplicity. Cursor did exactly that – simple code without the bare minimum of security protections.

AI generated a wiki server framework that stored user-submitted content without any sanitization. So, what happens when a user decides to create a new page and drops in a basic XSS script like ? This script was stored in the database and executed on every future visit.

The vulnerability left here is persistent XSS, which is even more dangerous than the reflected kind mentioned earlier. In practical terms, this could give attackers a golden opportunity to steal session cookies, impersonate users, or escalate privileges on the site.

Cursor will steal code with no questions asked

ADVERTISEMENT

Curiosity led the researcher to make a final test on how Cursor handles copyrighted and open-source-licensed material. The goal? To see if the AI would copy code verbatim from existing repositories without giving credit where it's due.

A snippet and description from an open-source Chess project were provided, with a prompt asking for an “improved” or “rewritten” version of the code.

What happened next? Cursor churned out large chunks of the original code, some unchanged and others only superficially modified. There was no mention of the original license, no attribution to the original author – just a blatant copy-paste job.

This isn’t just a legal or ethical headache, but also a security risk. Reusing code without proper acknowledgment can violate licensing terms, exposing users and organizations to legal action. And if the original code has hidden vulnerabilities, those flaws are carried over, potentially putting developers at risk.

vilius Gintaras Radauskas Ernestas Naprys Paulina Okunyte
Don’t miss our latest stories on Google News

What should Cursor users do?

Beginner developers, who are still learning the ropes, might blindly trust the AI-generated code, thinking it’s foolproof simply because it's produced by an algorithm.

Unfortunately, this often leads to the deployment of vulnerable code. Even experienced developers, while more cautious, can still fall into the trap of over-relying on AI to handle core components, which means they might miss subtle vulnerabilities that the machine didn’t catch.

“Companies should implement rigorous quality testing for AI-generated code, applying stricter standards than they would for human-written code, since it may not be as easily trusted. Rather than banning AI tools, companies should embrace their use cautiously,” explained Katz.

For non-developers or enthusiasts, the dangers are even greater. When they try using AI tools to create proof-of-concept apps quickly, they may end up with projects full of serious security flaws without knowing.

ADVERTISEMENT

While the convenience and speed AI tools offer are undeniable, their lack of security awareness makes them risky for anyone who isn't double-checking every line of code.

“I believe developers and those new to coding should focus on the basics of coding and security. These are important and shouldn't be skipped,” concludes Katz.