Building a website with AI? Here are the hidden risks you should be aware of


AI-written code boosts productivity, but those gains come with risks.

With the rise of AI-powered tools, tasks that previously required technical skills and knowledge are now accessible to everyone. Companies such as Bubble, Webflow, Lovable, and Bolt let users build websites or apps with little or no coding experience.

Meanwhile, professional programmers benefit from tools like GitHub Copilot, which significantly enhances productivity.

But while AI can help in areas like vulnerability discovery and patching, implementing AI-based coding solutions also comes with cybersecurity risks.

A recent report by the Center for Security and Emerging Technology details three categories of risk associated with AI code generation models.

Models can generate insecure code, the models themselves can be vulnerable to attack, and their output can have downstream cybersecurity impacts – for example, insecure AI-generated code feeding back into the training data of future AI systems.

The security of this output is often overlooked: evaluations of code generation models mostly measure their ability to produce functional code, not their ability to produce secure code.

Experts working in the IT industry also advise being mindful of those risks.

From inefficiencies to security vulnerabilities

AI-based tools let even newcomers write code, fix bugs, or spin up prototypes, but where they really show their prowess is in translating abstract ideas into working code, says Mantas Lukauskas, AI tech lead at web hosting company Hostinger, who also works at the AI orchestration startup Nexos.ai.

“Instead of memorizing function signatures or searching Stack Overflow for hours, users can describe exactly what they want and watch the AI deliver a solution – often in seconds. Yet even advanced models can produce code that looks correct at first glance but hides serious flaws, from performance inefficiencies to security vulnerabilities,” he says.
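To see what that can look like, here is a hypothetical Python sketch of the kind of flaw Lukauskas describes. Both functions return the same rows for ordinary input, but the first – the sort of pattern AI assistants often reproduce – is open to SQL injection.

```python
# Hypothetical illustration: both functions "work" on normal input.
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, name: str):
    # Looks correct at first glance, but user input is pasted straight
    # into the SQL string: passing "' OR '1'='1" returns every row.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_secure(conn: sqlite3.Connection, name: str):
    # Parameterized query: the driver treats the input as data, not as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```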

Several studies have illustrated the risks of using AI tools for coding. A study published last year on arXiv, Cornell University's preprint server, found that of 452 real-world code snippets generated by GitHub Copilot in publicly available projects, 32.8% of the Python snippets and 24.5% of the JavaScript snippets contained security weaknesses, spanning 38 different Common Weakness Enumeration (CWE) categories.

According to Lukauskas, one problem with LLMs is that they are trained on data only up to a certain cutoff date. The model doesn't know whether a library has since been found to contain a security vulnerability – so it will happily generate code that uses a compromised library.

For example, asking OpenAI or any other model to generate a website using React is likely to produce code for an older React version – one that most programmers may have already moved on from.
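One lightweight guardrail is to check what the model's output (or your environment) uses against the latest published release before shipping. Below is a minimal Python sketch using only the standard library and PyPI's public JSON API; the package name is just an example.

```python
# Sketch: flag an installed dependency that lags the newest release on PyPI.
import json
import urllib.request
from importlib.metadata import version

def latest_on_pypi(package: str) -> str:
    """Return the newest release of a package according to pypi.org."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["info"]["version"]

package = "requests"  # example package
installed, latest = version(package), latest_on_pypi(package)
if installed != latest:
    print(f"{package}: installed {installed}, latest {latest} – check the advisories")
```

A version gap doesn't prove a vulnerability, but it is a cheap signal that the model's training data may be out of date.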

However, there are workarounds.

“You can always provide some additional information, such as documentation of the library, and instruct it to generate code based on the documentation,” Lukauskas says.
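In practice, that might look like the minimal sketch below, which uses OpenAI's Python SDK to put current documentation into the prompt. The model name and documentation file are illustrative assumptions, not part of Lukauskas's example.

```python
# Sketch: ground the model in up-to-date docs instead of its training data.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

docs = open("react_docs_current.md").read()  # hypothetical docs file

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Generate code strictly following this documentation:\n\n" + docs},
        {"role": "user",
         "content": "Create a React component with client-side routing."},
    ],
)
print(response.choices[0].message.content)
```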

AI tools lack proper encryption mechanisms

No-code and low-code tools allow thousands of websites to be built on the same framework. Even a small security flaw in the AI-generated code would leave all of those websites – and millions of their users – vulnerable, says Devansh Agarwal, a senior machine learning engineer at Amazon Web Services.

People who don't work in the industry won't know how to fix these flaws, meaning they will remain exposed on the public internet for a long time.

Agarwal also points out that even software engineers make mistakes. That's why, at bigger companies, their work is reviewed by security engineers, who flag any security issues to the software engineers.

Code written by AI, by contrast, often goes unchecked, adding to the potential security risks.

Other risks exist, such as AI hallucinations, which could result in pushing insecure code into production.

“I have seen that if you try to generate infrastructure code using AI, it will not enforce the proper encryption mechanisms to prevent security breaches. These mechanisms are generally considered advanced knowledge, so there isn't a lot of data present on the internet to train models on them,” Agarwal says.
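As an illustration of the gap Agarwal describes, the Python sketch below uses AWS's boto3 library to create an S3 storage bucket and then explicitly enable server-side encryption. The bucket name is hypothetical, and the second call is exactly the kind of step that, in his experience, AI-generated infrastructure code tends to leave out.

```python
# Sketch: explicitly enforce encryption on a storage bucket (AWS S3 / boto3).
import boto3

s3 = boto3.client("s3")
bucket = "example-app-user-uploads"  # hypothetical bucket name

s3.create_bucket(Bucket=bucket)  # regions outside us-east-1 also need a LocationConstraint

# The hardening step AI-generated code often skips: default encryption at rest.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
        ]
    },
)
```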