Critical vulnerability plagued AI development platform Lightning AI


Popular AI development platform Lightning AI fixed a critical remote code execution vulnerability. Due to improper user input handling, attackers could run commands with root privileges.

Lightning AI is used by a quarter of a million developers and has over 185 million total downloads. Cybersecurity firm Noma Security disclosed the flaw, which carries a critical severity rating of 9.4 out of 10, and helped fix it.

Attackers could exploit the bug by crafting special URL links. When clicked by a target, these links would execute a malicious command, leading to remote code execution and potentially total system compromise.


Targets’ details, such as usernames and names of their Lightning AI Studio, are publicly accessible through the platform’s templates gallery, which streamlines potential attacks.

“An attacker could automatically craft a malicious link containing code designed for execution on the identified Studio under root permissions,” the researchers said in a report.

The problem lies in a hidden parameter called “command” embedded in the URL. Although users cannot see it, the parameter could be modified in the URL to execute arbitrary commands directly in the terminal.

The malicious URL would contain the username, studio path, and an appended command encoded in Base64. Researchers demonstrated that malicious links could tap into AWS metadata and potentially expose sensitive data like access tokens and user information.
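The URL shape described above can be sketched roughly as follows. This is an illustrative reconstruction only: the parameter name “command” comes from the report, but the host, path layout, and the benign `whoami` payload are assumptions, not the actual exploit.

```python
import base64
from urllib.parse import urlencode

# Hypothetical reconstruction of the attack URL shape described in the
# report. Host and path structure are illustrative assumptions.
username = "victim-user"    # publicly visible in the templates gallery
studio = "victim-studio"    # likewise public

payload = "whoami"          # benign stand-in for the injected command
encoded = base64.b64encode(payload.encode()).decode()

# The hidden "command" parameter carries the Base64-encoded command.
query = urlencode({"command": encoded})
url = f"https://lightning.ai/{username}/studios/{studio}/terminal?{query}"
print(url)
```

Because the payload is Base64-encoded, a victim inspecting the link would see only an opaque string rather than the command itself.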


“An attacker could craft a URL that included the command parameter and share it via email, forums, or its own website, and every victim that clicked, visited, or used the crafted link would be redirected to the terminal with the malicious URL,” the researchers explained.

Minimal interaction – a single click – would be needed to compromise users and organizations. This case underscores the importance of mapping and securing the tools and systems used for building, training, and deploying AI models.

Researchers suggest that platforms avoid direct execution of user-controlled inputs and never trust them, even when hidden or invisible to users. The development environment should adhere to the principle of least privilege.
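The researchers' advice can be sketched as a simple allowlist pattern: a user-supplied token selects from a fixed set of commands rather than being passed to a shell. The function and command names here are illustrative assumptions, not Lightning AI's actual fix.

```python
import subprocess

# Minimal sketch: map user input to a fixed argv allowlist instead of
# executing it. Names and commands are hypothetical examples.
ALLOWED_COMMANDS = {
    "list-files": ["ls", "-l"],
    "disk-usage": ["df", "-h"],
}

def run_user_action(action: str) -> str:
    # Reject anything not explicitly allowlisted.
    argv = ALLOWED_COMMANDS.get(action)
    if argv is None:
        raise ValueError(f"unsupported action: {action!r}")
    # shell=False (the default for a list argv) means shell
    # metacharacters in user input are never interpreted.
    result = subprocess.run(argv, capture_output=True, text=True, check=True)
    return result.stdout
```

With this pattern, a crafted value such as `whoami; rm -rf /` is rejected outright instead of reaching a terminal.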


The Lightning AI team released the fix on October 24, 2024.