
A critical flaw in Meta’s AI framework allowed attackers to remotely deploy malware directly on the server hosting artificial intelligence apps, researchers claim.
A major vulnerability in an open-source tool used in Meta’s Llama Stack left users open to remote code execution (RCE) attacks. According to the Oligo Research team, a number of bugs related to the misuse of the PyZMQ open-source library were uncovered.
PyZMQ is the library that carries messages between different parts of the Llama Stack framework. Llama Stack itself is an open-source framework for building working AI-based apps, anything from a text analysis tool to a simple chatbot. Introduced last July, Llama Stack is a major player in the AI game, backed by industry titans such as AWS and NVIDIA.
Researchers claim the problem is that the framework automatically processes any data it receives through its messaging system (PyZMQ), which is like opening packages without checking who sent them or what's inside. This messaging system is built on ZeroMQ, a technology for sending messages between different parts of software, but it is implemented in Python as PyZMQ.
The bug, tracked as CVE-2024-50050, received a severity score of 9.3 out of 10 from security experts, marking it as a critical threat. However, Meta's own assessment rated it at 6.3, considering it a medium-level danger. Any score above 9.0 typically indicates a bug that could give attackers complete control over a system with little effort.
“Affected versions of meta-llama are vulnerable to deserialization of untrusted data, meaning that an attacker can execute arbitrary code by sending malicious data that is deserialized,” researchers said.
Since the bug in Meta’s Llama Stack would unpack any package it received without checking what’s inside, attackers could exploit it to deploy malicious code. The flaw, researchers claim, could have led to data breaches or theft of computing resources.
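The two paragraphs above describe the same mechanism: frames received over PyZMQ are handed to Python's pickle module, which will reconstruct whatever object the sender described, including references to any importable class or function. The following is an added illustration, not code from the article, from Meta or from Oligo. It shows the unsafe `recv_pyobj`-style handler next to two defensive alternatives and a simple detection heuristic. The sample frames are constructed here with `pickle.dumps`, the `RestrictedUnpickler`, `json_handler` and `ALLOWED_KEYS` names are this example's own, and the JSON-plus-schema approach is a generic mitigation for this class of bug, not a description of what Meta's patch did.

```python
import io
import json
import pickle
import datetime

# Constructed sample frames standing in for bytes received over a PyZMQ socket.
# recv_pyobj() would hand each of these straight to pickle.loads().
SAFE_FRAME = pickle.dumps({"task": "summarize", "text": "hello"})
# A pickle that references a global (datetime.datetime). Harmless here, but the
# same GLOBAL/STACK_GLOBAL opcode lets a sender name any importable callable,
# which is why deserializing untrusted pickles amounts to remote code execution.
GLOBAL_FRAME = pickle.dumps(datetime.datetime(2024, 1, 1))


def vulnerable_handler(frame: bytes):
    """Equivalent of socket.recv_pyobj(): unpickles whatever arrived, unchecked."""
    return pickle.loads(frame)


class RestrictedUnpickler(pickle.Unpickler):
    """Unpickler that refuses to resolve any global, so a frame can only
    rebuild plain containers and scalars (dict, list, str, int, ...)."""

    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"global {module}.{name} is forbidden")


def hardened_unpickle(frame: bytes):
    """Deserialize a frame while forbidding every class/function reference."""
    return RestrictedUnpickler(io.BytesIO(frame)).load()


# Hypothetical message schema for this example.
ALLOWED_KEYS = frozenset({"task", "text"})


def json_handler(frame: bytes) -> dict:
    """Preferred fix: a data-only wire format plus schema validation."""
    obj = json.loads(frame.decode("utf-8"))
    if not isinstance(obj, dict):
        raise ValueError("message must be a JSON object")
    unexpected = set(obj) - ALLOWED_KEYS
    if unexpected:
        raise ValueError(f"unexpected keys: {sorted(unexpected)}")
    return obj


def looks_like_pickle(frame: bytes) -> bool:
    """Detection heuristic (assumes pickle protocol 2 or newer): such pickles
    begin with the PROTO opcode 0x80 followed by the protocol number."""
    return len(frame) >= 2 and frame[0] == 0x80 and 2 <= frame[1] <= 5


def run_demo() -> dict:
    """Exercise each handler against the constructed frames and report outcomes."""
    results = {}
    results["vulnerable_resolved_global"] = isinstance(
        vulnerable_handler(GLOBAL_FRAME), datetime.datetime)
    results["hardened_accepted_plain"] = (
        hardened_unpickle(SAFE_FRAME) == {"task": "summarize", "text": "hello"})
    try:
        hardened_unpickle(GLOBAL_FRAME)
        results["hardened_blocked_global"] = False
    except pickle.UnpicklingError:
        results["hardened_blocked_global"] = True
    results["json_accepted"] = (
        json_handler(b'{"task": "summarize", "text": "hello"}')
        == {"task": "summarize", "text": "hello"})
    try:
        json_handler(b'{"task": "x", "extra": 1}')
        results["json_rejected_unknown_key"] = False
    except ValueError:
        results["json_rejected_unknown_key"] = True
    results["pickle_frames_flagged"] = (
        looks_like_pickle(SAFE_FRAME) and not looks_like_pickle(b'{"task": 1}'))
    return results


if __name__ == "__main__":
    for key, value in run_demo().items():
        print(f"{key}: {value}")
```

The point of the restricted unpickler is that it removes the sender's ability to name code at all; even so, switching the channel to a format that can only express data, as `json_handler` does, is the cleaner fix, and `looks_like_pickle` is the kind of check a proxy or log pipeline can use to flag pickle-encoded traffic reaching a service that should only be receiving data.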
The most dangerous part was that the flaw sat on the servers hosting the app, which, at least in theory, could allow attackers to take over the entire system running the AI model.
According to the researchers, Meta was quick to react to the team’s disclosure. The patch shipped in Llama Stack version 0.0.41. PyZMQ’s maintainers have also improved their documentation to avoid similar issues in the future.