A researcher at the Black Hat 2024 conference has revealed that Copilot, Microsoft’s AI assistant, has multiple security loopholes, allowing attackers to exfiltrate sensitive data and corporate credentials.
Microsoft claims that delegating tasks to its AI-powered assistant, Copilot, could save hundreds of hours of work daily.
Microsoft Copilot Studio allows companies to build their own AI assistants and train them on specific company data to automate tasks across Microsoft’s apps. Copilot itself, launched in February 2023 and based on a large language model, accepts text, image, and voice prompts and runs on new Windows 11 PCs and Copilot+ PCs.
However, using AI always involves risk. As the debate heats up over how big tech companies handle user input data, Microsoft says that Copilot provides enterprise-grade security and that user data is not used to train Microsoft’s AI models.
Despite Microsoft’s claims, cybersecurity researcher Michael Bargury demonstrated how Copilot Studio bots can easily exfiltrate sensitive enterprise data, circumventing existing controls. The findings were revealed at the annual Black Hat USA 2024 security conference in Las Vegas.
“Leakage is not only possible but probable”
According to the researcher, a combination of insecure defaults, over-permissive plugins, and wishful design thinking makes data leakage “probable, not just possible.”
“Most Copilot Studio bots people build will be insecure, and they are very easy to spot on the web. This is data leakage waiting to happen,” Bargury told Cybernews.
“You must cover them as part of your application security program to make sure you're not leaking data out of your organization.”
Bargury goes on to explain that Copilot’s interface and underlying system are not immune to errors or malicious attacks.
An exploitation tool he created, CopilotHunter, scans for publicly accessible copilots and uses fuzzing and generative AI to abuse them and extract sensitive enterprise data. The findings showed that targeting thousands of accessible bots exposes sensitive data and corporate credentials that malicious actors could exploit.
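Bargury’s tooling is not reproduced here, but the general idea of the first step – checking which candidate bot URLs answer without any credentials – can be sketched in a few lines of Python. The URL list and the response heuristic below are placeholder assumptions for illustration, not CopilotHunter’s actual logic or real Copilot Studio endpoints.

```python
# Hypothetical sketch: check which candidate copilot URLs answer without credentials.
# The URL list and the heuristic below are illustrative assumptions, not
# CopilotHunter's actual logic or real Copilot Studio endpoints.
import requests

CANDIDATE_URLS = [
    "https://example.com/copilot-demo-1",  # placeholder
    "https://example.com/copilot-demo-2",  # placeholder
]


def probe(url: str, timeout: float = 10.0) -> str:
    """Classify a URL by how it responds to an unauthenticated GET request."""
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=False)
    except requests.RequestException:
        return "unreachable"
    if resp.status_code == 200:
        return "answers without authentication"
    if resp.status_code in (301, 302, 307, 308):
        return "redirects (likely to a login page)"
    if resp.status_code in (401, 403):
        return "requires authentication"
    return f"other response ({resp.status_code})"


if __name__ == "__main__":
    for url in CANDIDATE_URLS:
        print(f"{url}: {probe(url)}")
```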
“Attackers can remotely take over your interactions with the Copilot. They can get the Copilot to do whatever they want on your behalf, manipulate you, and misinform your decisions. They have full control of every word the Copilot writes to you,” explains Bargury.
Ways to exploit Microsoft’s Copilot
The security team’s headache starts with the very first steps of creating a copilot: users have to pick a “Knowledge” source to train the AI model on.

Users can choose from various sources, such as public websites, uploaded files, SharePoint, OneDrive, or Dataverse, Microsoft’s cloud database.
However, even at this initial phase, Bargury sees a potential minefield for security. Uploaded files might contain hidden metadata, and sensitive or compartmentalized data might end up in the knowledge base. Sharing the copilot then breaks that compartmentalization: co-owners can download the files, which leads to multiple leakage scenarios.
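To make the hidden-metadata point concrete, the minimal sketch below (assuming the third-party python-docx package and a hypothetical report.docx file) lists document properties that would silently travel with a file uploaded as a Knowledge source.

```python
# Minimal sketch: list metadata that travels with a .docx file before it is
# uploaded as a "Knowledge" source. Assumes the third-party python-docx package
# and a hypothetical file name; other formats (PDF, XLSX) need their own checks.
from docx import Document  # pip install python-docx

doc = Document("report.docx")  # hypothetical file
props = doc.core_properties

for field in ("author", "last_modified_by", "comments", "revision", "created", "modified"):
    print(f"{field}: {getattr(props, field)}")
```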
Using Microsoft’s content management tool SharePoint or storage space OneDrive as input can be problematic, as it might lead to the oversharing of sensitive data. Although the data is stored within a company's cloud environment, the user authorizes Copilot to access all subpages under the provided link.
The same applies to using Microsoft’s Dataverse as an input. Dataverse stores and manages data used by business applications. However, the data in its tables is dynamic and may also be part of other existing applications and automations, which can leak sensitive data.
Another important building block of Copilot – Topics – is troublesome. Topics can be seen as a copilot’s competencies: they define how a conversation with the AI plays out.

A topic contains a set of trigger phrases – phrases, keywords, and questions that a user is likely to use in relation to a specific issue. Trigger phrases help Copilot react appropriately to questions.

In addition to 16 predefined topics, users can create their own to customize the copilot. The researcher points out that similarly named topics can influence which execution path a conversation takes, as the sketch below illustrates.
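The concern about similarly named topics can be pictured with a small, self-contained check for trigger phrases shared by more than one topic. The topic names and phrases below are invented for illustration; the point is simply that an ambiguous trigger leaves the routing decision to the model.

```python
# Illustrative sketch: detect trigger phrases shared by more than one topic.
# Topic names and phrases are invented examples, not real Copilot Studio data.
from collections import defaultdict

topics = {
    "Reset password":      ["reset my password", "forgot password", "account locked"],
    "Reset password (HR)": ["reset my password", "HR portal password"],
    "Order status":        ["where is my order", "track shipment"],
}

phrase_to_topics = defaultdict(list)
for topic, phrases in topics.items():
    for phrase in phrases:
        phrase_to_topics[phrase.lower()].append(topic)

for phrase, owners in phrase_to_topics.items():
    if len(owners) > 1:
        # Ambiguous trigger: the copilot has to guess which execution path to take.
        print(f"'{phrase}' is shared by: {', '.join(owners)}")
```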
Copilot’s premium feature, Generative AI, might help user data flow outside the organization. To use it, users have to agree that “data is flowing outside your organization's compliance and geo boundaries.”
The researcher also suggests that threat actors could exploit the “Actions” functionality to confuse Copilot and access sensitive data without authorization. The “Actions” feature lets Copilot respond to users automatically using generative actions.
Reading hardcoded credentials
Some connections and flows might contain hardcoded credentials. Hardcoding means that authentication data, such as usernames, passwords, or API keys, is written directly into the source code of software or applications.

While this is a weak cybersecurity practice, it still occurs. The situation becomes even more problematic if these credentials are channeled into an AI model: the model might analyze those resources, “learn” the credentials, and supply them as part of a copilot’s answer.
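As a generic illustration – not taken from Bargury’s research – the difference between a hardcoded secret and one read from the environment looks like the sketch below. Anything written the first way that ends up in a copilot’s knowledge sources or connector configuration can be echoed back in an answer; the key value and the variable name are invented.

```python
# Generic illustration of the hardcoding anti-pattern; all values are invented.
import os

# Risky: the API key lives in the source or flow definition itself. If this file
# or flow is indexed as copilot knowledge, the model can repeat the key verbatim.
API_KEY = "sk-EXAMPLE-DO-NOT-USE"  # hardcoded secret (bad practice)

# Safer: keep the secret out of anything the copilot can read and pull it from
# the environment (or a secrets manager) at runtime.
api_key = os.environ.get("CRM_API_KEY")  # hypothetical variable name
if api_key is None:
    raise RuntimeError("CRM_API_KEY is not set")
```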
Channels and authentication are another weak point. The default authentication for copilots is currently set to “Teams,” but the “No authentication” option is just one click away in the interface, making it easy for users to slip.
“Giving AI access to data makes it useful, but it also makes it vulnerable. Each organization needs to do its own risk assessment and figure it out,” says Bargury.
“Especially if you're building custom copilots with Copilot Studio, be careful of leaving that decision up to developers or even citizen developers. That decision needs to be taken with the security team. Security must be monitoring these interactions closely,” he concludes.