CertyAI’s anti-scam moderation system exposed a publicly accessible environment file that revealed sensitive information, including its OpenAI API key, Cybernews researchers have discovered.
CertyAI, an IT company specializing in web services for business and cybersecurity, left a publicly accessible environment (.env) file, opening itself up to attackers.
Because a .env file stores configuration values for an application, often including credentials and API keys, leaving it open to anyone can expose critical data and give threat actors multiple avenues of attack.
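To illustrate why such a file is sensitive, here is a hypothetical .env file for a service of this kind. All names and values below are invented placeholders, not the actual leaked contents:

```shell
# Hypothetical .env contents -- placeholder values only
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxx
PHOTOROOM_API_KEY=pr-xxxxxxxxxxxxxxxx
DATABASE_URL=postgres://app_user:change-me@db.internal:5432/app
APP_SECRET=change-me
```

Anyone who can fetch a file like this over the web immediately holds every credential listed in it.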
CertyAI secured the file after our researchers contacted the company. According to a CertyAI representative, the .env file was exposed in a rush to deploy demo services that were never meant to be public.
“Our concern for user data never ends, but sometimes when deploying demo services (which aren’t open to the public), we go too light on security, and in the rush to get online, the rush can play nasty tricks, such as copying an .env file and pasting it into a demo instance in a bad way. This case only involved tokens or temporary access which are limited to demo use, so they can only be used through the demo and do not compromise data security in any way,” the company's representative said.
The Cybernews research team discovered an accessible .env file on a CertyAI subdomain containing the company’s OpenAI API key and Photoroom API key. The OpenAI API lets businesses integrate the ChatGPT maker’s services into their own products, while the Photoroom API provides image-editing capabilities.
According to the team, exposed API keys, including an OpenAI API key, pose a severe security risk.
“API keys are sensitive credentials that grant access to specific services or resources, and if they fall into the wrong hands, it can lead to unauthorized access and potential misuse of the associated resources,” researchers said.
Attackers could abuse the OpenAI API key to drain the owner’s usage allowance and cause financial damage. Malicious actors could also use the key for their own purposes, compromising the confidentiality of the key owner’s interactions with the service.
“OpenAI typically charges for API usage. If someone gains access to your key and uses it maliciously, you could incur unexpected charges as the attacker consumes your allocated resources,” the team said.
An exposed OpenAI API key could also enable a malicious actor to generate inappropriate or harmful content under the company’s account, potentially causing reputational damage.
Another threat vector derives from the way an API key is used. If an OpenAI API key is used to process confidential information, unauthorized access could lead to a breach of privacy.
The team advises companies to follow API key management best practices to avoid unnecessary attention from attackers, including keeping API keys secure, rotating them regularly, restricting access based on the principle of least privilege, and monitoring for any unusual activity associated with the key.
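One of the simplest of those practices is never hardcoding a key, instead reading it from the runtime environment and failing fast when it is absent. A minimal sketch in Python (the variable name and placeholder value are illustrative, not CertyAI’s actual configuration):

```python
import os


def get_api_key(name: str) -> str:
    """Fetch a credential from the environment; fail fast if it is missing."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; refusing to start")
    return key


# Placeholder standing in for a real key injected by the deployment platform.
os.environ["EXAMPLE_API_KEY"] = "sk-placeholder"
key = get_api_key("EXAMPLE_API_KEY")
```

Failing fast on a missing key also prevents the opposite mistake: shipping a demo instance that silently falls back to a production credential pasted into its configuration.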
“If you suspect that your API keys have been compromised, it’s advisable to revoke them and generate new ones immediately. Moreover, changing credentials, like email username and password associated with exposed API services, is a must,” researchers said.
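Teams often automate part of that monitoring by scanning code and config for strings that look like hardcoded keys. A minimal sketch of such a scan, where the regex is an illustrative heuristic rather than OpenAI’s documented key format:

```python
import re

# Rough pattern for OpenAI-style keys: "sk-" followed by a long run of
# key characters. Illustrative heuristic only; real scanners use curated
# per-provider patterns and entropy checks.
KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9]{20,}")


def find_suspect_keys(text: str) -> list[str]:
    """Return substrings that look like hardcoded API keys."""
    return KEY_PATTERN.findall(text)


config = 'OPENAI_API_KEY = "sk-abcdefghijklmnopqrstuvwx"'
hits = find_suspect_keys(config)
```

Running a check like this in CI catches a pasted key before it reaches a public repository or, as in this case, a public demo deployment.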
Updated on March 22nd [09:45 a.m. GMT] with a statement from CertyAI.