AI-based visual editing service leaks user images and customer data. An AI media manipulation service leaked nine gigabytes of data, including usernames and the images they created using specific queries.

Artificial intelligence-based tools such as ChatGPT or DALL-E have caught the attention of swaths of internet users. However, few have likely considered the security implications of uploading text or images to such tools, and a recent Cybernews discovery is a stark example of why they should.

The Cybernews research team discovered that an AI-based visual design platform headquartered in Hong Kong leaked user-generated content via an open Elasticsearch instance.

According to the team, the instance exposed customer usernames and the images they created using the company’s tools. It also held information on user credits, a virtual in-service currency, and links to the Amazon S3 buckets where generated images were stored.

The Cybernews research team reached out to the company about the leak, and the open instance was closed down before this article was published.
A user query asking the AI to create a white dragon, visible in the leaked data. Image by Cybernews.

Down the supply chain

The company’s services allow users to manipulate photos or generate images with the help of an AI-based application programming interface (API). This functionality lets third parties integrate the company’s services into their own apps.

The exposed instance also held around 22 million log entries referencing usernames, covering both individual users and business accounts. However, this does not mean an equal number of users was exposed, as some log entries were duplicates. The company self-reported handling over 300 million API requests, peaking at 4,000 requests per second, from over 5,000 applications and websites worldwide, and boasts of working with over 25,000 businesses.

At least some of the apps that used the company’s API had their user data exposed.

For example, the team discovered that accounts of the photo and image editing apps Vivid App and AYAYA App were included in the open database. The company displays both service providers on its website as customers.

“AI image generation is still in its infancy, and this case demonstrates how trend-chasing coupled with hasty product implementation can introduce severe security issues if not handled properly,” Cybernews researchers said.


Cluster of problems

Exposing the data of users may threaten their privacy. Threat actors could have accessed media uploaded by the company’s customers for AI-based editing, which could have included personal snapshots intended for private use.

Meanwhile, business customers of the platform risk losing the confidence of their own customers, which may lead to lost revenue and stunted future growth.

Moreover, the Cybernews team surmised that the open instance was not appropriately configured, as anyone could have performed CRUD (Create, Read, Update, Delete) operations.
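The misconfiguration described above can be illustrated with a short probe. The sketch below is hypothetical, not Cybernews’ methodology: the host is a placeholder, and unauthenticated read/write probes like these should only ever be run against systems you own.

```python
# Hedged sketch: checking whether an Elasticsearch instance answers
# unauthenticated read and write requests. An open instance returns
# 2xx codes; a secured one returns 401/403.
import json
import urllib.error
import urllib.request


def crud_exposed(status_codes):
    """Given HTTP status codes from unauthenticated probes, report
    whether the instance appears open to CRUD operations. 401/403
    mean auth is enforced; all-2xx means every probe succeeded
    without credentials."""
    return all(200 <= code < 300 for code in status_codes)


def probe(host, path, method="GET", body=None):
    """Send one unauthenticated request and return the HTTP status code."""
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(
        f"{host}{path}",
        method=method,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status
    except urllib.error.HTTPError as exc:
        return exc.code


# Example against a placeholder host (do not run against systems
# you do not own):
# read_status = probe("http://example-es-host:9200", "/_cat/indices")
# write_status = probe("http://example-es-host:9200", "/probe_idx/_doc/1",
#                      method="PUT", body={"probe": True})
# print("open to CRUD:", crud_exposed([read_status, write_status]))
```

The write probe is what distinguishes a merely readable instance from one where, as the researchers note, anyone could create, update, or delete records.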

“If the company’s developers didn’t back up the data, the open instance could have led not only to a temporary denial of service but to permanent loss of the data stored on it. Attackers could have wiped it out,” Cybernews researchers said.

In theory, the open instance provided attackers with means to carry out a denial of service (DoS) attack, resulting in disruptions for its users and customers. More worryingly, attackers could have carried out a supply chain attack on the company’s customers.

Attackers would have had to use the leak as an initial access point to enter the database and take over the data. Once inside, threat actors could pass malicious data through the company’s API.

“Data passing through an API could be executed without proper validation, leading to a remote code execution attack and further compromising the company’s systems,” researchers said.
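The validation gap the researchers warn about can be sketched as follows. This is an illustrative example only: the field names, limits, and allowed values below are assumptions, not the company’s actual API schema.

```python
# Hedged sketch: server-side validation of an image-generation request
# before it is forwarded to downstream systems, rejecting oversized,
# malformed, or injection-style input.
import re

MAX_PROMPT_LEN = 500                      # illustrative limit
ALLOWED_SIZES = {"256x256", "512x512", "1024x1024"}  # illustrative values
# Allow only plain text and basic punctuation; reject shell metacharacters.
PROMPT_RE = re.compile(r"^[\w\s.,!?'\"-]+$")


def validate_request(payload: dict) -> list:
    """Return a list of validation errors; an empty list means the
    payload is safe to forward to the image-generation backend."""
    errors = []
    prompt = payload.get("prompt", "")
    if not isinstance(prompt, str) or not prompt.strip():
        errors.append("prompt missing or empty")
    elif len(prompt) > MAX_PROMPT_LEN:
        errors.append("prompt too long")
    elif not PROMPT_RE.match(prompt):
        errors.append("prompt contains disallowed characters")
    if payload.get("size") not in ALLOWED_SIZES:
        errors.append("unsupported image size")
    return errors
```

Rejecting requests at the API boundary like this is one standard way to stop attacker-controlled data from reaching code paths where it could be interpreted or executed.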

Business clients are advised to investigate endpoints that were connected to the company’s API. Meanwhile, users should change their platform usernames.
