Hunting Hydra: the discovery of a massive CSAM network

It’s been two weeks since my research team began investigating a website hosting a Child Sexual Abuse Material (CSAM) marketplace. What we didn’t know was that it was only the tip of the iceberg. As one of Hydra’s heads appeared, another popped up, and the more we dug, the more heads emerged, reaching out from the dark web and revealing a vast network spanning the world.

In light of the recent lawsuit brought against Meta Platforms and CEO Mark Zuckerberg by New Mexico Attorney General Raúl Torrez, the discovery was extremely significant. The lawsuit alleged that the social media giant enabled CSAM distribution across its platforms and failed to detect predator networks operating in plain sight.

This came to our attention on February 28th, after a member of Anonymous, part of a small, organized #OpChildSafety initiative, discovered a link in a public Facebook group they were investigating and building a case against for distributing CSAM. At some point, the group’s privacy settings were switched to private, but not before its WhatsApp and Telegram groups were discovered.

We needed to understand the target to determine the best approach. The only thing any of us knew with certainty was that the content painted a picture of a nightmare. One person was designated to confirm our worst fears so that no one else had to be exposed to the content.

This is the story of hunting Hydra.

Organizing the investigation

Delegating objectives to individuals skilled in their respective crafts is paramount to managing a structured work environment, as it means each person can contribute to the research flow organically. It also eliminates redundant results.

In the same way, working with responsible people is just as important because, all too often, you will find masses of people eager to help who have no sense of direction or knowledge of what is helpful and what is harmful to an investigation.

I say this because it’s not uncommon to hear members of Anonymous thinking that they’re doing the right thing by DDoSing websites containing CSAM instead of researching and reporting them to the appropriate platforms such as the Internet Watch Foundation (IWF) or the National Center for Missing and Exploited Children (NCMEC).

Another misconception concerns reporting CSAM on-platform. Reporting Telegram and WhatsApp accounts, for example, can get those users banned, but not arrested. Nothing stops a CSAM distributor from creating a new account and continuing.

It is also important to remember that the sensitive nature of this kind of research must be protected. It is paramount to insulate the data from exposure to outsiders who might take the information and use it toward their own ends.

I am the type of person who cannot so much as glimpse material of this nature without being traumatized, which is why I work exclusively in data analysis. A person more resilient to the emotional impact of such content was therefore designated to provide visual confirmation and to search the site for personal identifiers and payment methods, such as email addresses and crypto-wallets.

Another individual was selected to perform Open Source Intelligence (OSINT) on usernames and phone numbers without having access to the links. This way, we could control who knew about the target link to ensure it could not be leaked outside our operation.

WHOIS analysis

The first step was to understand who hosted the target, which meant obtaining WHOIS records. These can return IP address blocks, domain names, and registrar information, which may contain subscriber details and so on. To do this, I used VirusTotal.

Some might not be aware, but you can do a heck of a lot more with VirusTotal than merely upload files you suspect might contain malware. The scope of the information it provides lets users passively scan hosts while generating a fully customizable graph. In turn, this maps out every relationship connected to your target: host information, ASNs, WHOIS records, domain certificates, subdomains, mirrors, and, of course, malicious files or code.

This allowed me to identify where the site was hosted, when the domain first appeared online, and all of its historical WHOIS records. Historical records can be useful when subscriber information varies between them, although in this case it was either marked as private or obfuscated.
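For readers unfamiliar with these records, the minimal sketch below pulls a few common fields out of a raw WHOIS text blob. The `parse_whois` helper and its field patterns are my own illustration, not part of any particular tool, and the sample record is invented, though it mirrors what we saw: a Google Domains registration with the registrant’s details redacted.

```python
import re

# Hypothetical helper: extract common fields from a raw WHOIS text blob,
# the kind of record passive-DNS and registrar-history services return.
def parse_whois(raw: str) -> dict:
    patterns = {
        "registrar": r"Registrar:\s*(.+)",
        "created": r"Creation Date:\s*(.+)",
        "country": r"Registrant Country:\s*(.+)",
    }
    out = {}
    for key, pat in patterns.items():
        m = re.search(pat, raw, re.IGNORECASE)
        out[key] = m.group(1).strip() if m else None
    return out

# Invented sample record -- note the privacy-redacted registrant.
sample = """\
Registrar: Google Domains LLC
Creation Date: 2023-07-14T09:21:00Z
Registrant Country: DE
Registrant Name: REDACTED FOR PRIVACY
"""

print(parse_whois(sample))
# {'registrar': 'Google Domains LLC', 'created': '2023-07-14T09:21:00Z', 'country': 'DE'}
```

Even when the registrant itself is redacted, fields like the registrar and registrant country survive, which is exactly what let us place the domain with a German-registered Google Domains subscriber.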

That is when I discovered a mirror run by the same hosting provider. Now, we were working with two sites instead of one. The domain name belonged to a Google Domains subscriber registered in Germany, but the hosting server was located in Nevada, USA. The subscriber’s information was marked as private.

In this case, it generated a large cluster of malicious code. However, inside one of these clusters was an indicator of compromise (IoC) pointing to a Tor onion link on the dark web. This suggested that Hydra subscribers could access the content through a bridge from the dark web to the clearnet, easily accomplished with tor2web, an HTTP proxy software designed to do just that.
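For those unfamiliar with how such bridges work, the sketch below shows the tor2web naming convention: the gateway’s domain is appended to the onion hostname so an ordinary browser can reach the hidden service through an HTTP proxy. The gateway (`onion.ws`) and the onion address are placeholders of my own, not addresses from this investigation.

```python
# Sketch of the tor2web URL convention: appending a gateway domain to an
# onion hostname lets a regular browser reach the hidden service via an
# HTTP proxy. The default gateway here is only an example; these services
# come and go.
def tor2web_url(onion_url: str, gateway: str = "onion.ws") -> str:
    scheme, _, rest = onion_url.partition("://")   # expects a full URL
    host, _, path = rest.partition("/")
    if not host.endswith(".onion"):
        raise ValueError("not an onion address")
    bridged = f"{host}.{gateway}"                  # e.g. abc.onion.onion.ws
    return f"https://{bridged}/{path}" if path else f"https://{bridged}"

print(tor2web_url("http://example3abcdefgh.onion/login"))
# https://example3abcdefgh.onion.onion.ws/login
```

The rewrite is purely mechanical, which is why a subscriber needs no Tor software at all to reach the content: the gateway fetches the onion site on their behalf.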

An IoC acts as a red flag, signaling that something suspicious might be taking place on a network or endpoint, anything from strange activity, such as outside scans, to data breaches. Digital evidence left behind by an attacker can itself serve as an IoC, and matching against known IoCs is how defenders detect that activity.
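At its simplest, IoC matching is just checking observed activity against a list of known indicators. The sketch below does this for a handful of log lines; the indicators, log entries, and the `flag_lines` helper are all invented for illustration.

```python
# Invented indicators: a domain, a file hash, and an onion address.
IOCS = {
    "badhost.example",
    "3f786850e387550fdab836ed7e6dc881de23001b",
    "example3abcdefgh.onion",
}

def flag_lines(log_lines):
    """Return (line_number, indicator) pairs for lines containing a known IoC."""
    hits = []
    for n, line in enumerate(log_lines, 1):
        for ioc in IOCS:
            if ioc in line:
                hits.append((n, ioc))
    return hits

# Invented log entries using documentation-reserved IP addresses.
log = [
    "GET /index.html from 203.0.113.7",
    "DNS lookup: badhost.example -> 198.51.100.2",
    "outbound connect to example3abcdefgh.onion:80",
]
print(flag_lines(log))
# [(2, 'badhost.example'), (3, 'example3abcdefgh.onion')]
```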

On February 29th, I reported both sites to the Internet Watch Foundation and to the National Center for Missing and Exploited Children.

The ultimate sabotage

Whenever a person launches cyberattacks against websites being investigated for CSAM, that person is unwittingly helping the enemy by alerting them that someone is onto them. This allows the host to destroy evidence and switch hosts and domains, effectively ending the investigation for everyone involved.

When people do this, they aren’t thinking about the child victims of these crimes.

They’re thinking only of themselves.

On or around March 2nd, an individual outside my research group leaked the web address that was initially reported to my team. We discovered this because the site was unceremoniously hit with a Distributed Denial of Service (DDoS) attack. By the time we learned about the attack, the host had already moved somewhere else, and the sites were no longer available.

To make matters worse, our research virtual machine crashed, and we lost most of our data.

I contacted the IWF, and they confirmed our worst fears: the sites were now offline.

We were able to reconstruct our data because, after everything we had learned, we still had access to historical WHOIS records. More importantly, after performing a reverse image search on the “model” the operators were using as a website preview image, we learned that the victim in question was featured all over clearnet CSAM sites.

By employing the same method, I ran a reverse image search on a screenshot of their user login page and discovered that one of the discontinued mirrors had been cached by Yandex, which pointed to a single snapshot of the site.

Moreover, by searching the domain names on Google, we found over a dozen more mirrors, many of which were connected to illicit password-protected content shared on Mega.

We found still more mirrors, including links posted on Facebook.

Archived source code analysis and AI

One of our researchers began to crawl and audit Hydra’s sites, as well as visually search its source code. The following was discovered:

  • The sites were all exact duplicates
  • The owner was using a Discord bridge to host web content
  • The owner’s Telegram ID
  • Cryptocurrency payment options
  • PayPal email address
  • The website was written sloppily

I used ChatGPT to better understand the relationship between different sections of code and had it search for useful investigative items, such as payment information and how it hosted content. This is great for generating summaries of complex information because it can parse through programming languages you might be unfamiliar with.

I wasn’t done with AI just yet. One of our researchers managed to infiltrate Hydra’s WhatsApp group, giving us access to user phone numbers and display photos to reverse-search. One of the display photos belonged to a high-profile target. Sadly, the reverse image search returned no results because half of the subject’s head was cropped out of the photo.

This was remedied by using an app that let me generate the rest of the head, the haircut, and more of the tee shirt and background with AI. But as luck would have it, even though the AI’s additions gave the photo enough detail to pass through a reverse image search, no matches were found.


One of our researchers discovered that Hydra suffers from an interesting server vulnerability. Without divulging too many specifics, an exposed endpoint allowed the researcher to observe GET requests to the server in real time, exposing a log consisting of timestamps, usernames, user IDs, which tier each user bought, how many users they had invited to the site, and so on.

This was invaluable, as it helped us better understand the structure of the criminal enterprise. Many of the usernames were also unique.


As each day passes, our researchers continue to find more mirrors. By the time this article is published, we may well have uncovered over 30 different mirrors, all hosted by the same provider. This doesn’t include all the other CSAM networks we stumbled upon, many of which exposed our investigators to red rooms and snuff content.

I wonder how we got here. How did this escalate from an epidemic into a full-fledged takeover? I have reported new samples to the IWF and NCMEC, as well as to Homeland Security Investigations (HSI). But despite the urgency with which we are working against Hydra, we do not feel the research is being heard.

After everything we learned about Hydra, and after careful consideration of the above question, I realized that its existence rests squarely on the shoulders of one common denominator: the tech industry.

Let me put it this way. The tech industry is the sole enabler of child sexual exploitation because the platforms and services being used by predators have given this activity free rein to thrive. That is why those who have failed to proactively police this content should be held liable for creating safe havens for pedophiles to operate unchallenged, without consequence.

It is time for the tech industry to begin enforcing policy and offering public transparency, as well as reporting this content to NCMEC, the IWF, and Interpol. This will ensure that legal action ensues rather than accounts simply being banned. None of this can happen without better cooperation between law enforcement and the public. We shouldn’t have to jump through hoops just to inform them that children are being abused.

