I get it. Every security researcher involved in responsible, secure disclosure while participating in bug bounties deserves recognition for their achievements. After all, the industry can be highly competitive, and many of us struggle with feeling underqualified.
Who wouldn't want to be applauded for their hard work? Finding vulnerabilities and reporting them responsibly is good, honest work. However, it raises a question: is the industry inadvertently directing the next cyberattack?
The year was 2008, and my hacking group was doing what it always did: hunting for vulnerable remote desktops. We used TSGrinder in tandem with Angry IP Scanner, glued together by a Python script that made host discovery and intrusion completely autonomous.
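The original tooling is long gone, but the host-discovery half of that automation was conceptually simple: sweep an address range and note which hosts answer on the default RDP port. A minimal, illustrative sketch using only Python's standard library (the address prefix below is a placeholder, not anything from the original script):

```python
import socket

RDP_PORT = 3389  # default Remote Desktop Protocol port


def port_open(host, port, timeout=1.0):
    """Return True if the host accepts TCP connections on the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def sweep_rdp(prefix, start=1, end=254):
    """Scan a /24 prefix like '10.0.0' and list hosts exposing RDP."""
    return [
        f"{prefix}.{i}"
        for i in range(start, end + 1)
        if port_open(f"{prefix}.{i}", RDP_PORT)
    ]
```

A tool like TSGrinder then took the resulting host list and brute-forced credentials against each exposed service; chaining a scanner into a brute-forcer like this is exactly what made the whole pipeline hands-off.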
That’s when we discovered that we had gained access to a law firm in Mississippi. Bad ideas ran through our minds as we started poking around the computer system, imagining being able to read private emails between the attorney and US prosecutors.
Maybe we would discover evidence of unethical or even illegal collusion between the law firm and the government. Or perhaps we would uncover sealed documents containing the identities of informants. We imagined endless possibilities.
One of our members suggested that we report the weak password to the attorney who used the workstation. We’d heard stories about hackers getting arrested over well-intentioned reports. However, we trusted our OPSEC, and in the end, we knew we had to do the right thing.
This is how it went down. My teammate phoned the attorney, who thought it was a prank call. He didn’t believe that someone was remotely connected to his computer and was about to hang up, until the contents of his computer were described to him in vivid detail.
Silence. Then anger.
The attorney felt deeply violated by the invasion of his privacy. He could have notified the FBI of the breach, but surprisingly, he didn’t. After he calmed down, he actually thanked my teammate for reporting it. My teammate then helped him disable Remote Desktop, which he didn’t use anyway, and taught him about secure passwords.
This could have gone very wrong. Thankfully, this time, the opposite was true.
Every now and then, we hear stories about the serious legal trouble hackers have found themselves in when they thought they were doing the right thing by revealing vulnerabilities or disclosing their intrusions on networks and web applications they were never permitted to scan or access in the first place. The key is permission.
What about security researchers who disclose their findings publicly and publish proofs-of-concept (PoC) for anyone to study and eventually weaponize?
How responsible disclosure works
Responsible disclosure, sometimes referred to as coordinated vulnerability disclosure (CVD), involves a procedure where ethical hackers and security researchers identify vulnerabilities and then report these findings to the impacted company or vendor. This way, they can address the security issues before bad actors find and exploit them for malicious purposes.
In truth, we really did not have any official bug bounty programs or responsible disclosure policies when I started my hacking journey back in the late 90s. If we reported our findings to affected companies, we had to do so cautiously. After all, law enforcement can be very zealous, especially when prosecuting technology crimes that are difficult for laypeople to understand.
A vital aspect of the cybersecurity industry is the ability to exchange research findings and work together with fellow industry participants to educate, alert, and ideally prevent or minimize potential security risks in advance.
However, here’s the dilemma. Threat actors never stop flexing their ingenuity, which in turn triggers a response from security researchers, but only after a new attack surfaces from under the proverbial radar. By design, it’s a game of cat and mouse, spy versus spy.
But there is another security risk that I want to address, and it’s simple: public disclosure.
Is responsible disclosure the enemy of security?
I know, that’s quite a controversial subtitle. Please hear me out. Responsible disclosure is vital in the fight to mitigate and remediate vulnerabilities and security exploits. That’s how industries become more protected from outside threats. But what about those cases where the security researcher publishes their PoC on social media, which is then adapted by threat actors and turned into a weapon?
This happens when companies and vendors fail to specify in their disclosure policies how a vulnerability should be publicly disclosed, or whether it should be disclosed at all, at least in the manner it is today.
In my mind, the industry should keep PoC exploits, and the roadmap detailing how a vulnerability was discovered, within an exclusive need-to-know group, such as the cybersecurity industry itself, along with a secure method of communicating those discoveries. This way, we can blunt the surge of attacks that follows when bad actors learn that a new vulnerability exists and can be exploited in the wild.
This has happened countless times in the past and will invariably continue to happen until new disclosure standards are adopted to limit public access and, by extension, abuse. Whether new standards can be governed through legislation or privately by the industry itself is another question.
When hackers love disclosure
A couple of cases come to mind. The above scenario unfolded exactly as described in the events following a private disclosure by a security researcher on Alibaba Cloud’s security team, who reported the Log4j zero-day vulnerability, later dubbed Log4Shell (CVE-2021-44228), to the Apache Software Foundation on November 24th, 2021. Despite lurking undetected since 2013, the flaw carried the potential to endanger millions of users of Apache software.
Nevertheless, once the disclosure went from private to public, the potential for attacks quickly escalated into an imminent risk for millions of affected users. The scope of the impact was staggering: it reached platforms such as Minecraft, Apple iCloud, Twitter, Cloudflare, and Steam, as well as software from VMware and many others.
Think about it – the consequences of a single public disclosure caused harm to millions of users. I believe it's an interesting contrast to the harm that ensues when cybercriminals publicly release sensitive information. The researcher in this case was not a cybercriminal, but the consequences of his actions were comparable.
The narrative concerning this domino effect continues. The exposure of this vulnerability put critical infrastructure at risk, including SCADA systems and systems handling matters crucial to national security. Reports have emerged indicating that the zero-day was also exploited in espionage campaigns and in attacks on systems affiliated with the Belgian Defense Ministry.
There are still systems out there in the wild that are vulnerable to Log4Shell, which any would-be intruder can discover and exploit using the Metasploit Framework.
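Defenders had to respond in kind. Once the exploit string was public, one of the first practical steps was simply searching logs and request data for the telltale JNDI lookup, including the obfuscated variants that appeared within days. A minimal, illustrative sketch of that kind of indicator check (the patterns covered here are a small subset of the obfuscations seen in the wild, not a complete detector):

```python
import re

# Attackers wrapped characters in nested lookups like ${lower:j} to hide
# the telltale "${jndi:" prefix; strip those wrappers before matching.
_OBFUSCATION = re.compile(r"\$\{(?:lower|upper):(.)\}", re.IGNORECASE)

# The core indicator: a JNDI lookup embedded in logged input.
_JNDI = re.compile(r"\$\{\s*jndi\s*:", re.IGNORECASE)


def looks_like_log4shell(line):
    """Return True if a log line contains a (possibly obfuscated) JNDI lookup."""
    normalized = _OBFUSCATION.sub(r"\1", line)
    return bool(_JNDI.search(normalized))
```

Grepping for indicators like this only finds probes after the fact, of course; the real fix was patching Log4j itself, which is precisely why the window between public disclosure and widespread patching was so dangerous.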
A similar incident played out in 2021, when Windows Subsystem for Linux (WSL) came under attack from the first WSL-based malware seen in the wild, built on a technique known as “Bashware.” Four years prior, the security researchers who discovered the technique had reported it to Microsoft, which dismissed the finding as non-serious because it would be too difficult for a malicious attacker to exploit in the wild.
The proof-of-concept was published and soon forgotten, until the day an attacker discovered it and weaponized the knowledge it contained, making full use of the disclosure in a malicious way.
Instances like these are more common than you might think, because vulnerability disclosure is generally neither treated nor regarded as a sensitive matter, regardless of its potential for abuse, which can have, and has had, damaging ramifications for national security.