Hackers continuously risk jail time, with little attention paid to whether they’re trying to protect or attack the internet.
The term “hacker” has come a long way, from carrying a heavy social stigma to drawing less judgment and more natural curiosity about what these actors are capable of. Yet even today, the law does not always look upon them favorably.
Back in 2019, a Hungarian ethical hacker who exposed vulnerabilities in a public utility’s system faced eight years in prison for “committing the crime of disturbing a public utility."
Similarly, ethical hacker Joshua Crumbaugh was banned from Twitter after openly demonstrating how easy it is to create a spam bot.
This goes to show how reluctant tech giants and many smaller companies are to accept help from outsiders, especially when it’s unsolicited. It also exposes gaps in the legal system, which does little to differentiate between hackers’ tactics and their intent. At the same time, progress is being made, with the Department of Justice (DOJ) announcing that it will no longer subject ethical hackers to criminal charges. But are we quite there yet when it comes to the security and protection of “the internet’s locksmiths?”
Cybernews sat down with Casey Ellis, founder and CTO of crowdsourced security platform Bugcrowd, to learn more about the role of ethical hackers in today’s cybersecurity niche, and discover what the law really thinks of “hackers in good faith.”
There are different types of hackers – blue-hat, white-hat, grey-hat, red-hat. Do they all belong to the same category of ethical hackers?
The thing they have in common is that they all hack. The differences come down to intent and impact – mostly on the receiving side. Every one of them is involved in identifying weaknesses in computers or, even outside the cybersecurity context, making a computer do something it was not necessarily meant to do in the first place.
Is ethical hacking considered a crime then?
I love this question. Ethical hacking is not considered a crime, since the law doesn’t have the concept of “ethical.” It’s more the fact that most of the laws written historically codify hacking in terms of unauthorized computer access and basically say: “If you do this, you’re probably a criminal.” They presume that if you’re doing something bad to a computer, it’s probably for a bad reason, so we can safely assume you’re committing a crime. That is not true, because ethical hacking involves all the same activities, yet with it, you’re not only not committing a crime but preventing one from happening in the first place.
This is actually one of the reasons why I started Bugcrowd. One reason was to combine the resources and the creativity that exists in the hacking community with all the problems that need to be solved in cybersecurity. The other was to keep my buddies out of jail because there is a long history of ethical hackers getting prosecuted, taken to court, or, at the very least, getting chilled by the potential threat of their actions being misinterpreted as evil.
I view hackers as functioning as the internet’s immune system. Meanwhile, what we’ve traditionally had is an internet autoimmune problem, where we reject the input of the folks who can actually identify “the sickness” and treat it.
Why should organizations opt for ethical hackers?
The reality is, if they’re on the internet, they’ve already opted for hackers of all sorts, including the unethical and ethical kinds. Using laws as a way to deter bad things from coming in off the internet is like yelling at a thunderstorm and asking it not to hit you with lightning. It’s not listening to you, and even if it could hear you, it probably wouldn’t care. What you could try to do instead is put up a lightning rod – even if the lightning does strike, you channel it to a place where you can deal with it. That is exactly analogous to companies receiving security feedback from the internet.
Ultimately, the opting side of it is becoming less and less voluntary. There are regulations coming in now around vulnerability disclosure adoption. And then you’ve got things like the CISA Binding Operational Directive that actually requires federal civilian agencies to opt in to getting feedback from ethical hackers.
So there is a lot of pressure to do it, which comes down to how the internet works. It’s a matter of people recognizing that and starting to work with it rather than assuming that if they ignore it, it will go away.
Vulnerabilities are a product of the fact that humans write code. Writing software, deploying systems into environments – humans are incredible in their capacity to create, but we are not perfect. And the internet itself amplifies the things that we get wrong. So the necessity comes from humans being the ones building the internet. And the computers can’t find all the ways in which we can get things wrong, which is where the ethical hacking community comes in.
How safe is it to work with ethical hackers? Could someone use their position only to later exploit the company’s vulnerabilities?
People are doing it anyway. Assuming otherwise is far more dangerous than what we’re proposing here. For the first three years of Bugcrowd, we had to put a lot of time into introducing the concept of a digital locksmith. Both a locksmith and a burglar can break into your house and steal your stuff – and here, we are talking about what they’re capable of, not so much about what their intent is. I feel like we’re a lot further along in implementing that idea, although people can still make decisions out of fear. But ultimately, they should realize that it doesn’t matter: if a burglar wants to rob your house, they’ll do it anyway. It’s important to get ahead of that and make it as difficult for them as you can possibly manage.
In turn, what are the dangers of being an ethical hacker?
That’s changed a lot, as well. Initially, there was a lot of social stigma you had to manage. Hacking prior to 2015 was almost strictly counter-cultural, where if you identified as a hacker, it was almost you and this community against the world since no one understood what we were actually trying to achieve here.
I think that’s a lot less true today, partly because cybersecurity itself has become a concept the average person is at least aware of. So it’s become much easier to explain what ethical hackers are here for. The other side of it is companies like ours, which reassure everyone that it’s safe and that they probably need it.
Ethical hackers see it, and it’s becoming progressively less dangerous to practice hacking in terms of staying on the right side of the law. I think there’s still a lingering concern around the legal side of things and, to some degree, around the social stigma you mentioned before, but that’s continuously decreasing as the importance of this work becomes more obvious.
How far can ethical hackers go to still be considered “ethical?” For example, if one hacks a company without their prior knowledge only with the intention to help them find vulnerabilities, is it still considered ethical?
Navigating hacking laws has become a little bit like jaywalking. In itself, jaywalking is a misdemeanor in many places, but if you do it, you’re relatively unlikely to face charges unless you do it in a way that is extremely disruptive.
Ethical hacking works in quite a similar way. When companies receive unsolicited input from people on the outside that’s actually valuable, someone might have the reaction of hearing their baby called ugly for the first time. It’s a thing that’s going to happen, and having the humility to recognize that allows organizations to prepare themselves.
Another thing is that researchers can be obnoxious. Perhaps they’ve stepped beyond the actions that would make it clear they’re operating in good faith. There are many variations of that, such as security researchers making a lot of noise on Twitter while they’re trying to talk to the company and get the problem resolved. Having empathy helps make things work a lot better.
So what makes a hacker ethical? It’s really a sort of Hippocratic Oath – I’ll just do everything I can not to cause harm in the process. There are limits to that because sometimes you have to push to get people to pay attention. Again, as a hacker, you need to understand what the laws are and make sure you’re aware of any you might be technically violating while trying to make the internet a safer place. And in general, the term “ethical hacker” implies that there is a consensus on what the good side of things actually is, and there isn’t. It’s fluid at this point in time.
Which term would you recommend instead?
It really goes back to the question surrounding hats. This whole idea of being able to bucket people into “good” or “bad” and use it as a framework for how we do computer security just doesn’t work. The web moves too fast for that. Hacking in good faith would be appropriate to describe benevolent hacking.
How can one protect themselves against hackers?
If you can’t beat them, join them.
It’s important for organizations to recognize that there is feedback available to them. The fundamental idea behind Bugcrowd is the notion that one person as an employee of a company trying to outsmart all of the potential adversaries out there will fail. It doesn’t matter how skillful they are.
Accessing that army of allies available out there just makes sense. My recommendation, at the very minimum, is to look into vulnerability disclosure policies and get your organization aligned around the fact that this is a good thing to do. Being vulnerable is inevitable. It’s not good or bad – it’s about how we respond to it.
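Ellis doesn’t prescribe a specific mechanism here, but for organizations wondering what the bare minimum of a vulnerability disclosure setup might look like in practice, one common starting point is publishing a security.txt file (RFC 9116) at /.well-known/security.txt so researchers know where to send reports. The sketch below is illustrative only – the contact address and URLs are placeholders, not anything Bugcrowd or Ellis specifies.

```
# Example /.well-known/security.txt (fields per RFC 9116)
# All addresses and URLs below are hypothetical placeholders.
Contact: mailto:security@example.com
Expires: 2026-01-01T00:00:00Z
Policy: https://example.com/vulnerability-disclosure-policy
Acknowledgments: https://example.com/security/hall-of-fame
Preferred-Languages: en
```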