Besides crippling businesses, cybercrime also fuels far crueler crimes, such as human, weapon, and drug trafficking, and sex slavery.
In the last two years, fraudsters, cartels of all shapes and sizes, and state-sponsored hackers have tarnished the online world. But a crime has been a crime since Cain hit Abel in the head with a stone, so the tide of cybercrime rising as fast as COVID case counts hasn't surprised Chris Pogue much.
Ransomware, distributed denial-of-service (DDoS), cryptojacking, and business email compromise (BEC) have flourished under the unguarded shift to digital technologies during COVID-19.
With 13 years of military leadership experience and continuing membership in the US Secret Service Electronic Crimes Task Force (ECTF), Chris Pogue, Head of Strategic Alliances at Nuix, believes that the COVID era will be studied for a whole generation to come.
We sat down with Chris Pogue to discuss the recent struggles in cybersecurity and looked for a silver lining in today's gloomy cybersecurity landscape.
Many experts I talk to agree that this last period, with COVID and everything else, is pretty much the most intense they've seen in their careers. What do you think? What has surprised you the most recently?
It depends on how you define the period. I don't think that there's a fundamental shift. A crime has been a crime since Cain hit Abel in the head with a rock. Technology has changed and advanced, but I think the human element behind it remains the same.
I worked for a number of years inside the US Secret Service. When Hurricane Katrina hit, there were mountains and mountains of fraud, waste, and abuse. Fraudsters will seize any opportunity to distort the truth and get people to fall victim to different schemes. COVID is a global Hurricane Katrina, if you will, with everyone in the world impacted to some degree. It just creates an environment that is ripe for fraud.
People want to believe that others are doing the right things to help the world through the global pandemic, and it's just not the case. Fraudsters are good at putting up false websites asking you to donate, dressed up as an opportunity to help people out. I think that is consistent, and we see it every time there's a pandemic or an incident around the world that people are responding to: fraud comes along like a tail.
What makes COVID so unique is the size and the scale. One of my buddies inside the Secret Service recently said, 'we will be investigating this for the next generation.' Hurricane Katrina hit in 2005, it's been almost twenty years, and we are still investigating it. So COVID-related waste, fraud, and abuse, or data theft due to the pandemic, will be an entire generation of investigations. Thirty years, at least. So that's new.
The vectors are the same. Much of it is based on laziness: corporate hygiene not taken seriously, forgotten, or not seen as a priority. Organizations were trying to figure out how to stay in business and go completely remote. 'If I have to sacrifice security for functionality, I'll do that, because then at least I still have a business; I can take care of security later.'
A lot of executives think of it as more of an insurance policy anyway: if something happens, then we'll deal with it. But the immediacy of dealing with the pandemic trumped everything else. That created a perfect storm of means, opportunity, and a distracted populace worrying about COVID and staying in business, not about security.
According to Deloitte, around 35% of malware responsible for breaches is unknown, which means it takes a long time to detect. The average time to detection is 14 weeks. It seems that cybersecurity experts are operating in the dark. Do they know what they are dealing with?
When it comes to malware, it's always interesting. I'm a certified reverse engineering analyst, so I've been dealing with malware for 20 years or so. It depends on how an organization is set up to detect. In the past, you looked at hashes. That's basically what antivirus uses: a fingerprint. But it has to be an exact match. If it's not an exact match, then it's not a match. Then you have fuzzy hashing, or context-triggered piecewise hashing, which measures how similar two files are as a percentage. I can take two files, presumably two malware variants based on the same source code, and say that those two variants are 70% similar.
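The contrast Pogue describes can be sketched in a few lines of Python. This is a deliberately simplified illustration, not how production tools work: real fuzzy-hashing tools such as ssdeep use context-triggered (variable-length) chunk boundaries, while this toy version uses fixed 64-byte chunks and a Jaccard similarity over chunk hashes. The "malware variants" here are just synthetic byte strings.

```python
import hashlib

def exact_fingerprint(data: bytes) -> str:
    """Classic AV-style signature: one hash for the whole file.
    Changing a single byte yields a completely different digest."""
    return hashlib.sha256(data).hexdigest()

def piecewise_hashes(data: bytes, chunk_size: int = 64) -> set[str]:
    """Hash the file in fixed-size pieces. Tools like ssdeep use
    context-triggered boundaries; fixed chunks keep the sketch simple."""
    return {
        hashlib.sha256(data[i:i + chunk_size]).hexdigest()
        for i in range(0, len(data), chunk_size)
    }

def similarity(a: bytes, b: bytes) -> float:
    """Percentage of chunk hashes the two samples share (Jaccard index)."""
    ha, hb = piecewise_hashes(a), piecewise_hashes(b)
    return 100.0 * len(ha & hb) / len(ha | hb) if ha | hb else 100.0

# Two hypothetical "variants": identical 640-byte core, different payloads.
core = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(20))
variant_a = core + hashlib.sha256(b"payload-A").digest() * 2
variant_b = core + hashlib.sha256(b"payload-B").digest() * 2

# Exact fingerprints do not match at all, yet the shared core is visible.
print(exact_fingerprint(variant_a) == exact_fingerprint(variant_b))  # False
print(f"{similarity(variant_a, variant_b):.0f}% similar")
```

The point of the exercise is the asymmetry: to the exact-match fingerprint the two variants are completely unrelated, while the piecewise comparison still reports a high similarity because most chunks are shared.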
Then you have behavioral analytics, which looks at what the thing is doing. Forget the source code, forget fingerprints: what is it doing? Is it harvesting data? Is it opening up remote access? Does it have some malicious function?

The last bit of that is: what is my ability to detect it? Purchasing a detection application or establishing a defensive posture is only as good as its ability to execute. You can spend all the money you want on all the latest technologies, but it has to do what it was designed to do. Which means it has to be instrumented and fine-tuned. And the only way to do that is with real-life scenarios. It used to be called the red team and the blue team. The red team was the bad guys, and the blue team was the good guys. They would attack each other, and then you would take notes and say, 'well, where did we screw up? Let's do it again.' This is born out of the military and is called training as you fight, or opposing forces. Now you have a blending of the blue team and the red team, saying, 'here's what I'm going to do to attack you, here's a malware variant that I'm going to detonate inside your environment. You tell me what you see.'
If you don't go through that instrumenting and fine-tuning exercise, you rely on the vendors to provide you with out-of-the-box capabilities to detect all these different variants, which no vendor is going to do. The success criterion changes from 'I'm gonna buy something' to 'I'm buying a tool that needs to be fine-tuned so that I can defend against these attacks.'
You know the attackers are doing it. I 100% guarantee they are not sitting back, creating one variant of a malware strain, and deploying it. They are creating hundreds, and so for every one you detect, there are 99 you don't.
What antivirus applications can detect is the easy stuff. The real questions are what you aren't seeing, why you don't see it, and how you continue to evolve your detection. At the end of the day, malware is just a program. It has instructions, it's written like anything else, and the more you understand how it works, the easier it becomes to detect when it's resident within a network. Not easy, just easier.
I guess it still comes back to the resources an organization might have. Universities, hospitals, and local governments are less equipped to fight cybercrime, right?
Typically, there's a concept that you don't spend $100 to defend $10 worth of data. Financial services is where you usually have the strongest security controls and the greatest capability. This goes back to the American bank robber Willie Sutton and a quote that, although probably apocryphal, is still attributed to him: 'Why do I rob banks? Because that's where the money is.'
Why have financial services historically been the most targeted of all business verticals? Well, that is where the money is. As you move into other areas where commerce is strongest, it has historically been the big three: food and beverage, hospitality, and retail. Those organizations are not necessarily focused on security. They focus on providing services, and that's where their competitive edge comes from: better services, lower prices, a better shopping experience, a better customer experience.
That extrapolates across other business lines. Whether you are in legal services or healthcare, your focus is on providing legal services, healthcare, surgery, or higher education, not on being a cybersecurity professional with a solid defensive posture and mechanisms in place. I think you are going to see, and I wish we saw it sooner, the service model. You are not going to worry about hiring an army of cybersecurity professionals. I'm going to sign a contract, have a monthly bill, and pay someone else to do it. That's their expertise. That's what they do. It's no different than having a contractor for your electrical services, plumbing, or trash. Let them be experts in that area; don't try to replicate the functionality. It just doesn't make a whole lot of sense. I think we will see more and more of that as organizations throw up their hands and think, 'I can't do this. I can't provide higher education services and provide the level of cybersecurity necessary to protect my customers' data.' You'll see those professional services provided by Deloitte, EY, and other such organizations become more and more prevalent as a managed model.
Will this, in any way, make the cybersecurity skills shortage a minor problem? If security is provided as a service, and companies and universities don't try to build in-house capability and hire professionals, we are probably not going to fill those four million vacant positions in the cybersecurity industry.
I don't think so. I think what you have is a classic model. You have a need, and you have a populace that has to catch up with the demand. In higher education, institutions have to provide the level of education necessary to get their students up to that 'you must be this tall to ride this ride' baseline of expertise, and then graduate them into the market. It's kind of like doctors and lawyers. You have baby lawyers and baby doctors in their first year out of law school or medical school. It doesn't make them experts. It just means they have the basic level of education necessary to perform within that profession.
I think we will see more of that, and we need to be careful not to call someone with a degree an expert. Many people with degrees aren't experts, because you have to think about how you measure that. Simply having a degree means you have checked all the boxes, passed all the tests, and have a baseline understanding, whether in cybersecurity, teaching, medicine, law, or marketing. Just because you went to school and have a degree in marketing or journalism doesn't make you Malcolm Gladwell. It just means you have a degree. Good job, you've passed all the tests; now you have to prove yourself in the market.
We need to see the paradigm shift to 'yes, you have a degree, that's awesome; now is when the real learning starts, and a real education about fighting the real enemy.' In academia, you were studying theory. Now you are studying theory in practice, and it's going to change, and you have to be mobile and nimble because it's an evolving threat. If it were a constant threat, we would have solved it 30 years ago. But it's not, so you have to be able to reinvent the way you look at attack vectors. You have to reinvent your approach to investigations because you are fighting a dynamic adversary. Without that level of understanding, the ability to be flexible, and the capacity to think of new and innovative ways to deploy, detect, react, and respond, you are just going to get stuck fighting a modern battle with 20-year-old approaches and strategies. It is not going to work. I think we will still have a shortage for a while.
Is there any way to lure cybercriminals into joining the good guys? Insiders would definitely help fight cybercrime. It seems that the cybercrime world doesn't lack talent and skills.
I don't think so. It would be great, though. The return on investment is just so high, and some of these criminals are making so much money. They are seen as heroes, taking from the evil West. It's almost a Robin Hood concept.
We went to Vegas and interviewed a bunch of hackers at DEF CON. And it was nothing groundbreaking. It was 'we find an open window,' or 'your heavily protected systems, we are not going to attack those,' or 'we are not going to create a user ID called hacker; we are going to do what's called going native, assuming the credentials of an insider.'
We say insider threats, but every threat is an insider threat. Even if they start outside, once they get inside, they are behind the firewall with assumed credentials. We say that they've gone native, and they will stay low and slow. Some of them maintain persistence for years.
Luring them away from that is hard. Attribution is challenging. You have to invoke a mutual legal assistance treaty, or collaborate with local law enforcement, to apprehend individuals within the bounds of their countries. Sometimes they live in non-extradition countries. And then you have to prosecute, win the case, and then they have to be sentenced. Sometimes the sentence is a slap on the wrist, and sometimes they get ten years. You are rolling the dice depending on the country's jurisdiction, how much money they stole, things like that. And so it's inconsistent.
There's very little reason to come over to the other side and a whole lot of reasons to stay a criminal. It would be great to have more on our side; we could use that knowledge. We should embrace the hacker community a bit more, the ones that are white hats, or grey hats, as they call themselves. The methodologies they use are very similar to what we do in pen-testing. Their motivation is different, but the tools they use and how they use them can be replicated, and we can learn a lot.
Not many threat actors get arrested, and there are many reasons for that. But when I talk to law enforcement, they complain that some laws are simply tying their hands. For example, encrypted chat apps, besides being instrumental in uprisings, have become a marketplace for illicit goods, yet there's nothing law enforcement can do. Privacy is valued much more; it seems to be a priority over fighting cybercrime. What do you make of that?
There's a great quote from one of our founding fathers, Benjamin Franklin, and I'm paraphrasing: 'those who would sacrifice liberty for security deserve neither.' Privacy and security are inextricably linked; you can't fully have both. Do we want privacy? Absolutely. Privacy is important to everybody. But I also think there's common sense. The privacy of suspected criminals does not trump the security of the entire populace.
Hopefully, we will see an evolution of understanding, where we agree that, yes, privacy is super important. If someone is suspected of committing a crime, we will work with law enforcement to peel back some of those layers and give them the information they need to pursue.
That's going to take time. It's still new, and we still have entire uprisings about privacy. It's a very personal issue. I think certain things trump privacy, such as the apprehension of pedophiles, people dealing in human trafficking, weapons, and drugs, and those supporting terrorism. You forfeit your privacy when you start to engage in those activities.
But the trust we have to place in governments is that they only use that information for law enforcement. I don't think that trust is there. It doesn't mean it can't ever be. If the populace said, 'yes, I trust privacy in the hands of law enforcement in pursuit of suspected criminals,' I think most people would be ok with that. But that trust is not always there, or it's there in varying degrees, so I think it will continue to be a struggle. Unfortunately, law enforcement will struggle with it because they want to do the right thing and catch the bad guys, and there are always political hurdles they have to jump to do that.
What do you think of the global cybercrime treaty that Russia has proposed? Russia is often blamed for turning a blind eye to cybercrime gangs as long as they don't target Russian entities.
It's hard to tell. I studied Russian in college, so I speak Russian. I studied history, so I know we were partly at fault for going back on our word with Stalin after WWII. Sometimes we created the problems we find ourselves in. I think treaties are ok. Nothing bad is going to happen as a result of a treaty.
The Russian people I have met, worked with, and known throughout my career are like everybody else. They don't like being labeled as criminals, and they don't want their country labeled in some adversarial way. I don't think there's any wisdom in perpetuating that.
Hopefully, what comes from this is more joint collaboration among like-minded people who want to stem the tide of cybercrime because of its results. One really important front is understanding the full spectrum of the crime continuum.
It doesn't end with cybercrime. You have to understand what happens with the money made as a result of cybercrime. It fuels other types of crime. It's a cash cow for buying and selling weapons, and for selling human beings, children, and young girls into sex slavery, all the horrible crimes that follow from it.
It's not just that we are stopping hackers from breaching firewalls and exfiltrating data. We are cutting off the funding for other, much more heinous crimes. We all need to wrap our heads around that, look at the end game, and say: this is why we are collaborating.
To what extent can treaties, international agreements, and executive orders help tackle cybercrime? Would it be more effective to focus on education, since humans are always blamed for being the weakest link in cybersecurity?
Having treaties and discussing them in a political setting is good, because it signals that this is an issue everyone realizes needs to be addressed. In isolation, no, it's not going to do anything.
Like any strategy, it all boils down to execution. You can talk about plans all you want, but without execution, all you have is a dream. We need to take it from 'we discussed it at the highest levels of government, we have international treaties' down to 'what are the steps we need to achieve the objectives laid out in the treaty or the political decisions?' That's where the emphasis should be.

Education is paramount. Spending money on the right things in the right areas is paramount. Training the folks with their fingers on the keyboard, fighting the battle, is paramount. So is treating it as a dynamic fight rather than something static: how are you training, how are you configuring your systems, and are you constantly evolving, looking at the threat landscape as something ever-changing rather than constant?

If I can't stop it, how do I detect it? If I can't detect it, how do I speed up detection? Once I've detected it, how do I contain it? Once I've contained it, how do I investigate it to understand how they got in, how they moved, what they touched, and what they took? And then: is there a law or a treaty that's been violated? Then prosecute, until there are enough teeth behind the regulations to compel organizations to do these sorts of things, and enough teeth behind the ability to arrest and prosecute these criminals.
That should be the next phase of the political agenda, not 'I'm going to tell you what you need to do.' $1.9 trillion is spent on compliance in the security world, just doing the things that everyone tells you you should do. Which is great, but what if you did those things without big brother telling you that you had to? Then you wouldn't spend so much trying to comply, because you'd be doing the right things to get the right outcome. It's a bit backward right now. Unfortunately, unless you tell businesses 'do these things,' they just think, 'eh, what's the worst that can happen? I weigh that against what it will cost me to fix, and it's just not worth it. Spending $100 to protect $10 worth of information is not worth it; I will pay the fine.'