The sudden need to work from home en masse compressed years of progress into a few months over the last year. Nowhere is this more obvious than in the adoption of cloud technology. Unsurprisingly, the hasty transition left gaping holes in security.
There is no denying that the need for cloud-based solutions skyrocketed in 2020. Remote working would not have been possible without it. According to the NTT’s 2021 Hybrid Cloud Report, over 90% of decision-makers across 13 major economies saw cloud adoption as a critical business need.
PwC estimates that cloud-related spending rose by a staggering 19% in 2020, even though the pandemic caused overall business spending to shrink. This virus-fueled ascent to the skies, however, carries its own weight.
According to Andrew Wertkin, an expert on cloud tech and CSO of BlueCat Networks, even security-savvy financial companies transitioned from on-premises conferencing to Zoom calls in a matter of weeks, and that cannot go without consequences.
“These decisions could never have been made that quickly in the past because of the amount of rigor and diligence that would be done on the security side. This trend we’ve seen over the last year opens lots of opportunities for bad stuff,” Wertkin told CyberNews.
Transition to the cloud is here to stay. Senior Leadership IT Investment Survey 2020 by CCS Insight shows that the number of businesses with more than half of their IT workloads hosted in the cloud is expected to double over 2021. For most companies, cloud and security investment categories remain at the very top.
Wertkin explained to CyberNews that a new phase is emerging, as companies need to re-evaluate the hasty decisions made over the last year. The dog days might not yet be over for those who chose an inflexible solution to match a dynamic business environment.
Cloud technology played a vital role for the vast majority of businesses during the pandemic. And it’s no surprise that sometimes sporadic transitions led to security flaws. Could you share insights on what’s been going on with the cloud recently and what significant security challenges came from the change?
I’d break cloud-related developments down into two broad categories. One is the migration of data centers to the cloud, or customers building net-new capabilities in the cloud. The other is the rapid adoption of software-as-a-service (SaaS). During the pandemic, there has certainly been some acceleration of cloud migration, but we have seen much stronger evidence of rapid SaaS adoption leading to the advent of direct internet access.
Many companies relied on all internet traffic going through a security architecture in the data center or regional network access point. And that capacity was not enough for a massive switch over to SaaS.
It caused performance issues because businesses were switching over to geographically distributed SaaS systems, yet trying to drive all of that communication back through a regional data center. That’s destined to cause performance problems, and it increases costs if you don’t start routing traffic out directly.
And when you look at some of the vendors like Microsoft 365, their historical guidance advises not to push their data through proxies because that might create performance issues. So, companies weren’t going to deploy a bunch of proxies everywhere.
What we saw was direct access to the internet for SaaS, bypassing traditional security infrastructure: in some cases for everything, in other cases just for those SaaS applications. That leads to rapid change in the security architecture and, unsurprisingly, to more vulnerabilities, opening up doors in ways companies didn’t expect.
Large financial customers went from on-premises video conferencing systems to Zoom in a month across tens of thousands of employees. These decisions could never have been made that quickly in the past because of the amount of rigor and diligence that would be done on the security side. This trend we’ve seen over the last year opens lots of opportunities for bad stuff, given that the surface area has changed so dramatically so quickly.
Even understanding all the risks involved in the quick transition, there seems to be no way back. With that in mind, how can businesses avoid security flaws during this transition?
I talked about direct internet access from offices. But companies also have people working from home and customers using their services from home. A lot of companies were trying to jam everything back through a VPN and creating additional costs.
Any decision made too rapidly is probably a bad one. A lot of vendors out there have solutions that claim to solve all of this. As far as I’m concerned, rapidly adopting those vendors’ solutions is along the same lines as rapidly adopting SaaS. You need to go through appropriate diligence, because if something looks like it does everything, that doesn’t mean it actually does.
Thinking ‘let’s jump to something new right away’ is always problematic. My advice is always to properly evaluate changes to architectures. The worst thing anybody can do right now is to deploy something inflexible. Part of cloud migration, in general, is to be able to push out and change applications and services faster than ever in the past, to be part of the digital economy, and to take advantage of digital strategies.
If companies going from architecture A to architecture B do it in such a way where it’s going to be just as hard to switch to architecture C, that’s a problem. Look for things that can enable continuous change that doesn’t lock you into a single paradigm.
At the end of 2020, several predictions claimed that we would see threat actors focusing more on Kubernetes deployment, with attacks being a lot more sophisticated than what we’ve seen previously. Do you see more dangers here?
I think we’ve reached a tipping point. The consumption and utilization of Kubernetes isn’t an experiment anymore. Two or three years ago, capable people in many large organizations were kicking the tires; now those same organizations are building production workloads on Kubernetes all the time. And many of the traditional mechanisms you’d use to find the weak spots in a server don’t exist from a Kubernetes perspective.
So how am I going to get to that now? There are many potential weak points in how companies deploy and utilize Kubernetes that aren’t necessarily Kubernetes itself. For example, companies will download containers, potentially from untrusted internet repositories, and use them directly in production. That seems scary.
What’s your mechanism to ensure that the container is doing what you want it to, and that it’s secure? When you rush to new technology, no matter how secure that technology is intended to be, you have new users rapidly adopting it. And what could go wrong with that? Well, potentially a lot.
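One common answer to that question is to pin container images by content digest and deploy only vetted ones. Here is a minimal Python sketch of that idea; the image names, allowlist, and byte contents are purely illustrative, not taken from any real deployment:

```python
import hashlib

# Hypothetical allowlist of approved image digests, maintained after each
# container has been vetted (names and contents are illustrative).
APPROVED_DIGESTS = {
    "web-frontend": "sha256:" + hashlib.sha256(b"vetted-frontend-image").hexdigest(),
}

def digest_of(image_bytes: bytes) -> str:
    """Content digest of an image blob (sha256, as container registries use)."""
    return "sha256:" + hashlib.sha256(image_bytes).hexdigest()

def is_approved(name: str, image_bytes: bytes) -> bool:
    """Deploy only if the pulled bytes match the pinned, vetted digest."""
    return APPROVED_DIGESTS.get(name) == digest_of(image_bytes)

# A tampered or unvetted image fails the check even if its tag matches,
# because a mutable tag says nothing about the content behind it.
assert is_approved("web-frontend", b"vetted-frontend-image")
assert not is_approved("web-frontend", b"tampered-image")
```

The design point is that a tag like `latest` is mutable and can be repointed by whoever controls the repository, whereas a content digest identifies exactly one set of bytes.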
There was a super interesting and novel supply chain attack developed, ethically of course, by Alex Birsan, where you could basically inject code into software development and build environments that download dependencies from the internet, in what he calls “dependency confusion”: exposing trust that shouldn’t be extended blindly.
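The mechanics of dependency confusion are simple: a build that consults both an internal and a public package index, preferring the highest version, can be tricked into installing an attacker’s public package published under an internal-only name. A minimal Python sketch of the failure mode, with invented package names and versions for illustration:

```python
# Simulated package indexes. The attacker publishes a package on the
# public index under the same name as an internal-only dependency,
# with a deliberately higher version number.
internal_index = {"acme-billing": "1.2.0"}
public_index = {"acme-billing": "99.0.0", "requests": "2.31.0"}

def naive_resolve(name):
    """Merge both indexes and prefer the highest version number —
    the behavior dependency confusion exploits."""
    candidates = {}
    for index in (internal_index, public_index):
        if name in index:
            candidates[index[name]] = index
    version = max(candidates, key=lambda v: tuple(map(int, v.split("."))))
    source = "public" if candidates[version] is public_index else "internal"
    return version, source

def safe_resolve(name):
    """Pin internal names to the internal index only, regardless of version."""
    if name in internal_index:
        return internal_index[name], "internal"
    return public_index[name], "public"

# The naive resolver pulls the attacker's 99.0.0 from the public index;
# the pinned resolver sticks to the internal 1.2.0.
assert naive_resolve("acme-billing") == ("99.0.0", "public")
assert safe_resolve("acme-billing") == ("1.2.0", "internal")
```

Real package managers have analogous knobs; pip, for instance, distinguishes `--index-url` from `--extra-index-url`, and mixing trusted and untrusted indexes without pinning recreates the naive behavior above.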
There’s a lot of moving parts with complex technology. And the question is whether companies monitor what’s being pulled down, or do they operate with an assumption that it’s somehow secure? I think that there’s going to be a lot of engineering on how to attack Kubernetes that won’t be limited to simply container downloads.
Some experts claimed 2021 would be the year of the serverless takeover. And there are, of course, many architectures moving towards serverless, but would you agree that’s the case? Is the market ripe for an actual takeover?
From my experience, working with our customers and understanding how they’re utilizing the cloud, we see serverless being used more and more. And it’s not surprising given the number of serverless capabilities in the public clouds. Why would I maintain my own relational database system when I can use a service, and I’m not managing the server for that? Kubernetes itself is often deployed in serverless paradigms.
Why would I deploy compute nodes when I can deploy a function that’s running somewhere else? There are lots of excellent examples of it, and we see more and more of it. I don’t know if this is the year, though. I would imagine serverless platform services are one of the significant ways cloud companies compete.
There are plenty of ways, but one is platform services, and more of these platform services are serverless. I would imagine you’re going to see a lot more innovation there. However, they can be frustrating to use: if that serverless capability meets your requirements, fantastic. But if it doesn’t, you end up having to contort your applications to work with it.
So, there’s still a trade-off there. If I’m deploying my own database, for instance, I need to pay for VMs running, and I need people to administer it and make sure it’s healthy. If I’m using a serverless database service, I don’t need to do that stuff.
However, it’s not like it’s cheap. Somebody’s still doing that work even though it’s the cloud platform. And they’re going to push that cost forward to you. I think this will be a bellwether year given how much has been moved to the cloud and companies demanding more understanding and transparency in how the service is charged.
They currently give you an extreme amount of raw data, but you often need experts to understand it. There are entire cottage industries, and software systems out there, trying to take all of these minor granular charges and build them up into something you can understand.
I think the more stuff gets moved to the cloud, the more people will realize that they should not have to hire consultants or buy other software to figure out why they’re spending $5 million a year. And right now, it is way too hard to figure that out.