© 2021 CyberNews - Latest tech news, product reviews, and analyses.


Dealing with COVID ‘hangover’ in the cloud: costs of rapid transition


There’s no longer a need to convince businesses to adopt the cloud. The key question in 2021 and beyond is how to make the transition smooth. The pandemic accelerated the move last year, but rapid decisions may have left some companies with a ‘hangover.’

The world after the pandemic will be different in many ways. For one thing, a lot more companies will have adopted cloud technologies, and many more will be familiar with going cloud-native.

A forecast by the International Data Corporation (IDC), a market intelligence company, claims that a staggering 90% of new enterprise applications will be cloud-native by 2022. The speed of the transition is illustrated by the growing number of cloud-native startup acquisitions by market giants like Cisco and Palo Alto Networks.

The transparency part comes because when you stop knowing where it runs, you lose control from a cybersecurity perspective and cost perspective. Then that’s a problem,

Laurent Gil.

Going cloud-native, however, does not simply mean outsourcing storage. To fully transition, companies need to adapt to new architectures and security measures, and to demand transparency from cloud service providers.

According to Laurent Gil, co-founder of cloud optimization platform CAST.AI, the rapid transition to the cloud meant that companies were cutting corners, as illustrated by acquisitions intended to mitigate some of the newly exposed risks. The world, however, is moving towards adopting cloud-native tech, and businesses globally will have to adapt.

“It’s great that you have a lot of tech companies that accelerated the move to the cloud. But what I call the ‘hangover’ is the fact that they did it very fast. And when you do something very fast, you cut corners,” Gil told CyberNews.

We discussed why the whole world has decided to transition to the cloud so rapidly, what challenges the transition poses, and what transparency businesses will need to seek to stay afloat in a world moving off-prem.

I’ve read predictions claiming that by 2022, 90% of new enterprise applications will be cloud-native. And I can’t help but wonder why now? Why do we find ourselves in such a rapid transformation at the moment?

It’s not just the technology. It’s a combination of technology and what’s happening in the world. The impact of COVID meant that many companies accelerated the move to the cloud, and they did that because they started to realize that it is a lot more efficient if someone else is taking care of your computing. Among other reasons, it’s more efficient because there’s more safety.

For example, if you have your own data center and there are issues with COVID, it is harder to send a technician to fix it. I think the idea that you let someone else take care of your infrastructure is what gave the boost now. The second element is technology. At the same time, we have great maturity in the cloud-native techniques for moving to the cloud.


It’s also partly coincidence. Every five years, a new technology booms. In this case, the boom was Kubernetes and cloud-native techniques. The key is not to view the cloud as just a bunch of memory and storage. Because if you think of the cloud like this, you will end up paying more by moving to the cloud than by running your own data center.

If you move to, let’s say, Amazon, you have to pay for the machine, and you also have to pay for the service they give you. So, if you only use memory and storage, you are not moving anywhere but replacing one expense with another, and most likely spending more than before.

Entirely moving to the cloud means using cloud services and architecture. One part of that is Kubernetes. It’s not the only one, but it’s an important one. It’s the idea that you can deploy applications very simply using containers. It’s easy, simple, efficient, and fast to deploy applications on the cloud. 
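To make the container-deployment idea concrete, here is a minimal sketch of a Kubernetes Deployment manifest built as a plain Python dict (the equivalent of the YAML you would hand to `kubectl apply`). The application name, image, port, and replica count are illustrative placeholders, not anything discussed in the interview:

```python
# Sketch of a Kubernetes Deployment manifest (apps/v1) as a Python dict.
# All concrete values here (name, image, port, replicas) are made-up examples.
def make_deployment(name: str, image: str, replicas: int = 2) -> dict:
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector tells the Deployment which pods it manages.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {"name": name, "image": image,
                         "ports": [{"containerPort": 8080}]}
                    ]
                },
            },
        },
    }

deployment = make_deployment("web", "example/web:1.0", replicas=3)
```

Once a manifest like this is applied, the cluster takes over scheduling, restarting, and scaling the containers, which is the "easy, simple, efficient" deployment Gil describes.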

To answer your question, exploding interest is related to COVID, but it’s also the fact that now we very well understand what it means to have cloud-native applications. And businesses now know and understand the techniques and capabilities that make this movement easier than a few years ago.

Over the last year, there were significant acquisitions of cloud-native startups by Cisco, VMware, Palo Alto, and SUSE. Does that mean there’s no longer a need to convince service providers to utilize the cloud? 

I call it the COVID hangover. It’s great that you have a lot of tech companies that accelerated the move to the cloud. I think the world will be a better place because of it. But what I call the ‘hangover’ is the fact that they did it very fast. And when you do something very fast, you cut corners. 

When you move to the cloud, there are a few mistakes that we all made. And therefore, companies decided they would do something that would prevent or at least mitigate these mistakes. That was a great idea two years ago, but right now, it’s a great business. 

When you move to the cloud, there are a few mistakes that we all made. And therefore, companies decided they would do something that would prevent or at least mitigate these mistakes,

Laurent Gil.

I’ll give you an example. Whatever cybersecurity system you were using on-premise, a newly added cybersecurity platform will not be the same on the cloud. You’re not looking for the same issues. If you move too fast, then you forget that you have to morph into what the new cloud-based security platform should look like. Then you have the hangover that says you have to adopt new practices in a hurry. 

That’s why you see many acquisitions because the companies try to add these services in a hurry because they know that whatever they were doing in the on-prem world did not replicate very nicely on the cloud. It’s not that the hackers have changed. It’s more that the surface of the vulnerabilities has changed. There’s just a very high speed of adoption.

Cloud-native supporters, such as CNCF’s Liz Rice, predict the merger of DevOps and DevSecOps to mitigate problems with cloud safety. What’s your take on cloud-native security, and what developments do you see in the near future?

The reason for DevOps and security to move closer together is the realization that the place where you manage and configure your infrastructure should also be the place where security lives; it’s essential to performing the job. That’s why some people add the ‘Sec’ in DevSecOps.

The general idea has always been to automate processes to eliminate misconfiguration mistakes made by teams and developers. I think the realization was that security has to be embedded in the place where you configure and manage your infrastructure.

Another idea is that security must be automated to the same degree that adding or deleting resources on the cloud is automated. The thinking goes that when it’s automated, it will follow guidelines. And if you do it smartly, the guidelines cannot be changed by a human. Then, effectively, you have good practice.
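As a toy sketch of this guardrail idea, assume every requested cloud resource is validated against a fixed set of policy rules before it is created; the rule names and resource fields below are hypothetical, chosen only to illustrate the pattern:

```python
# Toy sketch of automated security guardrails: resource requests are checked
# against fixed policies before creation. Rules and fields are hypothetical.
POLICIES = (
    ("no public storage buckets",
     lambda r: not (r.get("type") == "bucket" and r.get("public"))),
    ("disks must be encrypted",
     lambda r: r.get("type") != "disk" or r.get("encrypted", False)),
)

def validate(resource: dict) -> list[str]:
    """Return the names of all policies the requested resource violates."""
    return [name for name, ok in POLICIES if not ok(resource)]

# A publicly readable bucket is rejected before it ever exists.
violations = validate({"type": "bucket", "public": True})
```

Because the check runs inside the automated provisioning path, a human cannot skip it, which is exactly the "guidelines cannot be changed by a human" property described above. Production systems express the same idea with policy engines rather than inline lambdas.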


There’s a lot of focus on the DevOps approach in developing a cloud-native ecosystem. However, a report by the EMA claims that broken cooperation between cloud and local teams is a crucial hindrance for a successful transition to a cloud-native architecture. Do you think it’s possible to align the two teams that might be somewhat hostile towards each other?

There is a big difference between companies born on the cloud and companies moving to the cloud. There is a significant cultural shift that is a big change for teams of developers in adopting cloud-native techniques or trying to replicate what they were doing before in the cloud. 

Typically, we see that whenever companies try to replicate what they were doing before, the outcome is not great. You cannot force it. There is even a term in the industry: ‘lift and shift.’ When you lift an application from on-prem and shift it to the cloud, that’s the worst way of doing it, because you don’t benefit from cloud services.

You take whatever you had and make it run on some provider’s system instead of your own machines. There are zero improvements; the only outcome is an increase in cost. The industry has since started to change the term to ‘move and improve.’

That means you’re moving your on-prem workload and improving it as you move into the cloud. I think that’s the realization that ‘lift and shift’ is the same as doing nothing. Improving is the only way to benefit from cloud services.

I think the developers who embrace cloud-native techniques will end up being a lot more efficient. But there is this education you have to do, and educating is not easy. Still, it’s hard to avoid the direction of history. Five years ago, people would refuse to move to the cloud because they didn’t own the machines their operations ran on. Now there is no battle about that. There’s no need to make everybody like the cloud.

Five years ago, people would refuse to move to the cloud because they didn’t own the machines their operations ran on. Now there is no battle about that,

Laurent Gil.

Market insiders point to a lack of pricing transparency as a critical hindrance for an even more rapid transition to the cloud. Do you see businesses becoming more demanding towards service providers? And if yes, how can they leverage transparency?

In my opinion, transparency is clearly where we’re going. The developer is only interested in having an application that runs nicely. That’s the goal. It doesn’t matter where it runs, as long as it runs nicely. And if it runs the way you want, why do you care where it’s operated, how many machines it takes, how many CPU cores, or how much memory? Why should you even need to know how many of these things it takes to run your application?

The transparency part comes because when you stop knowing where it runs, you lose control from a cybersecurity perspective and a cost perspective. Then that’s a problem. So, there must be a proper balance. So far, nobody has worked out what that balance should be.

Knowing that these services exist and that I can use them is excellent. But on the other hand, you have to ask yourself: what am I using? How much does it cost? Where is it located? Is someone else using the same service and the same hardware as I am? That’s a very important cybersecurity question.

Just as this lack of transparency has consequences for security, it also affects costs. The problem we see now, part of the ‘hangover’ I mentioned earlier, is that you receive 80-page bills. They list things you didn’t even know existed. You’ll have some obscure load balancer. And why do you have it? Because you are using the service.

The issue for the user is that they no longer have any idea what they’re paying for. It’s just impossible for a human to understand. Another thing is that you have no idea whether you actually need what you’re paying for. For example, do you need a four-core machine, or would a one-core machine do the same job? The bill doesn’t tell you this. The bill only tells you that you had a four-core machine and used it for some amount of time this month.

It doesn’t tell you whether it was smart to use this machine, or whether you never needed that much capacity. So, these are the two issues with your bill. One is that the sheer number of services makes the line items impossible to follow. The other is the question: do I really need this?
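The four-core-versus-one-core question Gil raises can be sketched as a simple rightsizing check: given observed peak utilization, find the smallest instance that still covers the load and estimate the saving. The hourly prices and utilization figures below are made-up examples, not any provider's actual rates:

```python
# Hypothetical rightsizing check. Prices are illustrative, not real rates.
HOURLY_PRICE = {1: 0.05, 2: 0.10, 4: 0.20}  # USD per hour by core count

def smallest_sufficient(cores: int, peak_utilisation: float) -> int:
    """Smallest core count that still covers the observed peak CPU load."""
    needed = cores * peak_utilisation  # cores actually used at peak
    return min(c for c in sorted(HOURLY_PRICE) if c >= needed)

def monthly_saving(cores: int, peak_utilisation: float, hours: int = 730) -> float:
    """Estimated monthly saving from moving to the smallest sufficient size."""
    smaller = smallest_sufficient(cores, peak_utilisation)
    return (HOURLY_PRICE[cores] - HOURLY_PRICE[smaller]) * hours

# A four-core machine that never exceeds 20% CPU fits on a single core.
saving = monthly_saving(4, 0.20)
```

This is exactly the analysis a cloud bill does not do for you: the bill records that the four-core machine ran, not that one core would have done the same job.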


