The Cloud Bubble Has Burst – Why Companies Are Leaving the Cloud

Bursting the big bubble and doing it with a bang, leaders in the DevOps and cloud infrastructure space are asking: is the cloud worth it? David Heinemeier Hansson, co-owner and CTO of Basecamp and HEY, and Simon Sharwood, tech writer for The Register, are questioning the very fundamentals on which most of the world’s applications are built and calling for a major tectonic shift away from the cloud. They argue that we are all “paying too much for too much, and the cloud providers know it”.

They have a point. According to a Gartner report, worldwide spending on public cloud services will reach $592 billion next year, and, as Hansson puts it, “much of these costs are based on provisions that never get used but are provisioned just in case they could be”.

But does this criticism justify a wholesale departure from the cloud, or should it instead be aimed at the way the cloud is packaged and sold?

The promise of the cloud was always its dynamism: the freedom to procure compute only to the extent that you actually need it. But that hasn’t exactly happened. Instead, we’re stuck with a paradigm in which we constantly procure more than we need, wasting both energy and money.

Therefore, the question to ask is: what needs to change to make this promise of flexibility and efficiency a reality?

Why Is the Cloud So Expensive in the First Place?

The cloud came into existence to provide greater flexibility and scalability, powering the tremendous technological boom we’ve experienced for the past 15 years. 

The cloud’s pay-as-you-go model created nearly unlimited growth potential that enabled businesses to build new technologies at rapid scale, to the delight of billions of users around the world. As a result, technology companies quickly became among the most valuable and profitable in the world.

But the flexibility of the cloud is also its biggest financial drawback. The immense range of cloud offerings, combined with how easy it is to spin up new servers, leads users to procure significantly more than they need. Sometimes this happens by accident: engineers spin up servers for testing and forget to delete them once they’re no longer needed. Other times, expensive compute instances are provisioned intentionally to cover extreme peaks in demand and prevent a situation where applications run out of capacity.
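
As a small illustration of catching that first failure mode, here is a minimal sketch that flags long-running EC2 instances carrying a test tag. The “env: test” tag convention and the seven-day cutoff are assumptions made for the example, not a standard, and it presumes boto3 credentials are already configured:

```python
# A minimal sketch (not a product feature): flag EC2 instances tagged as
# test workloads that have been running longer than a threshold and may
# have been forgotten. The "env: test" tag and the 7-day cutoff are
# illustrative conventions; boto3 credentials are assumed configured.
from datetime import datetime, timedelta, timezone

import boto3

THRESHOLD = timedelta(days=7)  # hypothetical "probably forgotten" cutoff

ec2 = boto3.client("ec2")
pages = ec2.get_paginator("describe_instances").paginate(
    Filters=[
        {"Name": "tag:env", "Values": ["test"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)

now = datetime.now(timezone.utc)
for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            age = now - instance["LaunchTime"]  # LaunchTime is UTC-aware
            if age > THRESHOLD:
                print(f"{instance['InstanceId']} up for {age.days} days")
```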

Either way, the cost of overprovisioning, along with the hidden costs of various cloud services, is causing businesses to reassess their dependence on the cloud and wonder whether it is really worth the financial investment.

The Costs of Moving Back On-Prem

Yes, the cloud’s expensive, but who’s to say moving back to on-prem wouldn’t be even more costly?

Dropbox famously moved 90% of its workloads back to on-prem servers, but this move isn’t right for everyone, and it risks forfeiting the leaps and bounds that have been made in cloud innovation.

Moving back on-prem means you are now responsible for ensuring high availability, low latency, and good performance for all customers. You’re also limited to the fixed capacity of your servers, so you’ll need a buffer to ensure your system never runs out of capacity; most of the time, that large buffer will lie dormant and unused. And because physical servers cannot easily be cross-purposed, there is no flexibility to redirect unused capacity toward other applications.

Furthermore, you’ll need a superhero team of sysadmins to manage your data center. This rare breed of engineer is hard to find and, by virtue of that scarcity, expensive to keep. The cloud engineer, by comparison, has become a far more common role.

In addition to expensive engineering salaries, there are the costs of maintenance, physical security, and electricity. Then there is the potential hailstorm of costs that can rain down from disasters: security breaches, power outages, cooling failures, fires, and floods, any of which can demolish the savings from moving on-prem. Each of these risks represents a tremendous liability, since you may have no backup should one of them occur.

On the flip side, what happens if things go well, very well? The very nature of on-prem infrastructure means you cannot quickly absorb a major upswing in usage, which can cause significant UX and operational issues. That lack of agility makes it difficult to meet growing business demand at the rate required.

Instead, everything must be planned in advance, knowing that most of your infrastructure will sit idle most of the time while only a small percentage is regularly used. That’s a tough CapEx bill to swallow, and an even harder depreciation schedule.

The Solution: Infusing the Cloud With More Flexibility

If leaving the cloud is not the answer, then it’s time to create a new paradigm in which the cloud is both flexible and cost efficient.

What if it were possible to easily adjust cloud resources to changing application needs? Imagine there were no need to overprovision or pay for resources that sit unused, and, at the same time, no risk of running short. Instead, your infrastructure would continuously respond to your application’s needs, saving significantly on the cloud bill while ensuring your services can scale fast and respond to their environment.

That cloud utopia is the vision behind Dynamic Cloud Infrastructure. Technology enabling Dynamic Cloud Infrastructure takes the infrastructure provided by cloud providers and embeds greater flexibility and efficiency into it. You can continue to provision as you always have, but provisions now scale automatically. This eliminates planning, forecasting, and manual adjustments, while ensuring cloud usage is as efficient as possible.
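
Conceptually, this works like a continuous reconcile loop: measure actual usage, compute a target with modest headroom, and adjust provisioned capacity toward it. The toy sketch below simulates that idea; the 20% headroom, the capacity floor, and the get_current_usage/set_provisioned_capacity hooks are hypothetical stand-ins, not a real provider API:

```python
# Toy reconcile loop illustrating the idea: provisioned capacity
# continuously tracks observed usage plus headroom. get_current_usage()
# and set_provisioned_capacity() are hypothetical stand-ins for real
# provider/agent APIs; all numbers are illustrative.
import random
import time

HEADROOM = 0.20     # keep 20% spare capacity above observed usage
MIN_CAPACITY = 100  # never shrink below this floor (GB, vCPUs, ...)

def get_current_usage():
    """Stand-in for a metrics query; here, simulated demand."""
    return random.uniform(250, 900)

def set_provisioned_capacity(capacity):
    """Stand-in for the provider call that resizes the resource."""
    print(f"provisioned capacity -> {capacity:.0f}")

def reconcile_once():
    usage = get_current_usage()
    target = max(usage * (1 + HEADROOM), MIN_CAPACITY)
    set_provisioned_capacity(target)

if __name__ == "__main__":
    for _ in range(5):  # a real controller would run indefinitely
        reconcile_once()
        time.sleep(1)   # and would poll far less aggressively
```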

I’ll walk through two use cases where Dynamic Cloud Infrastructure is possible: one with AWS EC2 and the other with block storage.

Dynamic EC2 Payment Structures

Imagine you’re a company interested in leveraging the savings of AWS Reserved Instances (RIs). Even though RIs can deliver a roughly 70% discount on the workloads they cover, you’re reluctant to cover more than 50% of your instances due to the risk of overprovisioning.

With the way discount commitments are currently structured, savings come at the expense of flexibility. You have to commit your workloads to the limitations of a given RI for one or three years in advance, yet there’s always a chance that the instances you need now won’t be relevant in the future. If that happens, you’ll be stuck paying for instances you’re not using.

Due to this risk, you choose to cover only 50% of your instances with RIs while the remaining 50% remain On-Demand. As a result, you’re paying a premium on half your instances, costing you an extra $1 million a year.

In this scenario, technology enabling Dynamic Cloud Infrastructure is a real game-changer. An algorithm dedicated to optimizing discount-program utilization can automatically measure your real-time compute needs and allocate RIs accordingly. You get the RIs you need exactly when you need them, so you’re never paying for resources you’re not using. Best of all, you capture the full 70% RI discount with no long-term commitment or risk of overprovisioning.
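
To make the idea concrete, here is a toy sketch of the measurement side: size RI coverage to the usage floor a fleet rarely dips below, and leave the bursty remainder On-Demand. The prices, the 10th-percentile baseline, and the usage samples are all illustrative assumptions, and a real allocator of the kind described above would rebalance continuously rather than compute coverage once:

```python
# Toy sketch: cover the usage floor with RIs and burst On-Demand.
# The prices and the 10th-percentile baseline are illustrative
# assumptions, not AWS figures.
ON_DEMAND_HOURLY = 0.10     # $/instance-hour, assumed
RI_EFFECTIVE_HOURLY = 0.03  # ~70% discount, assumed

def ri_coverage(hourly_counts, percentile=10):
    """How many RIs to hold: the floor the fleet rarely dips below."""
    counts = sorted(hourly_counts)
    return counts[int(len(counts) * percentile / 100)]

def avg_hourly_cost(hourly_counts, ris):
    """Average hourly cost with `ris` reserved and the rest On-Demand."""
    total = 0.0
    for n in hourly_counts:
        total += ris * RI_EFFECTIVE_HOURLY           # RIs bill even if idle
        total += max(n - ris, 0) * ON_DEMAND_HOURLY  # burst On-Demand
    return total / len(hourly_counts)

# Example: a fleet oscillating between 80 and 140 instances.
usage = [80, 90, 100, 120, 140, 130, 110, 95, 85, 100]
ris = ri_coverage(usage)
print(f"hold {ris} RIs")
print(f"avg $/hour with {ris} RIs: {avg_hourly_cost(usage, ris):.2f}")
print(f"avg $/hour all On-Demand:  {avg_hourly_cost(usage, 0):.2f}")
```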

Dynamic Block Storage

Another place where technology enabling Dynamic Cloud Infrastructure comes in handy is block storage, where overprovisioning is rampant.

Because stability and performance are top priorities, cloud users typically buy considerably more block storage than they need in order to account for spikes in demand. They do so to prevent “out of disk” errors, which have detrimental consequences for application performance. And because storage needs tend to spike at certain times, companies purchase the maximum capacity they expect to need over the application’s life, yet they are unable to scale capacity back down once the application returns to its normal rate.

As a result, they’re stuck paying for significantly more storage than they generally use. 

You could use EFS to solve this problem, since its filesystems shrink and expand automatically, but you’ll pay for it with higher disk latency and inferior speed and performance that aren’t adequate for many workloads.

Here’s how Dynamic Cloud Infrastructure can help.

Say a customer provisions 1 TB of storage to cover their peak, but typically uses only 300 GB. In this scenario, 70% of that storage is paid for but never used. The monthly cost of this overprovisioning may not look significant, but over a 12-month period it makes a tremendous impact on the cloud bill. The customer is also using significantly more energy than necessary, and the more businesses that work this way, the greater our collective carbon footprint.

In a Dynamic Cloud Infrastructure approach, the customer can automatically shrink and extend block storage volumes, enabling them to purchase 350 GB to cover their stable state of 300 GB, plus a 50 GB buffer for unexpected spikes in demand. If they suddenly hold 900 GB of data, the AI algorithm automatically grows the volume to, say, 1 TB, then shrinks it back down once the extra capacity is no longer needed. This can save the customer about $858 per filesystem over the course of a year; across 1,000 filesystems, that adds up to $858,000 per year.
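
The arithmetic behind that figure is easy to check. The sketch below reproduces it, with the caveat that the implied rate of roughly $0.11 per GB-month is inferred from the $858 figure itself rather than quoted from an AWS price list, and that it compares steady states, ignoring the brief periods when the volume grows to absorb a spike:

```python
# Reproduce the savings arithmetic from the example above. The
# ~$0.11/GB-month rate is inferred from the quoted $858/year figure,
# not an official AWS price; treat it as an assumption. Steady states
# only: brief spikes to 1 TB would slightly reduce the saving.
PRICE_PER_GB_MONTH = 0.11  # assumed rate, see note above

static_gb = 1_000   # provisioned once, sized for peak
dynamic_gb = 350    # stable state (300 GB) plus a 50 GB buffer

saved_gb = static_gb - dynamic_gb                   # 650 GB freed
annual_saving = saved_gb * PRICE_PER_GB_MONTH * 12  # per filesystem

print(f"annual saving per filesystem: ${annual_saving:,.0f}")  # ~$858
print(f"across 1,000 filesystems:     ${annual_saving * 1_000:,.0f}")
```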

Final Words

The cost of the cloud is a tremendous challenge for businesses as they scale and seek more return on their cloud investment. But leaving the cloud is not the most cost-efficient or agile way to grow in today’s fast-paced business environment.

Instead, cloud engineers have every right to demand that their infrastructure become more dynamic: both cost efficient and flexible enough to address inevitable changes in application usage.

They should demand greater visibility into their entire infrastructure, with the ability to identify unused resources, whether unattached EBS volumes, idle EIPs or ELBs, rarely used instances, or instances in distant regions; the list of possible waste is long. With the right tooling to provide this insight, along with month-to-month tracking of the bill, it becomes possible to see which services generate the greatest spend, where waste accumulates, and what can be scaled down or removed entirely.
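
As a hedged example of what that visibility can look like in practice, the sketch below uses boto3 to surface two common waste categories, unattached EBS volumes and unassociated Elastic IPs. Spotting idle ELBs or rarely used instances would additionally require usage metrics (CloudWatch, for instance), which this sketch leaves out:

```python
# Minimal waste scan: unattached EBS volumes and unassociated EIPs.
# Assumes boto3 credentials/region are configured; this covers only two
# of the waste categories mentioned above.
import boto3

ec2 = boto3.client("ec2")

# EBS volumes in the "available" state are not attached to any instance
# but are still billed every month.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]
for vol in volumes:
    print(f"unattached volume {vol['VolumeId']}: {vol['Size']} GiB")

# Elastic IPs with no AssociationId are allocated but unused, and AWS
# charges for addresses that sit unassociated.
addresses = ec2.describe_addresses()["Addresses"]
for addr in addresses:
    if "AssociationId" not in addr:
        print(f"idle EIP {addr.get('PublicIp')}")
```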

Beyond this visibility, mastery comes in the form of automation that scales resources both up and down, vertically and horizontally, to ensure that instead of paying too much for too much, we’re paying enough for just enough.

By making cloud infrastructure more dynamic, the original vision of tech democratization becomes an even more established reality, as the cloud becomes more affordable and thereby accessible to all. Businesses can make their cloud footprint smaller yet more performant, more efficient yet far more agile, and ultimately much less expensive.
