How to Reduce Cloud Spend With Scalr

May 5, 2017 Alex Green

Earlier this week we talked about Scalr Cost Analytics, which helps admins visualize costs, monitor events, get automated cost reporting, and use API calls for third party integrations. But well-informed visibility isn’t the total solution. Cost monitoring needs to serve up visibility and offer preventative and reactive controls. In this post we’ll focus on how we help admins set those controls with Financial Policies. These policies can maximize efficiency and help companies regain control of cloud consumption.

 

Here are three examples of financial policies you can create with Scalr:

 

  1. Server Reclamation: servers in particular environments (e.g. testing or dev) should be automatically shut down after a predefined number of days, with the ability to request lease extensions. No more hidden fleets of testing servers six months later.

  2. Lifecycle management/usage rightsizing: autoscale servers during set times or on specific triggers (like server CPU utilization or site response time), then safely spin them back down when busy periods end.

  3. Control Overprovisioning: Set guardrails on cost to keep developers from spinning up servers more expensive than they need (more CPU/memory), and more servers than they need (e.g. dev environment can’t have more than a hundred servers running at once).

 

While end users don’t necessarily think about costs, the actions we take every day affect expenses. Guardrails that improve self-service for end users while helping admins control costs are the ideal solution for everyone, and these three examples cover the situations team members run into most often.

 

I can speak from experience: as part of the Technical Marketing team, we run webinars and demos every day and experiment with new third-party tools, which makes it easy to forget about servers as we move on to the next project. We use our own product to keep that in check. The same goes for lifecycle management: even though Kubernetes is on the rise and is excellent at autoscaling and usage rightsizing, not all of our applications can be moved to containers. Whether it’s a CIO mandate or we simply don’t have time to migrate, we still need automated policies for the majority of our workloads. All of these policies serve one goal: cost reduction in every part of the organization.

 

Let’s jump into Reclamation Policies.

 

Reclamation

Let’s cut costs by ensuring that testing servers don’t run for longer than they need to. At the admin level, we set up policies inside the Scalr UI by creating a new Policy Group. Policy Groups serve as collections of policies (rules) that are intended to work together as a group. After we create the policy group, we then attach it to an Environment (a logical grouping of cloud resources).
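If it helps to picture what a Policy Group boils down to, here is a minimal sketch of the idea in Python: a named collection of policies that gets attached to one or more Environments. The class and field names are illustrative only, not Scalr’s actual schema or API.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative data model only -- not Scalr's actual schema or API.
@dataclass
class Policy:
    type: str          # e.g. "reclamation", "instance-type", "scaling"
    settings: dict     # type-specific configuration

@dataclass
class PolicyGroup:
    name: str
    policies: List[Policy] = field(default_factory=list)

@dataclass
class Environment:
    name: str                                           # e.g. "Testing", "Development"
    policy_groups: List[PolicyGroup] = field(default_factory=list)

    def attach(self, group: PolicyGroup) -> None:
        """Attach a policy group so its rules apply to every server in this environment."""
        self.policy_groups.append(group)

# Build a policy group and attach it to the Testing environment.
cost_controls = PolicyGroup(
    name="testing-cost-controls",
    policies=[Policy(type="reclamation", settings={"max_lifetime_days": 14})],
)
testing = Environment(name="Testing")
testing.attach(cost_controls)
```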

 

 

In the UI above, we are creating a new policy. Its type, Reclamation, is shown at the top, along with the Environments it has been applied to. There are different types of policies, and the type determines what the policy can do. Here we can set the default running lifetime, and we can also set notifications to remind server owners before their servers are shut down. The second half covers lease extensions, which let users request more time on their servers if they’re still doing work; you can set the extension length and how many extensions you’ll allow.
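The logic a policy like this enforces is easy to reason about. Here’s a rough sketch of it, assuming a 14-day default lifetime, 7-day extensions, and a cap of two extensions; the numbers and function names are made up for illustration and aren’t Scalr defaults.

```python
from datetime import datetime, timedelta

# Illustrative reclamation logic -- the values mirror the policy fields described
# above, but they are examples, not Scalr defaults.
DEFAULT_LIFETIME = timedelta(days=14)
EXTENSION_LENGTH = timedelta(days=7)
MAX_EXTENSIONS = 2
NOTIFY_BEFORE = timedelta(days=2)

def expiry(launched_at: datetime, extensions_used: int) -> datetime:
    """A server expires after its default lifetime plus any granted extensions."""
    return launched_at + DEFAULT_LIFETIME + extensions_used * EXTENSION_LENGTH

def check_server(launched_at: datetime, extensions_used: int, now: datetime) -> str:
    """Decide what the policy should do with a server right now."""
    expires_at = expiry(launched_at, extensions_used)
    if now >= expires_at:
        return "shut down"                 # reclaim the server
    if now >= expires_at - NOTIFY_BEFORE:
        return "notify owner"              # owner can still request an extension
    return "leave running"

def can_extend(extensions_used: int) -> bool:
    """Grant a lease extension only while the per-server quota isn't exhausted."""
    return extensions_used < MAX_EXTENSIONS

# Example: a test server launched 13 days ago with no extensions yet.
launched = datetime(2017, 4, 22)
print(check_server(launched, extensions_used=0, now=datetime(2017, 5, 5)))  # notify owner
```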

 

Technical jargon aside, Reclamation Policies just make sure that both end users and admins are aware of how long servers are active, and for what reasons.

 

I can then attach this Reclamation Policy to the Environments that need it. Below, we’re adding the Reclamation Policy to the list of policies that the Environment already has.

 

 

You’ll see that Policies have a broad range of options beyond financial controls - you can also set guidelines on servers based on their origin (for example, for AWS servers in the Testing Environment, instance size, security group, and VPC are all preset by admins). This Environment is connected to the ACME Operations Cost Center, so we can trace all of the actions inside this environment down to the user and the server size.

 

Overprovisioning

Like Reclamation Policies, I can create guardrails for my end users as a Policy Group for an Environment. So for our Development Environment, let’s limit server sizes to a few approved options. Because our Environments can use resources across clouds and instance sizes differ across providers, we’ll have to specify the restrictions for each cloud: instance sizes are labeled differently in AWS, GCP, and Azure, so we have to spell out the allowed types for each one.
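Conceptually, the guardrail amounts to an allow-list of instance types per cloud plus a ceiling on how many servers an Environment can run. Here’s a minimal sketch; the instance types and limits are examples, not values pulled from Scalr.

```python
# Illustrative guardrail check -- the allow-lists and limits below are examples only.
ALLOWED_INSTANCE_TYPES = {
    "aws":   {"t2.small", "t2.medium", "m4.large"},
    "gcp":   {"n1-standard-1", "n1-standard-2"},
    "azure": {"Standard_A1", "Standard_A2"},
}
MAX_RUNNING_SERVERS = 100   # e.g. a cap on the Development environment

def validate_launch(cloud: str, instance_type: str, running_servers: int) -> None:
    """Reject launch requests that violate the environment's guardrails."""
    allowed = ALLOWED_INSTANCE_TYPES.get(cloud, set())
    if instance_type not in allowed:
        raise ValueError(
            f"{instance_type} is not an approved {cloud} instance type; "
            f"choose one of {sorted(allowed)}"
        )
    if running_servers >= MAX_RUNNING_SERVERS:
        raise ValueError("environment is already at its server limit")

# A developer asking for an oversized box is stopped before it ever launches.
validate_launch("aws", "t2.medium", running_servers=42)      # passes
# validate_launch("aws", "x1.32xlarge", running_servers=42)  # raises ValueError
```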

 

 

In the screenshot below I’m setting instance types for AWS. I can also predefine the region, or even the OS, but let’s keep it simple. Because we dynamically pull pricing information from the public cloud providers, these are up-to-date costs, not manually generated ones.

 

 

Then, like before, I attach this Policy Group to my Development Environment.

 

 

Lifecycle management/usage rightsizing

When I’m creating a new Farm (application blueprint) as an Admin, I have the ability to set up auto-scaling policies. For this example, we want to scale up our servers during set time periods and autoscale on specific triggers.

 

In the UI below, I’m editing scaling rules on the Roles (components) of my application. Because parts of my application may need to scale differently and at different times, we layer scaling rules on each Role.

 

 

In the top left, I’m selecting the Ubuntu server that is part of this application. On the right half of the screen I have all the information about this server, including the OS, the option to select an instance type, how much it will cost per day, and so on. The second arrow shows where I’m able to set up my autoscaling rules.
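To make “scaling rules per Role” concrete, here is one way that layering could be represented; the role names, metrics, and thresholds below are made up for illustration.

```python
# Illustrative only: each Role in a Farm carries its own list of scaling rules,
# so web servers and background workers can scale on different triggers.
farm = {
    "name": "acme-webapp",
    "roles": [
        {
            "role": "nginx-web",
            "scaling_rules": [
                {"type": "schedule", "start": "08:00", "end": "18:00", "instances": 10},
                {"type": "metric", "metric": "url_response_time_ms", "scale_up_above": 800},
            ],
        },
        {
            "role": "worker",
            "scaling_rules": [
                {"type": "metric", "metric": "ram_used_pct", "scale_up_above": 85},
            ],
        },
    ],
}
```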

 

Here’s one of these rules, a Schedule: preset times to automatically scale servers. Below I’m specifying that during working hours, ten instances should be running.
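Under the hood, a schedule rule is just a function from the clock to a desired instance count. A minimal sketch, assuming weekday working hours of 8:00 to 18:00 and a quiet-hours baseline of two instances (both illustrative):

```python
from datetime import datetime, time

# Illustrative schedule rule: run ten instances during weekday working hours,
# and fall back to a small baseline outside of them. Numbers are examples only.
WORK_START, WORK_END = time(8, 0), time(18, 0)
BUSY_INSTANCES, QUIET_INSTANCES = 10, 2

def desired_instances(now: datetime) -> int:
    """Return how many instances the schedule says should be running right now."""
    is_weekday = now.weekday() < 5
    in_working_hours = WORK_START <= now.time() < WORK_END
    return BUSY_INSTANCES if (is_weekday and in_working_hours) else QUIET_INSTANCES

print(desired_instances(datetime(2017, 5, 5, 10, 30)))  # Friday morning -> 10
print(desired_instances(datetime(2017, 5, 6, 10, 30)))  # Saturday -> 2
```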

 

 

This is the most common example of scaling. We can learn from our data (which we can see through Scalr Cost Analytics or with a third-party tool like Datadog) that our demand is higher throughout the working day and slows down after hours. But what about policies that react when our servers need it?

 

Here is an example of auto-scaling based on RAM thresholds.

 

And URL response time:
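Both rules reduce to the same pattern: scale up when a metric crosses a high-water mark, scale back down when it falls below a low-water mark. Here’s a rough sketch with illustrative thresholds, not Scalr’s actual rule engine:

```python
# Illustrative reactive scaling: the RAM rule and the URL response-time rule both
# follow the same scale-up / scale-down threshold pattern. Values are examples only.
RULES = {
    "ram_used_pct":         {"scale_up_above": 85,  "scale_down_below": 40},
    "url_response_time_ms": {"scale_up_above": 800, "scale_down_below": 200},
}

def scaling_decision(metric: str, value: float) -> str:
    """Decide whether a single metric reading should add or remove a server."""
    rule = RULES[metric]
    if value > rule["scale_up_above"]:
        return "add server"
    if value < rule["scale_down_below"]:
        return "remove server"
    return "no change"

print(scaling_decision("ram_used_pct", 92))            # add server
print(scaling_decision("url_response_time_ms", 150))   # remove server
```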

 

Finding ways to balance cost with resource utilization is the real goal of cost monitoring. By baking monitoring and cost controls into Scalr, we have an end-to-end solution. We look at this as the best way for admins to consolidate, standardize, and optimize their spend in the cloud.

For more information, visit the wiki and check out our resource hub for blog posts, videos, and webinars:

https://scalr-wiki.atlassian.net/wiki/display/docs/Cost+Analytics

https://scalr-wiki.atlassian.net/wiki/display/docs/Configuration+Policy+Group+Type

https://scalr-wiki.atlassian.net/wiki/display/docs/Reclamation+Policy+Group+Type

http://hub.scalr.com/whitepapers/managing-cloud-costs-with-scalr
