
Cloud Application Management


Five Essentials to Safeguarding Your Applications

Service disruption is a leading cause of lost revenues

A while back, I was starting up an EC2 instance on the AWS cloud when it entered an endless restart loop.

Two weeks of application deployment work (installation and service configuration) went down the drain. We called support. The support rep redirected us to his team leader, who simply told us that, as indicated in the SLA, we had to abide by the shared responsibility model, and that they were not liable for our loss.

The Shared Responsibility Model

Shared responsibility means that both the cloud vendor and the cloud consumer are responsible for making sure everything is backed up and appropriately configured to avoid situations like the one we were in. In short, it was our fault for not architecting our system better.

When there’s a public cloud outage, it can affect thousands of businesses. The most infamous outages are probably the AWS outages of April 2011 and October 2012, which affected Reddit, Netflix, Foursquare, and others. And it’s not just Amazon: in July 2012, Microsoft’s Windows Azure Europe region was down for more than two hours.

Nati Shalom, CTO and Founder of GigaSpaces, said: “Failures are inevitable, and often happen when and where we least expect. Instead of trying to prevent failure from happening, we should design our systems to cope with failure. The methods of dealing with failures are also not that new — use redundancy, don’t rely on a single point of failure (including a data center or even a data center provider), and automate the failover process.”

Don’t get caught with your pants around your ankles like we did. We lost weeks of work because our instance stopped working, but it could have been prevented with better architecture and processes. The following key points must be considered in order to achieve high availability in the cloud:

5 Key Essentials to High Availability in the Cloud

Design for Failure - Architecture, Complexity & Automation

Spread your system across multiple physical locations to insulate your business from local physical and geographical failures. An architecture driven by recognized points of failure, combined with automated failover that can quickly and easily shift workloads between availability zones, regions, or clouds, ensures high availability.
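The automated failover described above can be sketched in a few lines. This is a minimal illustration, not a real cloud API: the zone names, health map, and `pick_healthy_zone` helper are all made up for the example.

```python
# Minimal sketch of automated failover across availability zones.
# Zone names and the health map are illustrative, not a real cloud API.

def pick_healthy_zone(zones, healthy, preferred):
    """Return the preferred zone if healthy, else the first healthy fallback."""
    if healthy.get(preferred):
        return preferred
    for zone in zones:
        if healthy.get(zone):
            return zone
    raise RuntimeError("no healthy zone available")

zones = ["us-east-1a", "us-east-1b", "eu-west-1a"]
health = {"us-east-1a": False, "us-east-1b": True, "eu-west-1a": True}

# us-east-1a is down, so the workload shifts to the next healthy zone.
active = pick_healthy_zone(zones, health, preferred="us-east-1a")
```

In a real deployment the health map would be fed by continuous health checks, and the switch would update DNS or load-balancer targets rather than a variable.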

The cloud’s five levels of redundancy:

Physical

Virtual

Zone

Region

Cloud

Let’s take a look at AWS. Best practice dictates that using an Elastic Load Balancer (ELB) across different availability zones (AZs) will increase availability. In a recent survey performed by Newvem, AWS users’ ELB habits were tracked over a seven-day span. Newvem found that only 29% of users follow this practice; the other 71% are at risk should anything happen within that AZ. Setting up ELBs across multiple AZs is a relatively straightforward effort that can greatly increase availability.
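Auditing your own fleet for this risk is straightforward. The sketch below flags deployments whose instances all live in a single AZ; the instance-to-AZ mapping is invented for illustration, and a real audit would pull it from your cloud provider’s inventory API.

```python
# Hedged sketch: flag deployments whose instances are confined to one AZ.
# The fleet mapping below is made up for illustration.

from collections import defaultdict

def single_az_deployments(instance_az):
    """Group instances by deployment and report those confined to one AZ."""
    zones_by_app = defaultdict(set)
    for (app, _instance), az in instance_az.items():
        zones_by_app[app].add(az)
    return sorted(app for app, azs in zones_by_app.items() if len(azs) < 2)

fleet = {
    ("web", "i-01"): "us-east-1a",
    ("web", "i-02"): "us-east-1b",  # "web" spans two AZs -- good
    ("api", "i-03"): "us-east-1a",
    ("api", "i-04"): "us-east-1a",  # "api" sits in one AZ -- at risk
}

at_risk = single_az_deployments(fleet)
```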

Know Your SLA – 99.999%

Public cloud SLAs cite shared responsibility. This essentially means that you, the online service provider, are liable for keeping your service running, compliant, and highly available. When people talk about downtime, or rather 99% uptime, the numbers are less forgiving than they sound.
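Translating uptime percentages into allowed downtime makes the gap between the nines concrete. This is simple arithmetic on a 365.25-day year:

```python
# The "nines" translated into allowed downtime per year (365.25-day year).

def downtime_per_year(uptime_pct):
    """Minutes of downtime per year permitted by a given uptime percentage."""
    minutes_per_year = 365.25 * 24 * 60
    return (1 - uptime_pct / 100) * minutes_per_year

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% uptime -> {downtime_per_year(pct):,.1f} min/year of downtime")
```

Two nines (99%) permits roughly 5,260 minutes — about 3.7 days — of downtime a year, while five nines (99.999%) permits only about 5.3 minutes.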

Moving to the cloud means giving up control over the IT infrastructure level. Before signing a contract (i.e., putting down your credit card), make sure you know where your cloud operator’s liability ends. With an understanding of where your responsibility begins, you can then decide the level of SLA you are willing to offer your end users. A better SLA means greater investment and, hopefully, a competitive advantage (if you are an ISV, for example).

On the other hand, if your online service is down, it really doesn’t matter what your SLA says. You (as well as your cloud vendors) need to maintain ongoing communication with users, give a fully transparent view of the current state, and estimate when the service will be available again. A post-mortem analysis is a must: repeating the same mistakes will not be acceptable to your user community.

Prepare for the Worst – Disaster Recovery

There are countless challenges in managing a Disaster Recovery (DR) solution. You need a solution that allows you to easily and consistently monitor the backup of your critical IT resources and data. You should also automate retention management to enable a quick and complete recovery. DR and redundancy are areas of expertise that evolved over the years in traditional data centers. Principles such as RTO and RPO (recovery time and recovery point objectives) remain the same; however, the cloud creates new DR options and facilitates backup with virtually limitless, economical storage space.
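Monitoring whether your backups actually satisfy your RPO is easy to automate. The sketch below checks that the newest backup falls inside the recovery point window; the timestamps are invented, and a real check would read them from your backup catalog.

```python
# Hedged sketch of an RPO check: verify the most recent backup is recent
# enough to meet the recovery point objective. Timestamps are illustrative.

from datetime import datetime, timedelta

def meets_rpo(backup_times, now, rpo):
    """True if the most recent backup falls within the RPO window."""
    if not backup_times:
        return False  # no backups at all: the RPO is certainly violated
    return now - max(backup_times) <= rpo

now = datetime(2013, 1, 15, 12, 0)
backups = [datetime(2013, 1, 15, 8, 0), datetime(2013, 1, 14, 8, 0)]

meets_rpo(backups, now, rpo=timedelta(hours=6))  # last backup was 4h ago: OK
meets_rpo(backups, now, rpo=timedelta(hours=2))  # 4h ago is outside a 2h RPO
```

Wiring a check like this into an alerting loop is what turns a backup script into a monitored DR process.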

Consider the Cost Impact

Did you know that an outage for Amadeus can cost $89K per hour? For PayPal, downtime can translate to as much as $225K per hour! Service disruption is a leading cause of lost revenues.

Traditionally, delivering high availability and automatic failover often meant replicating everything — not the most cost-effective solution. Fortunately, with the cloud, high availability can be planned across different levels so that you can create “availability on demand.” Automated scripts let you scale up quickly when necessary and shut extra servers (or a standby AZ) down when they are no longer needed. The notion of “capacity meets demand” plays a significant role here: to prepare for the next failure, you don’t need to buy more servers; you need to architect smartly and maintain processes that stay in line with your SLA. We are not quite there yet, but the agility and comprehensive capabilities of the cloud can support an HA plan with pricing packages aligned to different SLAs, according to your customers’ requirements.
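A “capacity meets demand” policy boils down to a sizing decision made from current load rather than projected peak. This sketch uses invented throughput numbers and an assumed 20% headroom; the minimum of two servers reflects the redundancy theme above, so a single failure can’t take the service down.

```python
# "Capacity meets demand" sketch: size the fleet from current load instead
# of provisioning for peak. All numbers are illustrative assumptions.

import math

def servers_needed(requests_per_sec, capacity_per_server, headroom=1.2):
    """Servers required for current load with a safety margin, never fewer
    than two so that one failure does not take the service down."""
    raw = requests_per_sec * headroom / capacity_per_server
    return max(2, math.ceil(raw))

servers_needed(1000, capacity_per_server=150)  # peak traffic: scale up
servers_needed(100, capacity_per_server=150)   # quiet period: down to the floor
```

An autoscaling loop would run this calculation on a schedule and start or stop instances to match, which is exactly how extra capacity gets turned off when a burst (or a failover) is over.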

Transparency and Test Routines – Avoid Another Outage

Transparency is key. One of the most important things I have learned is that the traditional enterprise wants control and will happily pay to maintain it. The cloud puts control and transparency at risk by moving servers off premises and into your browser. Traditional enterprise leaders and users are accustomed to nearly complete control of physical IT resources, and the thought that they may no longer have the “irons” intimidates them. Cloud vendors and developers have to report back to leaders on adoption progress, making sure these new IT resources generate the expected business benefits without harming services, compliance, SLAs, and so on. Transparent environments support the cloud’s cycles of refinement. Automated testing tools and routines (for example, Netflix’s Chaos Monkey) are crucial to avoiding the next failure, including the ability to forecast problems and be alerted before customers flood your support center with calls.

Organizations that rush to deploy to production without considering their SLA, disaster recovery, and availability put their online business at great risk. Choosing to run a business on a cloud is a strategic move, and high availability is a significant part of this strategy.

To learn more about best practices for high availability, join our live webinar this month.

More Stories By Ofir Nachmani

Ofir Nachmani is a Cloud Computing Evangelist, Blogger and Lecturer at IAmOnDemand.com. He has extensive experience helping ISV companies with cloud adoption and management. Today, Ofir is Senior Vice President and Chief Evangelist at Newvem Analytics Ltd. Previously, he led ClickSoftware’s On-Demand (SaaS) initiative and established the company’s cloud offering, and before that held several positions at Zarathustra, a SaaS development company, including VP of Product and CEO. In 2009, ClickSoftware acquired the AST Group, and Zarathustra as part of it. Twitter: @iamondemand
