Avoiding Cloud Outage



Building cross-region and cross-cloud high availability into your app: a real-life use case. Nati Shalom, Founder & CTO, GigaSpaces
Achieving high levels of availability and disaster recovery in a cloud environment requires the implementation of patterns and practices that introduce redundancy through multi-zone, multi-region, and multi-cloud deployments. As we move towards implementing higher availability, we cannot escape the direct increase in the accidental complexity of the deployment architecture resulting from lack of cloud portability and deployment lifecycle automation. We present how high availability and disaster recovery were achieved in reality by using the Cloudify open source framework on top of AWS. This approach applies to not just AWS but also other public clouds and private cloud environments such as Eucalyptus. The resulting reference architecture provides portable PostgreSQL replication and disaster recovery as well as application tier scalability across zones, regions, and public/private clouds through a unified deployment workflow.

Slide notes
  • A high-ranking Amazon executive said there are 60,000 different customers across the various Amazon Web Services, and most of them are not the startups normally associated with on-demand computing. Rather, the biggest customers in both number and amount of computing resources consumed are divisions of banks, pharmaceutical companies, and other large corporations that try AWS once for a temporary project and then get hooked. As of March 2012, one researcher estimated that Amazon Web Services runs at least 454,400 servers in seven data center hubs around the globe. For comparison: Google is powered by roughly a million servers, maybe a little more, and Amazon by half a million; Facebook, the service that takes up one fourth of all our time online, is powered by fewer than 100,000 servers. AWS's biggest customers include Pinterest, Instagram, Netflix, Heroku, Quora, Foursquare, and others. Amazon Web Services handles more than 835,000 requests per second for hundreds of thousands of customers in 190 countries, including 300 government agencies and 1,500 educational institutions.
  • The Amazon cloud proved itself in that sufficient resources were available worldwide, so many well-prepared users could continue operating with relatively little downtime. But because Amazon's reliability had been so good, many users were not well prepared, leading to widespread outages. The Amazon EC2 outage of April 2011 was the worst in cloud computing's history at the time. It made the front page of many news sites, including the New York Times, probably because many people were shocked by how many websites and services rely on EC2. Microsoft Azure has had outages too: December 28, 2012 - some owners of Microsoft's Xbox 360 game console were unable to access some of their cloud-based save files; July 26, 2012 - service for Microsoft's Windows Azure Europe region went down for more than two hours; February 29, 2012 - service impacts of 8-10 hours for users of Azure data centers in Dublin (Ireland), Chicago, and San Antonio.
  • Some parts of Amazon Web Services suffered a major outage. A portion of volumes using the Elastic Block Store (EBS) service became "stuck" and were unable to fulfill read/write requests. It took at least two days for service to be fully restored. Reddit, one of the better-known sites to go down due to the error, said it had 700 EBS volumes with Amazon. Sites like Quora and Reddit were able to come back online in "read-only" mode, but users couldn't post new content for many hours.
  • For the second time in less than a month, Amazon's Northern Virginia data center suffered an outage, impacting many popular services such as Instagram, Pinterest, and Netflix. Several websites that rely on Amazon Web Services were taken offline by a severe storm of historic proportions in the Northern Virginia area, where Amazon's largest data center is located. Amazon had previously suffered an outage in its Northern Virginia facilities on June 14, 2012. A line of severe storms packing winds of up to 80 mph caused extensive damage and power outages in Virginia; Dominion Virginia Power crews assessed the damage and restored power where it was safe to do so.
  • A major outage occurred, affecting sites such as Reddit, Foursquare, Pinterest, and others. The cause was a latent bug in an operational data collection agent: a memory leak, combined with a failed monitoring system, caused the Amazon Web Services outage that took out Reddit and other major services. In a post that Friday night, AWS explained that the problem arose after a simple replacement of a data collection server. After installation, the server did not propagate its DNS address correctly, so a fraction of servers did not get the message. Those servers kept trying to reach the old server, which led to a memory leak that went out of control due to the failure of an internal monitoring alarm. Eventually the system ground to a virtual halt and millions of customers felt the pain.
  • Amazon AWS again suffered an outage, causing websites such as Netflix instant video to be unavailable for some customers, particularly in the northeastern US. Amazon later issued a statement detailing the issues with the Elastic Load Balancing service that led to the outage. The disruption began shortly after noon Pacific time on December 24, when data was accidentally deleted by a developer during maintenance on the East Coast Elastic Load Balancing system, which distributes traffic volume among servers. "Netflix is designed to handle failure of all or part of a single availability zone in a region, as we run across three zones and operate with no loss of functionality on two," the company said in a blog post that afternoon. "We are working on ways of extending our resiliency to handle partial or complete regional outages."
  • Fault-tolerant systems are measured by their uptime and downtime for end users. Amazon says it is "committed" to 99.95 percent uptime.
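The "nines" in an SLA translate into yearly downtime budgets with simple arithmetic; a quick sketch (function name is illustrative):

```python
def annual_downtime_hours(availability_pct):
    """Hours of allowed downtime per 365-day year for a given uptime SLA."""
    return 365 * 24 * (1 - availability_pct / 100.0)

for pct in (99.0, 99.9, 99.95, 99.99, 99.999):
    print(f"{pct}% uptime allows {annual_downtime_hours(pct):.2f} h downtime/year")
```

At Amazon's stated 99.95%, the budget works out to about 4.4 hours per year, so a single multi-hour outage can consume it entirely.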
  • Although AWS went offline for only a few hours, the downtime did have an impact on customers' businesses. There is no firm data on the number of people affected by a cloud computing service outage, but it is estimated that the travel service provider Amadeus loses $89,000 per hour during any cloud outage, while PayPal loses around $225,000 per hour.
  • DR (disaster recovery) - the processes and procedures you follow to restore your system after a catastrophic event. Cloud infrastructure has made DR much easier and more affordable compared to previous options, but clouds can also suffer large-scale failures due to network, power, or other IT faults. Application owners need to take responsibility for HA and DR themselves: they can use multiple servers, availability zones, regions, and even clouds. Zones within a region share a LAN, so they have high bandwidth, low latency, and private IP access, while drawing on separate power resources. Regions are "islands" - they share no resources.
  • Each cloud is unique in many aspects, offering a different API and functionality to manage its resources: a different set of available resources; different formats, encodings, and versions; different security groups, machine images, snapshots, etc.
  • Make sure to have a dedicated expert to manage your DR architecture, processes, and testing. Define what your target recovery time and recovery point are. Be pessimistic and design for failure: assume everything will fail and design a solution capable of handling it. Avoid single points of failure - all parts of your app should be highly available (across different AZs / regions / clouds): load balancers, app servers, web servers, message bus, database. Use monitoring and alerts for failover processes and for every change in state. Document your DR operational processes and automation. Try to "break" different parts of your application, in different ways - unplug the network, turn a machine off, etc. Then try it again.
  • Netflix has open sourced "Chaos Monkey," its tool designed to purposely cause failures in order to increase the resiliency of an application in Amazon Web Services (AWS). It's a timely move, as AWS has had its fair share of outages; with tools like Chaos Monkey, companies can be better prepared when a cloud infrastructure fails. In a blog post, Netflix says this is the first of several tools it will open source to help companies better manage the services they run on cloud infrastructure. Next up is likely Janitor Monkey, which helps keep an environment tidy and costs down. Chaos Monkey has achieved its own fame for its innovative approach. According to Netflix, the tool "randomly disables production instances to make sure it can survive common types of failure without any customer impact. The name comes from the idea of unleashing a wild monkey with a weapon in your data center (or cloud region) to randomly shoot down instances and chew through cables - all the while we continue serving our customers without interruption."
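The core idea behind Chaos Monkey fits in a few lines. Below is a minimal, in-memory sketch of the "randomly disable one running instance" step, not Netflix's actual implementation; the fleet model and function name are invented for illustration:

```python
import random

def unleash_chaos_monkey(fleet, seed=None):
    """Terminate one randomly chosen running instance.

    fleet: dict mapping instance id -> state ("running" or "terminated").
    Returns the terminated instance id, or None if nothing is running.
    (Toy in-memory model; the real Chaos Monkey terminates EC2 instances.)
    """
    rng = random.Random(seed)
    running = [iid for iid, state in fleet.items() if state == "running"]
    if not running:
        return None
    victim = rng.choice(running)
    fleet[victim] = "terminated"
    return victim
```

Running this periodically against production forces the team to keep every tier redundant, because any instance may disappear at any time.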
  • Netflix provides an excellent toolset for surviving outages at the operations level. In this part I wanted to zoom in on the design implications for our application. The core principle for surviving failure is actually fairly simple, and in fact applies to any system, not just the cloud - airplanes, missiles, cars, etc. In the end it is all about redundancy. The degree of tolerance is often determined by how many alternate systems or parts we have in our design and how separated they are from one another, and also by how fast we can detect the broken part and make the switch. In software terms, our system is built out of two main groups: the business logic and the data. Making a redundant software application that can survive failure is often based on setting up clones of those two parts of our system.
  • We need abstraction - we don't want to be locked in. We want tools that offer this abstraction layer both for daily management and for DR; such a tool should translate our architectural concepts into cloud-specific properties (using recipes). To clone our application's business logic, we need to ensure that all parts of our system run the exact same version of all our software components - not just the binaries, but also the configuration, the scripts that run our application, and, more importantly, all our post-deployment procedures such as failover, scaling, and monitoring. Quite often what makes cloning our business logic complex is that the information on how to run our application is scattered across many sources - scripts, as well as the minds of the people who run those apps. To make cloning much simpler, and thus more consistent, we need to capture all of that information in one place. Configuration management tools such as Chef, Puppet, and, in the case of Amazon, CloudFormation can help in this regard.
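"Capture everything needed to run the app in one place" can be made concrete with a declarative recipe. The structure below is a hypothetical sketch, not Cloudify's actual recipe DSL; every key name, script name, and the `{{...}}` placeholder convention are invented for illustration:

```python
# Hypothetical recipe: binaries, config, and lifecycle scripts in one place,
# so cloning the app to another zone/region/cloud is just re-applying it.
app_recipe = {
    "name": "petstore",
    "binaries": {"jboss": "7.1.1", "app_war": "petstore-2.3.war"},
    "config": {"db_host": "{{db_endpoint}}", "pool_size": 20},
    "lifecycle": {
        "install": "install_jboss.sh",
        "start": "start_jboss.sh",
        "failover": "promote_and_restart.sh",
    },
}

def render_config(recipe, env):
    """Fill environment-specific values into an otherwise identical recipe,
    so every clone runs the same bits with only its endpoints swapped."""
    rendered = dict(recipe["config"])
    for key, value in rendered.items():
        if isinstance(value, str) and value.startswith("{{") and value.endswith("}}"):
            rendered[key] = env[value.strip("{}")]
    return rendered
```

The point of the design is that only the `env` mapping differs between the primary and DR deployments; everything else is byte-for-byte identical.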
  • RDS read replicas - Amazon RDS uses MySQL's built-in replication to create a special type of DB instance called a Read Replica, which allows you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. Once you create a Read Replica, database updates on the source DB instance are replicated to it using MySQL's native, asynchronous replication. Since Read Replicas use standard MySQL replication, they may fall behind their sources, and they are therefore not intended to enhance fault tolerance in the event of a source DB instance failure or an Availability Zone failure.
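Because the replication is asynchronous, a replica's lag determines how much data a failover to it would lose. A minimal sketch of that decision, under the assumption that lag per replica is already being measured (the function and parameter names are illustrative):

```python
def choose_failover_target(replica_lag, rpo_seconds):
    """Pick the least-lagged replica whose asynchronous replication lag is
    within the recovery point objective (RPO). Return None if none qualify,
    meaning automatic promotion would lose more data than we can accept.

    replica_lag: dict mapping replica id -> seconds behind the source.
    """
    eligible = {rid: lag for rid, lag in replica_lag.items()
                if lag <= rpo_seconds}
    if not eligible:
        return None
    return min(eligible, key=eligible.get)
```

This is exactly why the slide warns against treating read replicas as an HA mechanism: when every replica's lag exceeds the RPO, the safe answer is "don't promote."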
  • There are lots of patterns for avoiding failure, and it took Netflix a great deal of development work to build a framework that handles them well. Most users and startups don't have the luxury of implementing them themselves; you need a tool that will enable you to automate those patterns in a consistent way. Enter Cloudify.
  • Any app, any stack - move your application to the cloud without making any code changes, regardless of the application stack (Java/Spring, Java EE, Ruby on Rails, …), database store (relational such as MySQL or non-relational such as Apache Cassandra), or any other middleware components it uses. To make setting all this up simpler, we baked those patterns into ready-made tools, scripted as out-of-the-box recipes. The Cloudify recipes include: database cluster recipes with support for MySQL, MongoDB, Cassandra, Postgres, etc.; integration with Chef and Puppet; automation of failover, scaling, and continuous maintenance of your application; and application recipes that capture every aspect of running your application, including post-deployment aspects such as failover, scaling, and monitoring.
  • Cloud brings a lot of promise for making our businesses more agile, but the cloud has also become a huge shared infrastructure in which every failure has a much more significant impact on businesses worldwide. The experience of the past year has taught us that even a robust cloud infrastructure such as Amazon's can fail. Through this experience we've learned that rather than relying on the infrastructure to prevent failure, we need to design our systems to cope with failure and get used to failure as a way of life. That said, the investment required to build a robust application can be fairly large and is not something everyone can afford. Using tools like Cloudify, Chef, Puppet, and - if you're a pure Amazon shop - the Netflix <framework> can greatly reduce this effort by making many of those patterns pre-baked into recipes.
  • Transcript

    • 1. Protect Your App from Outages. Nati Shalom, CTO, GigaSpaces (@natishalom). May 2013
    • 2. AGENDA: AWS and outages; outage impact; disaster recovery - it's all about redundancy; Cloudify as a solution for redundancy; demo with Cloudify on EC2. © Copyright 2013 GigaSpaces Ltd. All Rights Reserved
    • 3. AWS USAGE: AWS - around 0.5M servers; Facebook - less than 0.1M servers; Google - around 1M servers
    • 5. OUTAGE - APRIL 21, 2011
    • 6. OUTAGE - JUNE 29, 2012
    • 7. OUTAGE - OCTOBER 22, 2012
    • 8. OUTAGE - CHRISTMAS EVE 2012
    • 9. NOT ONLY AMAZON: 28 December 2012 - some owners of Microsoft's Xbox 360 gaming console were unable to access some of their cloud-based storage files; 26 July 2012 - service for Microsoft's Windows Azure Europe region went down for more than two hours; 29 February 2012 - service impacts of 8-10 hours for users of Azure data centers in Dublin (Ireland), Chicago, and San Antonio
    • 10. THAT'S WHAT YOU EXPECT? 99% - 3.65 days of downtime; 99.9% - 8.76 hours; 99.99% - 53 minutes; 99.999% - 5.26 minutes (per year)
    • 11. OUTAGE IMPACT - DESIGN FOR FAILURE. An outage could cost $89K per hour for Amadeus, $225K per hour for PayPal!
    • 13. MULTI CLOUD
    • 14. PREPARE FOR DISASTER RECOVERY: dedicated expert for DR architecture; define target recovery time & point; assume every tier can fail; use monitoring and alerts; document your operational processes
    • 15. CHAOS MONKEY
    • 16. [image slide]
    • 18. CLONE YOUR DATA
    • 19. [image slide]
    • 20. Leverage Existing Automation Frameworks: Configuration Centric vs. App Centric (PaaS)
    • 22. BUILT-IN SUPPORT FOR MANAGING DATA IN THE CLOUD: Real Time - Storm, Elastic Caching (XAP); Relational DB Clusters - MySQL, Postgres; NoSQL Clusters - MongoDB, Cassandra, Couchbase, ElasticSearch; Hadoop - Hadoop (Hive, Pig, …), ZooKeeper
    • 23. [image slide]
    • 24. VERIFI (CURRENT) DEPLOYMENT ARCHITECTURE [diagram]: one availability region (US-West: Oregon); Internet traffic enters through an EC2 instance running mod_cluster, which fronts an EC2 instance running JBoss (with a data volume); separate EC2 instances run PostgreSQL (with a data volume) and Cassandra. 4 recipes
    • 25. TARGET ARCHITECTURE [diagram]: the same stack deployed in two availability regions - US-West (Oregon) with a Postgres master and US-East (Virginia) with a Postgres slave - with replication between them. Bootstrap two EC2 clouds in different regions and install the "verifi" application on each. The second cloud has a slightly modified (extended) Postgres recipe for acting as a slave, plus no running app servers. Upon the primary zone's failure, the second cloud spins up instances of the app servers and turns its data instance into the master, then bootstraps another "slave" cloud in another zone
    • 26. FAILOVER SCENARIO [diagram]: upon initial deployment, the primary deployment of the application is bootstrapped onto cloud #1 in US-West (Oregon); another slightly modified application recipe is bootstrapped as cloud #2 in US-East (Virginia), polling cloud #1 for liveness and acting as a PostgreSQL slave. When a region failure occurs, cloud #2 turns its Postgres slave into a master and starts app server instances, then bootstraps another cloud in a different region (e.g. US-West California) using the same application recipe used to bootstrap cloud #2
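The liveness-poll and promote workflow on this slide reduces to a single state transition per polling interval. A minimal sketch of one iteration, where dictionary updates stand in for the real actions (all names here are illustrative, not Cloudify APIs):

```python
def failover_step(primary_alive, cluster):
    """One iteration of the liveness-poll loop for the standby cloud.

    cluster models cloud #2: {'role': 'slave' or 'master',
    'app_servers': int}. State changes stand in for the real actions
    (promote Postgres, start app servers, bootstrap a new slave region).
    """
    if primary_alive:
        return cluster                     # primary healthy: keep polling
    if cluster["role"] == "slave":
        cluster["role"] = "master"         # turn the Postgres slave into a master
        cluster["app_servers"] = 2         # spin up app server instances
        cluster["needs_new_slave"] = True  # bootstrap a replacement slave region
    return cluster
```

The `role == "slave"` guard makes the transition idempotent: repeated failure reports after promotion do not re-run the failover actions.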
    • 27. NEXT STEPS: across AWS zones - 1 application + overrides, 1 cloud driver (supported by Verifi phase #1); across AWS regions - 1 application + overrides, 1 cloud driver; across clouds (AWS, Rackspace, Azure, etc.) - 1 application + overrides, several cloud drivers
    • 28. EVOLUTION PATH: availability grows with complexity, from multi-instance to multi-zone to multi-region to multi-cloud/provider
    • 29. SUMMARY: AWS and outages; outage impact; disaster recovery - it's all about redundancy; cloning your environment - app stack; cloning your DB - replication; Cloudify as a solution for redundancy (use recipes to work on any cloud; fast and customized data replication); demo with Cloudify on EC2
    • 30. QUESTIONS & ANSWERS. Thank you! @natishalom