
Disaster Recovery and Reliability


My presentation on DR and Reliability for hybrid cloud environments.


  1. Disaster Recovery & Reliability. Manish Pandit, 03/26/2018
  2. Why: Define and contextualize Disaster Recovery in a business and technical context without boiling the ocean. In other words, this is a very high-level overview of a topic where each slide could easily be a session on its own.
  3. Sorry for the... math :(
  4. Availability: A measure of the percentage of time a service is in a usable state, often expressed in nines. Scheduled downtime does not count against availability, but may impact customer satisfaction metrics (more so in a B2C model).
  5. Reliability: A measure of the probability that the service is in a usable state over a period of time. Measured as MTBF (Mean Time Between Failures) and the failure rate.
  6. Connecting Reliability & Availability: A database goes down for unscheduled maintenance for one hour in a day. Availability = 23/24, or roughly 95.8% (one nine). MTBF = 23 hours, since I can rely on that database for only 23 hours between failures.
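
     A quick back-of-the-envelope check in Python, assuming the 24-hour window the slide implies:

       # Availability and MTBF for a DB that is down 1 hour in a 24-hour window.
       window_hours = 24
       downtime_hours = 1

       uptime_hours = window_hours - downtime_hours   # 23
       availability = uptime_hours / window_hours     # ~0.958 -> roughly one nine
       mtbf_hours = uptime_hours / 1                  # one observed failure in the window

       print(f"Availability: {availability:.1%}, MTBF: {mtbf_hours} hours")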
  7. Disasters
  8. BCP (Business Continuity Plan): "Business continuity planning (or business continuity and resiliency planning) is the process of creating systems of prevention and recovery to deal with potential threats to a company." - Wikipedia. Usually owned and managed by the COO.
  9. Disaster Recovery: Disaster Recovery starts where High Availability stops.
  10. Disaster Recovery: Disaster Recovery is a component of BCP, covering the technical/infrastructure area. Usually owned and managed by the CTO/CIO.
  11. But... how do we put metrics around a Disaster Recovery Plan?
  12. RPO (Recovery Point Objective): The maximum amount of data loss that is tolerable without significant impact to business continuity. Always defined backwards in time. Ideal value = 0.
  13. RPO: If the RPO is 4 hours, it means you must have (good) backups of data no older than 4 hours. Think about your laptop: how far back in time can you go before the data loss becomes intolerable?
  14. RTO (Recovery Time Objective): Wider than RPO - covers more than just data. The maximum amount of time the system can remain unavailable without significant impact to business continuity. Ideal value = 0.
  15. Source: CloudAcademy
  16. RTO and RPO: If it takes 2 hours to restore the last backup, which was taken 4 hours ago, then the RTO is >= 2 hours and the RPO is >= 4 hours. If a master fails and the slave is 10 minutes behind, your RPO cannot be < 10 minutes. If the application needs to be bounced to update its database connections, and that takes 10 minutes, then the RTO cannot be < 10 minutes.
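
     A minimal sketch of those lower bounds in Python, using the slide's hypothetical numbers:

       # Lower bounds on RPO and RTO for a restore-from-backup recovery.
       backup_age_hours = 4     # last good backup was taken 4 hours before the failure
       restore_hours = 2        # time to restore that backup
       app_bounce_minutes = 10  # time to repoint the app at the restored database

       rpo_floor_hours = backup_age_hours                       # data since the backup is gone
       rto_floor_hours = restore_hours + app_bounce_minutes / 60

       print(f"RPO >= {rpo_floor_hours} h, RTO >= {rto_floor_hours:.2f} h")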
  17. PTO: Paid Time Off following the disaster recovery. *It is more or less a convention to throw PTO in there.
  18. Who decides RTO and RPO? The business does.
  19. That’s easy - get me zero RTO and RPO. Zero RTO and/or RPO is realistically impossible (why?). The business has to establish the tolerable RTO and RPO. This acts as a requirements spec for the DR plan and implementation. These limits also help establish the SLA with customers.
  20. Tolerable? For a bank, an RPO greater than a few minutes = lost transactions. For an online broker, an RTO greater than a few minutes = lost trades. For a media company, an RTO greater than a few minutes = angry tweets. For a static website, weekly backups are acceptable, with an RPO of 1 week. For an HR system, an RPO greater than a day may be acceptable, but an RTO greater than a few hours may not.
  21. Hybrid Cloud: Most companies run a hybrid cloud, meaning the infrastructure is split (usually disproportionately) between on-prem and the public cloud.
  22. Common Failures: Network backbone/ISP outage; software bugs; storage controller/NFS crashes; disruptive changes to security settings/firewalls; corrupt DNS configuration being replicated; AWS/public cloud outage.
  23. Backup & Restore: Regular backups are copied to the recovery site. Infrastructure has to be spun up at the recovery site in the event of a disaster. RPO and RTO can be in hours, if not days. Inexpensive - costs a few hundred dollars a month for storage.
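
     A rough sketch of the backup half of this pattern, assuming a Postgres database, boto3, and a made-up recovery-site bucket name:

       # Hypothetical nightly backup job: dump the database and copy it to a
       # recovery-site bucket. Database name, bucket, and paths are made up.
       import datetime
       import subprocess
       import boto3

       stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
       dump_file = f"/tmp/orders-{stamp}.sql.gz"

       # pg_dump piped through gzip; assumes credentials are in the environment.
       subprocess.run(f"pg_dump orders | gzip > {dump_file}", shell=True, check=True)

       s3 = boto3.client("s3")
       s3.upload_file(dump_file, "example-dr-backups", f"orders/{dump_file.split('/')[-1]}")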
  24. Pilot Light: Infrastructure is provisioned, but needs to be started before taking any traffic (RTO!). Data replication may be a few seconds/minutes behind (RPO!). Lower RTO and RPO than Backup & Restore; a bit more $$ for replication.
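
     A sketch of the "light the pilot light" step, assuming boto3, a placeholder recovery region, and placeholder instance IDs:

       # Hypothetical pilot-light activation: the instances already exist in the
       # recovery region but are stopped; starting them is what drives the RTO.
       import boto3

       ec2 = boto3.client("ec2", region_name="us-west-2")  # recovery region (assumed)
       pilot_light_ids = ["i-0123456789abcdef0"]           # placeholder instance IDs

       ec2.start_instances(InstanceIds=pilot_light_ids)
       ec2.get_waiter("instance_running").wait(InstanceIds=pilot_light_ids)
       print("Pilot light infrastructure is up; repoint DNS/traffic next.")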
  25. Warm Standby: Infrastructure is provisioned and ready to take on traffic. It may need to be scaled up to handle full production load. Data replication may be a few seconds/minutes behind (RPO!). Lower RTO than Pilot Light, more $$ (why?).
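
     A sketch of the scale-up step for a warm standby, assuming boto3 and a hypothetical standby Auto Scaling group:

       # Hypothetical warm-standby scale-up: the standby group runs a skeleton
       # crew and is scaled to production size during failover.
       import boto3

       asg = boto3.client("autoscaling", region_name="us-west-2")  # recovery region (assumed)
       asg.set_desired_capacity(
           AutoScalingGroupName="web-standby",  # placeholder ASG name
           DesiredCapacity=20,                  # production-sized fleet
           HonorCooldown=False,
       )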
  26. Multi-Site: Multiple sites taking live production traffic. Difficult to pull off due to database constraints (multi-master, anyone?). When done right, RPO and RTO of a few seconds to a few minutes. Costs an arm and a leg.
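
     One common way to keep multiple sites live is weighted DNS; a sketch using Route 53, with a placeholder hosted zone, record name, and load balancer hostnames:

       # Hypothetical weighted DNS for an active/active (multi-site) setup:
       # shifting the weights moves traffic between sites during a failover.
       import boto3

       r53 = boto3.client("route53")

       def set_site_weight(identifier, target, weight):
           r53.change_resource_record_sets(
               HostedZoneId="Z0000000EXAMPLE",  # placeholder hosted zone
               ChangeBatch={"Changes": [{
                   "Action": "UPSERT",
                   "ResourceRecordSet": {
                       "Name": "api.example.com.",
                       "Type": "CNAME",
                       "SetIdentifier": identifier,
                       "Weight": weight,
                       "TTL": 60,
                       "ResourceRecords": [{"Value": target}],
                   },
               }]},
           )

       set_site_weight("on-prem", "lb.dc1.example.com", 50)
       set_site_weight("aws", "lb.aws.example.com", 50)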
  27. Multi-Cloud: Mother of them all. Automation to support multiple cloud providers, plus on-prem. RPO and RTO similar to multi-site, but provides isolation at the provider level. Costs an arm, a leg, and a kidney.
  28. So...
  29. Survey the Land: Start with measuring your current RTO and RPO.
  30. Gather Data: You cannot improve what you cannot measure. Bonus - detect anomalies across the board.
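
     A toy example of the "detect anomalies" bonus, flagging metric samples more than two standard deviations from the mean (thresholds and data are illustrative):

       # Flag metric samples that sit far from the mean of the series.
       from statistics import mean, stdev

       def anomalies(samples, z_threshold=2.0):
           mu, sigma = mean(samples), stdev(samples)
           return [x for x in samples if sigma and abs(x - mu) / sigma > z_threshold]

       latencies_ms = [42, 40, 45, 43, 41, 44, 900, 42]  # made-up p99 latency samples
       print(anomalies(latencies_ms))                    # -> [900]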
  31. Runbooks: Write them, and keep them updated.
  32. Review your automation: Follow the pull-request model for infrastructure changes. Unintentionally automating a destructive script is the quickest way to a disaster, e.g. if [ "$ENV" = "prod" ]; then sudo chmod -R a-rx /; fi
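
     One possible guardrail, sketched in Python with made-up environment variable names, is to refuse destructive steps against prod unless the change was approved through a reviewed pull request:

       # Hypothetical guardrail: destructive steps against prod require an
       # approval flag set by the CI/CD pipeline after a reviewed pull request.
       import os
       import sys

       env = os.environ.get("DEPLOY_ENV", "dev")
       approved = os.environ.get("CHANGE_APPROVED") == "true"  # set by the pipeline (assumed)

       if env == "prod" and not approved:
           sys.exit("Refusing to run a destructive change against prod without an approved PR.")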
  33. Practice the DR Plan!
  34. Failure-as-a-Service: Inject failures into the infrastructure as a measure of readiness. Chaos Engineering. Netflix - Simian Army. Amazon Aurora failure injection.
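
     A toy failure injector in the spirit of chaos engineering (not the Simian Army or Aurora implementations), wrapping a call with random latency and random errors:

       # Wrap a function so it occasionally slows down or fails, to test how
       # callers behave when a dependency misbehaves.
       import random
       import time

       def inject_failures(fn, failure_rate=0.05, max_delay_s=2.0):
           def wrapped(*args, **kwargs):
               time.sleep(random.uniform(0, max_delay_s))  # random latency
               if random.random() < failure_rate:
                   raise RuntimeError("injected failure")  # simulated outage
               return fn(*args, **kwargs)
           return wrapped

       fetch_profile = inject_failures(lambda user_id: {"id": user_id})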
  35. Not all components are equal - neither should their DR plans be.
  36. Blast Radius: A DNS failure can take down an entire data center. A faulty switch can take down an entire subnet. A service failure can take down all others dependent on it. A Region failure has a larger blast radius than an Availability Zone failure. A provider failure has a larger blast radius than a Region failure.
  37. Design for Fault Tolerance and Graceful Degradation: Use evented processing vs. synchronous wherever possible.
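
     A minimal sketch of evented (fire-and-forget) processing with graceful degradation, using an in-process queue purely for illustration (a real system would use a durable broker):

       # Enqueue work instead of doing it synchronously on the request path,
       # and degrade gracefully if the queue is full or the worker is behind.
       import queue
       import threading

       events = queue.Queue(maxsize=1000)

       def handle_order(order):
           try:
               events.put_nowait(order)                     # don't block the request path
               return "accepted"
           except queue.Full:
               return "accepted (processing delayed)"       # degrade, don't fail

       def worker():
           while True:
               order = events.get()
               # ... write to the database, call downstream services, etc.
               events.task_done()

       threading.Thread(target=worker, daemon=True).start()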
  38. Dashboards - Internal and External: Service health monitoring is critical... so is ensuring that the monitors themselves can survive a disaster.
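
     A minimal sketch of an external health checker, meant to run outside the primary site so the monitor does not disappear along with what it monitors; the URLs are placeholders:

       # Poll health endpoints from a vantage point outside the primary site.
       import urllib.request

       ENDPOINTS = ["https://api.example.com/health", "https://status.example.com/health"]

       for url in ENDPOINTS:
           try:
               with urllib.request.urlopen(url, timeout=5) as resp:
                   print(url, "OK" if resp.status == 200 else f"unexpected status {resp.status}")
           except Exception as exc:
               print(url, "DOWN:", exc)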
  39. Finally: Make disaster recovery and high availability a topic of discussion during every stage of a project. Ask the hard questions. Embrace failure - learn from it.
