Azure and cloud design patterns
Introduction to Cloud design patterns and azure services.

Azure and cloud design patterns Presentation Transcript

  • 1. Cloud Computing
  • 2. Agenda: design considerations in moving to the cloud; introduction to Windows Azure; demo.
  • 3. Business Benefits of Cloud Computing Almost zero upfront infrastructure investment. Just-in-time infrastructure. More efficient resource utilization. Usage based costing. Reduced time to Market.
  • 4. Cloud Computing
  • 5. Cloud Computing
  • 6. Challenges Faced by Apps in the Cloud. Application scalability: the cloud promises rapid (de)provisioning of resources; how do you tap into that to create scalable systems? Application availability: underlying resource failures happen, usually more frequently than in traditional data centers; how do you overcome that to create highly available systems?
  • 7. The Scalability Challenge. There are two different components to scale: state (inputs, data store, output) and behavior (business logic). Any non-trivial application has both, and scaling one component means scaling the other, too.
  • 8. Scalability Considerations: performance vs. scalability; latency vs. throughput; availability vs. consistency. How do you manage overload?
  • 9. Scalable Service. A scalable architecture is critical to taking advantage of scalable infrastructure. Characteristics of a scalable service: increasing resources results in a proportional increase in performance; it is capable of handling heterogeneity; it is operationally efficient; it is resilient; and it becomes more cost effective as it grows.
  • 10. 1. Design for Failure, and nothing will really fail. Avoid single points of failure. Assume everything fails and design backwards. Applications should continue to function even if the physical hardware fails or is removed or replaced.
  • 11. Design for Failure, contd. The unit of failure is a single host. Where possible, choose services and infrastructure that assume host failures happen. By building simple services composed of a single host, rather than multiple dependent hosts, one can create replicated service instances that survive host failures. Make your services small and stateless. Relax consistency requirements.
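The replicated, stateless-service idea above can be sketched in a few lines: because any replica can serve a request, a single host failure does not fail the call. The host names, the failure set, and `call_replica` are all hypothetical stand-ins, not a real cloud API.

```python
# Hypothetical sketch: retrying a stateless request across replicated
# service instances so that a single host failure is survivable.
FAILED_HOSTS = {"host-a"}                  # simulated failed host
REPLICAS = ["host-a", "host-b", "host-c"]  # replicated service instances

def call_replica(host, request):
    # Placeholder for a network call that raises when the host is down.
    if host in FAILED_HOSTS:
        raise ConnectionError(host + " is down")
    return "handled " + request + " on " + host

def resilient_call(request):
    # The service is stateless, so any replica can serve the request;
    # try each in turn and succeed on the first healthy one.
    for host in REPLICAS:
        try:
            return call_replica(host, request)
        except ConnectionError:
            continue
    raise RuntimeError("all replicas failed")
```

With `host-a` simulated as down, `resilient_call("GET /status")` is served by `host-b`, the first healthy replica.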
  • 12. 2. Build Loosely Coupled Systems. The looser they are coupled, the bigger they scale. Use loosely coupled dependencies and avoid complex designs and interactions. Best practices: tiered architecture, scale-out units, single role.
  • 13. 3. Implement Elasticity. Elasticity is a fundamental property of the cloud: the ability to add and remove capacity as and when it is required. Use Elastic Load Balancing. Use Auto-Scaling (free).
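A toy sketch of the elasticity idea, assuming a simple threshold-based policy; the thresholds and instance bounds here are illustrative only, not taken from any specific provider's auto-scaling service.

```python
def desired_instances(current, cpu_utilization, minimum=2, maximum=10):
    """Threshold-based scaling decision: scale out under load, scale in when idle."""
    if cpu_utilization > 0.75:     # overloaded: add capacity
        return min(current + 1, maximum)
    if cpu_utilization < 0.25:     # underused: remove capacity
        return max(current - 1, minimum)
    return current                 # within the comfortable band: no change
```

Real auto-scaling services apply the same shape of rule, but with cooldown periods and averaged metrics so that brief spikes do not cause capacity to oscillate.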
  • 14. 4. Build Security into Every Layer. Design with security in mind. Encrypt data at rest. Encrypt data in transit (SSL). Consider an encrypted file system for sensitive data. Rotate your credentials and pass in arguments encrypted. Use multi-factor authentication. Restrict external access to specific IP ranges.
  • 15. 5. Don’t Fear Constraints. Re-think architectural constraints. Need more RAM? Distribute load across machines or use a shared distributed cache. Need better IOPS on the database? Use multiple read-only replicas or sharding. Need performance? Cache at different levels.
  • 16. 6. Think Parallel. Serial and sequential processing is now history. Experiment with different architectures in parallel. Use multi-threading and concurrent requests to cloud services. Run parallel MapReduce jobs. Use Elastic Load Balancing to distribute load across multiple servers. Decompose a job into its simplest form, with “shared nothing”.
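The “concurrent requests to cloud services” point can be sketched with a thread pool; `fetch_resource` is a hypothetical stand-in for an I/O-bound service call.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_resource(item):
    # Placeholder for an I/O-bound call (e.g., an HTTP request to a service).
    return item * 2

items = [1, 2, 3, 4, 5]

# Sequential: total time is the sum of all call latencies.
sequential = [fetch_resource(i) for i in items]

# Parallel, shared-nothing: independent calls overlap their latencies.
with ThreadPoolExecutor(max_workers=5) as pool:
    parallel = list(pool.map(fetch_resource, items))

assert sequential == parallel  # same results; less wall-clock time for real I/O
```

This works precisely because the calls share nothing: each item is processed independently, so the requests can be issued in any order or all at once.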
  • 17. 7. Leverage Many Storage Options. One size does not fit all. Amazon S3 / Azure Blob: large static objects. Amazon SimpleDB / Azure Tables: data indexing and querying. Amazon RDS / SQL Azure: managed RDBMS service (automated and managed MySQL / Azure). Amazon CloudFront / Azure CDN: content distribution.
  • 18. Cloud Architecture Lessons Design for failure and nothing fails. Loose coupling sets you free. Implement Elasticity. Build Security in every layer. Don’t fear constraints. Think Parallel. Leverage many storage options.
  • 19. Windows
  • 20. Why Windows Azure?
  • 21. Azure Overview
  • 22. Execution Models
  • 23. Application Scenarios
  • 24. Data Management
  • 25. Storage: What are our options?
  • 26. Blob Storage Concepts
  • 27. Networking
  • 28. Business Analytics
  • 29. Messaging
  • 30. Caching
  • 31. Stages of Service Deployment
  • 32. Packaging & Deployment
  • 33. Questions
  • 34. References
  • 37. App Scalability Patterns for State. Data Grids. Distributed Caching. HTTP Caching: Reverse Proxy, CDN. Concurrency: Message-Passing, Dataflow, Software Transactional Memory, Shared-State. Partitioning. CAP theorem and Data Consistency: Eventually Consistent, Atomic Data. DB Strategies: RDBMS (Denormalization, Sharding); NoSQL (Key-Value store, Document store, Data Structure store, Graph database).
  • 38. App Scalability Patterns for Behavior. Compute Grids. Event-Driven Architecture: Messaging, Actors, Enterprise Service Bus, Domain Events, Event Stream Processing, Event Sourcing, Command &amp; Query Responsibility Segregation (CQRS). Load Balancing: Round-robin, Random, Weighted, Dynamic. Parallel Computing: Master/Worker, Fork/Join, MapReduce, SPMD, Loop Parallelism.
  • 39. The Availability Challenge. Availability means tolerating failures. Traditional IT focuses on increasing MTTF (Mean Time to Failure); cloud IT focuses on reducing MTTR (Mean Time to Recovery).
  • 40. Data Modelling. Classic distributed systems focused on ACID semantics. Atomicity: either the operation (e.g., a write) is performed on all replicas or it is not performed on any of them. Consistency: after each operation, all replicas reach the same state. Isolation: no operation (e.g., a read) can see the data from another operation (e.g., a write) in an intermediate state. Durability: once a write has succeeded, that write will persist indefinitely. Modern Internet systems focus instead on BASE: Basically Available, Soft-state (or scalable), Eventually consistent.
  • 41. CAP Theorem. Any distributed system involves three properties (CAP). Strong Consistency: all clients see the same view, even in the presence of updates. High Availability: all clients can find some replica of the data, even in the presence of failures. Partition-tolerance: the system’s properties hold even when the system is partitioned. Per the CAP theorem, you can have only two of these three properties; the choice of which to discard determines the nature of your system.
  • 42. MapReduce. A model for processing large data sets. Many tasks consist of processing lots of data to produce lots of other data, and want to use hundreds or thousands of CPUs, but this needs to be easy. The model consists of Map and Reduce functions.
  • 43. Programming Model. Input and output are each a set of key/value pairs. The programmer specifies two functions. map(in_key, in_value) -> list(out_key, intermediate_value): processes an input key/value pair and produces a set of intermediate pairs. reduce(out_key, list(intermediate_value)) -> list(out_value): combines all intermediate values for a particular key and produces a set of merged output values (usually just one).
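The two user-supplied functions above can be sketched directly, using word count as the running example; the function names are illustrative, not part of any framework API.

```python
def map_fn(in_key, in_value):
    # in_key: document name; in_value: document text.
    # Emits a list of (out_key, intermediate_value) pairs.
    return [(word, 1) for word in in_value.split()]

def reduce_fn(out_key, intermediate_values):
    # Combines all intermediate values for one key into (usually) one output.
    return [sum(intermediate_values)]
```

For example, `map_fn("Page 2", "today is good")` emits `(today, 1)`, `(is, 1)`, `(good, 1)`, and `reduce_fn` later sums all the 1s emitted for each word.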
  • 44. So How Does It Work?
  • 45. So How Does It Work?
  • 46. Example Page 1: the weather is good Page 2: today is good Page 3: good weather is good.
  • 47. Map output Worker 1:  (the 1), (weather 1), (is 1), (good 1). Worker 2:  (today 1), (is 1), (good 1). Worker 3:  (good 1), (weather 1), (is 1), (good 1).
  • 48. Reduce Input Worker 1:  (the 1) Worker 2:  (is 1), (is 1), (is 1) Worker 3:  (weather 1), (weather 1) Worker 4:  (today 1) Worker 5:  (good 1), (good 1), (good 1), (good 1)
  • 49. Reduce Output Worker 1:  (the 1) Worker 2:  (is 3) Worker 3:  (weather 2) Worker 4:  (today 1) Worker 5:  (good 4)
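The whole worked example above can be reproduced in a few lines of sequential code; the grouping step in the middle is the “shuffle” that the framework performs between the map and reduce phases.

```python
from collections import defaultdict

pages = {
    "Page 1": "the weather is good",
    "Page 2": "today is good",
    "Page 3": "good weather is good",
}

# Map phase: each page independently emits (word, 1) pairs.
intermediate = []
for name, text in pages.items():
    intermediate.extend((word, 1) for word in text.split())

# Shuffle: group intermediate values by key (word).
groups = defaultdict(list)
for word, count in intermediate:
    groups[word].append(count)

# Reduce phase: sum the counts for each word.
counts = {word: sum(values) for word, values in groups.items()}
```

The result matches the reduce output on the previous slide: the 1, weather 2, is 3, good 4, today 1. In a real cluster the map calls and the reduce calls each run in parallel across workers; only the shuffle requires coordination.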
  • 50. Conclusion: MapReduce. MapReduce has proven to be a useful abstraction that greatly simplifies large-scale computations. It is fun to use: focus on the problem and let the library deal with the messy details.