Building A Scalable Architecture
 

This is a presentation I delivered at the Great Indian Developer Summit 2008. It covers a wide-array of topics and a plethora of lessons we have learnt (some the hard way) over the last 9 years in building web apps that are used by millions of users serving billions of page views every month. Topics and Techniques include Vertical scaling, Horizontal Scaling, Vertical Partitioning, Horizontal Partitioning, Loose Coupling, Caching, Clustering, Reverse Proxying and more.


    Presentation Transcript

    • Building a Scalable Architecture for Web Apps - Part I (Lessons Learned @ Directi)
      • By Bhavin Turakhia
      • CEO, Directi
      • ( http://www.directi.com | http://wiki.directi.com | http://careers.directi.com )
      Licensed under Creative Commons Attribution Sharealike Noncommercial
    • Agenda
      • Why is Scalability important
      • Introduction to the Variables and Factors
      • Building our own Scalable Architecture (in incremental steps)
        • Vertical Scaling
        • Vertical Partitioning
        • Horizontal Scaling
        • Horizontal Partitioning
        • … etc
      • Platform Selection Considerations
      • Tips
    • Why is Scalability Important in a Web 2.0 world
      • Viral marketing can result in instant successes
      • RSS / Ajax / SOA
        • pull based / polling type
        • XML protocols - metadata often outweighs the actual data
        • Number of Requests exponentially grows with user base
      • RoR / Grails – Dynamic language landscape gaining popularity
      • In the end you want to build a Web 2.0 app that can serve millions of users with ZERO downtime
    • The Variables
      • Scalability - Number of users / sessions / transactions / operations the entire system can perform
      • Performance – Optimal utilization of resources
      • Responsiveness – Time taken per operation
      • Availability - Probability of the application or a portion of the application being available at any given point in time
      • Downtime Impact - The impact of a downtime of a server/service/resource - number of users, type of impact etc
      • Cost
      • Maintenance Effort
      Goals: High - scalability, availability, performance & responsiveness; Low - downtime impact, cost & maintenance effort
    • The Factors
      • Platform selection
      • Hardware
      • Application Design
      • Database/Datastore Structure and Architecture
      • Deployment Architecture
      • Storage Architecture
      • Abuse prevention
      • Monitoring mechanisms
      • … and more
    • Let's Start …
      • We will now build an example architecture for an example app using the following iterative incremental steps –
        • Inspect current Architecture
        • Identify Scalability Bottlenecks
        • Identify SPOFs and Availability Issues
        • Identify Downtime Impact Risk Zones
        • Apply one of -
          • Vertical Scaling
          • Vertical Partitioning
          • Horizontal Scaling
          • Horizontal Partitioning
        • Repeat process
    • Step 1 – Let's Start … (diagram: a single node running both AppServer & DBServer)
    • Step 2 – Vertical Scaling (diagram: the same node scaled up with additional CPUs and RAM)
    • Step 2 - Vertical Scaling
      • Introduction
        • Increasing the hardware resources without changing the number of nodes
        • Referred to as “Scaling up” the Server
      • Advantages
        • Simple to implement
      • Disadvantages
        • Finite limit
        • Hardware does not scale linearly (diminishing returns for each incremental unit)
        • Requires downtime
        • Increases Downtime Impact
        • Incremental costs increase exponentially
    • Step 3 – Vertical Partitioning (Services) (diagram: AppServer and DBServer on separate nodes)
      • Introduction
        • Deploying each service on a separate node
      • Positives
        • Increases per application Availability
        • Task-based specialization, optimization and tuning possible
        • Reduces context switching
        • Simple to implement for out of band processes
        • No changes to App required
        • Flexibility increases
      • Negatives
        • Sub-optimal resource utilization
        • May not increase overall availability
        • Finite Scalability
    • Understanding Vertical Partitioning
      • The term Vertical Partitioning denotes –
        • Increase in the number of nodes by distributing the tasks/functions
        • Each node (or cluster) performs separate Tasks
        • Each node (or cluster) is different from the other
      • Vertical Partitioning can be performed at various layers (App / Server / Data / Hardware etc)
    • Step 4 – Horizontal Scaling (App Server) (diagram: a Load Balancer in front of multiple AppServers, backed by one DBServer)
      • Introduction
        • Increasing the number of nodes of the App Server through Load Balancing
        • Referred to as “Scaling out” the App Server
    • Understanding Horizontal Scaling
      • The term Horizontal Scaling denotes –
        • Increase in the number of nodes by replicating the nodes
        • Each node performs the same Tasks
        • Each node is identical
        • Typically the collection of nodes may be known as a cluster (though the term cluster is often misused)
        • Also referred to as “Scaling Out”
      • Horizontal Scaling can be performed for any particular type of node (AppServer / DBServer etc)
    • Load Balancer – Hardware vs Software
      • Hardware Load balancers are faster
      • Software Load balancers are more customizable
      • With HTTP servers, load balancing is typically combined with HTTP accelerators
    • Load Balancer – Session Management
      • Sticky Sessions
        • Requests for a given user are sent to a fixed App Server
        • Observations
          • Asymmetrical load distribution (especially during downtimes)
          • Downtime Impact – Loss of session data
      (diagram: sticky sessions pin User 1 and User 2 each to a fixed AppServer behind the Load Balancer)
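The sticky-session idea above can be sketched in a few lines of Python. This is illustrative only: the server names and hashing scheme are hypothetical, and real load balancers typically key on a session cookie rather than hashing directly.

```python
# Toy sticky-session router: the same session id always maps to the
# same app server, mimicking what an LB with sticky sessions does.
import hashlib

APP_SERVERS = ["app1", "app2", "app3"]

def route(session_id: str) -> str:
    # Stable hash of the session id -> a fixed server for that user.
    digest = hashlib.md5(session_id.encode()).hexdigest()
    return APP_SERVERS[int(digest, 16) % len(APP_SERVERS)]
```

Note how this exhibits the downsides listed above: if a server goes down its sessions are lost, and a naive modulo scheme also reshuffles most users whenever the server list changes.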
    • Load Balancer – Session Management
      • Central Session Store
        • Introduces SPOF
        • An additional variable
        • Session reads and writes generate Disk + Network I/O
        • Also known as a Shared Session Store Cluster
      (diagram: AppServers behind the Load Balancer sharing a central Session Store)
    • Load Balancer – Session Management
      • Clustered Session Management
        • Easier to setup
        • No SPOF
        • Session reads are instantaneous
        • Session writes generate Network I/O
        • Network I/O increases exponentially with increase in number of nodes
        • In very rare circumstances a request may get stale session data
          • User request reaches subsequent node faster than intra-node message
          • Intra-node communication fails
        • AKA Shared-nothing Cluster
      (diagram: AppServers behind the Load Balancer replicating session data among themselves)
    • Load Balancer – Session Management
      • Sticky Sessions with Central Session Store
        • Downtime does not cause loss of data
        • Session reads need not generate network I/O
      • Sticky Sessions with Clustered Session Management
        • No specific advantages
    • Load Balancer – Session Management
      • Recommendation
        • Use Clustered Session Management if you have –
          • Smaller Number of App Servers
          • Fewer Session writes
        • Use a Central Session Store elsewhere
        • Use sticky sessions only if you have to
    • Load Balancer – Removing SPOF
      • In a Load Balanced App Server Cluster the LB is an SPOF
      • Setup LB in Active-Active or Active-Passive mode
        • Note: Active-Active nevertheless assumes that each LB is independently able to take up the load of the other
        • If one wants ZERO downtime, Active-Active becomes truly cost-beneficial only if multiple LBs (more than 3 to 4) are daisy-chained as Active-Active, forming an LB Cluster
      (diagrams: an Active-Passive LB pair and an Active-Active LB pair, each fronting the AppServer cluster)
    • Step 4 – Horizontal Scaling (App Server)
      • Our deployment at the end of Step 4
      • Positives
        • Increases Availability and Scalability
        • No changes to App required
        • Easy setup
      • Negatives
        • Finite Scalability
    • Step 5 – Vertical Partitioning (Hardware) (diagram: DBServer storage moved out to a SAN)
      • Introduction
        • Partitioning out the Storage function using a SAN
      • SAN config options
        • Refer to “Demystifying Storage” at http://wiki.directi.com -> Dev University -> Presentations
      • Positives
        • Allows “Scaling Up” the DB Server
        • Boosts Performance of DB Server
      • Negatives
        • Increases Cost
    • Step 6 – Horizontal Scaling (DB)
      • Introduction
        • Increasing the number of DB nodes
        • Referred to as “Scaling out” the DB Server
      • Options
        • Shared nothing Cluster
        • Real Application Cluster (or Shared Storage Cluster)
      (diagram: multiple DBServer nodes added alongside the SAN and the load balanced App Servers)
    • Shared Nothing Cluster
      • Each DB Server node has its own complete copy of the database
      • Nothing is shared between the DB Server Nodes
      • This is achieved through DB Replication at DB / Driver / App level or through a proxy
      • Supported by most RDBMSs natively or through 3rd-party software
      (diagram: each DBServer with its own copy of the Database) Note: actual DB files may be stored on a central SAN
    • Replication Considerations
      • Master-Slave
        • Writes are sent to a single master which replicates the data to multiple slave nodes
        • Replication may be cascaded
        • Simple setup
        • No conflict management required
      • Multi-Master
        • Writes can be sent to any of the multiple masters which replicate them to other masters and slaves
        • Conflict Management required
        • Deadlocks possible if same data is simultaneously modified at multiple places
    • Replication Considerations
      • Asynchronous
        • Guaranteed, but out-of-band replication from Master to Slave
        • Master updates its own db and returns a response to client
        • Replication from Master to Slave takes place asynchronously
        • Faster response to a client
        • Slave data is marginally behind the Master
        • Requires modification to App to send critical reads and writes to master, and load balance all other reads
      • Synchronous
        • Guaranteed, in-band replication from Master to Slave
        • Master updates its own db, and confirms all slaves have updated their db before returning a response to client
        • Slower response to a client
        • Slaves have the same data as the Master at all times
        • Requires modification to App to send writes to master and load balance all reads
    • Replication Considerations
      • Replication at RDBMS level
        • Support may exist in the RDBMS or through a 3rd-party tool
        • Faster and more reliable
        • App must send writes to Master, reads to any db and critical reads to Master
      • Replication at Driver / DAO level
        • Driver / DAO layer ensures
          • writes are performed on all connected DBs
          • Reads are load balanced
          • Critical reads are sent to a Master
        • In most cases RDBMS agnostic
        • Slower and in some cases less reliable
    • Real Application Cluster
      • All DB Servers in the cluster share a common storage area on a SAN
      • All DB servers mount the same block device
      • The filesystem must be a clustered file system (e.g. GFS / OCFS)
      • Currently only supported by Oracle Real Application Cluster
      • Can be very expensive (licensing fees)
      (diagram: multiple DBServers mounting a single shared Database on the SAN)
    • Recommendation
      • Try and choose a DB which natively supports Master-Slave replication
      • Use Master-Slave Async replication
      • Write your DAO layer to ensure
        • writes are sent to a single DB
        • reads are load balanced
        • Critical reads are sent to a master
      (diagram: writes & critical reads go to one DBServer, other reads are load balanced across the remaining DBServers)
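A minimal sketch of the recommended DAO-level routing, with hypothetical connection names: writes and critical reads go to the master, all other reads are round-robined across slaves.

```python
# DAO routing sketch: master takes writes and critical reads; plain
# reads are load balanced round-robin across the slave replicas.
import itertools

class RoutingDAO:
    def __init__(self, master, slaves):
        self.master = master
        self._slaves = itertools.cycle(slaves)  # round-robin iterator

    def connection_for(self, query_type: str) -> str:
        if query_type in ("write", "critical_read"):
            return self.master
        return next(self._slaves)  # any other read
```

A real DAO would hand back connections rather than names, but the routing decision is the same.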
    • Step 6 – Horizontal Scaling (DB)
      • Our architecture now looks like this
      • Positives
        • As Web servers grow, Database nodes can be added
        • DB Server is no longer SPOF
      • Negatives
        • Finite limit
    • Step 6 – Horizontal Scaling (DB)
      • Shared nothing clusters have a finite scaling limit
        • Reads to Writes – 2:1
        • So 8 Reads => 4 writes
        • 2 DBs
          • Per db – 4 reads and 4 writes
        • 4 DBs
          • Per db – 2 reads and 4 writes
        • 8 DBs
          • Per db – 1 read and 4 writes
      • At some point adding another node brings in negligible incremental benefit
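The arithmetic above can be reproduced directly: in a shared-nothing replicated cluster, reads are divided across nodes but every write must be applied on every node.

```python
# Per-node load in a shared-nothing cluster for the slide's workload of
# 8 reads and 4 writes: reads split across nodes, writes hit every node.
def per_node_load(total_reads: int, total_writes: int, nodes: int):
    return (total_reads / nodes, total_writes)

# 2 nodes -> (4.0, 4); 4 nodes -> (2.0, 4); 8 nodes -> (1.0, 4).
# The read share keeps shrinking, but the write load never does,
# which is exactly why the incremental benefit approaches zero.
```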
    • Step 7 – Vertical / Horizontal Partitioning (DB)
      • Introduction
        • Increasing the number of DB Clusters by dividing the data
      • Options
        • Vertical Partitioning - Dividing tables / columns
        • Horizontal Partitioning - Dividing by rows (value)
    • Vertical Partitioning (DB)
      • Take a set of tables and move them onto another DB
        • E.g. in a social network, the users table and the friends table can be on separate DB clusters
      • Each DB Cluster has different tables
      • Application code or DAO / Driver code or a proxy knows where a given table is and directs queries to the appropriate DB
      • Can also be done at a column level by moving a set of columns into a separate table
      (diagram: Tables 1 & 2 on DB Cluster 1, Tables 3 & 4 on DB Cluster 2, both behind the App Cluster)
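The routing described above reduces to a table-to-cluster map consulted by the DAO or proxy. The table and cluster names below are hypothetical, following the social-network example.

```python
# Vertical partitioning: each table lives on exactly one DB cluster,
# and the DAO looks up the owning cluster before issuing a query.
TABLE_TO_CLUSTER = {
    "users": "db-cluster-1",
    "friends": "db-cluster-2",  # users and friends split, per the example
    "photos": "db-cluster-2",
}

def cluster_for(table: str) -> str:
    return TABLE_TO_CLUSTER[table]
```

Because `users` and `friends` live on different clusters, a join between them can no longer be expressed in SQL, which is the negative the next slide calls out.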
    • Vertical Partitioning (DB)
      • Negatives
        • One cannot perform SQL joins or maintain referential integrity across the separated clusters (referential integrity is, as such, overrated)
        • Finite Limit
    • Horizontal Partitioning (DB)
      • Take a set of rows and move them onto another DB
        • E.g. in a social network, each DB Cluster can contain all data for 1 million users
      • Each DB Cluster has identical tables
      • Application code or DAO / Driver code or a proxy knows where a given row is and directs queries to the appropriate DB
      • Negatives
        • SQL unions for search type queries must be performed within code
      (diagram: DB Cluster 1 and DB Cluster 2 with identical tables, 1 million users on each)
    • Horizontal Partitioning (DB)
      • Techniques
        • FCFS
          • The 1st million users are stored on cluster 1 and the next million on cluster 2
        • Round Robin
        • Least Used (Balanced)
          • Each time a new user is added, a DB cluster with the least users is chosen
        • Hash based
          • A hashing function is used to determine the DB Cluster in which the user data should be inserted
        • Value Based
          • User ids 1 to 1 million stored in cluster 1 OR
          • all users with names starting from A-M on cluster 1
        • Except for Hash and Value based, all other techniques also require an independent lookup map, mapping each user to a Database Cluster
        • This map itself will be stored on a separate DB (which may further need to be replicated)
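Two of the placement techniques above, sketched with hypothetical cluster ids, show why only computed placement avoids the lookup map:

```python
# Hash based: the owning cluster is computable from the user id alone,
# so no lookup map is needed.
def hash_cluster(user_id: int, num_clusters: int) -> int:
    return user_id % num_clusters

# Least used: pick the emptiest cluster, then record the decision in a
# lookup map (which, as noted above, must itself live in a DB).
lookup_map = {}
cluster_counts = {1: 0, 2: 0}

def assign_least_used(user_id: int) -> int:
    cluster = min(cluster_counts, key=cluster_counts.get)
    cluster_counts[cluster] += 1
    lookup_map[user_id] = cluster
    return cluster
```

The trade-off: hash-based placement cannot rebalance without rehashing users, while map-based schemes can move users freely at the cost of an extra (replicated) lookup database.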
    • Step 7 – Vertical / Horizontal Partitioning (DB) (diagram: a Lookup Map added in front of the DB Clusters)
      • Our architecture now looks like this
      • Positives
        • As App servers grow, Database Clusters can be added
      • Note: This is not the same as table partitioning provided by the db (eg MSSQL)
      • We may actually want to further segregate these into Sets, each serving a collection of users (refer next slide)
    • Step 8 – Separating Sets (diagram: a Global Redirector with a Global Lookup Map routing to SET 1 and SET 2, 10 million users each, with each Set keeping its own Lookup Map)
      • Now we consider each deployment as a single Set serving a collection of users
    • Creating Sets
      • The goal behind creating sets is easier manageability
      • Each Set is independent and handles transactions for a set of users
      • Each Set is architecturally identical to the other
      • Each Set contains the entire application with all its data structures
      • Sets can even be deployed in separate datacenters
      • Users may even be added to a Set that is closer to them in terms of network latency
    • Step 8 – Horizontal Partitioning (Sets) (diagram: a Global Redirector in front of SET 1 and SET 2, each a complete stack of App Server cluster, DB Clusters and SAN)
      • Our architecture now looks like this
      • Positives
        • Infinite Scalability
      • Negatives
        • Aggregation of data across sets is complex
        • Users may need to be moved across Sets if sizing is improper
        • Global App settings and preferences need to be replicated across Sets
    • Step 9 – Caching
      • Add caches within App Server
        • Object Cache
        • Session Cache (especially if you are using a Central Session Store)
        • API cache
        • Page cache
      • Software
        • Memcached
        • Terracotta (Java only)
        • Coherence (commercial expensive data grid by Oracle)
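The object cache above typically follows a cache-aside pattern. This sketch uses an in-process dict standing in for memcached; the class, key names, and TTL handling are illustrative only.

```python
# Cache-aside sketch: check the cache first, fall through to the DB
# loader on a miss, and store the result with an expiry time.
import time

class ObjectCache:
    def __init__(self, ttl_seconds: int = 300):
        self._store = {}          # key -> (value, expires_at)
        self._ttl = ttl_seconds

    def get_or_load(self, key, loader):
        entry = self._store.get(key)
        if entry and entry[1] > time.time():
            return entry[0]       # cache hit
        value = loader()          # cache miss: hit the DB
        self._store[key] = (value, time.time() + self._ttl)
        return value
```

With this shape, every read the cache absorbs is a read the DB cluster no longer has to serve, which directly relieves the shared-nothing scaling limit discussed earlier.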
    • Step 10 – HTTP Accelerator
      • If your app is a web app you should add an HTTP Accelerator or a Reverse Proxy
      • A good HTTP Accelerator / Reverse proxy performs the following –
        • Redirect static content requests to a lighter HTTP server (lighttpd)
        • Cache content based on rules (with granular invalidation support)
        • Use Async NIO on the user side
        • Maintain a limited pool of Keep-alive connections to the App Server
        • Intelligent load balancing
      • Solutions
        • Nginx (HTTP / IMAP)
        • Perlbal
        • Hardware accelerators plus Load Balancers
    • Step 11 – Other cool stuff
      • CDNs
      • IP Anycasting
      • Async Nonblocking IO (for all Network Servers)
      • If possible - Async Nonblocking IO for disk
      • Incorporate multi-layer caching strategy where required
        • L1 cache – in-process with App Server
        • L2 cache – across network boundary
        • L3 cache – on disk
      • Grid computing
        • Java – GridGain
        • Erlang – natively built in
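The L1/L2/L3 lookup order above amounts to a read-through chain: check the fastest layer first, fall back, and promote values on the way back up. A dict-based sketch, with plain dicts standing in for the in-process, network, and disk layers:

```python
# Multi-layer read-through cache: layers ordered fastest (L1) first.
class LayeredCache:
    def __init__(self, *layers):
        self.layers = layers

    def get(self, key, loader):
        for i, layer in enumerate(self.layers):
            if key in layer:
                # Hit at layer i: promote into the faster layers that missed.
                for faster in self.layers[:i]:
                    faster[key] = layer[key]
                return layer[key]
        value = loader()          # all layers missed: hit the source
        for layer in self.layers:
            layer[key] = value
        return value
```

In a real deployment L1 would be an in-process map, L2 a memcached-style store across the network, and L3 an on-disk cache, as the slide lists.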
    • Platform Selection Considerations
      • Programming Languages and Frameworks
        • Dynamic languages are slower than static languages
        • Compiled code runs faster than interpreted code -> use accelerators or pre-compilers
        • Frameworks that provide Dependency Injections, Reflection, Annotations have a marginal performance impact
        • ORMs hide DB querying which can in some cases result in poor query performance due to non-optimized querying
      • RDBMS
        • MySQL, MSSQL and Oracle support native replication
        • Postgres supports replication through 3rd-party software (Slony)
        • Oracle supports Real Application Clustering
        • MySQL uses locking and arbitration, while Postgres/Oracle use MVCC (MSSQL just recently introduced MVCC)
      • Cache
        • Terracotta vs memcached vs Coherence
    • Tips
      • All the techniques we learnt today can be applied in any order
      • Try and incorporate Horizontal DB partitioning by value from the beginning into your design
      • Loosely couple all modules
      • Implement a REST-ful framework for easier caching
      • Perform application sizing on an ongoing basis to ensure optimal utilization of hardware
    • Questions?? bhavin.t@directi.com http://directi.com http://careers.directi.com Download slides: http://wiki.directi.com