Service Primitives for Internet Scale Applications
A general framework to describe internet scale applications and characterize the functional properties that can be traded away to improve the following operational metrics:

* Throughput (how many user requests/sec?)

* Interactivity (latency, how fast user requests finish?)

* Availability (% of time user perceives service as up), including fast recovery to improve availability

* TCO (Total Cost of Ownership)

Transcript

  • 1. Service Primitives for Internet Scale Applications Amr Awadallah, Armando Fox, Ben Ling Computer Systems Lab Stanford University
  • 2. Interactive Internet-Scale Application?
    • Millions of users.
    [Architecture diagram: a global load balancer routes users across data centers; within each data center, a local LB fronts presentation servers with caches ($), which call application servers with caches, backed by replicated state with fail-over.]
  • 3. Motivation
    • A general framework to describe IIAs (interactive Internet-scale applications) and characterize the functional properties that can be traded away to improve the following operational metrics:
      • Throughput (how many user requests/sec?)
      • Interactivity (latency, how fast user requests finish?)
      • Availability (% of time user perceives service as up), including fast recovery to improve availability
      • TCO (Total Cost of Ownership)
    • In particular, enumerate architectural primitives that expose partial degradation of functional properties and illustrate how they can be built with “commodity” HW.
  • 4. Recall ACID
    • Atomicity: For a transaction involving two or more discrete pieces of information, either all pieces changed are committed or none.
    • Consistency: A transaction creates a new valid state obeying all user integrity constraints.
    • Isolation: Changes from non-committed transactions remain hidden from all other concurrent transactions (isolation levels: Serializable, Repeatable Read, Read Committed, Read Uncommitted).
    • Durability: Committed data survives beyond system restarts and storage failures.
  • 5. ACID is too much for Internet scale
    • Yahoo UDB: tens of thousands of reads/sec, up to 10k writes/sec
    • Geoplexing used for both disaster recovery and scalability, but eager replication (strong consistency) across replicas scales poorly
      • If total DB size grows with # nodes, deadlock rate increases at the same rate as number of nodes
      • If DB size grows sublinearly, deadlock rate increases as cube of number of nodes
    • Even if we could use transactional DBs and eager replication, the cost would be too high
  • 6. The New Properties
    • Durability (State): Hard, Soft, Stateless
    • Consistency: Strong, Eventual, Weak, NonC
    • Completeness: Full, Incomp-R (incomplete reads), Lossy-W (lossy writes)
    • Visibility: User, Entity, World
  • 7. Durability (Hard, Soft, Stateless)
    • Hard: This is permanent state in the original sense of the D in ACID.
    • Soft: This is temporary storage in the RAM sense, i.e. if power fails then data is lost. This is cheaper and acceptable if user can rebuild state quickly.
    • Stateless: No need to store state on behalf of the user.
  • 8. Consistency (Strong, Eventual, Weak)
    • Eventual: after a write, there is some time t after which all reads see the new value (e.g., caching)
    • Strong: in addition, before time t, no reads see the new value (single-copy ACID consistency)
    • Weak: This is weak consistency in the TACT sense - captures ordering inaccuracies, or persistent staleness.
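    The strong vs. eventual distinction above can be sketched as a toy single-process model: writes land on a primary immediately, replicas lag until a propagation step (the "time t" in the slide). All names here (ReplicatedRegister, propagate, etc.) are illustrative inventions, not from the paper.

    ```python
    class ReplicatedRegister:
        def __init__(self, n_replicas=3):
            self.primary = None
            self.replicas = [None] * n_replicas
            self.pending = []          # writes not yet applied to replicas

        def write(self, value):
            """Write hits the primary immediately; replicas lag."""
            self.primary = value
            self.pending.append(value)

        def read_strong(self):
            """Strong consistency: always read the single primary copy."""
            return self.primary

        def read_eventual(self, replica=0):
            """Eventual consistency: may return a stale value until propagate()."""
            return self.replicas[replica]

        def propagate(self):
            """After this runs (time t in the slide), all reads see the new value."""
            for value in self.pending:
                self.replicas = [value] * len(self.replicas)
            self.pending.clear()


    reg = ReplicatedRegister()
    reg.write("v1")
    print(reg.read_strong())    # "v1" right away
    print(reg.read_eventual())  # None: the replica is still stale
    reg.propagate()             # time t has passed
    print(reg.read_eventual())  # "v1"
    ```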
  • 9. Completeness (Full, Incomp, Lossy)
    • Complete: all updates either succeed, or fail synchronously. All queries return 100% accurate data.
    • Incomplete Queries: aggregated lossy reads over partitioned state, or state sampling. The best example here is Inktomi’s distributed search, where it is OK for some partitions to not return results under load.
    • Lossy Updates: This means it is OK for some committed writes to be lost. Examples: lossy counters and online polls.
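    An incomplete query in the Inktomi spirit can be sketched as follows: query every index partition, skip the ones that are down or overloaded, and return partial results plus a coverage fraction instead of failing outright. The partition layout and availability check are hypothetical stand-ins.

    ```python
    def lossy_aggregate(partitions, query, is_available):
        """Query every partition; skip ones that are down or overloaded.

        Returns (merged_results, coverage), where coverage is the fraction
        of partitions that actually answered.
        """
        results, answered = [], 0
        for part in partitions:
            if not is_available(part):
                continue               # degrade gracefully: drop this shard
            results.extend(hit for hit in part if query in hit)
            answered += 1
        return results, answered / len(partitions)


    # Three index shards; pretend the middle one is overloaded.
    shards = [["apple pie", "apple tart"], ["apple cider"], ["pear jam"]]
    down = {1}
    hits, coverage = lossy_aggregate(
        shards, "apple", is_available=lambda p: shards.index(p) not in down)
    print(hits)       # shard 1's hit is missing from the merged results
    print(coverage)   # 2 of 3 shards answered
    ```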
  • 10. Visibility (World, Entity, User)
    • World: The state and changes to it are visible to all the world, e.g. listing a product on eBay.
    • Entity: State is only visible to a group of users, or within a specific subset of the data (e.g. the eBay Jewelry category)
    • User: The state and changes to it are only visible to the user interacting with it, e.g. the MyYahoo user profile. This could be simpler to implement using ReadMyWrites techniques.
  • 11. Architectural Primitives
    Primitive | Trades | Gains
    Caching, Replication | Eventual Consistency | Interactiveness, Availability, Throughput
    Partitioning | Entity Visibility | Interactiveness, Graceful Degradation
    Lossy/Sampled Aggregation | Weak Consistency | Interactiveness, Graceful Degradation
  • 12. Examples of Primitives
    • LossyUpdate(key,newVal)
    • LossyAccumulator(key, updateOp) - for commutative ops
    • LossyAggregate(searchKeys) - lossy search of an index
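    The slide gives only the names and signatures of the three primitives; a hypothetical Python rendering of them as one key-value store might look like this (the class name, bodies, and failure behavior are invented for illustration):

    ```python
    import operator

    class LossyStore:
        """Sketch of the three primitives over a single dict of state."""

        def __init__(self):
            self.data = {}

        def lossy_update(self, key, new_val):
            # Best-effort write: under partition recovery this may be refused.
            self.data[key] = new_val

        def lossy_accumulator(self, key, update_op, delta):
            # update_op must be commutative (e.g. operator.add), so that
            # dropped or reordered deltas only cost precision.
            self.data[key] = update_op(self.data.get(key, 0), delta)

        def lossy_aggregate(self, search_keys):
            # Return whatever subset of keys is currently reachable;
            # missing keys are simply skipped (an incomplete read).
            return {k: self.data[k] for k in search_keys if k in self.data}


    store = LossyStore()
    store.lossy_update("color", "blue")
    store.lossy_accumulator("hits", operator.add, 1)
    store.lossy_accumulator("hits", operator.add, 1)
    print(store.lossy_aggregate(["color", "hits", "missing"]))
    # {'color': 'blue', 'hits': 2}
    ```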
  • 13. LossyUpdate implementation
    • LossyUpdate
      • Steve Gribble’s DHT: atomic ops, single-copy consistency; during failure recovery, reads are slower and writes are refused
      • If update occurs while updated partition is recovering => fail
      • Otherwise, update is persistent
      • When is this useful?
    • LossyAccumulator (for hit counter, online poll, etc)
      • Every period T, in-memory sub-accumulators from worker nodes are swept to persistent copy
      • At the same time, current value of master accumulator is read by each worker node, to serve reads locally
      • Worker nodes don’t back up the in-memory copy => fast restart
      • Can bound loss rate of accumulator and inconsistency in read
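    A minimal sketch of the sweep scheme described above, assuming the design on this slide: worker nodes hold unsynchronized in-memory sub-counts, and every period T a master sweeps them into the persistent total and fans that total back out so workers can serve reads locally. A worker crash loses at most one period's worth of unswept updates. Class and variable names are invented.

    ```python
    class Worker:
        def __init__(self):
            self.sub_count = 0        # in-memory only: lost on a crash
            self.cached_total = 0     # last master value, for local reads

        def increment(self, n=1):
            self.sub_count += n       # fast, no coordination

        def read(self):
            # Stale by at most one sweep period plus unswept local updates.
            return self.cached_total + self.sub_count


    class Master:
        def __init__(self, workers):
            self.workers = workers
            self.persistent_total = 0  # the durable copy

        def sweep(self):
            """Runs every period T: collect sub-counts, then fan out the total."""
            for w in self.workers:
                self.persistent_total += w.sub_count
                w.sub_count = 0
            for w in self.workers:
                w.cached_total = self.persistent_total


    workers = [Worker(), Worker()]
    master = Master(workers)
    workers[0].increment(3)
    workers[1].increment(2)
    master.sweep()
    print(master.persistent_total)   # 5
    print(workers[0].read())         # 5
    workers[1].increment(1)          # a crash now would lose only this 1
    ```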
  • 14. What is given up
    • What is given up
      • Strict consistency of read copies of accumulator
      • Precision of accumulator value (lost updates)
    • What is gained: fast recovery for each node, continuous operation despite transient per-node failures