Effective SOA
Lessons from Amazon, Google, and Lucidchart
By Derrick Isaacson
Can I get that
without the
bacon?
Said no one
ever
http://www.food.com/photo-finder/all/bacon?photog=1072593
http://baconipsum.com/?paras=1&type=all-meat&start-with-lorem=1
http://www.someecards.com/usercards/viewcard/MjAxMi03YWZiMjJiMTg3NDFhYTUy
Simplicity of Single Component Services
• I can’t remember if that getter function takes 100ns or 100ms. - Said no engineer ever
• Should I try to model this server request as a “remote procedure call”?
• 6 orders of magnitude difference!
• My front-side bus fails for only 1 second every 17 minutes! - Said no engineer ever
• 99.9% availability
• Our internet only supports .NET. - Said no engineer ever
• Do we need an SDK?
"A distributed system is at best a necessary evil, evil because of the extra complexity... An application is rarely, if ever, intrinsically distributed. Distribution is just the lesser of the many evils, or perhaps better put, a sensible engineering decision given the trade-offs involved." - David Cheriton, Distributed Systems Lecture Notes, ch. 1
Distributed System Architectures
Does it have to be “Service-oriented”?
http://upload.wikimedia.org/wikipedia/commons/d/da/KL_CoreMemory.jpg
Distributed Memory
RPC
<I’m>
<not>
<making>
<a>
<service>
<request>
<I’m>
<just>
<calling>
<a>
<procedure>
Distributed File System
mount -t nfs -o proto=tcp,port=2049 nfs-server:/ /mnt
Distributed Data Stores
• Replicated MySQL
• Mongo
• S3
• RDS
• BigTable
• Cassandra
…
P2P
Streaming Media
Service-oriented Architectures
Social Bookmarking App
GET /profiles/123
GET /users/123
Calculate something
GET /users/123/permissions
If user can’t view profile
send 403
POST /eventFeed {new profile view}
GET /users/123/friends
GET /bookmarks?userId=123
GET /catalog/books?ids=1,3,10
Calculate something else
GET /bookmarks/trending
Send response
Lucidchart.com by Status Code
96.5%
2xx or
3xx
Lucidchart.com 1s+ Latencies
10.8%
> 1s
What Happened?!?
I thought SOA was supposed to make my app better!
Simple SOA Availability
<98.7%
99.5%
99.8%
99.6%
.995 * .998 * .998 * .996 = 0.987
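The multiplication above can be checked directly: in a serial call chain the response is correct only if every downstream request succeeds, so (modeling failures as independent) the availabilities multiply. A minimal Java sketch of that arithmetic (class and method names are mine, not from the deck):

```java
// Composite availability of a chain of serial service calls: the response
// is correct only if every downstream request succeeds, so availabilities
// multiply under the (optimistic) independence assumption.
public class Availability {
    static double composite(double... components) {
        double a = 1.0;
        for (double c : components) {
            a *= c;
        }
        return a;
    }

    public static void main(String[] args) {
        // One request at 99.5%, two at 99.8%, one at 99.6%
        System.out.printf("%.4f%n", composite(0.995, 0.998, 0.998, 0.996)); // ≈ 0.9871
    }
}
```

Note the direction of the effect: every extra serial dependency can only lower the product, which is why naive decomposition makes the aggregate worse than any single service.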
A distributed system is at
best a necessary evil
<98.7%
99.5%
99.8%
99.6%
The CAP Theorem
http://learnyousomeerlang.com/distribunomicon
The CAP Theorem1
• Safety – nothing bad ever happens
• Liveness – good things happen
• Unreliability – network dis-connectivity, crash failures, message loss, Byzantine failures, slowdown, etc.
• Consistency – every response sent to a client is correct
• Availability – every request gets a response
• Partition tolerance – operating in the face of arbitrary failures
Consistency: Nothing Bad Happens
Assumption: Failures Happen
Availability Consistency
ResponseHandler<User> handler = new ResponseHandler<User>() {
    public User handleResponse(final HttpResponse response) {
        int status = response.getStatusLine().getStatusCode();
        if (status >= 200 && status < 300) {
            HttpEntity entity = response.getEntity();
            return entity != null ? Parser.parse(entity) : null;
        } else {
            …
        }
    }
};
HttpGet userGet = new HttpGet("http://example.com/users/123");
User user = httpclient.execute(userGet, handler);
https://hc.apache.org/httpcomponents-client-4.3.x/examples.html
Works great to calculate a user!
GET /profiles/123
GET /users/123
Calculate something
GET /users/123/permissions
If user can’t view profile
send 403
POST /eventFeed {new profile view}
GET /users/123/friends
GET /bookmarks?userId=123
GET /catalog/books?ids=1,3,10
Calculate something else
GET /bookmarks/trending
Send response
Best Effort Availability -
Euphemism for not always available
Best Effort Consistency -
Euphemism for not always consistent
Google File System: relaxed consistency model
Throughput
Latency
Amazon Checkout
http://highscalability.com/amazon-architecture
“WOW
I really regret
sacrificing
consistency for
availability”
-said no Amazon ever
That’s $74 Billion
Hang Consistency!
Add
• Caching
• Timeouts
• Retries
• Guessing
• Anything!
Tip 1:
HTTP
Caching
Availability/Performance Consistency
Tip 2: HTTP Caching as Fallback
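Tip 2 can be sketched without committing to any particular caching library: when the live call fails, serve the last response that succeeded. A minimal pure-Java illustration (class and method names are mine, not from the deck; a real deployment would use one of the HTTP caching technologies listed on the next slide):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Serve the last known-good response when the live call fails:
// trade a little consistency (possibly stale data) for availability.
public class FallbackCache {
    private final Map<String, String> lastGood = new ConcurrentHashMap<>();

    String get(String key, Supplier<String> liveCall) {
        try {
            String fresh = liveCall.get();
            lastGood.put(key, fresh); // refresh the fallback copy
            return fresh;
        } catch (RuntimeException e) {
            String stale = lastGood.get(key);
            if (stale == null) {
                throw e; // nothing cached yet: surface the failure
            }
            return stale; // stale but available
        }
    }
}
```

This is exactly the best-effort-consistency bargain: a reader may see an old value during an outage, but it sees *something* instead of an error page.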
Tip 3: Retries
• Exponential backoffs & max retries
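The bullet above can be sketched in a few lines of Java: delay doubles each attempt so a struggling dependency isn't hammered with immediate retries, and a hard cap bounds the total work (names are mine, not from the deck):

```java
import java.util.concurrent.Callable;

// Retry a flaky call with exponential backoff and a hard cap on attempts.
public class Retry {
    static <T> T withBackoff(Callable<T> call, int maxRetries, long baseDelayMs)
            throws Exception {
        for (int attempt = 0; ; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                if (attempt >= maxRetries) {
                    throw e; // retry budget exhausted
                }
                Thread.sleep(baseDelayMs << attempt); // e.g. 100ms, 200ms, 400ms, ...
            }
        }
    }
}
```

Production versions usually add jitter to the delay so that many clients retrying the same failed dependency don't synchronize into thundering herds.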
Tip 4: HTTP Caching Technologies
• Apache HttpComponents – HttpClient Cache
• Ehcache
• Redis
• Memcached
• CloudFront
• Akamai
• Berkeley DB
• AWS SNS (for notifying cache components of changes)
Segmenting Consistency and Availability
1. Data Partitioning
Shopping Cart
Warehouse Inventory DB
Segmenting
2. Operation Partitioning
Reads
Writes
Dynamo & PNUTS
Segmenting
3. Functional partitioning
User Service, Document Snapshots
Document Service
Segmenting
4. Hierarchical Partitioning
Leaves
Root
http://www.slashgear.com/google-data-center-hd-photos-hit-where-the-internet-lives-gallery-17252451/
Timeouts
Stop Guessing and Just Calculate It
• Max I/O wait time = # of threads * (CONNECT_TIMEOUT + READ_TIMEOUT)
• 9 front-end servers received 1900 requests in 60 seconds, 300 of them for Flickr resources (16%)
• 35 requests per server per minute
• Max 100 threads => 6,000 thread-seconds in one minute
• Goal: ensure < 10% of thread-seconds are spent blocked on Flickr I/O
• 35 requests * (CONNECT_TIMEOUT + READ_TIMEOUT) < 600
• CONNECT_TIMEOUT + READ_TIMEOUT < 17 seconds
TCP Connect → Send Request → Block on socket read → Read response
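The calculation above generalizes: cap the fraction of thread-seconds a dependency may consume, then solve for the timeout. A small Java sketch of the same arithmetic (names are mine, not from the deck):

```java
// Derive the timeout budget instead of guessing: cap the fraction of
// thread-seconds a dependency may consume, then solve for the timeout.
public class TimeoutBudget {
    static double maxTimeoutSeconds(int threads, double ioBudgetFraction,
                                    int dependencyRequestsPerMinute) {
        double threadSecondsPerMinute = threads * 60.0;              // 100 threads -> 6,000
        double ioBudget = threadSecondsPerMinute * ioBudgetFraction; // 10% of 6,000 -> 600
        return ioBudget / dependencyRequestsPerMinute;               // 600 / 35 ≈ 17 s
    }

    public static void main(String[] args) {
        System.out.printf("CONNECT_TIMEOUT + READ_TIMEOUT < %.1f s%n",
                maxTimeoutSeconds(100, 0.10, 35));
    }
}
```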
Best Effort Consistency System
99.9%
99.5%
99.8%
99.6%
Wow, my
pizza has too
much cheese
and toppings
Said no one
ever
http://upload.wikimedia.org/wikipedia/commons/6/60/Pizza_Hut_Meat_Lover's_pizza_3.JPG
“WOW
My system has
too much
caching,
timeouts, and
availability.”
-said no one ever
Questions?
golucid.co
http://www.slideshare.net/DerrickIsaacson
References
1. Perspectives on the CAP Theorem
2. Bacon Ipsum
3. Brewer’s Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services
4. The Google File System
5. Bigtable
6. Amazon Architecture References
7. Apache HttpComponents
8. Apache HttpClient Cache
9. Ehcache

It has been observed that "A distributed system is at best a necessary evil, evil because of the extra complexity." Multiple nodes computing on inconsistent state with regular communication failures present entirely different challenges than those computer science students face in the classroom writing DFS algorithms. The past 30 years have seen some interesting theories and architectures to deal with these complexities in what we now call "cloud computing". Some researchers worked on "distributed memory" and others built "remote procedure calls". More commercially successful architectures of late have popularized ideas like the CAP theorem, distributed caches, and REST.

Using examples from companies like Amazon and Google this presentation walks through some practical tips to evolve your service-oriented architecture. Google's Chubby service demonstrates how you can take advantage of CAP's "best effort availability" options and Amazon's "best effort consistency" services show the other end of the spectrum. Practical lessons learned from Lucidchart's forays into SOA share insight through quantitative analyses on how to make your system highly available.

Published in: Software, Technology
  • Bio: Derrick Isaacson is the Director of Engineering for Lucid Software Inc (lucidchart.com). He has a BS in EE from BYU and an MS in CS from Stanford. He’s developed big services at Amazon, web platforms for Microsoft, and graphical apps at Lucidchart. Derrick has two patent applications at Microsoft and Domo. For fun he cycles, backpacks, and takes his son out in their truck.
  • Multiple nodes computing on inconsistent state with regular communication failures present entirely different challenges than those computer science students face in the classroom writing DFS algorithms.
  • Idea: Nothing’s more familiar to programmers than reading from and writing to memory. We access variables all day long. Why not make distributed state access look like simple memory access? We can use modern operating systems’ support for virtual memory to “swap in” memory that is located on another machine. Problems: How often do you go to access a variable and can’t because a section of memory is “down”? How do you provide a mutex to parallel threads of execution? How can the distributed memory layer be efficient when it has no knowledge of the application?
  • Idea: Next to memory access, nothing’s more familiar to programmers than function calls. Can we make distributed state transfer look like a simple procedure call? SOAP! Problems: How often do you retry a method call because the JVM failed to invoke it the first time? Why does incrementing a value take 100 milliseconds? Why does your internet only support .NET and PHP (stub compiler/SDK)?
  • Idea: Easy network file sharing. NFS, AFS, GFS. Works great for files.
  • Idea: How could you steal bandwidth from universities and avoid infringement lawsuits at the same time? Problems: Mooching resources is a great business model but a terrible architecture if that’s not what you’re going for.
  • Idea: I have so much state I don’t want to transfer it all in a single response.
  • What’s the availability of the overall system if a single response for service A is calculated by making 4 total requests to services B, C, and D? If the average availabilities of those 3 components are as given, and the random values are modeled as IID, what is the maximum percentage of requests that service A is able to calculate correctly? .995 * .998 * .998 * .996 = 0.987. IID is a bad assumption for nearly any distributed system, but it illustrates the effect of naively distributing computation. When crash failures originating at service A are included, the total availability is < 98.7%! That’s an average of 19 minutes of downtime per day!
  • Conjecture made by a UC Berkeley computer scientist in 2000. Gilbert & Lynch published a formal proof.
  • We want the user to 1) get a response (available) and 2) have it be consistent with the view of other nodes.From that end user definition, a slow, error status, or non-existent response are all “incorrect”.
  • “In order to model partition tolerance, the network will be allowed to lose arbitrarily many messages sent from one node to another.” – Gilbert & Lynch, http://lpd.epfl.ch/sgilbert/pubs/BrewersConjecture-SigAct.pdf. It becomes a fundamental tradeoff between availability and consistency.
  • It drops below the SLA for a consistent, available response perhaps 10 of every 1000 requests.
  • It turns out the usual approach to implementing a computation like this errs on the side of consistency. If a single service request fails, this calculation hangs or returns an error to the user.
  • Web crawler, Bigtable. “our access patterns are highly stylized” “GFS has a relaxed consistency model that supports our highly distributed applications well but remains relatively simple and efficient to implement.” “Record append’s append-at-least-once semantics preserves each writer’s output. Readers deal with the occasional padding and duplicates as follows. Each record prepared by the writer contains extra information like checksums so that its validity can be verified. A reader can identify and discard extra padding and record fragments using the checksums. If it cannot tolerate the occasional duplicates (e.g., if they would trigger non-idempotent operations), it can filter them out using unique identifiers in the records, which are often needed anyway to name corresponding application entities such as web documents.”
  • “For the checkout process you always want to honor requests to add items to a shopping cart because it's revenue producing. In this case you choose high availability. Errors are hidden from the customer and sorted out later.”
  • Wow, that guy must be an engineer – said no one ever
  • The Amazon Dynamo and Yahoo PNUTS data stores support high read availability while limiting write availability in the face of partitions. For example, Dynamo has a configurable number of replicas on which the data must be stored before a write is confirmed to the client.
  • Network partitions are less frequent at the leaves of a geographically hierarchical system.
  • The CAP theorem appears to have implications for scalability. “Intuitively, we think of a system as scalable if it can grow efficiently, using new resources efficiently to handle more load. In order to efficiently use new resources, there must be coordination among those resources;” – Gilbert & Lynch