Designing for Scale

At Wooga we build backends for games with millions of daily users.

The gaming business gives us a write-heavy environment, with a high frequency of requests, traffic bursts, and distribution across many nodes. These are all problems we need to solve, and keep in mind, in order to build a system that stands a chance of supporting the required load.

How do you meet the challenge of writing a brand new system with performance in mind? Where should the line between necessary efficiency and premature optimization be drawn? How do you measure performance? How do you generate synthetic load that reflects real usage patterns? How do you know you have enough capacity? And how do you combine all of the above with safely introducing changes and new features while working in a two-person team?

We had to answer all of these questions, and we want to share the solutions we found as well as the problems we consider still open.

  1. Designing for Scale: Knut Nesheim @knutin, Paolo Negri @hungryblank
  2. About this talk: 2 developers and Erlang vs. 1 million daily users
  3. Social Games: Flash client (game) → HTTP API
  4. Social Games: Flash client • Game actions need to be persisted and validated • 1 API call every 2 secs
  5. Social Games: HTTP API • @ 1,000,000 daily users • 5000 HTTP reqs/sec • more than 90% writes
  6. The hard nut (photo: http://www.flickr.com/photos/mukluk/315409445/)
  7. Users we expect: [chart: "Monster World" daily users, July to December 2010, growing to around 1,000,000 DAU]
  8. Users we have: [chart: new game daily users, March to June 2011, on the order of 50 DAU]
  9. What to do? 1: Simulate users
  10. Simulating users: • Must not be too synthetic (like apachebench) • Must look like a meaningful game session • Users must come online at a given rate and play
  11. Tsung (http://tsung.erlang-projects.org/): • Multi-protocol (HTTP, XMPP) benchmarking tool • Able to test non-trivial call sequences • Can actually simulate a scripted gaming session
  12. Tsung - configuration: fixed content plus dynamic parameters, e.g.
      <request subst="true">
        <http url="http://server.wooga.com/users/%%ts_user_server:get_unique_id%%/resources/column/5/row/14?%%_routing_key%%"
              method="POST"
              contents='{"parameter1":"value1"}'>
        </http>
      </request>
  13. Tsung - configuration: • Not something you fancy writing by hand • We’re in development: calls change and we constantly add new ones • A session might contain hundreds of requests • All the calls must refer to a consistent game state
  14. Tsung - configuration: from our Ruby test code, user.resources(:column => 5, :row => 14) expands to the same <request> XML shown on slide 12
  15. Tsung - configuration: • Session: a group of requests • Arrival phase: sessions arrive in phases, each with a duration and a specific arrival rate
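      A minimal sketch of how phases and sessions are declared in a Tsung config
      file (the values here are illustrative, not our actual load profile):

      <load>
        <!-- phase 1: for 10 minutes, 50 new users arrive per second -->
        <arrivalphase phase="1" duration="10" unit="minute">
          <users arrivalrate="50" unit="second"/>
        </arrivalphase>
      </load>
      <sessions>
        <!-- each simulated user runs one session: a group of requests -->
        <session name="game_session" probability="100" type="ts_http">
          <request><http url="/ping" method="GET"/></request>
          <thinktime value="2"/>
        </session>
      </sessions>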
  16. Tsung - setup: [diagram: a tsung master drives tsung workers over ssh; the workers fire HTTP reqs at the app servers in the application cluster]
  17. Tsung: • Generates ~2500 reqs/sec on an AWS m1.large • Flexible but hard to extend • Code base rather obscure
  18. What to do? 2: Collect metrics
  19. Tsung metrics: • Tsung collects measurements and provides reports • But these measurements include congestion on Tsung's own network/CPU • The Tsung machines aren't a good point of view
  20. HAproxy: [diagram: the same setup, now with haproxy between the tsung workers and the app servers]
  21. HAproxy, “The Reliable, High Performance TCP/HTTP Load Balancer”: • Placed in front of HTTP servers • Load balancing • Fail over
  22. HAproxy - syslog: • Easy to set up • Efficient (UDP) • Provides 5 timings for each request
  23. HAproxy: • Time to receive the request from the client
  24. HAproxy: • Time spent in the HAproxy queue
  25. HAproxy: • Time to connect to the server
  26. HAproxy: • Time to receive response headers from the server
  27. HAproxy: • Total session duration time
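      For reference, HAproxy's HTTP log format carries these five timings as a
      single Tq/Tw/Tc/Tr/Tt field (milliseconds). A sample syslog line, with
      illustrative values and names:

      haproxy[1234]: 10.0.0.1:33317 [10/May/2011:12:14:14.655] http-in app/app1
        10/0/30/69/109 200 2750 - - ---- 1/1/1/1/0 0/0
        "POST /users/42/resources/column/5/row/14 HTTP/1.1"

      Here 10/0/30/69/109 means: 10 ms to receive the request, 0 ms queued,
      30 ms to connect, 69 ms until the response headers, 109 ms in total.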
  28. HAproxy - syslog: • Application URLs directly identify the server call • Application URLs are easy to parse • Processing the HAproxy syslog therefore gives per-call metrics
  29. What to do? 3: Understand metrics
  30. Reading/aggregating metrics: • Python to parse/normalize the syslog • The R language to analyze/visualize the data • The R console to interactively explore benchmarking results
  31. R is a free software environment for statistical computing and graphics.
  32. What you get: • Aggregate performance levels (throughput, latency) • Detailed performance per call type • Statistical analysis (outliers, trends, regression, correlation, frequency, standard deviation)
  33. What you get: [example graphs]
  34. What to do? 4: Go deeper
  35. Digging into the data: • From the HAproxy log analysis one call emerged as exceptionally slow • Using eprof we were able to determine that most of the time was spent in a Redis query fetching many keys (MGET)
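      An eprof session of this kind looks roughly as follows (Pid is whatever
      request-handling process you want to profile):

      eprof:start(),
      eprof:start_profiling([Pid]),   %% start tracing the process under investigation
      %% ... drive a few requests through the system ...
      eprof:stop_profiling(),
      eprof:analyze(total),           %% print time spent per function
      eprof:stop().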
  36. Tracing the erldis query: • More than 60% of runtime is spent manipulating the socket • gen_tcp:recv/2 is the culprit • But why is it called so many times?
  37. Understanding the Redis protocol:
      C: LRANGE mylist 0 2
      S: *2\r\n
      S: $5\r\n
      S: Hello\r\n
      S: $5\r\n
      S: World\r\n
      (the raw reply on the wire: <<"*2\r\n$5\r\nHello\r\n$5\r\nWorld\r\n">>)
  38. Understanding erldis: • recv_value/2 is used in the protocol parser to get the next data to parse
  39. A different approach: • Two ways to use gen_tcp: active or passive • In passive mode, gen_tcp:recv/2 explicitly asks for data, blocking • In active mode, gen_tcp sends the controlling process a message when data arrives • Hybrid: active once (see the sketch below)
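      A sketch of the two styles (handle_data/1 is a placeholder for the
      driver's reply handling):

      %% Passive: block in gen_tcp:recv/2 until the next chunk arrives.
      passive_loop(Socket) ->
          {ok, Data} = gen_tcp:recv(Socket, 0),
          handle_data(Data),
          passive_loop(Socket).

      %% Hybrid "active once": receive exactly one message, then re-arm.
      active_once_loop(Socket) ->
          ok = inet:setopts(Socket, [{active, once}]),
          receive
              {tcp, Socket, Data} ->
                  handle_data(Data),
                  active_once_loop(Socket);
              {tcp_closed, Socket} ->
                  ok
          end.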
  40. A different approach: • Are active sockets faster? • A proof of concept showed active sockets to be faster • Change erldis or write a new driver?
  41. A different approach: • Radical change => a new driver • Keep erldis's queueing approach • Think about error handling from the start • Use active sockets
  42. A different approach: • Active socket, parse partial replies (sketched below)
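      The shape of that, as a gen_server callback; parse_reply/1 and deliver/2
      are hypothetical names, and eredis's real parser differs in the details:

      handle_info({tcp, Socket, Data}, #state{socket = Socket, buffer = Buf} = State) ->
          ok = inet:setopts(Socket, [{active, once}]),  %% re-arm for the next packet
          NewBuf = <<Buf/binary, Data/binary>>,
          case parse_reply(NewBuf) of
              incomplete ->                             %% not a full reply yet, keep buffering
                  {noreply, State#state{buffer = NewBuf}};
              {ok, Reply, Rest} ->                      %% full reply: answer the waiting caller
                  deliver(Reply, State),
                  {noreply, State#state{buffer = Rest}}
          end.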
  43. Circuit breaker: • eredis has a simple circuit breaker for when Redis is down/unreachable • eredis returns immediately to clients if the connection is down • Reconnecting is done outside request/response handling • Robust handling of errors
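      The circuit breaker boils down to something like this (a sketch, not
      eredis's actual code):

      %% Connection down: fail fast instead of blocking or queueing forever.
      handle_call({request, _Req}, _From, #state{socket = undefined} = State) ->
          {reply, {error, no_connection}, State};
      %% Connection up: send the request and queue the caller for the reply.
      handle_call({request, Req}, From, #state{socket = Socket, queue = Q} = State) ->
          ok = gen_tcp:send(Socket, Req),
          {noreply, State#state{queue = queue:in(From, Q)}}.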
  44. Benchmarking eredis: • Redis driver critical for our application • Must perform well • Must be stable • How do we test this?
  45. Basho bench: • Basho produces the Riak KV store • Basho built a tool to test KV servers: Basho bench • We used Basho bench to test eredis
  46. Basho bench: • Create a callback module (sketch below)
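      A driver sketch following the basho_bench callback contract (new/1 once
      per worker, run/4 once per operation); the module name and operations are
      illustrative:

      -module(basho_bench_driver_eredis_sketch).
      -export([new/1, run/4]).

      %% Called once for each concurrent worker; the client pid is the state.
      new(_Id) ->
          eredis:start_link().

      %% Called for each operation named in the config.
      run(get, KeyGen, _ValueGen, Client) ->
          case eredis:q(Client, ["GET", KeyGen()]) of
              {ok, _Value}    -> {ok, Client};
              {error, Reason} -> {error, Reason, Client}
          end;
      run(put, KeyGen, ValueGen, Client) ->
          case eredis:q(Client, ["SET", KeyGen(), ValueGen()]) of
              {ok, <<"OK">>}  -> {ok, Client};
              {error, Reason} -> {error, Reason, Client}
          end.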
  47. Basho bench: • Configuration is a file of Erlang terms (example below)
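      Something like this (plain Erlang terms, one per line; values illustrative):

      {mode, max}.                                  %% issue requests as fast as possible
      {duration, 5}.                                %% minutes
      {concurrent, 10}.                             %% worker processes
      {driver, basho_bench_driver_eredis_sketch}.   %% the callback module above
      {key_generator, {int_to_bin, {uniform_int, 100000}}}.
      {value_generator, {fixed_bin, 100}}.          %% 100-byte values
      {operations, [{get, 4}, {put, 1}]}.           %% 4 gets for every put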
  48. Basho bench output: [graphs of throughput and latency over the run]
  49. eredis is open source: https://github.com/wooga/eredis
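      Basic usage is a start_link plus eredis:q/2 with the command as a list:

      {ok, Client} = eredis:start_link(),
      {ok, <<"OK">>} = eredis:q(Client, ["SET", "foo", "bar"]),
      {ok, <<"bar">>} = eredis:q(Client, ["GET", "foo"]).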
  50. What to do? 5: Measure internals
  51. Measure internals: HAproxy's point of view is valid, but how do we measure the internals of our application, while live, without the overhead of tracing?
  52. Think Basho bench: • Basho bench can benchmark a Redis driver • Redis is very fast, 100K ops/sec • Basho bench's overhead is acceptable • The code is very simple
  53. Cherry-pick ideas from Basho bench: • Creates a histogram of timings on the fly, reducing the number of data points • Dumps to disk every N seconds • Allows statistical tools to work on already-aggregated data • Near real-time: from event to stats in N+5 seconds
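      The core idea fits in a few lines; a sketch with hypothetical names,
      assuming a public named ets table (histograms) created at startup:

      %% Record one latency measurement (microseconds) into a named histogram.
      record(Name, Micros) ->
          Key = {Name, bucket(Micros)},
          try ets:update_counter(histograms, Key, 1)
          catch error:badarg -> ets:insert(histograms, {Key, 1})
          end.

      %% Power-of-two buckets keep the number of data points small.
      bucket(Micros) ->
          1 bsl trunc(math:log(Micros + 1) / math:log(2)).

      %% Run by a timer every N seconds: dump the aggregated counts, then reset.
      dump() ->
          Rows = ets:tab2list(histograms),
          true = ets:delete_all_objects(histograms),
          ok = file:write_file("stats.log", io_lib:format("~p.~n", [Rows]), [append]).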
  54. Homegrown stats: • Measures latency from the edges of our system (excludes HTTP handling) • And at interesting points inside the system • Statistical analysis using R • Correlate with HAproxy data • Produces graphs and data specific to our application
  55. Homegrown stats: [example graphs]
  56. Recap - Measure: • From an external point of view (HAproxy) • At the edge of the system (excluding HTTP handling) • Internals of a single process (eprof)
  57. Recap - Analyze: • Aggregated measures • Statistical properties of measures: standard deviation, distribution, trends
  58. Thanks! http://www.wooga.com/jobs • knut.nesheim@wooga.com @knutin • paolo.negri@wooga.com @hungryblank
