Webinar - Preparing Your Social Game for Massive Growth

When building and launching a social game, being ready for growth is critical to your success. Many games have accelerated from zero to millions of users literally overnight — OMGPOP’s Draw Something game recently reached 50 million downloads within 50 days of the launch. Figuring out how to support that kind of growth, while sustaining a snappy and compelling gaming experience, presents an enormous challenge at every layer of the game’s technology stack.

View these slides to learn more about:

- Key considerations as you plan for growing volumes of users and data
- Why NoSQL databases are a good fit for social game applications
- How to choose the right database to ensure your game’s scalability and performance
- Real-world case studies, including a discussion of Draw Something’s viral growth

Transcript

  1. Preparing your social game for massive growth. James Phillips.
  2. "How to Prepare Your Social Game for Massive Growth," published February 2, 2012. Five days later… http://mashable.com/2012/02/01/social-game-prepare-growth/
  3. Draw Something by OMGPOP.
  4. Draw Something "goes viral" 3 weeks after launch. [Chart: Draw Something by OMGPOP, daily active users in millions, February 6 through March 21]
  5. As usage grew, game data went non-linear. By March 29, there were over 30,000,000 downloads of the app, over 5,000 drawings being stored per second, over 2,200,000,000 drawings stored, over 105,000 database transactions per second, and over 3.3 terabytes of data stored. [Chart: daily active users in millions, February 6 through March 21]
  6. In contrast: The Simpsons: Tapped Out, the #2 free app on iPad and #3 free app on iPhone. [Chart: daily active users in millions, February 6 through March 21]
  7. WHY NOSQL?
  8. Modern interactive software architecture. The application tier scales out: just add more commodity web servers. The database tier scales up: get a bigger, more complex server. Note: relational database technology is great for what it is great for, but it is not great for this.
  9. Extending the scope of RDBMS technology
     • Data partitioning ("sharding"): disruptive to reshard (impacts the application), no cross-shard joins, schema management on every shard
     • Denormalizing: increases speed; at the limit, provides complete flexibility; eliminates relational query benefits
     • Distributed caching: accelerates reads and scales out, but adds another tier, offers no write acceleration, and requires coherency management
  10. A NoSQL database matches the application logic tier architecture, so the data layer now scales with linear cost and constant performance. The application tier scales out (just add more commodity web servers) and the database tier scales out too (just add more commodity data servers). Scaling out flattens the cost and performance curves.
  11. Survey: schema inflexibility is the #1 adoption driver. "What is the biggest data management problem driving your use of NoSQL in the coming year?" Lack of flexibility/rigid schemas 49%, inability to scale out data 35%, high latency/low performance 29%, costs 16%, all of these 12%, other 11%. Source: Couchbase NoSQL Survey, December 2011, n=1351.
  12. NOSQL TAXONOMY
  13. The key-value store: the foundation of NoSQL. [Diagram: a key mapping to an opaque binary value]
  14. Memcached, the NoSQL precursor. In-memory only, with a limited set of operations. Blob storage: Set, Add, Replace, CAS. Retrieval: Get. Structured data: Append, Increment. "Simple and fast." Challenges: cold cache, disruptive elasticity.
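     The operations the slide lists map directly onto memcached client calls. A minimal sketch using the spymemcached Java client; the host, port, and key names are illustrative, not from the deck:

        MemcachedClient mc = new MemcachedClient(new InetSocketAddress("127.0.0.1", 11211));
        mc.set("player:1:name", 0, "Shawn");          // blob storage: Set (0 = no expiry)
        mc.add("player:1:name", 0, "ignored");        // Add fails because the key already exists
        mc.replace("player:1:name", 0, "Shawn P.");   // Replace only succeeds on existing keys
        Object name = mc.get("player:1:name");        // retrieval: Get
        mc.set("player:1:score", 0, "0");
        mc.incr("player:1:score", 10);                // structured data: Increment
        mc.shutdown();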
  15. Redis: more "structured data" commands. In-memory only, with a vast set of operations over data structures (strings, hashes, lists, sets, sorted sets) rather than opaque blobs. Blob storage: Set, Add, Replace, CAS. Retrieval: Get, Pub-Sub. Example operations for a set: add, count, subtract sets, intersection, is member?, atomic move from one set to another.
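     The set operations called out above correspond to standard Redis commands. A short sketch with the Jedis Java client; connection details and key names are illustrative:

        Jedis redis = new Jedis("127.0.0.1", 6379);
        redis.sadd("friends:alice", "bob", "carol");                          // add members to a set
        redis.sadd("friends:bob", "carol", "dave");
        long count = redis.scard("friends:alice");                            // count
        Set<String> common = redis.sinter("friends:alice", "friends:bob");    // intersection
        Set<String> onlyAlice = redis.sdiff("friends:alice", "friends:bob");  // subtract sets
        boolean knows = redis.sismember("friends:alice", "bob");              // is member?
        redis.smove("friends:alice", "friends:bob", "carol");                 // atomic move between sets
        redis.close();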
  16. NoSQL catalog (key-value, data structure, document, column, graph): memcached as a key-value cache; redis as a data-structure store (memory only).
  17. Membase: from key-value cache to database. Disk-based with a built-in memcached cache, cache refill on restart, memcached compatible (drop-in replacement), highly available (data replication), add or remove capacity on a live cluster. "Simple, fast, elastic."
  18. NoSQL catalog: memcached (key-value cache), redis (data-structure store, memory/disk), membase (key-value database).
  19. Couchbase: document-oriented database. Auto-sharding, disk-based with a built-in memcached cache, cache refill on restart, memcached compatible (drop-in replacement), highly available (data replication), add or remove capacity on a live cluster. Values can be JSON objects ("documents") containing nested objects and arrays; when they are, you can create indices and views and query against the views.
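     A rough sketch of what "query against the views" looks like from application code, assuming the later Couchbase Java SDK's view API; the design document dev_players and view by_name are hypothetical names, and the view itself would be defined on the server:

        CouchbaseClient client = new CouchbaseClient(
                Arrays.asList(URI.create("http://127.0.0.1:8091/pools")), "default", "");
        client.set("player::1", 0, "{\"name\":\"Shawn\",\"level\":3}");     // store a JSON document
        View byName = client.getView("dev_players", "by_name");             // view defined on the server
        Query query = new Query().setKey("Shawn").setIncludeDocs(true);
        for (ViewRow row : client.query(byName, query)) {
            System.out.println(row.getId() + " -> " + row.getDocument());   // matching documents
        }
        client.shutdown();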
  20. NoSQL catalog: memcached (key-value cache), redis (data-structure store, memory/disk), membase (key-value database), couchbase (document database).
  21. MongoDB: document-oriented database. Disk-based with in-memory "caching", BSON ("binary JSON") format and wire protocol, master-slave replication, auto-sharding. Values are BSON objects ("documents"). Supports ad hoc queries, which perform best when indexed.
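     For comparison, an ad hoc query with the legacy MongoDB Java driver might look like the following; the database, collection, and field names are made up for illustration:

        MongoClient mongo = new MongoClient("127.0.0.1", 27017);
        DBCollection players = mongo.getDB("game").getCollection("players");
        players.insert(new BasicDBObject("name", "Shawn").append("level", 3));
        players.createIndex(new BasicDBObject("name", 1));                      // "best when indexed"
        DBObject shawn = players.findOne(new BasicDBObject("name", "Shawn"));   // ad hoc query
        System.out.println(shawn);
        mongo.close();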
  22. NoSQL catalog: memcached (key-value cache), redis (data-structure store, memory/disk), membase (key-value database), couchbase (document database), mongoDB (document database).
  23. Cassandra: column overlays. Disk-based, clustered system; external caching is required for low-latency reads. "Columns" are overlaid on the data, and not all rows must have all columns. Supports efficient queries on columns; a restart is required when adding columns. Good cross-datacenter support.
  24. NoSQL catalog: memcached (key-value cache), redis (data-structure store, memory/disk), membase (key-value database), couchbase (document database), mongoDB (document database), cassandra (column database).
  25. Neo4j: graph database. Disk-based system; external caching is required for low-latency reads. Data is modeled as nodes, relationships and paths, with properties on nodes. Operations include delete, insert, traverse, etc.
  26. NoSQL catalog: memcached (key-value cache), redis (data-structure store, memory/disk), membase (key-value database), couchbase (document database), mongoDB (document database), cassandra (column database), Neo4j (graph database).
  27. COUCHBASE
  28. Typical Couchbase production environment: application users, behind a load balancer, hitting application servers backed by Couchbase servers.
  29. Basic operation. Docs are distributed evenly across the servers in the cluster, and each server stores both active and replica docs (only one server holds the active copy of a given doc at a time). The client library gives the app a simple interface to the database; a cluster map records which server each doc is on, so the app never needs to know. The app reads, writes and updates docs, and multiple app servers can access the same document at the same time. [Diagram: two app servers with the Couchbase client library and cluster map, three Couchbase servers each holding active and replica docs; user-configured replica count = 1]
  30. Add nodes. Two servers are added to the cluster in a one-click operation. Docs are automatically rebalanced across the cluster, with an even distribution of docs and minimum doc movement. The cluster map is updated, and app database calls are now distributed over a larger number of servers. [Diagram: cluster grown from three to five servers, user-configured replica count = 1]
  31. Fail over node. App servers are happily accessing docs on Server 3 when the server fails, so app server requests to Server 3 fail. The cluster detects that the server has failed, promotes replicas of its docs to active, and updates the cluster map; app server requests for those docs now go to the appropriate server. Typically a rebalance would follow. [Diagram: five-server cluster with Server 3 failed over, user-configured replica count = 1]
  32. TRIBAL CROSSING CASE STUDY
  33. Tribal Crossing: Animal Party. A Tribal Crossing (FableLabs) Facebook game, hosted on EC2. "Part Pokemon, part Frontierville and part Mario Galaxy! In Animal Party you'll discover hundreds of amazing animals across the galaxy and raise them on your magical garden."
  34. Tribal Crossing: challenges. Common steps for scaling up an RDBMS: tune queries (indexing, explain query), denormalization, cache data (Memcache), tune the MySQL configuration, replication (read slaves). Where do we go from here to prepare for the scale of a successful social game?
  35. Tribal Crossing: challenges
     • Write-heavy requests: caching does not help, and MySQL/InnoDB has its limits (Percona)
     • Need to scale drastically overnight: My Polls went from 100 to 1M users over a weekend
     • Small team, no dedicated sysadmin: focus on what we do best, making games
     • Keeping cost down
  36. Tribal Crossing: "old" architecture and options
     • MySQL with master-to-master replication and sharding: complex to set up, high administration cost, requires application-level changes
     • Cassandra: high write but low read throughput; live cluster reconfiguration and rebalance is quite complicated; eventual consistency puts too much burden on application developers
     • MongoDB: high read/write throughput but unpredictable latency; live cluster rebalance works for existing nodes only; disk data corruption concerns on node failures
  37. Tribal Crossing: why Couchbase Server? SPEED, SPEED, SPEED. Immediate consistency. The interface is dead simple to use (we are already using Memcache). Low sysadmin overhead. Schema-less data store. Used and proven by big guys like Zynga. And lastly, because Tribal CAN: bigger firms with a legacy code base find it hard to adapt, while a small team has the ability to get on the cutting edge.
  38. Tribal Crossing: new challenges with Couchbase. There are some differences in using Couchbase (currently 1.7) to handle the game data: there is no easy way to query data (Couchbase Server 2.0 resolves this with its new persistence layer). Can this work for an online game? Break out of the old ORM/relational paradigm! Tribal: "we are not handling critical transactions."
  39. Tribal Crossing: deploying Couchbase in EC2. Basic production environment setup; for a dev/stage environment, feel free to install Couchbase on your web server.
  40. Tribal Crossing: deploying Couchbase in EC2. 1. Amazon Linux AMI, 64-bit, EBS-backed instance. 2. Set up swap space. 3. Install Couchbase's Membase Server 1.7. 4. Access the web console at http://<hostname>:8091. 5. Start the new cluster with a single node. 6. Add the other nodes to the cluster and rebalance.
  41. Tribal Crossing: deploying Couchbase in EC2. Moxi figures out which node in the cluster holds the data for a given key; it is used with older, non-cluster-aware clients like PHP, Ruby and Perl (smart clients for PHP and Ruby have been released since). On each web server, install the Moxi proxy and start it pointing at the DNS entry already created. Web apps then connect to the Moxi running locally: $memcache->addServer('localhost', 11211);
  42. Tribal Crossing: representing game data in Couchbase. Use case, a simple version of the farming part: a player can have a variety of plants on their farm, a player can add or remove plants from their farm, and a player can see what plants are on another player's farm.
  43. Tribal Crossing: representing game data in Couchbase. Representing objects: simply treat an object as a map/dictionary/JSON object, and determine the key for an object from the class name (or type) of the object plus a unique ID. Representing object lists: denormalize, saving a comma-separated list or an array of object IDs.
  44. Tribal Crossing: representing game data in Couchbase.
     Player object, key Player1:
        { "_id" : "Player1", "nid" : 1, "name" : "Shawn" }
     Plant object, key Plant201:
        { "_id" : "Plant201", "nid" : 201, "player_id" : 1, "name" : "Starflower" }
     PlayerPlant list, key Player1_PlantList:
        { "_id" : "Player1_Plantlist", "plants" : [201, 202, 204] }
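     One way to write the documents above from application code, reusing the cbclient handle that appears on the later slides; Gson is assumed here for JSON serialization (any JSON library would do):

        Gson gson = new Gson();
        Map<String, Object> player = new LinkedHashMap<String, Object>();
        player.put("_id", "Player1");
        player.put("nid", 1);
        player.put("name", "Shawn");
        cbclient.set("Player1", gson.toJson(player));                 // key = class name + unique ID
        Map<String, Object> plantList = new LinkedHashMap<String, Object>();
        plantList.put("_id", "Player1_Plantlist");
        plantList.put("plants", Arrays.asList(201, 202, 204));        // denormalized list of plant IDs
        cbclient.set("Player1_PlantList", gson.toJson(plantList));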
  45. Tribal Crossing: schema-less game data. No need to "ALTER TABLE": add new "fields" to all objects at any time and specify a default value for missing fields, which increases development speed. Using JSON for data objects also offers the ability to query and analyze arbitrary fields in Couchbase 2.0.
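     A small sketch of the "default value for missing fields" idea: older documents simply lack the new field, and the application supplies the default when it reads them. Gson and the coins field are illustrative assumptions, not part of the deck:

        JsonObject doc = new JsonParser().parse((String) cbclient.get("Player1")).getAsJsonObject();
        int coins = doc.has("coins") ? doc.get("coins").getAsInt() : 0;   // default for documents written before the field existed
        doc.addProperty("coins", coins + 10);                             // the new field is simply written back, no schema change
        cbclient.set("Player1", doc.toString());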
  46. Tribal Crossing: accessing game data in Couchbase. Get all plants belonging to a given player. Request: GET /player/1/farm
        // Create a new PlantList from the player's stored plant list
        PlantList playersPlants = new PlantList(cbclient.get("Player1_PlantList"));
        for (Plant plant : playersPlants) {
            aPlayer.addPlant(plant);
        }
  47. Tribal Crossing: modifying game data in Couchbase. Give a player a new plant.
        // Create the new plant
        Plant givenPlant = new Plant(100, "Mushroom");
        cbclient.set("Plant100", givenPlant);
        // Update the player plant list
        Player thePlayer = Player.fetch(cbclient.get("Player1"));
        // Add the plant to the player
        thePlayer.receivePlant(givenPlant);
        // Store the player's new plant list
        cbclient.set("Player1_PlantList", thePlayer.getPlantsArray());
  48. Tribal Crossing: concurrency. Concurrency issues can occur when multiple requests are working with the same piece of data. Solutions: CAS (check-and-set), where the client can tell whether someone else has modified the data while you were trying to update it, providing optimistic concurrency control; and GETL (get with lock), locking with a try/wait cycle, providing pessimistic concurrency control.
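     As an illustration of the CAS path, a hedged sketch assuming a spymemcached-style gets/cas API underneath the slides' cbclient; addPlantId is a hypothetical helper that appends an ID to the stored list:

        CASValue<Object> current = cbclient.gets("Player1_PlantList");        // value plus its CAS token
        String updated = addPlantId((String) current.getValue(), 100);        // hypothetical helper
        CASResponse result = cbclient.cas("Player1_PlantList", current.getCas(), updated);
        if (result != CASResponse.OK) {
            // another request changed the list first: re-read and retry (optimistic concurrency)
        }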
  49. Tribal Crossing: data relationships. Record object relationships both ways. Example, plots and plants: the plot object stores the id of the plant that it hosts, the plant object stores the id of the plot that it grows on, and you need a resolution rule in case of mismatch. Don't sweat the extra calls needed to load data in a one-to-many relationship; use multiGet. Tribal: "We can still cache aggregated results in a Memcache bucket if needed."
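     multiGet here means fetching all the related keys in one round trip. With a memcached-compatible Java client this is typically a bulk get; the key names follow the earlier slides, and the exact method name on cbclient is an assumption:

        List<String> plantKeys = Arrays.asList("Plant201", "Plant202", "Plant204");   // IDs from Player1_PlantList
        Map<String, Object> plants = cbclient.getBulk(plantKeys);                     // one round trip instead of one get per key
        for (Map.Entry<String, Object> entry : plants.entrySet()) {
            System.out.println(entry.getKey() + " -> " + entry.getValue());
        }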
  50. Migrating to Couchbase: moving from MySQL and memcached.
  51. Tribal Crossing: migrating to Couchbase servers. They first migrated large or slow-performing tables and frequently updated fields from MySQL to Couchbase.
  52. Tribal Crossing: deployment.
  53. Tribal Crossing: deployment.
  54. Tribal Crossing: conclusion. Significantly reduced the cost incurred by scaling up database servers and managing them. Achieved significant improvements in various performance metrics (read, write, latency, etc.). Allowed them to focus more on game development and optimizing key metrics. They plan to use the real-time map-reduce, querying, and indexing abilities provided by Couchbase Server 2.0.
  55. QUESTIONS? INFO@COUCHBASE.COM
