NoSQL for Architects - Migrating from RDBMS to a Schema-less world



  1. NoSQL for Architects: Migrating From RDBMS to a Schema-less World
     Dipti Borkar, Senior Product Manager
  2. NoSQL Webinar Series
     - NoSQL for Architects: Migrating from RDBMS to a schema-less world
     - NoSQL for Developers: Migrating from RDBMS to a schema-less world
     - NoSQL for DBAs: Migrating from RDBMS to a schema-less world
  3. INTRODUCTION TO DOCUMENT DATABASES
  4. NoSQL catalog
     - Key-Value: memcached (memory-only cache); membase, couchbase (database)
     - Data Structure: redis (memory/disk)
     - Document: couchDB, mongoDB
     - Column: cassandra
     - Graph: Neo4j
  5. Document Databases
     - Each record in the database is a self-describing document
     - Each document has an independent structure
     - Documents can be complex
     - All databases require a unique key
     - Documents are stored using JSON or XML or their derivatives
     - Content can be indexed and queried
     - Offer auto-sharding for scaling and replication for high-availability
     Example document (long string values truncated as on the slide):
     {
       "UUID": "21f7f8de-8051-5b89-86",
       "Time": "2011-04-01T13:01:02.42",
       "Server": "A2223E",
       "Calling Server": "A2213W",
       "Type": "E100",
       "Initiating User": "dsallings@spy.net",
       "Details": {
         "IP": "10.1.1.22",
         "API": "InsertDVDQueueItem",
         "Trace": "cleansed",
         "Tags": [ "SERVER", "US-West", "API" ]
       }
     }
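Because the structure travels with each record, a document like the one above can be produced and consumed with nothing more than a language's JSON support. A minimal Python sketch (string values shortened exactly as on the slide):

```python
import json

# A self-describing document: the field names ship with the record,
# so no external schema is needed to interpret it.
event = {
    "UUID": "21f7f8de-8051-5b89-86",      # unique key (truncated on the slide)
    "Time": "2011-04-01T13:01:02.42",     # also truncated on the slide
    "Server": "A2223E",
    "Calling Server": "A2213W",
    "Type": "E100",
    "Initiating User": "dsallings@spy.net",
    "Details": {                          # documents can be complex and nested
        "IP": "10.1.1.22",
        "API": "InsertDVDQueueItem",
        "Trace": "cleansed",
        "Tags": ["SERVER", "US-West", "API"],
    },
}

# Serialize for storage and parse back: a lossless round trip.
stored = json.dumps(event)
loaded = json.loads(stored)
```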
  6. CRITICAL DIFFERENCES BETWEEN NOSQL AND RDBMS
  7. Changes in interactive software - the NoSQL driver
  8. COMPARING DATA MODELS
  9. http://www.geneontology.org/images/diag-godb-er.jpg
  10. Relational vs Document data model
      [Diagram: a relational table grid (rows R1-R4, columns C1-C4) beside a
      stack of JSON event documents like the one on slide 5.]
      Relational data model: highly-structured table organization with
      rigidly-defined data formats and record structure.
      Document data model: collection of complex documents with arbitrary,
      nested data formats and varying "record" format.
  11. Example: Error Logging Use case
      Table 1: Error Log
      KEY  ERR  TIME  DC
      1    ERR  TIME  FK(DC2)
      2    ERR  TIME  FK(DC2)
      3    ERR  TIME  FK(DC2)
      4    ERR  TIME  FK(DC3)
      Table 2: Data Centers
      KEY  LOC  NUM
      1    DEN  303-223-2332
      2    NYC  212-223-2332
      3    SFO  415-223-2332
  12. Example: Error Logging Use case
      [Same two tables; the foreign-key links from the Error Log rows into the
      Data Centers table are highlighted.]
  13. Example: Error Logging Use case
      The joined row becomes a single self-contained document:
      {
        "ID": 1,
        "ERR": "Out of Memory",
        "TIME": "2004-09-16T23:59:58.75",
        "DC": "NYC",
        "NUM": "212-223-2332"
      }
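The merge shown on slide 13 can be expressed in a few lines of code. This is a sketch in plain Python with the two tables held as in-memory structures (the dict shapes and function name are my own, not from the deck): resolving the foreign key once at write time produces the self-contained document.

```python
# Table 1: Error Log, with DC as a foreign key into Table 2.
error_log = [
    {"KEY": 1, "ERR": "Out of Memory",
     "TIME": "2004-09-16T23:59:58.75", "DC": 2},
]
# Table 2: Data Centers, keyed by primary key.
data_centers = {
    1: {"LOC": "DEN", "NUM": "303-223-2332"},
    2: {"LOC": "NYC", "NUM": "212-223-2332"},
    3: {"LOC": "SFO", "NUM": "415-223-2332"},
}

def to_document(row):
    """Denormalize: resolve the FK and embed the data-center fields."""
    dc = data_centers[row["DC"]]
    return {"ID": row["KEY"], "ERR": row["ERR"], "TIME": row["TIME"],
            "DC": dc["LOC"], "NUM": dc["NUM"]}

doc = to_document(error_log[0])
```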
  14. Document design with flexible schema
      Documents written before the schema change (IDs 1-4):
      {
        "ID": 1,
        "ERR": "Out of Memory",
        "TIME": "2004-09-16T23:59:58.75",
        "DC": "NYC",
        "NUM": "212-223-2332"
      }
      SCHEMA CHANGE
      Documents written afterwards (ID 5) carry new fields alongside the old:
      {
        "ID": 5,
        "ERR": "Out of Memory",
        "TIME": "2004-09-16T23:59:58.75",
        "COMPONENT": "DMS",
        "SEV": "LEVEL1",
        "DC": "NYC",
        "NUM": "212-223-2332"
      }
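Application code reading such a collection has to tolerate both generations of documents. One common pattern, sketched here in plain Python (the "UNKNOWN" default is my assumption, not from the deck), is to default missing fields rather than fail:

```python
# Two generations of documents living side by side in one collection:
docs = [
    {"ID": 1, "ERR": "Out of Memory", "DC": "NYC"},      # pre-schema-change
    {"ID": 5, "ERR": "Out of Memory", "DC": "NYC",
     "COMPONENT": "DMS", "SEV": "LEVEL1"},               # post-schema-change
]

# Readers treat the new fields as optional: older documents get a
# default value instead of raising KeyError.
severities = [d.get("SEV", "UNKNOWN") for d in docs]
```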
  15. Document modeling
      Q: When considering how to model data for a given application, ask:
      - Are these separate objects in the model layer?
      - Are these objects accessed together?
      - Do you need updates to these objects to be atomic?
      - Are multiple people editing these objects concurrently?
      Then:
      - Think of a logical container for the data
      - Think of how data groups together
  16. Document Design Options
      - One document that contains all related data
        - Data is de-normalized
        - Better performance and scale
        - Eliminates client-side joins
      - Separate documents for different object types with cross references
        - Data duplication is reduced
        - Objects may not be co-located
        - Transactions supported only on a document boundary
        - Most document databases do not support joins
  17. Document ID / Key selection
      - Similar to primary keys in relational databases
      - Documents are sharded based on the document ID
      - ID-based document lookup is extremely fast
      - Usually an ID can only appear once in a bucket
      Q: Do you have a unique way of referencing objects?
         Are related objects stored in separate documents?
      Options:
      - UUIDs, date-based IDs, numeric IDs
      - Hand-crafted (human readable)
      - Matching prefixes (for multiple related objects)
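The key options above can be made concrete with a short sketch. The helper functions are illustrative (the naming scheme mirrors the blog example on the later slides, e.g. comment1_dborkar_Hello_World, but the function names are my own):

```python
import uuid

def post_key(author, slug):
    """Hand-crafted, human-readable ID for a blog post."""
    return f"{author}_{slug}"

def comment_key(n, author, slug):
    """Matching prefix ties a comment to its post for related-object lookup."""
    return f"comment{n}_{author}_{slug}"

# Hand-crafted keys are fast to derive from what the app already knows:
pk = post_key("dborkar", "Hello_World")        # "dborkar_Hello_World"
ck = comment_key(1, "dborkar", "Hello_World")  # "comment1_dborkar_Hello_World"

# When no natural unique name exists, a UUID works as the document ID.
auto_id = str(uuid.uuid4())
```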
  18. Example: Entities for a Blog
      - User profile: the main pointer into the user data
        - Blog entries
        - Badge settings, like a twitter badge
      - Blog posts: contain the blogs themselves
      - Blog comments: comments from other users
  19. Blog Document - Option 1 - Single document
      {
        "_id": "dborkar_Hello_World",
        "author": "dborkar",
        "type": "post",
        "title": "Hello World",
        "format": "markdown",
        "body": "Hello from [Couchbase](http://couchbase.com).",
        "html": "<p>Hello from <a href=\"http: …",
        "comments": [
          { "format": "markdown", "body": "Awesome post!" },
          { "format": "markdown", "body": "Like it." }
        ]
      }
  20. Blog Document - Option 2 - Split into multiple docs
      BLOG DOC:
      {
        "_id": "dborkar_Hello_World",
        "author": "dborkar",
        "type": "post",
        "title": "Hello World",
        "format": "markdown",
        "body": "Hello from [Couchbase](http://couchbase.com).",
        "html": "<p>Hello from <a href=\"http: …",
        "comments": [
          "comment1_jchris_Hello_world"
        ]
      }
      COMMENT:
      {
        "_id": "comment1_dborkar_Hello_World",
        "format": "markdown",
        "body": "Awesome post!"
      }
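Since most document databases do not support joins, the application resolves such cross-references itself. A minimal sketch, with a plain dict standing in for the key-value store (for consistency I use dborkar in both IDs; everything else follows the slide):

```python
# Plain dict as a stand-in for the document store: ID -> document.
store = {
    "dborkar_Hello_World": {
        "type": "post",
        "title": "Hello World",
        "comments": ["comment1_dborkar_Hello_World"],  # references, not data
    },
    "comment1_dborkar_Hello_World": {
        "format": "markdown",
        "body": "Awesome post!",
    },
}

def load_post_with_comments(post_id):
    """Client-side 'join': one fetch for the post, one per referenced comment."""
    post = dict(store[post_id])
    post["comments"] = [store[cid] for cid in post["comments"]]
    return post

page = load_post_with_comments("dborkar_Hello_World")
```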
  21. Threaded Comments
      - You can imagine how to take this to a threaded list
      [Diagram: Blog -> comment list -> first comment -> reply to comment ->
      more comments]
      Advantages:
      - Only fetch the data when you need it (for example, when rendering
        part of a web page)
      - Spread the data and load across the entire cluster
  22. COMPARING SCALING MODELS
  23. Modern interactive software architecture
      - Application scales out: just add more commodity web servers
      - Database scales up: get a bigger, more complex server
      Note: relational database technology is great for what it is great for,
      but it is not great for this.
  24. NoSQL database matches application logic tier architecture
      The data layer now scales with linear cost and constant performance.
      - Application scales out: just add more commodity web servers
      - Database scales out: just add more commodity data servers
      Scaling out flattens the cost and performance curves.
  25. EVALUATING NOSQL
  26. The Process - From Evaluation to Go Live
      No different from evaluating a relational database:
      1. Analyze your requirements
      2. Find solutions / products that match key requirements
      3. Execute a proof of concept / performance evaluation
      4. Begin development of the application
      5. Deploy in staging and then production
      New requirements lead to new solutions.
  27. Step 1: Analyze your requirements
      Common application requirements:
      - Rapid application development
        - Changing market needs
        - Changing data needs
      - Scalability
        - Unknown user demand
        - Constantly growing throughput
      - Consistent performance
        - Low response time for better user experience
        - High throughput to handle viral growth
      - Reliability
        - Always online
  28. Step 2: Find solutions that match key requirements
      Favors NoSQL:
      - Linear scalability
      - Schema flexibility
      - High performance
      Favors RDBMS:
      - Multi-document transactions
      - Database rollback
      - Complex security needs
      - Complex joins
      - Extreme compression needs
      Both / depends on the data
  29. Step 3: Proof of concept / Performance evaluation
      Prototype a workload.
      - Look for consistent performance:
        - Low response times / latency (for better user experience)
        - High throughput (to handle viral growth; for resource efficiency)
      - ...across:
        - Read-heavy / write-heavy / mixed workloads
        - Clusters of growing sizes
      - ...and watch for:
        - Contention / heavy locking
        - Linear scalability
  30. Step 3 (continued): Other considerations
      Accessing data:
      - No standards exist yet
      - Typically via SDKs or over HTTP
      - Check if the programming language of your choice is supported
      Consistency:
      - Consistent only at the document level
      - Most document stores currently don't support multi-document
        transactions
      - Analyze your application needs
      Availability:
      - Each node stores active and replica data (Couchbase)
      - Each node is either a master or slave (MongoDB)
  31. Step 3 (continued): Other considerations
      Operations:
      - Monitoring the system
      - Backup and restore of the system
      - Upgrades and maintenance
      - Support
      Ease of scaling:
      - Ease of adding and reducing capacity
      - Single node type
      - App availability on topology changes
      Indexing and querying:
      - Secondary indexes (Map functions)
      - Aggregates and grouping (Reduce functions)
      - Basic querying
  32. Step 4: Begin development - Data Modeling and Document Design
  33. Step 5: Deploying to staging and production
      - Monitoring the system
        - RESTful interfaces / easy integration with monitoring tools
      - High availability
        - Replication
        - Failover and auto-failover
      - Always online, even for maintenance tasks:
        - Database upgrades
        - Software (OS) and hardware upgrades
        - Backup and restore
        - Index building
        - Compaction
  34. So are you being impacted by these?
      Schema rigidity problems:
      - Do you store serialized objects in the database?
      - Do you have lots of sparse tables with very few columns being used by
        most rows?
      - Do you find that your application developers require schema changes
        frequently due to constantly changing data?
      - Are you using your database as a key-value store?
      Scalability problems:
      - Do you periodically need to upgrade systems to more powerful servers
        and scale up?
      - Are you reaching the read/write throughput limit of a single database
        server?
      - Is your server's read/write latency not meeting your SLA?
      - Is your user base growing at a frightening pace?
  35. WHERE IS NOSQL A GOOD FIT?
  36. Performance-driven use cases
      - Low latency
      - High throughput matters
      - Large number of users
      - Unknown demand with sudden growth of users/data
      - Predominantly direct document access
      - Workloads with a very high mutation rate per document (temporal
        locality): working set with heavy writes
  37. Data-driven use cases
      - Support for unlimited data growth
      - Data with non-homogeneous structure
      - Need to quickly and often change data structure
      - 3rd-party or user-defined structure
      - Variable-length documents
      - Sparse data records
      - Hierarchical data
  38. BRIEF OVERVIEW: COUCHBASE SERVER
  39. Couchbase Server
      Simple. Fast. Elastic. NoSQL.
      Couchbase automatically distributes data across commodity servers.
      Built-in caching enables apps to read and write data with
      sub-millisecond latency. And with no schema to manage, Couchbase
      effortlessly accommodates changing data management requirements.
  40. Representative user list
  41. Couchbase architecture
      Data Manager (on each node):
      - Database operations over http
      - Membase EP Engine (built-in memcached)
      - Storage interface (CouchDB)
      Cluster Manager (one per cluster, Erlang/OTP):
      - REST management API / Web UI
      - vBucket state and replication manager
      - Global singleton supervisor
      - Rebalance orchestrator
      - Configuration manager
      - Node health monitor
      - Process monitor
      - Heartbeat
  42. Couchbase deployment
      [Diagram: a web application using the Couchbase client library, with
      data flow to the cluster and cluster management shown separately.]
  43. Clustering With Couchbase
      [Diagram of a SET operation across the cluster:]
      1. SET request arrives at KEY's master server
      2. SET acknowledgement returned to the application
      3. Listener-Sender transmits the item to Replica Server 1 and Replica
         Server 2 for KEY
      4. Couchbase storage engine persists the item from RAM to disk
  44. Basic Operation
      - Docs are distributed evenly across servers in the cluster
      - Each server stores both active and replica docs
        - Only one copy is active at a time
      - The client library provides the app with a simple interface to the
        database
      - The cluster map provides a map of which server a doc is on
        - The app never needs to know
      - The app reads, writes, and updates docs
        - Multiple app servers can access the same document at the same time
      [Diagram: two app servers, each with the Couchbase client library and a
      cluster map, over a three-server cluster; every server holds active and
      replica docs. User-configured replica count = 1.]
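The cluster-map routing described above can be sketched in a few lines. This follows the spirit of Couchbase's vBucket scheme (hash the key to a bucket, look the bucket up in a shared map), but the bucket count, hash choice, and server names here are illustrative, not the product's actual values:

```python
import zlib

NUM_VBUCKETS = 64                # illustrative; kept small for the sketch
servers = ["server1", "server2", "server3"]

# Cluster map shared with every client: vBucket -> server, spread evenly.
cluster_map = {vb: servers[vb % len(servers)] for vb in range(NUM_VBUCKETS)}

def server_for(doc_id):
    """Hash the document ID to a vBucket, then consult the cluster map."""
    vb = zlib.crc32(doc_id.encode("utf-8")) % NUM_VBUCKETS
    return cluster_map[vb]

# The same ID always routes to the same server until the map changes,
# which is why the app never needs to know where a document lives.
```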
  45. Add Nodes
      - Two servers added to the cluster (one-click operation)
      - Docs automatically rebalanced across the cluster
        - Even distribution of docs
        - Minimum doc movement
      - Cluster map updated
      - App database calls now distributed over a larger number of servers
      [Diagram: the three-server cluster grown to five servers, with active
      and replica docs redistributed. User-configured replica count = 1.]
  46. Fail Over Node
      - App servers are happily accessing docs on Server 3
      - The server fails; app server requests to Server 3 fail
      - The cluster detects the server has failed
        - Promotes replicas of its docs to active
        - Updates the cluster map
      - App server requests for those docs now go to the appropriate server
      - Typically a rebalance would follow
      [Diagram: the five-server cluster with Server 3 failed; its docs'
      replicas are promoted on the surviving servers. User-configured replica
      count = 1.]
  47. THANK YOU - DIPTI@COUCHBASE.COM
  48. Reading and Writing
      Reading data:
      - App server: "Give me document A"
      - Server: "Here is document A" (served from RAM, backed by disk)
      Writing data:
      - App server: "Please store document A"
      - Server: "OK, I stored document A" (written to RAM, then to disk)
  49. Flow of data when writing
      Applications write to Couchbase Server; each write passes through two
      queues:
      - Replication queue: Couchbase transmits replicas over the network
      - Disk write queue: Couchbase writes the item to disk
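The two queues on this slide can be sketched as a toy write path. Everything here is simplified (a real server drains these queues asynchronously in the background); the point is only that a write is served from RAM while replication and persistence happen behind separate queues:

```python
from collections import deque

ram = {}                     # writes land here first and are served from here
replication_queue = deque()  # keys waiting to be transmitted to replicas
disk_write_queue = deque()   # keys waiting to be persisted

def write(key, doc):
    """Store in RAM, then enqueue the key for replication and disk."""
    ram[key] = doc
    replication_queue.append(key)
    disk_write_queue.append(key)

def drain(queue, sink):
    """Stand-in for the background worker that empties a queue."""
    while queue:
        key = queue.popleft()
        sink[key] = ram[key]

replica_node, disk = {}, {}
write("A", {"body": "document A"})
drain(replication_queue, replica_node)  # "Couchbase transmitting replicas"
drain(disk_write_queue, disk)           # "Couchbase writing to disk"
```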
