Introduction_to_couchbase_SF_2013

Published in: Technology, Education

  • All nodes are equal: there is a single node type, which makes it easy to scale your cluster and leaves no single point of failure. Every node manages some active data and some replica data. Data is distributed across the cluster, so load is also uniformly distributed, using auto-sharding: each key is hashed to one of a fixed number of shards (1024 vBuckets) distributed across the cluster. Replication within the cluster provides high availability; the number of replicas is configurable, with up to three replicas. With auto-failover or manual failover, replica data is immediately promoted to active. Add multiple nodes at a time to grow or shrink your cluster.
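The auto-sharding described above (a key hashed to one of 1024 vBuckets, each owned by one server in the cluster map) can be sketched in Python. This is an illustrative simplification, not Couchbase's exact hash function or vBucket-to-server assignment:

```python
import zlib

NUM_VBUCKETS = 1024  # fixed number of shards, per the note above

def vbucket_for_key(key: str) -> int:
    """Map a document key to one of the fixed shards (vBuckets)."""
    # Illustrative only: a CRC32-based hash mod the shard count.
    # Couchbase's real mapping has the same shape but differs in detail.
    return zlib.crc32(key.encode("utf-8")) % NUM_VBUCKETS

def server_for_vbucket(vbucket: int, num_servers: int) -> int:
    """The cluster map assigns every vBucket to exactly one server."""
    # Toy assignment: round-robin vBuckets over servers.
    return vbucket % num_servers

vb = vbucket_for_key("user::1234")  # the same key always hashes the same way
assert 0 <= vb < NUM_VBUCKETS
assert 0 <= server_for_vbucket(vb, 4) < 4
```

Because the shard count is fixed, growing the cluster only reassigns whole vBuckets to servers; keys never rehash to different shards.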
  • JSON support: natively stored as JSON, so when you build an app there is no conversion required. New document viewing and editing capability. Indexing and querying: look inside your JSON, build views, and query for a key, for ranges, or to aggregate data. Incremental map-reduce powers indexing; build complex views over your data, great for real-time analytics. XDCR: replicate information from one cluster to another cluster.
  • 1. A set request comes in from the application. 2. Couchbase Server responds back that the key is written. 3. Couchbase Server then replicates the data out to memory on the other nodes. 4. At the same time, it puts the data into a write queue to be persisted to disk.
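The numbered steps in the note above can be sketched as a toy model, assuming nothing beyond what the note says: the write is acknowledged as soon as it is in memory, while replication and persistence drain from queues afterwards. The `CouchbaseNodeSketch` class name is hypothetical, not a real SDK type:

```python
from collections import deque

class CouchbaseNodeSketch:
    """Toy model of the write path from the notes, not the real server."""

    def __init__(self):
        self.cache = {}                   # managed cache
        self.replication_queue = deque()  # to replica nodes' memory
        self.disk_queue = deque()         # to be persisted to disk
        self.disk = {}

    def set(self, key, doc):
        # 1. A set request comes in from the application.
        self.cache[key] = doc
        # 3. The data is queued for replication to memory on other nodes...
        self.replication_queue.append((key, doc))
        # 4. ...and, at the same time, queued to be persisted to disk.
        self.disk_queue.append((key, doc))
        # 2. The server acknowledges the write once it is in memory.
        return "stored"

    def flush_disk_queue(self):
        # Persistence happens asynchronously, after the acknowledgment.
        while self.disk_queue:
            key, doc = self.disk_queue.popleft()
            self.disk[key] = doc

node = CouchbaseNodeSketch()
assert node.set("doc1", {"v": 1}) == "stored"
assert "doc1" in node.cache and "doc1" not in node.disk  # ack precedes persistence
node.flush_disk_queue()
assert node.disk["doc1"] == {"v": 1}
```

The point of the sketch is the ordering: the acknowledgment (step 2) does not wait for replication (step 3) or persistence (step 4).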
  • Bulletize the text. Make sure the builds work.
  • Bulletize the text. Make sure the builds work properly.
  • Bulletize the text. Make sure the builds work properly.
  • Bulletize the text. Make sure the builds work.
  • Overview of what this feature is
  • 1. A set request comes in from the application. 2. Couchbase Server responds back that the key is written. 3. Couchbase Server then replicates the data out to memory on the other nodes. 4. At the same time, it puts the data into a write queue to be persisted to disk.
  • Transcript

    • 1. Introduction to Couchbase Server. Dipti Borkar, Director, Product Management
    • 2. Couchbase Server: NoSQL Document Database
    • 3. Couchbase Open Source Project: the leading NoSQL database project focused on distributed database technology and the surrounding ecosystem. Supports both key-value and document-oriented use cases. All components are available under the Apache 2.0 Public License. Obtained as packaged software in both Enterprise and Community editions.
    • 4. In this session: overview of Couchbase Server features; what's new in Couchbase Server 2.1 and 2.2; architectural overview and Couchbase operations; live demo with a peek into new features.
    • 5. Couchbase Server: Flexible Data Model (JSON document model with no fixed schema); Easy Scalability (grow the cluster without application changes and without downtime, with a single click); Consistent High Performance (consistent sub-millisecond read and write response times with consistent high throughput); Always On 24x365 (no downtime for software upgrades, hardware maintenance, etc.).
    • 6. Core Couchbase Server Features: built-in clustering (all nodes equal); data replication with auto-failover; zero-downtime maintenance; built-in managed cache; append-only storage layer; online compaction; monitoring and admin API & UI; SDKs for a variety of languages.
    • 7. 2.0 introduced: JSON support; indexing and querying; incremental map reduce; cross data center replication.
    • 8. 2.1 introduced: multi-threaded persistence engine; optimistic XDCR; CBHealthcheck, a cluster health check tool; hostname management; rebalance progress indicators. New in 2.2: new XDCR protocol based on memcached; read-only admin user; automated and optimized purge management; CBRecovery, a data recovery tool from remote clusters; non-root, non-sudo install. Conference tip: learn more about the health checker in "Keeping your cluster healthy" at 4:30pm.
    • 9. Couchbase Server Architecture (diagram). Cluster Manager (Erlang/OTP): heartbeat, process monitor, global singleton supervisor, and configuration manager on each node; rebalance orchestrator, node health monitor, and vBucket state and replication manager, one per cluster; HTTP REST management API/Web UI on port 8091; Erlang port mapper on port 4369; distributed Erlang on ports 21100-21199. Data Manager: Moxi on port 11211 (Memcapable 1.0); Couchbase EP Engine with memcached on port 11210 (Memcapable 2.0); storage interface with the new persistence layer; query engine and query API on port 8092.
    • 10. Couchbase Server Architecture (diagram). Cluster Manager (Erlang/OTP): REST management API/Web UI serving the admin console over HTTP on port 8091; replication, rebalance, and shard state manager. Data Manager: object-managed cache and multi-threaded persistence engine behind data access ports 11210/11211; query engine and query API on port 8092.
    • 11. Couchbase Operations
    • 12. Single node, Couchbase write operation (diagram): Doc 1 arrives from the app server at the Couchbase Server node, lands in the managed cache, and is placed on the replication queue (to other nodes) and the disk queue (to disk).
    • 13. Single node, Couchbase update operation (diagram): an updated Doc 1' from the app server replaces Doc 1 in the managed cache and is placed on the replication queue (to other nodes) and the disk queue (to disk).
    • 14. Single node, Couchbase read operation (diagram): a GET for Doc 1 from the app server is served straight from the managed cache on the Couchbase Server node.
    • 15. Single node, Couchbase cache miss (diagram): a GET for Doc 1 arrives while only Doc 2 through Doc 6 sit in the managed cache; Doc 1 is fetched from disk into the cache and then returned to the app server.
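Slides 14 and 15 together describe the read path: serve from the managed cache when possible, otherwise fetch from disk, populate the cache, and return the doc to the app server. A minimal sketch of that behavior (the `ReadSketch` class is hypothetical, not a real SDK type):

```python
class ReadSketch:
    """Toy read path: managed cache first, disk on a miss."""

    def __init__(self, cache, disk):
        self.cache = cache
        self.disk = disk
        self.misses = 0

    def get(self, key):
        if key in self.cache:   # slide 14: served from the managed cache
            return self.cache[key]
        self.misses += 1        # slide 15: cache miss
        doc = self.disk[key]    # fetch the doc from disk...
        self.cache[key] = doc   # ...populate the cache...
        return doc              # ...and return it to the app server

node = ReadSketch(cache={"doc2": "b"}, disk={"doc1": "a", "doc2": "b"})
assert node.get("doc2") == "b" and node.misses == 0  # cache hit
assert node.get("doc1") == "a" and node.misses == 1  # miss, served from disk
assert "doc1" in node.cache  # now cached for subsequent reads
```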
    • 16. Basic operation (Couchbase Server cluster, user-configured replica count = 1): docs are distributed evenly across servers, and each server stores both active and replica docs, with only one server active for a doc at a time. The client library provides the app with a simple interface to the database; the cluster map tells it which server a doc is on, so the app never needs to know. The app reads, writes, and updates docs, and multiple app servers can access the same document at the same time.
    • 17. Add nodes to cluster: two servers added in a one-click operation. Docs are automatically rebalanced across the cluster, with even distribution of docs and minimum doc movement. The cluster map is updated, and app database calls are now distributed over a larger number of servers.
    • 18. Fail over node: app servers are accessing docs when requests to Server 3 fail. The cluster detects the failed server, promotes replicas of its docs to active, and updates the cluster map; requests for those docs now go to the appropriate server. Typically a rebalance would follow. Conference tip: learn more about running "Couchbase in production" in Perry's session at 2:40pm.
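The failover sequence on slide 18 can be sketched with plain dictionaries: when a server fails, the replicas of its active docs, held on the surviving servers, are promoted to active. The doc and server names here are invented, and the layout is simplified to one replica per doc:

```python
# Toy cluster map: active and replica doc sets per server (names made up).
active  = {"s1": {"d5", "d2"}, "s2": {"d4", "d7"}, "s3": {"d1", "d3"}}
replica = {"s1": {"d4", "d1"}, "s2": {"d3"},       "s3": {"d5"}}

def fail_over(failed, active, replica):
    """Remove a failed server and promote its docs' replicas to active."""
    lost = active.pop(failed)       # docs that were active on the dead node
    replica.pop(failed, None)       # its replica copies are gone too
    for server, docs in replica.items():
        promoted = docs & lost      # surviving replicas of the lost docs
        active[server] |= promoted  # replicas are promoted to active
        docs -= promoted            # they are no longer replicas
    return lost

lost = fail_over("s3", active, replica)
# d1 and d3 were active on s3; their replicas on s1 and s2 are now active.
assert "d1" in active["s1"] and "d3" in active["s2"]
```

After this promotion the cluster map is updated so clients route around the failed node; a rebalance would then recreate the missing replicas.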
    • 19. Demo  Time  
    • 20. Indexing and querying, the basics: define materialized views on JSON documents and then query across the data set. Using views you can define primary indexes, simple secondary indexes (the most common use case), complex secondary, tertiary, and composite indexes, and aggregations (reductions). Indexing is incremental and eventual, so queries are eventually consistent. Built using map/reduce technology; map and reduce functions are written in JavaScript.
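On the server, map and reduce functions are written in JavaScript, but the shape of a view is easy to mimic in Python: the map function emits key/value pairs per document (here a simple secondary index on brewery), and the reduce aggregates the emitted values (here a count, like the built-in `_count`). The documents and field names are invented for illustration:

```python
docs = [
    {"id": "beer1", "type": "beer", "brewery": "21st", "abv": 5.6},
    {"id": "beer2", "type": "beer", "brewery": "21st", "abv": 7.0},
    {"id": "brew1", "type": "brewery", "name": "21st"},
]

def map_fn(doc):
    # emit(brewery, abv) for every beer document: a secondary index
    if doc.get("type") == "beer":
        yield (doc["brewery"], doc["abv"])

def reduce_fn(values):
    # aggregate the emitted values (a count, like _count)
    return len(list(values))

emitted = [kv for doc in docs for kv in map_fn(doc)]
index = sorted(emitted)  # the view: sorted by key, queryable by key or range
by_key = {}
for k, v in emitted:
    by_key.setdefault(k, []).append(v)
result = {k: reduce_fn(vs) for k, vs in by_key.items()}

assert index == [("21st", 5.6), ("21st", 7.0)]
assert result == {"21st": 2}
```

Because the index is sorted by emitted key, key and range lookups are cheap; the "eventually consistent" caveat on the slide comes from the index being rebuilt incrementally after writes.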
    • 21. Indexing and querying (diagram): indexing work is distributed amongst nodes, which makes large data sets possible and parallelizes the effort. Each node holds the index for the data stored on it, and queries combine the results from the required nodes.
    • 22. Cross data center replication, the basics: replicate your Couchbase data across clusters, which may be spread across geos. Configured on a per-bucket (per-database) basis. Supports unidirectional and bidirectional operation, and the application can read and write from both clusters (active-active replication). Replication throughput scales out linearly. Different from intra-cluster replication.
    • 23. Cross data center replication, data flow (diagram): as in the single-node write, Doc 1 from the app server lands in the managed cache and is placed on the replication queue and the disk queue; the XDCR engine additionally ships Doc 1 to the other cluster.
    • 24. Cross Data Center Replication (XDCR) (diagram): a three-server Couchbase cluster in San Francisco replicating to a three-server cluster in New York. Optimistic replication; tunable parameters per replication; optimized protocol based on memcached; reliability and performance at scale.
    • 25. Demo  Time  
    • 26. What else is new?
    • 27. Couchbase Query Language, N1QL (read "nickel"): our next-generation query language for JSON, in Dev Preview. Conference tip: learn more about N1QL at 1:00pm in the Dev Track or visit query.couchbase.com.
    • 28. Couchbase Server: www.couchbase.com/download
    • 29. Thank you! dipti@couchbase.com, @dborkar. Download Couchbase Server 2.2: http://www.couchbase.com/download
