Introduction to Couchbase Server 2.0 (cbs-demo & tour, 12-01-2012)

  • Hello everyone, I am Dipti Borkar, Director of Product Management at Couchbase. I manage the product roadmap for Couchbase Server, our flagship product, and I’m really excited to talk to you about Couchbase Server 2.0.
  • Rapid app dev without the need to perform an expensive ALTER TABLE operation.
  • All nodes are equal: a single node type, easy to scale your cluster, no single point of failure. Every node manages some active data and some replica data. Data is distributed across the cluster using auto-sharding, so the load is also uniformly distributed. There is a fixed number of shards that a key gets hashed to: 1024 shards, distributed across the cluster (a key-hashing sketch follows these notes). Replication within the cluster provides high availability; the number of replicas is configurable, with up to 3 replicas. With auto-failover or manual failover, replica data is immediately promoted to active. Add multiple nodes at a time to grow and shrink your cluster.
  • JSON support: documents are natively stored as JSON, so when you build an app there is no conversion required. New document viewing and editing capability. Indexing and querying: look inside your JSON, build views, and query for a key, for ranges, or to aggregate data. Incremental MapReduce powers indexing; build complex views over your data, great for real-time analytics. XDCR: replicate information from one cluster to another cluster.
  • 1. A set request comes in from the application. 2. Couchbase Server responds back that the key is written. 3. Couchbase Server then replicates the data out to memory on the other nodes. 4. At the same time, the data is put into a write queue to be persisted to disk. (A hedged durability sketch using the Java client follows these notes.)
  • Bulletize the text. Make sure the builds work.
  • Break out to demo here
  • This chart shows Couchbase Server throughput in average operations per second across the number of nodes in a cluster. With 4 nodes, throughput is nearly 1.15 million ops/sec, which means 1.4 GB/sec is being transferred between the database server and clients. Operations are a mix of reads and writes in a 70:30 ratio. High write throughput is seen even with significantly sized 1 KB documents. This also shows linear throughput as more servers are added to the cluster: 0.62 million ops/sec with 2 nodes and 1.15 million ops/sec with 4 nodes. It demonstrates a shared-nothing architecture; all nodes are identical and independent. Keys are auto-sharded across the cluster and evenly distributed whether the cluster has 1 node or 8 nodes.
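
The auto-sharding note above says each key is hashed to one of 1024 fixed shards (vBuckets). The snippet below is a minimal sketch of that idea, assuming a simple CRC32-mod-1024 mapping; the real client libraries handle this internally and also consult the cluster map, so the class and method names here are illustrative only.

    import java.nio.charset.StandardCharsets;
    import java.util.zip.CRC32;

    // Illustrative only: map a document key to one of 1024 shards (vBuckets),
    // assuming a CRC32-style hash. A real Couchbase client then looks up which
    // server owns that vBucket in the cluster map.
    public class VBucketSketch {
        static final int NUM_VBUCKETS = 1024;

        static int vbucketFor(String key) {
            CRC32 crc = new CRC32();
            crc.update(key.getBytes(StandardCharsets.UTF_8));
            // Map the hash into the 0..1023 range.
            return (int) (crc.getValue() % NUM_VBUCKETS);
        }

        public static void main(String[] args) {
            System.out.println("player::Keith4540 -> vBucket " + vbucketFor("player::Keith4540"));
            System.out.println("item::Katana      -> vBucket " + vbucketFor("item::Katana"));
        }
    }

Because the shard count is fixed, growing the cluster only moves whole vBuckets between servers rather than rehashing every key.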
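
The write-operation note above points out that a set is acknowledged from memory before the document is replicated and persisted. As a hedged sketch, the Couchbase Java client of the 2.0 era exposed (to the best of my recollection, not verified here) durability overloads such as set(key, exp, value, PersistTo, ReplicateTo) that block until the write reaches disk and/or a replica. The node address, bucket name, and password below are placeholders.

    import com.couchbase.client.CouchbaseClient;
    import net.spy.memcached.PersistTo;
    import net.spy.memcached.ReplicateTo;

    import java.net.URI;
    import java.util.Arrays;
    import java.util.List;

    // Sketch (client API assumed, not verified): a plain set() is acknowledged
    // once the key is in the managed cache; the durability overload waits until
    // the write is persisted on the master and replicated to one other node.
    public class DurabilitySketch {
        public static void main(String[] args) throws Exception {
            List<URI> nodes = Arrays.asList(new URI("http://127.0.0.1:8091/pools")); // placeholder node
            CouchbaseClient cb = new CouchbaseClient(nodes, "aBucket", "letmein");

            // Fast path: acknowledged from memory, then queued for replication and disk.
            cb.set("hello", 0, "world").get();

            // Durable path: block until persisted on the master and replicated to 1 node.
            cb.set("hello:durable", 0, "world", PersistTo.MASTER, ReplicateTo.ONE).get();

            cb.shutdown();
        }
    }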

    1. Introduction to Couchbase Server 2.0. Dipti Borkar, Director, Product Management
    2. Couchbase Server: NoSQL Document Database 2.0
    3. Couchbase Server. Easy Scalability: grow the cluster without application changes, without downtime, with a single click. Consistent High Performance: consistent sub-millisecond read and write response times with consistent high throughput. Always On: 24x365, no downtime for software upgrades, hardware maintenance, etc. Flexible Data Model: JSON document model with no fixed schema.
    4. Flexible Data Model. Example document: { “ID”: 1, “FIRST”: “Dipti”, “LAST”: “Borkar”, “ZIP”: “94040”, “CITY”: “MV”, “STATE”: “CA” } • No need to worry about the database when changing your application • Records can have different structures; there is no fixed schema • Allows painless data model changes for rapid application development
    5. Couchbase Server Features: built-in clustering – all nodes equal; data replication with auto-failover; zero-downtime maintenance; clone to grow and scale horizontally; built-in managed cache; monitoring and administration APIs and GUI; SDKs for a variety of languages.
    6. New in 2.0: JSON support; indexing and querying; incremental MapReduce; cross data center replication.
    7. Couchbase Server 2.0 Architecture (diagram). Each node runs a data manager (memcached, Couchbase EP engine, storage interface, new persistence layer; Memcapable 1.0 on port 11211, Memcapable 2.0 on 11210, Moxi, query engine and query API on 8092) and a cluster manager (Erlang/OTP: REST management API/Web UI over HTTP on 8091, Erlang port mapper on 4369, distributed Erlang on 21100-21199; vBucket state and replication manager, global singleton supervisor, rebalance orchestrator, configuration manager, node health monitor, process monitor, heartbeat). Most components run on each node; the rebalance orchestration is one per cluster.
    8. Couchbase Server 2.0 Architecture (diagram, continued): the same components as the previous slide, highlighting the new persistence layer.
    9. COUCHBASE OPERATIONS
    10. Single node - Couchbase Write Operation (diagram). The app server writes Doc 1 into the managed cache on the Couchbase Server node; the document is placed on the replication queue (to other nodes) and on the disk queue for persistence.
    11. Single node - Couchbase Update Operation (diagram). The app server writes the updated Doc 1’, which replaces Doc 1 in the managed cache and flows through the replication queue and the disk queue.
    12. Single node - Couchbase Read Operation (diagram). The app server issues a GET for Doc 1, which is served from the managed cache on the Couchbase Server node.
    13. Basic Operation (diagram: two app servers with the Couchbase client library and cluster map, reading/writing/updating docs on a 3-server Couchbase cluster; user-configured replica count = 1). • Docs distributed evenly across servers • Each server stores both active and replica docs; only one copy of a doc is active at a time • Client library provides the app with a simple interface to the database • Cluster map provides a map of which server a doc is on; the app never needs to know • App reads, writes, and updates docs • Multiple app servers can access the same document at the same time
    14. Add Nodes to Cluster (diagram: two servers added to the cluster; user-configured replica count = 1). • Two servers added in a one-click operation • Docs automatically rebalanced across the cluster, with even distribution of docs and minimum doc movement • Cluster map updated • App database calls now distributed over a larger number of servers
    15. Fail Over Node (diagram: Server 3 fails in a 5-server cluster; user-configured replica count = 1). • App servers accessing docs • Requests to Server 3 fail • Cluster detects the server has failed, promotes replicas of its docs to active, and updates the cluster map • Requests for those docs now go to the appropriate server • Typically a rebalance would follow
    16. DEMO TIME
    17. Indexing and Querying (diagram: queries issued through the Couchbase client library against a 3-server cluster; user-configured replica count = 1). • Indexing work is distributed among nodes • Large data sets possible • Parallelize the effort • Each node has an index for the data stored on it • Queries combine the results from the required nodes (a hedged view-query sketch in Java follows the slide transcript)
    18. Cross Data Center Replication (XDCR) (diagram: a Couchbase Server cluster in the NY data center replicating documents, through RAM and disk on each server, to a cluster in the SF data center).
    19. Couchbase SDKs: Java SDK, .Net SDK, PHP SDK, Ruby SDK, and many more (http://www.couchbase.com/develop). User code calls the Java client API, which sits on the Couchbase Java library (spymemcached) and talks to Couchbase Server. Example: CouchbaseClient cb = new CouchbaseClient(listURIs, "aBucket", "letmein"); cb.set("hello", 0, "world"); cb.get("hello"); (a fuller Java sketch follows the slide transcript)
    20. DEMO TIME
    21. Demo: the next big social game. Three objects (documents) within the game: players, monsters, items. Gameplay: players fight monsters, monsters drop items, players own items.
    22. Player Document (the uuid is the Player ID): { "jsonType": "player", "uuid": "35767d02-a958-4b83-8179-616816692de1", "name": "Keith4540", "hitpoints": 75, "experience": 663, "level": 4, "loggedIn": false }
    23. Item Document (the uuid is the Item ID; ownerId refers to a Player): { "jsonType": "item", "name": "Katana_e5890c94-11c6-65746ce6c560", "uuid": "e5890c94-11c6-4856-a7a6-65746ce6c560", "ownerId": "Dale9887" }
    24. Monster Document (the uuid is the Monster ID): { "jsonType": "monster", "name": "Bauchan9932", "uuid": "d10dfc1b-0412-4140-b4ec-affdbf2aa5ec", "hitpoints": 370, "experienceWhenKilled": 52, "itemProbability": 0.5050581341872865 }
    25. GAME ON!
    26. Linear Scaling: Couchbase + Cisco + Solarflare (chart). High throughput with a 1.4 GB/sec data transfer rate using 4 servers; linear throughput scalability.
    27. 2.0 coming soon: JSON documents, indexing and querying, cross data center replication.
    28. THANK YOU! Get Couchbase Server 2.0 at http://www.couchbase.com/download. dipti@couchbase.com, @dborkar
    29. QUESTIONS?
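
The SDK slide (19) and the game demo documents (slides 22-24) fit together: the application simply stores and fetches JSON strings by key. Below is a minimal, self-contained expansion of the slide's Java snippet, assuming a local single-node cluster and the same example bucket credentials ("aBucket"/"letmein") from the slide; the "player::" key prefix is an illustrative convention, not something the deck prescribes.

    import com.couchbase.client.CouchbaseClient;

    import java.net.URI;
    import java.util.Arrays;
    import java.util.List;

    // Expanded sketch of the slide's snippet: store the demo player document as a
    // JSON string under a key, then read it back. Node address, bucket name,
    // password, and the key prefix are placeholders.
    public class GameDemoSketch {
        public static void main(String[] args) throws Exception {
            List<URI> listURIs = Arrays.asList(new URI("http://127.0.0.1:8091/pools"));
            CouchbaseClient cb = new CouchbaseClient(listURIs, "aBucket", "letmein");

            String playerJson = "{"
                + "\"jsonType\": \"player\","
                + "\"uuid\": \"35767d02-a958-4b83-8179-616816692de1\","
                + "\"name\": \"Keith4540\","
                + "\"hitpoints\": 75,"
                + "\"experience\": 663,"
                + "\"level\": 4,"
                + "\"loggedIn\": false"
                + "}";

            // No schema to declare: the document is stored as-is.
            cb.set("player::Keith4540", 0, playerJson).get();

            // Reads are served from the managed cache when the document is resident.
            String stored = (String) cb.get("player::Keith4540");
            System.out.println(stored);

            cb.shutdown();
        }
    }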
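
Slide 17's indexing and querying flow can be driven from the same Java client. The sketch below assumes a design document named "game" with a view "players_by_level" has already been created through the admin UI, with a JavaScript map function along the lines of: function (doc, meta) { if (doc.jsonType == "player") { emit(doc.level, doc.name); } }. The view-query classes shown (View, Query, ViewResponse, ViewRow) are from memory of the 2.0-era Java client and should be treated as an assumption rather than a verified API.

    import com.couchbase.client.CouchbaseClient;
    import com.couchbase.client.protocol.views.Query;
    import com.couchbase.client.protocol.views.View;
    import com.couchbase.client.protocol.views.ViewResponse;
    import com.couchbase.client.protocol.views.ViewRow;

    import java.net.URI;
    import java.util.Arrays;
    import java.util.List;

    // Assumed sketch: query a pre-created view that indexes player documents by
    // level. Design document and view names are illustrative; the query is sent
    // to the cluster, each node scans its local index, and the results are merged.
    public class ViewQuerySketch {
        public static void main(String[] args) throws Exception {
            List<URI> nodes = Arrays.asList(new URI("http://127.0.0.1:8091/pools"));
            CouchbaseClient cb = new CouchbaseClient(nodes, "aBucket", "letmein");

            View view = cb.getView("game", "players_by_level"); // design doc, view name
            Query query = new Query();
            query.setLimit(10); // only the first few rows for the demo

            ViewResponse response = cb.query(view, query);
            for (ViewRow row : response) {
                System.out.println("level " + row.getKey() + " -> " + row.getValue());
            }

            cb.shutdown();
        }
    }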
