MongoDB London 2013 - Basic Replication
Presentation Transcript

  • Replication and Replica Sets — Marc Schwering, Solutions Architect, 10gen (#MongoDBDays, @m4rcsch)
  • Notes to the presenter — themes for this presentation:
    –  Balance between cost and redundancy
    –  Cover the many scenarios which replication would solve, and why
    –  The secret sauce of avoiding downtime and data loss
    –  If time is short, skip the Behind the Curtain section; if time is very short, also skip Operational Considerations
  • Agenda
    –  Replica Set Lifecycle
    –  Developing with Replica Sets
    –  Operational Considerations
  • Why Replication?
    –  How many have faced node failures?
    –  How many have been woken up from sleep to do a failover?
    –  How many have experienced issues due to network latency?
    –  Different uses for data: normal processing, simple analytics
  • Replica Set Lifecycle
  • Replica Set – Creation (diagram: three fresh nodes, Node 1, Node 2, Node 3)
  • Replica Set – Initialize (diagram: Node 3 becomes primary and replicates to secondaries Node 1 and Node 2; heartbeats run between all nodes)
  • Replica Set – Failure (diagram: primary Node 3 is lost; secondaries Node 1 and Node 2 detect it via heartbeats and hold a primary election)
  • Replica Set – Failover (diagram: Node 2 is elected primary and replicates to secondary Node 1; Node 3 is still down)
  • Replica Set – Recovery (diagram: Node 3 rejoins in recovery state and syncs from the new primary, Node 2)
  • Replica Set – Recovered (diagram: Node 3 is back as a secondary; Node 2 remains primary)
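The failover sequence above (heartbeat timeout, election among surviving members, recovery) can be sketched as a toy simulation. This is illustrative only, not MongoDB's real election protocol — real elections also weigh votes and replication state; `electPrimary` and the member fields are invented for the sketch. The 2-second heartbeat and 10-second timeout come from the deck's implementation-details slide.

```javascript
// Toy failover sketch: after a heartbeat timeout, the surviving members
// elect the eligible node with the highest priority — simplified.
const ELECTION_TIMEOUT_MS = 10000; // peer declared down after 10s of missed 2s heartbeats

function electPrimary(members, nowMs) {
  // Members that answered a heartbeat recently are considered up.
  const up = members.filter(m => nowMs - m.lastHeartbeatMs < ELECTION_TIMEOUT_MS);
  // A real replica set requires a majority to elect a primary.
  if (up.length <= members.length / 2) return null; // no majority, no primary
  const eligible = up.filter(m => m.priority > 0);  // priority 0 can never be primary
  eligible.sort((a, b) => b.priority - a.priority);
  return eligible[0] || null;
}

const now = 20000;
const set = [
  { host: "A", priority: 3, lastHeartbeatMs: 5000 },  // timed out: down
  { host: "B", priority: 2, lastHeartbeatMs: 19000 }, // up
  { host: "C", priority: 1, lastHeartbeatMs: 18500 }, // up
];
console.log(electPrimary(set, now).host); // "B": highest-priority member still up
```

If all members stop heartbeating, no majority exists and the sketch returns `null` — the set has no primary, mirroring the Failure diagram.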
  • Replica Set Roles & Configuration
  • Replica Set Roles (diagram: Node 3 primary, Node 1 secondary, Node 2 arbiter; the primary replicates to the secondary, heartbeats run between all nodes)
  • Configuration Options
    > conf = {
        _id : "mySet",
        members : [
          {_id : 0, host : "A", priority : 3},
          {_id : 1, host : "B", priority : 2},
          {_id : 2, host : "C"},
          {_id : 3, host : "D", hidden : true},
          {_id : 4, host : "E", hidden : true, slaveDelay : 3600}
        ]
      }
    > rs.initiate(conf)
  • Configuration Options, annotated (four slides walk through the same config with callouts):
    > conf = {
        _id : "mySet",
        members : [
          {_id : 0, host : "A", priority : 3},                      // Primary DC
          {_id : 1, host : "B", priority : 2},                      // Primary DC
          {_id : 2, host : "C"},                                    // Secondary DC (default priority = 1)
          {_id : 3, host : "D", hidden : true},                     // Analytics node
          {_id : 4, host : "E", hidden : true, slaveDelay : 3600}   // Backup node
        ]
      }
    > rs.initiate(conf)
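As a rough illustration of the kind of sanity checks `rs.initiate()` performs on a config like the one above, here is a client-side validator sketch. `validateConf` is an invented helper, not a driver or shell API, and the 12-member ceiling reflects the replica-set limit of the MongoDB 2.2/2.4 era this deck targets.

```javascript
// Hedged sketch: sanity-check a replica set config object before initiating.
function validateConf(conf) {
  const errors = [];
  if (!conf._id) errors.push("config needs an _id (the set name)");
  const ids = conf.members.map(m => m._id);
  if (new Set(ids).size !== ids.length) errors.push("member _ids must be unique");
  if (conf.members.length > 12) errors.push("at most 12 members in this era");
  for (const m of conf.members) {
    if (m.slaveDelay && !m.hidden) {
      // Delayed members serve stale data; hiding them keeps clients away.
      errors.push("member " + m._id + ": delayed members are usually hidden too");
    }
  }
  return errors;
}

const conf = {
  _id: "mySet",
  members: [
    { _id: 0, host: "A", priority: 3 },
    { _id: 1, host: "B", priority: 2 },
    { _id: 2, host: "C" },
    { _id: 3, host: "D", hidden: true },
    { _id: 4, host: "E", hidden: true, slaveDelay: 3600 },
  ],
};
console.log(validateConf(conf)); // []  — the deck's config passes
```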
  • Developing with Replica Sets
  • Strong Consistency (diagram: the driver sends both writes and reads to the primary; the secondaries replicate from it)
  • Delayed Consistency (diagram: the driver writes to the primary but reads from the secondaries, which may lag)
  • Write Concern
    –  Network acknowledgement
    –  Wait for error
    –  Wait for journal sync
    –  Wait for replication
  • Unacknowledged (diagram: the driver sends the write; the primary applies it in memory and no acknowledgement is returned)
  • MongoDB Acknowledged – wait for error (diagram: the driver calls getLastError; the primary applies the write in memory, then acknowledges)
  • Wait for Journal Sync (diagram: the driver calls getLastError with j : true; the primary applies the write in memory and writes it to the journal before acknowledging)
  • Wait for Replication (diagram: the driver calls getLastError with w : 2; the primary applies the write in memory and replicates it to a secondary before acknowledging)
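The four write-concern diagrams above differ only in how late the acknowledgement comes back: stronger concerns trade latency for durability. A toy model (the function and event names are invented for illustration, not a driver API):

```javascript
// Toy write-concern ladder: given the ordered events a write went through
// on the server, report at which point each concern level acknowledges.
function ackPoint(concern, events) {
  switch (concern) {
    case "unacknowledged": return 0;                                  // fire and forget
    case "acknowledged":   return events.indexOf("appliedInMemory") + 1; // wait for error
    case "journaled":      return events.indexOf("journaled") + 1;       // j : true
    case "replicated":     return events.indexOf("replicatedTo1") + 1;   // w : 2
    default: throw new Error("unknown concern: " + concern);
  }
}

const events = ["appliedInMemory", "journaled", "replicatedTo1"];
console.log(ackPoint("unacknowledged", events)); // 0
console.log(ackPoint("acknowledged", events));   // 1
console.log(ackPoint("journaled", events));      // 2
console.log(ackPoint("replicated", events));     // 3
```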
  • Tagging
    –  Control where data is written to, and read from
    –  Each member can have one or more tags, e.g. tags: {dc: "ny"} or tags: {dc: "ny", subnet: "192.168", rack: "row3rk7"}
    –  The replica set defines rules for write concerns
    –  Rules can change without changing app code
  • Tagging Example
    {
      _id : "mySet",
      members : [
        {_id : 0, host : "A", tags : {"dc": "ny"}},
        {_id : 1, host : "B", tags : {"dc": "ny"}},
        {_id : 2, host : "C", tags : {"dc": "sf"}},
        {_id : 3, host : "D", tags : {"dc": "sf"}},
        {_id : 4, host : "E", tags : {"dc": "cloud"}}
      ],
      settings : {
        getLastErrorModes : {
          allDCs : {"dc" : 3},
          someDCs : {"dc" : 2}
        }
      }
    }
    > db.blogs.insert({...})
    > db.runCommand({getLastError : 1, w : "someDCs"})
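A getLastErrorModes rule such as someDCs : {"dc" : 2} is satisfied once the acknowledging members span the required number of distinct values for each listed tag. A small sketch of that evaluation (`ruleSatisfied` is invented for illustration, not server code):

```javascript
// Check whether the members that acknowledged a write satisfy a tag rule
// like {dc : 2}: at least 2 DISTINCT values of the "dc" tag among them.
function ruleSatisfied(rule, ackedMembers) {
  return Object.entries(rule).every(([tag, needed]) => {
    const distinct = new Set(
      ackedMembers.map(m => m.tags[tag]).filter(v => v !== undefined)
    );
    return distinct.size >= needed;
  });
}

const acked = [
  { host: "A", tags: { dc: "ny" } },
  { host: "B", tags: { dc: "ny" } },
  { host: "C", tags: { dc: "sf" } },
];
console.log(ruleSatisfied({ dc: 2 }, acked));             // true: spans ny and sf
console.log(ruleSatisfied({ dc: 2 }, acked.slice(0, 2))); // false: only ny so far
```

Note that two acknowledgements from the same datacenter do not satisfy the rule — the count is over distinct tag values, not over members.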
  • Wait for Replication with Tagging (diagram: the driver calls getLastError with w : "allDCs"; the write is applied on the primary in SF and replicated to secondaries in NY and the cloud before acknowledging)
  • Read Preference Modes
    –  primary (only) – the default
    –  primaryPreferred
    –  secondary
    –  secondaryPreferred
    –  nearest
    When more than one node is eligible, the closest node is used for reads (all modes but primary).
  • Tagged Read Preference
    –  Custom read preferences
    –  Control where you read from by (node) tags, e.g. { "disk": "ssd", "use": "reporting" }
    –  Use in conjunction with standard read preferences (except primary)
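Driver-side read routing for the modes and tags above can be sketched as follows. `pickNode` and the `pingMs` field are invented for the illustration; real drivers track latency windows and member state, but the mode semantics are the same.

```javascript
// Sketch of read routing: pick a node given a read preference mode and
// optional tag set. pingMs stands in for the driver's latency measurement
// behind the "nearest" mode.
function pickNode(nodes, mode, tags = {}) {
  const matches = n => Object.entries(tags).every(([k, v]) => n.tags[k] === v);
  const closest = list => list.slice().sort((a, b) => a.pingMs - b.pingMs)[0];
  const primary = nodes.find(n => n.isPrimary);
  const secondaries = nodes.filter(n => !n.isPrimary && matches(n));
  switch (mode) {
    case "primary":            return primary || null; // tags don't apply here
    case "primaryPreferred":   return primary || closest(secondaries) || null;
    case "secondary":          return closest(secondaries) || null;
    case "secondaryPreferred": return closest(secondaries) || primary || null;
    case "nearest":            return closest(nodes.filter(matches)) || null;
    default: throw new Error("unknown mode: " + mode);
  }
}

const nodes = [
  { host: "P",  isPrimary: true,  pingMs: 5, tags: { disk: "hdd" } },
  { host: "S1", isPrimary: false, pingMs: 2, tags: { disk: "ssd" } },
  { host: "S2", isPrimary: false, pingMs: 9, tags: { disk: "ssd" } },
];
console.log(pickNode(nodes, "secondary", { disk: "ssd" }).host); // "S1"
console.log(pickNode(nodes, "nearest").host);                    // "S1"
```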
  • Operational Considerations
  • Maintenance and Upgrade
    –  No downtime
    –  Rolling upgrade/maintenance: start with the secondaries, do the primary last
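The rolling-upgrade rule above in sketch form: upgrade each secondary in turn, then step the primary down and upgrade it last, so the set keeps an elected primary (and keeps accepting writes) throughout. `rollingUpgradeOrder` is an invented helper for illustration.

```javascript
// Produce the order in which to take members down for a rolling upgrade:
// all secondaries first, the primary last (after rs.stepDown()).
function rollingUpgradeOrder(members) {
  const secondaries = members.filter(m => !m.isPrimary).map(m => m.host);
  const primary = members.filter(m => m.isPrimary).map(m => m.host);
  // Doing the primary last means at most one election during the upgrade.
  return secondaries.concat(primary);
}

const set = [
  { host: "A", isPrimary: true },
  { host: "B", isPrimary: false },
  { host: "C", isPrimary: false },
];
console.log(rollingUpgradeOrder(set)); // [ 'B', 'C', 'A' ]
```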
  • Replica Set – 1 Data Center (diagram: all three members in a single datacenter)
    –  Single datacenter, single switch & power
    –  Points of failure: power, network, the data center itself, a two-node failure
    –  Automatic recovery from a single-node crash
  • Replica Set – 2 Data Centers (diagram: Members 1–2 in Datacenter 1, Member 3 in Datacenter 2)
    –  Multi data center; DR node for safety
    –  Can't do a durable multi-data-center write safely, since there is only one node in the distant DC
  • Replica Set – 3 Data Centers (diagram: Members 1–2 in Datacenter 1, Members 3–4 in Datacenter 2, Member 5 in Datacenter 3)
    –  Can survive the loss of a full data center
    –  Can do w : { dc : 2 } (with tags) to guarantee the write reaches two data centers
  • Behind the Curtain
  • Implementation details
    –  Heartbeat every 2 seconds; a node times out after 10 seconds
    –  Local DB (not replicated): system.replset and oplog.rs
    –  oplog.rs is a capped collection storing an idempotent version of each operation
  • Op(erations) Log is idempotent
    > db.replsettest.insert({_id : 1, value : 1})
    { "ts" : Timestamp(1350539727000, 1), "h" : NumberLong("6375186941486301201"),
      "op" : "i", "ns" : "test.replsettest", "o" : { "_id" : 1, "value" : 1 } }
    > db.replsettest.update({_id : 1}, {$inc : {value : 10}})
    { "ts" : Timestamp(1350539786000, 1), "h" : NumberLong("5484673652472424968"),
      "op" : "u", "ns" : "test.replsettest", "o2" : { "_id" : 1 },
      "o" : { "$set" : { "value" : 11 } } }
  • A single operation can produce many oplog entries
    > db.replsettest.update({}, {$set : {name : "foo"}}, false, true)
    { "ts" : Timestamp(1350540395000, 1), "h" : NumberLong("-4727576249368135876"),
      "op" : "u", "ns" : "test.replsettest", "o2" : { "_id" : 2 },
      "o" : { "$set" : { "name" : "foo" } } }
    { "ts" : Timestamp(1350540395000, 2), "h" : NumberLong("-7292949613259260138"),
      "op" : "u", "ns" : "test.replsettest", "o2" : { "_id" : 3 },
      "o" : { "$set" : { "name" : "foo" } } }
    { "ts" : Timestamp(1350540395000, 3), "h" : NumberLong("-1888768148831990635"),
      "op" : "u", "ns" : "test.replsettest", "o2" : { "_id" : 1 },
      "o" : { "$set" : { "name" : "foo" } } }
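Replaying the entries above is safe precisely because the oplog stores idempotent operations: the $inc from the earlier slide was rewritten as a $set, so applying an entry twice yields the same document. A minimal, invented apply function (not the real replication code, and handling only the two op types shown) demonstrates this:

```javascript
// Apply an oplog-style entry to a document; supports only the "i" (insert)
// and "u"-with-$set (update) shapes shown on the slides.
function applyOp(doc, op) {
  if (op.op === "i") return { ...op.o };                 // insert: the document itself
  if (op.op === "u") return { ...doc, ...op.o["$set"] }; // update: set absolute values
  throw new Error("unsupported op: " + op.op);
}

const insertOp = { op: "i", o: { _id: 1, value: 1 } };
const updateOp = { op: "u", o2: { _id: 1 }, o: { $set: { value: 11 } } };

let doc = applyOp(null, insertOp);
doc = applyOp(doc, updateOp);
const replayed = applyOp(doc, updateOp); // applying the same entry a second time
console.log(doc.value, replayed.value);  // 11 11 — replay changes nothing
```

Had the oplog stored the original $inc, a replay would have produced 21 instead of 11 — which is why the idempotent rewrite matters.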
  • Recent improvements
    –  Read preference support with sharding (in the drivers too)
    –  Improved replication over WAN/high-latency networks
    –  rs.syncFrom command
    –  buildIndexes setting
    –  replIndexPrefetch setting
  • Just Use It
    –  Use replica sets
    –  Easy to set up – try it on a single machine
    –  Check the docs for replica set tutorials: http://docs.mongodb.org/manual/replication/#tutorials
  • Thank You — Marc Schwering, Solutions Architect, 10gen (#MongoDBDays, @m4rcsch)