Webinar: Replication and Replica Sets

For the first time this year, 10gen will be offering a track completely dedicated to Operations at MongoSV, 10gen's annual MongoDB user conference on December 4. Learn more at MongoSV.com

MongoDB supports replication for failover and redundancy. In this session we will introduce the basic concepts around replica sets, which provide automated failover and recovery of nodes. We'll show you how to set up, configure, and initiate a replica set, and methods for using replication to scale reads. We'll also discuss proper architecture for durability.

Published in: Technology
  • Why? What is it? Configuration options.
  • Basic explanation: two or more nodes form the set; quorum.
  • Initialize -> election. Primary elected, then data replication from primary to secondaries.
  • Primary down / network failure: automatic election of a new primary if a majority exists.
  • New primary elected; replication established from the new primary.
  • Down node comes back up and rejoins the set: recovery, then secondary.
  • Primary: data member. Secondary: hot standby. Arbiter: voting member.
  • Priority: a floating-point number between 0 and 1000. The highest-priority member that is up to date wins ("up to date" == within 10 seconds of the primary). If a higher-priority member catches up, it will force an election and win. Slave delay: lags behind the primary by a configurable time delay; automatically hidden from clients. Protects against operator errors: fat-fingering, an application corrupting data.
  • Consistency: write preferences and read preferences.
  • Default is w:1; also w:"majority".
  • Upgrade/maintenance; common deployment scenarios.
  • Schema: oplog.

    1. Replication and Replica Sets (#mongodb). Chad Tindel, Senior Solutions Architect, 10gen
    2. Agenda • Introduction to Replica Sets • Developing with Replica Sets • Operational Considerations • Behind the Curtain
    3. Introduction to Replica Sets
    4. Why Replication? • How many have faced node failures? • How many have been woken up from sleep to do a failover? • How many have experienced issues due to network latency? • Different uses for data – Normal processing – Simple analytics
    5. Replica Set – Creation
    6. Replica Set – Initialize
    7. Replica Set – Failure
    8. Replica Set – Failover
    9. Replica Set – Recovery
    10. Replica Set – Recovered
    11. Replica Set Roles
    12. Configuration Options
        > conf = {
            _id : "mySet",
            members : [
                {_id : 0, host : "A", priority : 3},
                {_id : 1, host : "B", priority : 2},
                {_id : 2, host : "C"},
                {_id : 3, host : "D", hidden : true},
                {_id : 4, host : "E", hidden : true, slaveDelay : 3600}
            ]}
        > rs.initiate(conf)
    13. Configuration Options. Same configuration, with a callout marking the Primary DC.
    14. Configuration Options. Same configuration, with callouts marking the Secondary DC and noting the default priority (1).
    15. Configuration Options. Same configuration, with a callout marking the analytics node.
    16. Configuration Options. Same configuration, with a callout marking the backup node.
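A configuration like the one above can also be changed on a live set. A minimal mongo shell sketch (the member index and priority value are illustrative, not from the slides):

```javascript
// Connect to the current primary, then fetch the live configuration.
cfg = rs.conf()
// Raise member B's priority so it is preferred over C in elections.
cfg.members[1].priority = 2
// Push the new configuration to the set; this may trigger an election.
rs.reconfig(cfg)
```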
    17. Developing with Replica Sets
    18. Strong Consistency
    19. Eventual Consistency
    20. Write Preference • Fire and forget • Wait for error • Wait for journal sync • Wait for replication
    21. Fire and Forget
    22. Wait for Error
    23. Wait for Journal Sync
    24. Wait for Replication
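The four write preferences above map onto getLastError options in the 2.x-era mongo shell; a hedged sketch (the collection name is illustrative):

```javascript
// Fire and forget: insert without checking the outcome.
db.blogs.insert({title: "post"})

// Wait for error: confirm the write reached the primary.
db.runCommand({getLastError: 1, w: 1})

// Wait for journal sync: block until the write is in the primary's journal.
db.runCommand({getLastError: 1, j: true})

// Wait for replication: block until a majority of members have the write.
db.runCommand({getLastError: 1, w: "majority"})
```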
    25. Tagging • New in 2.0.0 • Control where data is written to, and read from • Each member can have one or more tags – tags: {dc: "ny"} – tags: {dc: "ny", ip: "192.168", rack: "row3rk7"} • Replica set defines rules for write concerns • Rules can change without changing app code
    26. Tagging Example
        { _id : "mySet",
          members : [
            {_id : 0, host : "A", tags : {"dc": "ny"}},
            {_id : 1, host : "B", tags : {"dc": "ny"}},
            {_id : 2, host : "C", tags : {"dc": "sf"}},
            {_id : 3, host : "D", tags : {"dc": "sf"}},
            {_id : 4, host : "E", tags : {"dc": "cloud"}}],
          settings : {
            getLastErrorModes : {
              allDCs : {"dc" : 3},
              someDCs : {"dc" : 2}} }}
        > db.blogs.insert({...})
        > db.runCommand({getLastError : 1, w : "allDCs"})
    27. Wait for Replication (Tagging)
    28. Read Preference Modes • 5 modes (new in 2.2) – primary (only), the default – primaryPreferred – secondary – secondaryPreferred – nearest • When more than one node is eligible, the closest node is used for reads (all modes but primary)
    29. Tagged Read Preference • Custom read preferences • Control where you read from by (node) tags – E.g. { "disk": "ssd", "use": "reporting" } • Use in conjunction with standard read preference modes – Except primary
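Read preference modes and tags can be set per query in the mongo shell; a sketch assuming the collection name and the tag set from the slide above:

```javascript
// Route this query to a secondary if one is available, else the primary.
db.blogs.find().readPref("secondaryPreferred")

// Tagged read preference: prefer secondaries tagged as SSD-backed reporting nodes.
db.blogs.find().readPref("secondary", [{disk: "ssd", use: "reporting"}])
```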
    30. Operational Considerations
    31. Maintenance and Upgrade • No downtime • Rolling upgrade/maintenance – Start with the secondaries – Primary last
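The rolling procedure above can be outlined in the shell; a hedged sketch (repeat the first step on each secondary in turn, waiting for it to return to SECONDARY state in rs.status() before moving on):

```javascript
// On each secondary: shut it down cleanly, upgrade or maintain it, restart it,
// and let it catch up before touching the next member.
db.shutdownServer()

// Once all secondaries are done, ask the primary to step down so an upgraded
// secondary takes over; then maintain the old primary last.
rs.stepDown(60)   // this node will not seek re-election for 60 seconds
```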
    32. Replica Set – 1 Data Center • Single data center • Single switch & power • Points of failure: – Power – Network – Data center – Two-node failure • Automatic recovery from a single-node crash
    33. Replica Set – 2 Data Centers • Multi data center • DR node for safety • Can't do a durable multi-data-center write safely, since there is only 1 node in the distant DC
    34. Replica Set – 3 Data Centers • Three data centers • Can survive full data center loss • Can do w = { dc : 2 } to guarantee a write in 2 data centers (with tags)
    35. Behind the Curtain
    36. Behind the Curtain • Heartbeat every 2 seconds – Times out in 10 seconds • Local DB (not replicated) – system.replset – oplog.rs • Capped collection • Idempotent version of each operation stored
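The "majority exists" rule from the failover slides is simple arithmetic: a primary can be elected, or remain primary, only while it can see a strict majority of the set's voting members. A small sketch in plain JavaScript (function names are illustrative, not MongoDB APIs):

```javascript
// Smallest strict majority of a replica set with `totalVotes` voting members.
function majority(totalVotes) {
  return Math.floor(totalVotes / 2) + 1;
}

// Can the currently visible members elect (or sustain) a primary?
function canElectPrimary(totalVotes, visibleVotes) {
  return visibleVotes >= majority(totalVotes);
}

console.log(majority(3));            // 2
console.log(canElectPrimary(3, 2));  // true: one node down, majority remains
console.log(canElectPrimary(4, 2));  // false: 2 of 4 is not a strict majority
```

This is why even-sized sets gain nothing for elections: 4 members still require 3 votes, which is one reason the slides use an arbiter or an odd member count.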
    37. Just Use It • Use replica sets • Easy to set up – Try on a single machine • Check the docs page for RS tutorials – http://docs.mongodb.org/manual/replication/#tutorials
    38. Thank You (#ConferenceHashTag). Chad Tindel, Senior Solutions Architect, 10gen