2. Agenda
• Replica Sets Lifecycle
• Developing with Replica Sets
• Operational Considerations
• Behind the Curtain
3. Why Replication?
• How many have faced node failures?
• How many have been woken up in the middle of the night to perform a failover?
• How many have experienced issues due to
network latency?
• Different uses for data
– Normal processing
– Simple analytics
20. Client Application
[Diagram: the application's driver sends writes to the primary and can read from the primary or from the secondaries; reads from secondaries see delayed (eventual) consistency.]
21. Write Concern
• Network acknowledgement
• Wait for error
• Wait for journal sync
• Wait for replication
22. Unacknowledged
[Diagram: the driver sends a write to the primary, which applies it in memory; no acknowledgement is returned.]
23. MongoDB Acknowledged (Wait for Error)
[Diagram: the driver sends a write and then getLastError; the primary responds once the write is applied in memory.]
24. Wait for Journal Sync
[Diagram: the driver sends a write and getLastError with j: true; the primary applies the write in memory, writes it to the journal, and only then responds.]
25. Wait for Replication
[Diagram: the driver sends a write and getLastError with w: 2; the primary applies the write in memory and responds only after a secondary has replicated it.]
26. Tagging
• New in 2.0.0
• Control where data is written to and read from
• Each member can have one or more tags
– tags: {dc: "ny"}
– tags: {dc: "ny", subnet: "192.168", rack: "row3rk7"}
• Replica set defines rules for write concerns
• Rules can change without changing app code
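Tagging members might look like this in the mongo shell (a sketch, assuming a running replica set; member indexes and tag values are illustrative):

```javascript
cfg = rs.conf()
cfg.members[0].tags = { dc: "ny", rack: "row3rk7" }
cfg.members[1].tags = { dc: "ny", rack: "row1rk2" }
cfg.members[2].tags = { dc: "sf" }
rs.reconfig(cfg)
```

Because the tags live in the replica set configuration, the rules built on them can change without touching application code.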
28. Wait for Replication (Tagging)
[Diagram: the driver sends a write and getLastError with w: "allDCs"; the primary (SF) applies the write in memory and replicates to secondaries in NY and the cloud before responding.]
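A custom mode like allDCs is defined in the replica set settings, so the application only names the guarantee it wants. A sketch, assuming members are already tagged with dc values such as "sf", "ny", and "cloud" (names illustrative):

```javascript
cfg = rs.conf()
cfg.settings = cfg.settings || {}
// "allDCs" is satisfied once the write reaches members in 3 distinct dc tags.
cfg.settings.getLastErrorModes = { allDCs: { dc: 3 } }
rs.reconfig(cfg)

// Application code requests the mode by name, without knowing the topology:
db.runCommand({ getLastError: 1, w: "allDCs", wtimeout: 10000 })
```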
29. Read Preference Modes
• 5 modes (new in 2.2)
– primary (only) - Default
– primaryPreferred
– secondary
– secondaryPreferred
– nearest
When more than one node is eligible, the closest node is used for reads (all modes but primary)
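From the 2.2-era mongo shell, a read preference can be set per connection; a sketch assuming a running replica set (collection name illustrative):

```javascript
// Prefer secondaries, but fall back to the primary if none are available.
db.getMongo().setReadPref("secondaryPreferred")
db.products.find({ category: "books" })
```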
30. Tagged Read Preference
• Custom read preferences
• Control which members you read from using (node) tags
– E.g. { "disk": "ssd", "use": "reporting" }
• Use in conjunction with standard read preferences
– Except primary
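Tag sets combine with a standard mode; a sketch assuming members carry the tags shown (tag values illustrative):

```javascript
// Route reporting queries to SSD-backed reporting secondaries,
// falling back to any secondary if none match.
db.getMongo().setReadPref("secondary", [
  { disk: "ssd", use: "reporting" },  // preferred tag set
  {}                                  // fallback: any secondary
])
db.sales.find({ year: 2012 })
```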
32. Maintenance and Upgrade
• No downtime
• Rolling upgrade/maintenance
– Start with Secondary
– Primary last
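The rolling pattern can be sketched in the mongo shell: take each secondary down in turn, upgrade its binaries, restart it, and let it catch up; then step the primary down so an upgraded member takes over (the timeout value is illustrative):

```javascript
// On the primary, once every secondary has been upgraded:
// demote it for 60 seconds so an upgraded secondary is elected,
// then upgrade the old primary's binaries.
rs.stepDown(60)
```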
33. Replica Set – 1 Data Center
[Diagram: Members 1, 2, and 3 in a single datacenter.]
• Single datacenter
• Single switch & power
• Points of failure:
– Power
– Network
– Data center
– Two-node failure
• Automatic recovery of single node crash
34. Replica Set – 2 Data Centers
[Diagram: Members 1 and 2 in Datacenter 1; Member 3 in Datacenter 2.]
• Multi data center
• DR node for safety
• Can’t do a durable multi-data-center write safely, since there is only 1 node in the distant DC
35. Replica Set – 3 Data Centers
[Diagram: Members 1 and 2 in Datacenter 1; Members 3 and 4 in Datacenter 2; Member 5 in Datacenter 3.]
• Three data centers
• Can survive full data center loss
• Can do w = { dc : 2 } (with tags) to guarantee a write in 2 data centers
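The { dc : 2 } guarantee is expressed as a custom write-concern mode over the dc tag; a sketch assuming the five members are tagged with their data center (the mode name "multiDC" is illustrative):

```javascript
cfg = rs.conf()
cfg.settings = cfg.settings || {}
// Acknowledge only after the write reaches members in 2 distinct dc tags,
// so a full data center loss cannot lose an acknowledged write.
cfg.settings.getLastErrorModes = { multiDC: { dc: 2 } }
rs.reconfig(cfg)

db.runCommand({ getLastError: 1, w: "multiDC", wtimeout: 10000 })
```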
36. Just Use It
• Use replica sets
• Easy to set up
– Try on a single machine
• Check doc page for RS tutorials
– http://docs.mongodb.org/manual/replication/#tutorials
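Trying a replica set on a single machine is a small exercise; a sketch with illustrative ports and data paths:

```javascript
// Start three mongod processes, each in its own terminal:
//   mongod --replSet demo --port 27017 --dbpath /data/rs1
//   mongod --replSet demo --port 27018 --dbpath /data/rs2
//   mongod --replSet demo --port 27019 --dbpath /data/rs3
// Then, from a mongo shell connected to port 27017:
rs.initiate({
  _id: "demo",
  members: [
    { _id: 0, host: "localhost:27017" },
    { _id: 1, host: "localhost:27018" },
    { _id: 2, host: "localhost:27019" }
  ]
})
rs.status()  // watch one member become PRIMARY, the others SECONDARY
```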