


Sharding allows you to distribute load across multiple servers and keep your data balanced across those servers. This session will review MongoDB’s sharding support, including an architectural overview, design principles, and automation.



  1. #MongoChicago: Sharding. Chad Tindel, Solution Architect, 10gen
  2. Agenda • A short history of scaling data • Why shard • MongoDB's approach • Architecture • Configuration • Mechanics • Solutions
  3. The story of scaling data
  4. 1970 - 2000: Vertical Scalability (scale up) [visual representation of vertical scaling]
  5. Google, ~2000: Horizontal Scalability (scale out) [visual representation of horizontal scaling]
  6. Data Store Scalability in 2005 • Custom hardware: Oracle • Custom software: Facebook + MySQL
  7. Data Store Scalability Today • MongoDB auto-sharding available since 2009 • A data store that is free, publicly available, open source, horizontally scalable, and application independent
  8. Why shard?
  9. Working Set Exceeds Physical Memory
  10. Read/Write Throughput Exceeds I/O
  11. MongoDB's approach to sharding
  12. Partition data based on ranges • The user defines a shard key • The shard key defines ranges of data • The key space is like points on a line • A range is a segment of that line
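The idea of the key space as points on a line, cut into range segments, can be sketched in Python (the split points and chunk layout below are hypothetical):

```python
import bisect

# Hypothetical split points cutting the shard-key space ("the line") into ranges.
# Chunk 0 covers (-inf, "f"), chunk 1 ["f", "m"), chunk 2 ["m", "t"), chunk 3 ["t", +inf).
split_points = ["f", "m", "t"]

def chunk_for(key):
    """Return the index of the range segment (chunk) that owns this shard-key value."""
    return bisect.bisect_right(split_points, key)
```

`bisect_right` makes each range's lower bound inclusive, so a key equal to a split point falls in the segment to its right: `chunk_for("alice")` is 0, `chunk_for("m")` is 2.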
  13. Distribute data in chunks across nodes • Initially 1 chunk • Default max chunk size: 64 MB • MongoDB automatically splits & migrates chunks when the max is reached
  14. MongoDB manages data • Queries are routed to specific shards • MongoDB balances the cluster • MongoDB migrates data to new nodes
  15. MongoDB Auto-Sharding • Minimal effort required: same interface as a single mongod • Two steps: enable sharding for a database, then shard collections within that database
  16. Architecture
  17. Data is stored in shards • A shard is a node of the cluster • A shard can be a single mongod or a replica set
  18. Config servers store metadata • Store cluster chunk ranges and locations • Can have only 1 or 3 (production must have 3) • Use two-phase commit (not a replica set)
  19. Mongos manages the data • Acts as a router/balancer • Holds no local data (persists to the config database) • Can have 1 or many
  20. Sharding infrastructure
  21. Configuration
  22. Example cluster setup • Don't use this setup in production! Only one config server (no fault tolerance), the shard is not in a replica set (low availability), and there is only one mongos and shard (no performance improvement) • Useful for development or for demonstrating configuration mechanics
  23. Start the config server • "mongod --configsvr" • Starts a config server on the default port (27019)
  24. Start the mongos router • "mongos --configdb <hostname>:27019" • For 3 config servers: "mongos --configdb <host1>:<port1>,<host2>:<port2>,<host3>:<port3>" • This is always how to start a new mongos, even if the cluster is already running
  25. Start the shard database • "mongod --shardsvr" • Starts a mongod on the default shard port (27018) • The shard is not yet connected to the rest of the cluster • The shard may have already been running in production
  26. Add the shard • On mongos: "sh.addShard('<host>:27018')" • Adding a replica set: "sh.addShard('<rsname>/<seedlist>')" • In 2.2 and later you can use "sh.addShard('<host>:<port>')"
  27. Verify that the shard was added • db.runCommand({ listshards: 1 }) • Returns { "shards": [ { "_id": "shard0000", "host": "<hostname>:27018" } ], "ok": 1 }
  28. Enabling Sharding • Enable sharding on a database: sh.enableSharding("<dbname>") • Shard a collection with a given key: sh.shardCollection("<dbname>.people", { "country": 1 }) • Use a compound shard key to prevent duplicates: sh.shardCollection("<dbname>.cars", { "year": 1, "uniqueid": 1 })
  29. Mechanics
  30. Partitioning • Remember, it's based on ranges
  31. A chunk is a section of the entire range
  32. Chunk splitting • A chunk is split once it exceeds the maximum size • There is no split point if all documents have the same shard key • A chunk split is a logical operation (no data is moved) • If a split creates too large a discrepancy in chunk count across the cluster, a balancing round starts
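The split rules on this slide can be sketched in a few lines (max size is expressed here as a document count for simplicity; real MongoDB measures chunk size in bytes):

```python
def split_point(chunk_keys, max_docs=4):
    """Pick a split point for a chunk, or None if no split is possible or needed.

    Mirrors the rules above: no split while the chunk is under the maximum,
    and no split point exists when every document shares one shard-key value.
    The split itself is logical - only metadata changes, no data moves.
    """
    keys = sorted(chunk_keys)
    if len(keys) <= max_docs:
        return None                # still under the maximum size
    if keys[0] == keys[-1]:
        return None                # a single shard-key value cannot be split
    return keys[len(keys) // 2]    # split near the median key
```

For example, a chunk holding keys 0 through 9 splits at the median key 5, while a chunk of ten identical keys cannot split at all.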
  33. Balancing • The balancer runs on mongos • Once the difference in chunks between the most dense shard and the least dense shard exceeds the migration threshold, a balancing round starts
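The balancing trigger is a comparison of chunk counts against a migration threshold (the threshold value below is illustrative; the real threshold varies with the total chunk count):

```python
def needs_balancing(chunk_counts, migration_threshold=8):
    """Start a balancing round when the chunk-count gap between the most dense
    shard and the least dense shard reaches the migration threshold."""
    gap = max(chunk_counts.values()) - min(chunk_counts.values())
    return gap >= migration_threshold
```

So a cluster with 20 chunks on one shard and 10 on another triggers a round, while 12 versus 10 does not.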
  34. Acquiring the Balancer Lock • The balancer on mongos takes out a "balancer lock" • To see the status of these locks: use config, then db.locks.find({ _id: "balancer" })
  35. Moving the chunk • The mongos sends a moveChunk command to the source shard • The source shard then notifies the destination shard • The destination claims the chunk's shard-key range • The destination shard starts pulling documents from the source shard
  36. Committing Migration • When complete, the destination shard updates the config server with the new locations of the chunks
  37. Cleanup • The source shard deletes the moved data • It must wait for open cursors to either close or time out • noTimeout cursors may prevent the release of the lock • Mongos releases the balancer lock after the old chunks are deleted
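The migrate/commit/cleanup sequence in slides 35-37 can be condensed into a sketch, with in-memory dicts standing in for the shards and the config database:

```python
def move_chunk(shards, config, chunk_id, dest):
    """Sketch of a migration: the destination pulls the chunk's documents,
    the config metadata commits the new location, then the source cleans up."""
    source = config[chunk_id]
    shards[dest][chunk_id] = list(shards[source][chunk_id])  # destination pulls docs
    config[chunk_id] = dest                                  # commit new location
    del shards[source][chunk_id]                             # source deletes moved data

shards = {"shard0": {"chunk1": [1, 2, 3]}, "shard1": {}}
config = {"chunk1": "shard0"}
move_chunk(shards, config, "chunk1", "shard1")
```

The ordering matters: the config commit happens before the source deletes its copy, so a reader never sees the chunk owned by a shard that no longer has the data.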
  38. Routing Requests
  39. Cluster Request Routing • Targeted queries • Scatter-gather queries • Scatter-gather queries with sort
  40. Cluster Request Routing: Targeted Query
  41. Routable request received
  42. Request routed to the appropriate shard
  43. Shard returns results
  44. Mongos returns results to the client
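The targeted-query steps above can be sketched with a hypothetical chunk map and in-memory shards: an equality query on the shard key touches exactly one shard.

```python
import bisect

split_points = ["f", "m", "t"]                           # hypothetical chunk boundaries
chunk_owners = ["shard0", "shard1", "shard0", "shard1"]  # owner of each range segment

shards = {
    "shard0": [{"name": "alice"}, {"name": "sam"}],
    "shard1": [{"name": "kim"}, {"name": "zoe"}],
}

def targeted_find(name):
    """Route a shard-key equality query to the single shard owning that range."""
    owner = chunk_owners[bisect.bisect_right(split_points, name)]
    return [doc for doc in shards[owner] if doc["name"] == name]
```

Here `targeted_find("zoe")` consults only shard1; the other shard does no work at all.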
  45. Cluster Request Routing: Non-Targeted Query
  46. Non-targeted request received
  47. Request sent to all shards
  48. Shards return results to mongos
  49. Mongos returns results to the client
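A non-targeted query cannot use the chunk map, so mongos fans the request out to every shard and concatenates the results (sketch with hypothetical in-memory shards):

```python
shards = {
    "shard0": [{"name": "alice", "age": 31}, {"name": "sam", "age": 25}],
    "shard1": [{"name": "zoe", "age": 40}],
}

def scatter_gather(predicate):
    """Send the query to all shards and concatenate their results at mongos."""
    results = []
    for docs in shards.values():
        results.extend(doc for doc in docs if predicate(doc))
    return results
```

Every shard does work on every such query, which is why queries that include the shard key scale so much better.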
  50. Cluster Request Routing: Non-Targeted Query with Sort
  51. Non-targeted request with sort received
  52. Request sent to all shards
  53. Query and sort performed locally on each shard
  54. Shards return results to mongos
  55. Mongos merges the sorted results
  56. Mongos returns results to the client
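Because each shard sorts locally, mongos only needs a streaming merge of already-sorted batches rather than a full re-sort, which `heapq.merge` models well (shard values are hypothetical):

```python
import heapq

# Results from each shard, already sorted locally on that shard.
shard_batches = [
    [1, 4, 9],
    [2, 3, 10],
]

# mongos merge-sorts the streams; no full re-sort happens at the router.
merged = list(heapq.merge(*shard_batches))
```

The merge emits a globally sorted sequence while only ever comparing the head of each shard's batch.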
  57. Shard Key
  58. Shard Key • Choose a field commonly used in queries • The shard key is immutable • Shard key values are immutable • The shard key requires an index on the fields contained in the key • Uniqueness of the `_id` field is only guaranteed within an individual shard • The shard key is limited to 512 bytes in size
  59. #MongoChicago: Thank You. Chad Tindel, Solution Architect, 10gen