MongoDB San Francisco 2013: Basic Sharding in MongoDB presented by Brandon Black, 10gen


Sharding allows you to distribute load across multiple servers and keep your data balanced across those servers. This session will review MongoDB’s sharding support, including an architectural overview, design principles, and automation.

Speaker notes
  • Ops will be most interested in Configuration; Dev will be most interested in Mechanics.
  • Take this slide and toss it out the window for the combined sharding talk. It’s a nice intro, but it’s more effective to spend time talking about tag aware and shard keys if you’re trying to go beyond the intro talk.
  • Indexes should be contained in working set.
  • From mainframes to Oracle RAC servers, people solved problems by adding more resources to a single machine.
  • Google was the first to demonstrate that a large scale operation could be had with high performance on commodity hardware. Build – a document oriented database maps perfectly to object oriented languages. Scale – MongoDB presents a clear path to scalability that isn't ops intensive, and provides the same interface for a sharded cluster as for a single instance.
  • In 2005, there were two ways to achieve datastore scalability: lots of money to purchase custom hardware, or lots of money to develop custom software.
  • MongoDB 2.2 and later only need <host> and <port> for one member of the replica set
  • This can be skipped for the intro talk, but might be good to include if you’re doing the combined sharding talk. Totally optional, you don’t really have enough time to do this topic justice but it might be worth a mention.
  • Quick review from earlier.
  • Once chunk size is reached, mongos asks mongod to split a chunk (via an internal function called splitVector()). mongod counts the number of documents on each side of the split, based on the avg. document size from `db.stats()`. Chunk split is a **logical** operation (no data has moved). Max on first chunk should be 14.
  • Balancer lock actually held on config server.
  • Moved chunk on shard2 should be gray
  • How do the other mongoses know that their configuration is out of date? When does the chunk version on the shard itself get updated?
  • The mongos does not have to load the whole set into memory since each shard sorts locally. The mongos can just getMore from the shards as needed and incrementally return the results to the client.
  • _id could be unique across shards if used as the shard key. We can only guarantee uniqueness of (any) attributes if the keys are used as shard keys with the unique attribute equal to true.
  • Cardinality – can your data be broken down enough? Query Isolation – query targeting to a specific shard. Reliability – shard outages. A good shard key can: optimize routing, minimize (unnecessary) traffic, and allow best scaling.
  • This section can be skipped for the intro talk, but might be good to include if you’re doing the combined sharding talk.
  • Reliability = if we lost an entire shard, what would be the effect on the system? For most frequently used/largest indexes, how are they partitioned across shards?
  • Compound shard key: utilize an already existing (used) index, and partition chunks for "power" users. For those users, more than one shard *may* need to be queried in a targeted query.
  • Indexes should be contained in working set.
  • Transcript

    • 1. Introduction to Sharding – Brandon Black, Software Engineer, 10gen – @brandonmblack – #MongoDBDays
    • 2. Agenda • Scaling Data • MongoDB's Approach • Architecture • Configuration • Mechanics
    • 3. Scaling Data
    • 4. Examining Growth • User Growth – 1995: 0.4% of the world's population – Today: 30% of the world is online (~2.2B) – Emerging Markets & Mobile • Data Set Growth – Facebook's data set is around 100 petabytes – 4 billion photos taken in the last year (4x a decade ago)
    • 5. Read/Write Throughput Exceeds I/O
    • 6. Working Set Exceeds Physical Memory
    • 7. Vertical Scalability (Scale Up)
    • 8. Horizontal Scalability (Scale Out)
    • 9. Data Store Scalability • Custom Hardware – Oracle • Custom Software – Facebook + MySQL – Google
    • 10. Data Store Scalability Today • MongoDB Auto-Sharding • A data store that is – 100% Free – Publicly available – Open-source (https://github.com/mongodb/mongo) – Horizontally scalable – Application independent
    • 11. MongoDB's Approach to Sharding
    • 12. Partitioning • User defines shard key • Shard key defines range of data • Key space is like points on a line • Range is a segment of that line
    • 13. Data Distribution • Initially 1 chunk • Default max chunk size: 64MB • MongoDB automatically splits & migrates chunks when max reached
    • 14. Routing and Balancing • Queries routed to specific shards • MongoDB balances cluster • MongoDB migrates data to new nodes
    • 15. MongoDB Auto-Sharding • Minimal effort required – Same interface as single mongod • Two steps – Enable Sharding for a database – Shard collection within database
    • 16. Architecture
    • 17. What is a Shard? • Shard is a node of the cluster • Shard can be a single mongod or a replica set
    • 18. Meta Data Storage • Config Server – Stores cluster chunk ranges and locations – Can have only 1 or 3 (production must have 3) – Not a replica set
    • 19. Routing and Managing Data • Mongos – Acts as a router / balancer – No local data (persists to config database) – Can have 1 or many
    • 20. Sharding infrastructure
    • 21. Configuration
    • 22. Example Cluster • Don't use this setup in production! – Only one config server (no fault tolerance) – Shard not in a replica set (low availability) – Only one mongos and shard (no performance improvement) – Useful for development or demonstrating configuration mechanics
    • 23. Starting the Configuration Server • mongod --configsvr • Starts a configuration server on the default port (27019)
    • 24. Start the mongos Router • mongos --configdb <hostname>:27019 • For 3 configuration servers: mongos --configdb <host1>:<port1>,<host2>:<port2>,<host3>:<port3> • This is always how to start a new mongos, even if the cluster is already running
    • 25. Start the shard database • mongod --shardsvr • Starts a mongod with the default shard port (27018) • Shard is not yet connected to the rest of the cluster • Shard may have already been running in production
    • 26. Add the Shard • On mongos: sh.addShard('<host>:27018') • Adding a replica set: sh.addShard('<rsname>/<seedlist>')
    • 27. Verify that the shard was added • db.runCommand({ listshards: 1 }) returns { "shards" : [ { "_id" : "shard0000", "host" : "<hostname>:27018" } ], "ok" : 1 }
    • 28. Enabling Sharding • Enable sharding on a database: sh.enableSharding("<dbname>") • Shard a collection with the given key: sh.shardCollection("<dbname>.people", {"country": 1}) • Use a compound shard key to prevent duplicates: sh.shardCollection("<dbname>.cars", {"year": 1, "uniqueid": 1})
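Taken together, slides 26 through 28 amount to a short mongo shell session run against a mongos. In this sketch, "mydb" and the collection names are hypothetical placeholders, and the host string is left elided as in the slides:

```javascript
// Run against a mongos; every name here is a placeholder, not a real deployment.
sh.addShard("<host>:27018")                               // add a standalone mongod as a shard
sh.enableSharding("mydb")                                 // step 1: enable sharding on a database
sh.shardCollection("mydb.people", { country: 1 })         // step 2: shard a collection
sh.shardCollection("mydb.cars", { year: 1, uniqueid: 1 }) // compound key to avoid duplicate shard-key values
```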
    • 29. Tag Aware Sharding • Tag aware sharding allows you to control the distribution of your data • Tag a range of shard keys: sh.addTagRange(<collection>, <min>, <max>, <tag>) • Tag a shard: sh.addShardTag(<shard>, <tag>)
    • 30. Mechanics
    • 31. Partitioning • Remember it's based on ranges
    • 32. Chunk is a section of the entire range
    • 33. Chunk splitting • A chunk is split once it exceeds the maximum size • There is no split point if all documents have the same shard key • Chunk split is a logical operation (no data is moved)
    • 34. Balancing • Balancer is running on mongos • Once the difference in chunks between the most dense shard and the least dense shard is above the migration threshold, a balancing round starts
    • 35. Acquiring the Balancer Lock • The balancer on mongos takes out a "balancer lock" • To see the status of these locks: use config, then db.locks.find({_id: "balancer"})
    • 36. Moving the chunk • The mongos sends a moveChunk command to the source shard • The source shard then notifies the destination shard • Destination shard starts pulling documents from the source shard
    • 37. Committing Migration • When complete, destination shard updates config server – Provides new locations of the chunks
    • 38. Cleanup • Source shard deletes moved data – Must wait for open cursors to either close or time out – NoTimeout cursors may prevent the release of the lock • The mongos releases the balancer lock after old chunks are deleted
    • 39. Routing Requests
    • 40. Cluster Request Routing • Targeted Queries • Scatter Gather Queries • Scatter Gather Queries with Sort
    • 41. Cluster Request Routing: Targeted Query
    • 42. Routable request received
    • 43. Request routed to appropriate shard
    • 44. Shard returns results
    • 45. Mongos returns results to client
    • 46. Cluster Request Routing: Non-Targeted Query
    • 47. Non-Targeted Request Received
    • 48. Request sent to all shards
    • 49. Shards return results to mongos
    • 50. Mongos returns results to client
    • 51. Cluster Request Routing: Non-Targeted Query with Sort
    • 52. Non-Targeted request with sort received
    • 53. Request sent to all shards
    • 54. Query and sort performed locally
    • 55. Shards return results to mongos
    • 56. Mongos merges sorted results
    • 57. Mongos returns results to client
    • 58. Shard Key
    • 59. Shard Key • Shard key is immutable • Shard key values are immutable • Shard key must be indexed • Shard key limited to 512 bytes in size • Shard key used to route queries – Choose a field commonly used in queries • Only shard key can be unique across shards – `_id` field is only unique within individual shard
    • 60. Shard Key Considerations • Cardinality • Write Distribution • Query Isolation • Reliability • Index Locality
    • 61. Conclusion
    • 62. Read/Write Throughput Exceeds I/O
    • 63. Working Set Exceeds Physical Memory
    • 64. Sharding Enables Scale • MongoDB's Auto-Sharding – Easy to Configure – Consistent Interface – Free and Open-Source
    • 65. What's next? – Hash-Based Sharding in MongoDB 2.4 (2:50pm) – Webinar: Indexing and Query Optimization (May 22nd) – Online Education Program – MongoDB User Group • Resources: https://education.10gen.com/ • http://www.10gen.com/presentations • http://www.10gen.com/events • http://github.com/brandonblack/presentations
    • 66. Thank You – Brandon Black, Software Engineer, 10gen – @brandonmblack – #MongoDBDays