Optimizing MongoDB: Lessons Learned at Localytics


MongoDB optimizations made at Localytics to improve throughput while reducing cost.


  1. 1. Optimizing MongoDB: Lessons Learned at Localytics Benjamin Darfler MongoBoston - September 2011
  2. 2. Introduction <ul><ul><li>Benjamin Darfler </li></ul></ul><ul><ul><ul><li>@bdarfler </li></ul></ul></ul><ul><ul><ul><li>http://bdarfler.com </li></ul></ul></ul><ul><ul><ul><li>Senior Software Engineer at Localytics </li></ul></ul></ul><ul><ul><li>Localytics </li></ul></ul><ul><ul><ul><li>Real time analytics for mobile applications </li></ul></ul></ul><ul><ul><ul><li>100M+ datapoints a day </li></ul></ul></ul><ul><ul><ul><li>More than 2x growth over the past 4 months </li></ul></ul></ul><ul><ul><ul><li>Heavy users of Scala, MongoDB and AWS </li></ul></ul></ul><ul><ul><li>This Talk </li></ul></ul><ul><ul><ul><li>Revised and updated from MongoNYC 2011 </li></ul></ul></ul>
3. 3. MongoDB at Localytics <ul><ul><li>Use cases </li></ul></ul><ul><ul><ul><li>Anonymous loyalty information </li></ul></ul></ul><ul><ul><ul><li>De-duplication of incoming data </li></ul></ul></ul><ul><ul><li>Scale today </li></ul></ul><ul><ul><ul><li>Hundreds of GBs of data per shard </li></ul></ul></ul><ul><ul><ul><li>Thousands of ops per second per shard </li></ul></ul></ul><ul><ul><li>History </li></ul></ul><ul><ul><ul><li>In production for ~8 months </li></ul></ul></ul><ul><ul><ul><li>Increased load 10x in that time </li></ul></ul></ul><ul><ul><ul><li>Reduced shard count by more than half </li></ul></ul></ul>
  4. 4. Disclaimer <ul><li>These steps worked for us and our data </li></ul><ul><li>We verified them by testing early and often  </li></ul><ul><li>You should too </li></ul>
  5. 5. Quick Poll <ul><ul><li>Who is using MongoDB in production? </li></ul></ul><ul><ul><li>Who is deployed on AWS? </li></ul></ul><ul><ul><li>Who has a sharded deployment? </li></ul></ul><ul><ul><ul><li>More than 2 shards? </li></ul></ul></ul><ul><ul><ul><li>More than 4 shards? </li></ul></ul></ul><ul><ul><ul><li>More than 8 shards? </li></ul></ul></ul>
  6. 6. Optimizing Our Data Documents and Indexes
  7. 7. Shorten Names <ul><li>Before </li></ul><ul><li>{super_happy_fun_awesome_name:&quot;yay!&quot;} </li></ul><ul><li>After </li></ul><ul><li>{s:&quot;yay!&quot;} </li></ul><ul><ul><li>Significantly reduced document size </li></ul></ul>
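Field names are stored inside every document, so a long name is paid for once per record, not once per collection. A rough Python illustration, using JSON byte length as a stand-in for BSON size (the exact BSON overhead differs):

```python
import json

long_doc = {"super_happy_fun_awesome_name": "yay!"}
short_doc = {"s": "yay!"}

# JSON sizes as a rough proxy for BSON document size; the key name
# dominates the difference because it is repeated in every record.
long_size = len(json.dumps(long_doc).encode("utf-8"))
short_size = len(json.dumps(short_doc).encode("utf-8"))
print(long_size, short_size)
```

Across millions of documents, those saved bytes add up in both the data files and the working set.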
8. 8. Use BinData for uuids/hashes <ul><li>Before </li></ul><ul><li>{u:&quot;21EC2020-3AEA-1069-A2DD-08002B30309D&quot;} </li></ul><ul><li>After </li></ul><ul><li>{u:BinData(0, &quot;...&quot;)} </li></ul><ul><ul><li>Used BinData type 0, least overhead </li></ul></ul><ul><ul><li>Reduced data size by more than 2x over UUID </li></ul></ul><ul><ul><li>Reduced index size on the field </li></ul></ul>
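The saving comes from storing the UUID's 16 raw bytes instead of its 36-character hex form. A Python sketch of the conversion a client would do before writing the value as BinData:

```python
import uuid

u = uuid.UUID("21EC2020-3AEA-1069-A2DD-08002B30309D")

text_form = str(u)      # 36-character hex string, the "before" form
binary_form = u.bytes   # 16 raw bytes, what BinData type 0 carries
print(len(text_form), len(binary_form))  # 36 16
```

The same trick applies to hex-encoded hashes: store the digest bytes, not the hex string.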
  9. 9. Override _id <ul><li>Before </li></ul><ul><li>{_id:ObjectId(&quot;...&quot;), u:BinData(0, &quot;...&quot;)} </li></ul><ul><li>After  </li></ul><ul><li>{_id:BinData(0, &quot;...&quot;)} </li></ul><ul><ul><li>Reduced data size </li></ul></ul><ul><ul><li>Eliminated an index </li></ul></ul><ul><ul><li>Warning: Locality - more on that later </li></ul></ul>
  10. 10. Pre-aggregate <ul><li>Before </li></ul><ul><li>{u:BinData(0, &quot;...&quot;), k:BinData(0, &quot;abc&quot;)} </li></ul><ul><li>{u:BinData(0, &quot;...&quot;), k:BinData(0, &quot;abc&quot;)} </li></ul><ul><li>{u:BinData(0, &quot;...&quot;), k:BinData(0, &quot;def&quot;)} </li></ul><ul><li>After </li></ul><ul><li>{u:BinData(0, &quot;abc&quot;), c:2} </li></ul><ul><li>{u:BinData(0, &quot;def&quot;), c:1} </li></ul><ul><ul><li>Actually kept data in both forms </li></ul></ul><ul><ul><li>Fewer records meant smaller indexes </li></ul></ul>
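The aggregation itself is just a count per key. A minimal Python sketch of the transformation (field names follow the slide; on the database side this would typically be maintained with an upsert and $inc per incoming event):

```python
from collections import Counter

# Raw form: one record per event occurrence
events = [{"k": "abc"}, {"k": "abc"}, {"k": "def"}]

# Pre-aggregated form: one record per key with a count,
# mirroring the {u: ..., c: N} documents above
counts = Counter(e["k"] for e in events)
aggregated = [{"u": k, "c": n} for k, n in counts.items()]
print(aggregated)
```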
  11. 11. Prefix Indexes <ul><li>Before </li></ul><ul><li>{k:BinData(0, &quot;...&quot;)} // indexed </li></ul><ul><li>After </li></ul><ul><li>{ </li></ul><ul><li>p:BinData(0, &quot;...&quot;)  // prefix of k, indexed </li></ul><ul><li>s:BinData(0, &quot;...&quot;)  // suffix of k, not indexed </li></ul><ul><li>} </li></ul><ul><ul><li>Reduced index size </li></ul></ul><ul><ul><li>Warning: Prefix must be sufficiently unique </li></ul></ul><ul><ul><li>Would be nice to have it built in - SERVER-3260 </li></ul></ul>
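The split is done client-side before insert. A Python sketch under assumed parameters, here an 8-byte prefix of a SHA-1 digest; the right prefix length depends on how unique your keys' prefixes actually are:

```python
import hashlib

digest = hashlib.sha1(b"some key").digest()  # 20 raw bytes

# Index only the prefix; store the rest unindexed. PREFIX_LEN is an
# assumption: long enough to be nearly unique, short enough to keep
# the index small.
PREFIX_LEN = 8
doc = {"p": digest[:PREFIX_LEN], "s": digest[PREFIX_LEN:]}
```

Queries then match on both fields, e.g. find({p: ..., s: ...}): the index on p narrows the scan and the s comparison filters out the rare prefix collisions.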
  12. 12. Sparse Indexes <ul><li>Create a sparse index </li></ul><ul><li>db.collection.ensureIndex({middle:1}, {sparse:true}); </li></ul><ul><li>Only indexes documents that contain the field </li></ul><ul><li>{u:BinData(0, &quot;abc&quot;), first:&quot;Ben&quot;, last:&quot;Darfler&quot;} </li></ul><ul><li>{u:BinData(0, &quot;abc&quot;), first:&quot;Mike&quot;, last:&quot;Smith&quot;} </li></ul><ul><li>{u:BinData(0, &quot;abc&quot;), first:&quot;John&quot;, middle:&quot;F&quot;, last:&quot;Kennedy&quot;} </li></ul><ul><ul><li>Fewer records meant smaller indexes </li></ul></ul><ul><ul><li>New in 1.8 </li></ul></ul>
13. 13. Upgrade to {v:1} indexes <ul><ul><li>Up to 25% smaller </li></ul></ul><ul><ul><li>Up to 25% faster </li></ul></ul><ul><ul><li>New in 2.0 </li></ul></ul><ul><ul><li>Must reindex after upgrade </li></ul></ul>
  14. 14. Optimizing Our Queries Reading and Writing
15. 15. You are using an index, right? <ul><li>Create an index </li></ul><ul><li>db.collection.ensureIndex({user:1}); </li></ul><ul><li>Ensure you are using it </li></ul><ul><li>db.collection.find(query).explain(); </li></ul><ul><li>Hint that it should be used if it's not </li></ul><ul><li>db.collection.find({user:u, foo:d}).hint({user:1}); </li></ul><ul><ul><li>I've seen the wrong index used before </li></ul></ul><ul><ul><ul><li>open a bug if you see this happen </li></ul></ul></ul>
  16. 16. Only as much as you need <ul><li>Before </li></ul><ul><li>db.collection.find(); </li></ul><ul><li>After </li></ul><ul><li>db.collection.find().limit(10); </li></ul><ul><li>db.collection.findOne(); </li></ul><ul><ul><li>Reduced bytes on the wire </li></ul></ul><ul><ul><li>Reduced bytes read from disk </li></ul></ul><ul><ul><li>Result cursor streams data but in large chunks </li></ul></ul>
  17. 17. Only what you need <ul><li>Before </li></ul><ul><li>db.collection.find({u:BinData(0, &quot;...&quot;)}); </li></ul><ul><li>After </li></ul><ul><li>db.collection.find({u:BinData(0, &quot;...&quot;)}, {field:1}); </li></ul><ul><ul><li>Reduced bytes on the wire </li></ul></ul><ul><ul><li>Necessary to exploit covering indexes </li></ul></ul>
18. 18. Covering Indexes <ul><li>Create an index </li></ul><ul><li>db.collection.ensureIndex({first:1, last:1}); </li></ul><ul><li>Query for data only in the index </li></ul><ul><li>db.collection.find({last:&quot;Darfler&quot;}, {_id:0, first:1, last:1}); </li></ul><ul><ul><li>Can service the query entirely from the index </li></ul></ul><ul><ul><li>Eliminates having to read the data extent </li></ul></ul><ul><ul><li>Explicitly exclude _id if it's not in the index </li></ul></ul><ul><ul><li>New in 1.8 </li></ul></ul>
  19. 19. Prefetch <ul><li>Before </li></ul><ul><li>db.collection.update({u:BinData(0, &quot;...&quot;)}, {$inc:{c:1}}); </li></ul><ul><li>After </li></ul><ul><li>db.collection.find({u:BinData(0, &quot;...&quot;)}); </li></ul><ul><li>db.collection.update({u:BinData(0, &quot;...&quot;)}, {$inc:{c:1}}); </li></ul><ul><ul><li>Prevents holding a write lock while paging in data </li></ul></ul><ul><ul><li>Most updates fit this pattern anyhow </li></ul></ul><ul><ul><li>Less necessary with yield improvements in 2.0 </li></ul></ul>
  20. 20. Optimizing Our Disk Fragmentation
  21. 21. Inserts doc1 doc2 doc3 doc4 doc5
  22. 22. Deletes doc1 doc2 doc3 doc4 doc5 doc1 doc2 doc3 doc4 doc5
  23. 23. Updates doc1 doc2 doc3 doc4 doc5 doc1 doc2 doc3 doc4 doc5 doc3 Updates can be in place if the document doesn't grow
  24. 24. Reclaiming Freespace doc1 doc2 doc6 doc4 doc5 doc1 doc2 doc3 doc4 doc5
  25. 25. Memory Mapped Files doc1 doc2 doc6 doc4 doc5 } } page page Data is mapped into memory a full page at a time 
  26. 26. Fragmentation <ul><li>RAM used to be filled with useful data </li></ul><ul><li>Now it contains useless space or useless data </li></ul><ul><li>Inserts used to cause sequential writes </li></ul><ul><li>Now inserts cause random writes </li></ul>
27. 27. Fragmentation Mitigation <ul><ul><li>Automatic Padding </li></ul></ul><ul><ul><ul><li>MongoDB auto-pads records </li></ul></ul></ul><ul><ul><ul><li>Manual tuning scheduled for 2.2 </li></ul></ul></ul><ul><ul><li>Manual Padding </li></ul></ul><ul><ul><ul><li>Pad arrays that are known to grow </li></ul></ul></ul><ul><ul><ul><li>Pad with a BinData field, then remove it </li></ul></ul></ul><ul><ul><li>Free list improvements in 2.0, with more scheduled for 2.2 </li></ul></ul>
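The manual padding trick can be sketched in Python, with plain dicts standing in for the insert and the follow-up $unset; the padding size and field name here are assumptions, tuned to how much you expect the document to grow:

```python
PAD_BYTES = 1024  # assumed slack; size to the document's expected growth

# Insert with a throwaway binary field so the record is allocated
# larger than the real payload...
doc = {"_id": 1, "events": [], "padding": b"\x00" * PAD_BYTES}

# ...then remove the field (an $unset in MongoDB), leaving slack the
# document can grow into without being moved on disk.
del doc["padding"]
```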
  28. 28. Fragmentation Fixes <ul><ul><li>Repair </li></ul></ul><ul><ul><ul><li>db.repairDatabase();  </li></ul></ul></ul><ul><ul><ul><li>Run on secondary, swap with primary </li></ul></ul></ul><ul><ul><ul><li>Requires 2x disk space </li></ul></ul></ul><ul><ul><li>Compact </li></ul></ul><ul><ul><ul><li>db.collection.runCommand( &quot;compact&quot; ); </li></ul></ul></ul><ul><ul><ul><li>Run on secondary, swap with primary </li></ul></ul></ul><ul><ul><ul><li>Faster than repair </li></ul></ul></ul><ul><ul><ul><li>Requires minimal extra disk space </li></ul></ul></ul><ul><ul><ul><li>New in 2.0 </li></ul></ul></ul><ul><ul><li>Repair, compact and import remove padding </li></ul></ul>
  29. 29. Optimizing Our Keys Index and Shard
  30. 30. B-Tree Indexes - hash/uuid key Hashes/UUIDs randomly distribute across the whole b-tree
  31. 31. B-Tree Indexes - temporal key Keys with a temporal prefix (i.e. ObjectId) are right aligned
  32. 32. Migrations - hash/uuid shard key Chunk 1 k: 1 to 5 Chunk 2 k: 6 to 9 Shard 1                                                Shard 2 Chunk 1 k: 1 to 5 {k: 4, …} {k: 8, …} {k: 3, …} {k: 7, …} {k: 5, …} {k: 6, …} {k: 4, …} {k: 3, …} {k: 5, …}
33. 33. Hash/uuid shard key <ul><ul><li>Distributes read/write load evenly across nodes </li></ul></ul><ul><ul><li>Migrations cause random I/O and fragmentation </li></ul></ul><ul><ul><ul><li>Makes it harder to add new shards </li></ul></ul></ul><ul><ul><li>Pre-split </li></ul></ul><ul><ul><ul><li>db.runCommand({split:&quot;db.collection&quot;, middle:{_id:99}}); </li></ul></ul></ul><ul><ul><li>Pre-move </li></ul></ul><ul><ul><ul><li>db.adminCommand({moveChunk:&quot;db.collection&quot;, find:{_id:5}, to:&quot;s2&quot;}); </li></ul></ul></ul><ul><ul><li>Turn off balancer </li></ul></ul><ul><ul><ul><li>db.settings.update({_id:&quot;balancer&quot;}, {$set:{stopped:true}}, true); </li></ul></ul></ul>
  34. 34. Migrations - temporal shard key Chunk 1 k: 1 to 5 Chunk 2 k: 6 to 9 Shard 1                                                Shard 2 Chunk 1 k: 1 to 5 {k: 3, …} {k: 4, …} {k: 5, …} {k: 6, …} {k: 7, …} {k: 8, …} {k: 3, …} {k: 4, …} {k: 5, …}
35. 35. Temporal shard key <ul><ul><li>Can cause hot chunks </li></ul></ul><ul><ul><li>Migrations are less destructive </li></ul></ul><ul><ul><ul><li>Makes it easier to add new shards </li></ul></ul></ul><ul><ul><li>Include a temporal prefix in your shard key </li></ul></ul><ul><ul><ul><li>{day: ..., id: ...} </li></ul></ul></ul><ul><ul><li>Choose prefix granularity based on insert rate </li></ul></ul><ul><ul><ul><li>low 100s of chunks (64MB) per &quot;unit&quot; of prefix </li></ul></ul></ul><ul><ul><ul><li>e.g. 10 GB per day => ~160 chunks per day </li></ul></ul></ul>
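The sizing rule is simple arithmetic; a quick Python check of the example above:

```python
# Aim for low hundreds of chunks per "unit" of the temporal prefix.
daily_data_mb = 10 * 1024   # 10 GB of inserts per day
chunk_mb = 64               # default chunk size
chunks_per_day = daily_data_mb // chunk_mb
print(chunks_per_day)  # 160
```

If the count per unit climbs well past the low hundreds, coarsen the prefix (e.g. day instead of hour); if it drops to single digits, writes concentrate on too few chunks.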
  36. 36. Optimizing Our Deployment Hardware and Configuration
  37. 37. Elastic Compute Cloud <ul><ul><li>Noisy Neighbor </li></ul></ul><ul><ul><ul><li>Used largest instance in a family (m1 or m2) </li></ul></ul></ul><ul><ul><li>Used m2 family for mongods </li></ul></ul><ul><ul><ul><li>Best RAM to dollar ratio </li></ul></ul></ul><ul><ul><li>Used micros for arbiters and config servers  </li></ul></ul>
  38. 38. Elastic Block Storage <ul><ul><li>Noisy Neighbor </li></ul></ul><ul><ul><ul><li>Netflix claims to only use 1TB disks </li></ul></ul></ul><ul><ul><li>RAID'ed our disks </li></ul></ul><ul><ul><ul><li>Minimum of 4-8 disks </li></ul></ul></ul><ul><ul><ul><li>Recommended 8-16 disks </li></ul></ul></ul><ul><ul><ul><li>RAID0 for write heavy workload </li></ul></ul></ul><ul><ul><ul><li>RAID10 for read heavy workload </li></ul></ul></ul>
  39. 39. Pathological Test <ul><ul><li>What happens when data far exceeds RAM? </li></ul></ul><ul><ul><ul><li>10:1 read/write ratio </li></ul></ul></ul><ul><ul><ul><li>Reads evenly distributed over entire key space </li></ul></ul></ul>
40. 40. One Mongod Index out of RAM Index in RAM <ul><ul><li>One mongod on the host </li></ul></ul><ul><ul><ul><li>Throughput drops more than 10x </li></ul></ul></ul>
  41. 41. Many Mongods Index out of RAM Index in RAM <ul><ul><li>16 mongods on the host </li></ul></ul><ul><ul><ul><li>Throughput drops less than 3x </li></ul></ul></ul><ul><ul><ul><li>Graph for one shard, multiply by 16x for total </li></ul></ul></ul>
42. 42. Sharding within a node <ul><ul><li>One read/write lock per mongod </li></ul></ul><ul><ul><ul><li>Ticket for lock per collection - SERVER-1240 </li></ul></ul></ul><ul><ul><ul><li>Ticket for lock per extent - SERVER-1241 </li></ul></ul></ul><ul><ul><li>For in-memory workloads </li></ul></ul><ul><ul><ul><li>Shard per core </li></ul></ul></ul><ul><ul><li>For out-of-memory workloads </li></ul></ul><ul><ul><ul><li>Shard per disk </li></ul></ul></ul><ul><ul><li>Warning: Must have shard key in every query </li></ul></ul><ul><ul><ul><li>Otherwise scatter-gather across all shards </li></ul></ul></ul><ul><ul><ul><li>Requires manually managing secondary keys </li></ul></ul></ul><ul><ul><li>Less necessary in 2.0 with yield improvements </li></ul></ul>
  43. 43. Reminder <ul><li>These steps worked for us and our data </li></ul><ul><li>We verified them by testing early and often  </li></ul><ul><li>You should too </li></ul>
  44. 44. Questions? @bdarfler http://bdarfler.com