High Performance, Scalable MongoDB in a Bare Metal Cloud
  • I am Harold Hannon. I have worked at SoftLayer for about 6-7 years, in product innovation as a Sr. Software Architect. Part of what we do is R&D for new product solutions, which gives me the opportunity to get exposure to a lot of exciting new technologies. One thing I have been working with lately is Big Data. Today we are talking about our Big Data Cloud Subscription: some of how we put it together, some considerations for deployment and how we arrived at the model we did, some metrics on performance, and some helpful hints.
  • SoftLayer?
  • This is a narrative of building a deployable big data solution for our customers.
  • We still needed to solve the deployment issue; public cloud was still winning on ease and speed.
  • Before we started building the solution, we thought about big data itself.
  • Here is our one and only obligatory analyst slide, I promise. Think in terms of the 3 V's Gartner defined. There are lots of proposed fourth V's (Value, Veracity, etc.), but really those apply to all data; these three are at the core. Today we will mostly focus on Volume and Velocity (Variety is a given for us). These are important when we talk about how to deploy our solution: how much data, and how fast, is it going to come at us?
  • Those 3 V's have a lot of impact on our decision for how to physically deploy. Public cloud and single-tenant dedicated hardware are two options (there is also SaaS, but that is not the focus today). Both have their strengths and weaknesses.
  • I would like to focus on public cloud versus bare metal for deploying Big Data solutions. Both have a distinct impact on the requirements we had.
  • Public cloud is typically fast to set up. It is great for entry-level work, POCs, testing, and small applications where things like Velocity are less important, and it can be great for auto-scaling in bursty use cases. At first these deployments look very affordable, but we are usually talking about shared, network-attached resources. With shared I/O comes widely varied performance; personal tests have shown standard deviation swings of 30% or higher. You will hear me talk a lot today about RSD (relative standard deviation) when we get to actual performance numbers. Most platforms use network-attached storage, and I do not use network-attached-storage-backed virtual instances for disk-intensive applications like Big Data. For everyone who hit the snooze button, that is the most important takeaway I can give you. Customers with an absolute requirement for virtual instances do better with local disk, for the obvious reason that no network hop to data means better performance. So when customers implementing heavy disk-I/O solutions have a hard requirement for multi-tenant public cloud, we push them to our local-disk virtual instances. That is not our best solution, but local disk at least alleviates some of the shared-resource pain for these applications.
  • So let's look at a different deployment strategy. We have seen a growing number of customers wanting single-tenant solutions for high-disk-I/O data storage, like Big Data applications. We consider our platform a complete portfolio of cloud offerings, including single-tenant options beyond our multi-tenant public cloud. We do have multi-tenant with local disk, but we believe our Bare Metal Cloud offering is far better suited for Big Data: all the advantages of the cloud, including easy automated provisioning, without the pain points of shared I/O, network disk, and wildly deviating performance. You get consistent, solid performance every time because our single-tenant offerings are backed by bare metal.
  • This is caramel mango macadamia nut pudding, by the way, and it is delicious. I can talk all I want about how sharing resources and network hops theoretically impact high-storage-I/O deployments, but if you are like me, to really understand something you need to test it. We were building a product, so we looked into the different deployments and how they shaped up.
  • Numbers with no context are not very useful.
  • This is the ACTUAL test for that crazy number from before. Notice it has been heavily designed to produce a falsely high number. Not very useful.
  • These were the results.
  • The numbers are average read operations per second, with writes occurring as well. The vertical white lines represent variance in the data. This slide and the other public cloud slides show that the variance is huge, meaning the platform is unstable under load and cannot give you a reliable, predictable deployment.
  • The numbers speak for themselves: overall average performance plus consistency, coupled with ease of deployment.
  • When we talk about a public cloud deployment, everyone has this dream of just right-clicking "add new" and everything being perfect.
  • Although things seem simple at first, scaling on multi-tenant (especially with NAS) gets tricky. This is a SINGLE Mongo node (most deployments will have three or more of these). To achieve the desired performance you have to RAID network volumes and attach them to virtual instances. That still does not solve the shared-I/O deviation issue; it just smears it so the spikes may be less drastic.
  • It gets even crazier with highly available deployments: striped volumes (sometimes up to 10) attached per node. As you scale in a NAS virtual environment, your simple virtualized environment suddenly starts to get very complex. If you are an engineer who believes in keeping things simple to avoid issues, this sort of thing keeps you up at night. Both complexity and cost can spiral beyond what you anticipated.
  • The goal is to capture the ease of virtual deployment, configure complex cluster environments, and allow for rapid deployment.
  • Now we have solved the deployment issue, marrying the ease of public cloud with the performance of bare metal.
  • Highlight the 95% as further evidence of our superiority in consistent performance.
  • Thank you for your time; I hope you found this helpful. Questions? See the blog.

    1. High Performance, Scalable MongoDB in a Bare Metal Cloud (Harold Hannon, Sr. Software Architect)
    2. 100k servers, 24k customers, 23 million domains
    3. Global Footprint: 13 data centers, 16 network POPs, 20Gb fiber interconnects
    4. On the agenda today… • Big Data considerations • Some deployment options • Performance testing with the JS Benchmarking Harness • Review of some internal product research performed • Discussion of the impact of those findings on our product development
    5. "Build me a Big Data Solution"
    6. Product Use Case • MongoDB deployed for customers on purchase • Complex configurations including sharding and replication • Configurable via Portal interface • Performance tuned to 3 "t-shirt size" deployments
    7. Big Data Requirements • High Performance • Reliable, Predictable Performance • Rapidly Scalable • Easy to Deploy
    8. Requirements Reviewed (Cloud Provider vs. Bare Metal Instance): the cloud provider checks Rapidly Scalable and Easy to Deploy; for the bare metal column, "I've got nothing……"
    9. The "Marc-O-Meter": I'M NOT HAPPY
    10. Marc… Angry
    11. Thinking about Big Data
    12. The 3 V's
    13. Physical Deployment
    14. Public Cloud
    15. Public Cloud • Speed of deployment • Great for bursting use cases • Imaging and cloning make POC/Dev work easy • Shared I/O • Great for POC/Dev • Excellent for app-level applications • Not consistent enough for disk-intensive applications • Must have the application developed for the "cloud"
    16. Physical Servers
    17. Bare Metal • Built to your specs • Robust, quickly scaled environment • Management of all aspects of the environment • Image based • No hypervisor • Single tenant • Great for Big Data solutions
    18. The Proof is in the Pudding
    19. Beware the "Best Case Test Case" (sample read ops/sec from the best-case run): 185817.6, 190525.4, 187882.2, 191101.8, 184408.8, 188135.4, 187080.6, 186343.4, 191899.6, 187736.6, 188978.8, 187440, 186950.4, 187623, 187783.8, 187775.8, 192806.8, 186643.2
    20. Do It Yourself • Data set sizing • Document/object sizes • Platform • Controlled client or AFAIC • Concurrency • Local or remote client • Read/write tests
    21. JS Benchmarking Harness • Data set sizing • Document/object sizes • Platform • Controlled client or AFAIC • Concurrency • Local or remote client • Read/write tests
    22. Quick Example:

        db.foo.drop();
        db.foo.insert( { _id : 1 } );
        ops = [
            { op: "findOne", ns: "test.foo", query: { _id: 1 } },
            { op: "update", ns: "test.foo", query: { _id: 1 }, update: { $inc: { x: 1 } } }
        ];
        for ( var x = 1; x <= 128; x *= 2 ) {
            res = benchRun( { parallel : x, seconds : 5, ops : ops } );
            print( "threads: " + x + "\t queries/sec: " + res.query );
        }
    23. Options:
        host – The hostname of the machine mongod is running on (defaults to localhost).
        username – The username to use when authenticating to mongod (only use if running with auth).
        password – The password to use when authenticating to mongod (only use if running with auth).
        db – The database to authenticate to (only necessary if running with auth).
        ops – A list of objects describing the operations to run (documented below).
        parallel – The number of threads to run (defaults to a single thread).
        seconds – The amount of time to run the tests for (defaults to one second).
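The options above combine into a single benchRun call. A minimal sketch (the host value, thread count, and duration here are illustrative, and benchRun itself only exists inside the mongo shell, so it is guarded):

```javascript
// Build the workload spec; benchRun executes it against a live mongod.
var readOps = [
    { op: "findOne", ns: "test.foo", query: { _id: 1 } }
];

var benchConfig = {
    host: "localhost",   // machine mongod is running on (the default)
    ops: readOps,        // list of operations to run
    parallel: 8,         // number of client threads
    seconds: 10          // duration of the run
    // username/password/db would go here only when running with auth
};

// Callable only from the mongo shell:
if (typeof benchRun !== "undefined") {
    var res = benchRun(benchConfig);
    print("queries/sec: " + res.query);
}
```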
    24. Options (per operation):
        ns – The namespace of the collection you are running the operation on; should be of the form "db.collection".
        op – The type of operation; can be "findOne", "insert", "update", "remove", "createIndex", "dropIndex" or "command".
        query – The query object to use when querying or updating documents.
        update – The update object (same as the 2nd argument of the update() function).
        doc – The document to insert into the database (only for insert and remove).
        safe – Boolean specifying whether to use safe writes (only for update and insert).
    25. Dynamic Values:

        { "#RAND_INT" : [ min , max , <multiplier> ] }
        // [ 0 , 10 , 4 ] would produce random numbers between 0 and 10 and then multiply by 4.

        { "#RAND_STRING" : [ length ] }
        // [ 3 ] would produce a string of 3 random characters.

        var complexDoc3 = { info: { "#RAND_STRING": [30] } }
        var complexDoc3 = { info: { inner_field: { "#RAND_STRING": [30] } } }
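For instance, the templates above can drive a randomized insert workload. A sketch (the field names score and tag are made up for illustration; benchRun expands the templates at run time inside the mongo shell, so the call is guarded):

```javascript
// Each inserted document gets a fresh random int and random string.
var insertOps = [{
    op: "insert",
    ns: "test.foo",
    doc: {
        score: { "#RAND_INT": [ 0, 10, 4 ] },  // 0-10, then multiplied by 4
        tag:   { "#RAND_STRING": [ 3 ] }       // 3 random characters
    }
}];

// Callable only from the mongo shell:
if (typeof benchRun !== "undefined") {
    var res = benchRun({ ops: insertOps, parallel: 4, seconds: 5 });
    printjson(res);
}
```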
    26. Example Scripts: lots of them here: https://github.com/mongodb/mongo/tree/master/jstests
    27. Read Only Test • Random document size < 4k (mostly 1k) • 6GB working data set size • Random reads only • 10 seconds per query set execution • Exponentially increasing concurrent clients from 1-128 • 48-hour test run • RAID10, 4 SSD drives • Local client • "Pre-warmed cache"
    28. The Results:
        Concurrent Clients – Avg Read OPS/Sec
        1 – 38288.527
        2 – 72103.3579
        4 – 127451.8867
        8 – 180798.4396
        16 – 191817.3361
        32 – 186429.4517
        64 – 187011.7824
        128 – 188187.0704
    29. Some Tougher Tests • Small MongoDB Bare Metal Cloud vs Public Cloud instance • Medium MongoDB Bare Metal Cloud vs Public Cloud instance (SSD and 15K SAS) • Large MongoDB Bare Metal Cloud vs Public Cloud instance (SSD and 15K SAS)
    30. Pre-configurations • Set SSD read-ahead defaults to 16 blocks – SSD drives have excellent seek times, allowing the read-ahead to shrink to 16 blocks. Spinning disks might require slight buffering, so those have been set to 32 blocks. • noatime – Adding the noatime option eliminates the need for the system to make writes to the file system for files which are simply being read; in other words: faster file access and less disk wear.
    31. • Turn NUMA off in BIOS – Linux, NUMA and MongoDB tend not to work well together. If you are running MongoDB on NUMA hardware, we recommend turning it off (running with an interleave memory policy). If you don't, problems will manifest in strange ways, like massive slowdowns for periods of time or high system CPU time. • Set ulimit – We have set the ulimit to 64000 for open files and 32000 for user processes to prevent failures due to a loss of available file handles or user processes.
    32. Use ext4 – We have selected ext4 over ext3. We found ext3 to be very slow in allocating files (or removing them). Additionally, access within large files is poor with ext3.
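Taken together, the pre-configuration steps above might look like the following on a CentOS box. This is a sketch of the settings, not a drop-in script: the device name /dev/sda and the mount point are assumptions for illustration, and the BIOS NUMA setting cannot be scripted at all.

```shell
# Read-ahead: 16 blocks for SSDs (32 for spinning disks)
blockdev --setra 16 /dev/sda

# noatime: example fstab entry for the data mount (path is illustrative)
# /dev/sda1  /var/lib/mongo  ext4  defaults,noatime  0 0

# NUMA: prefer disabling it in BIOS; otherwise start mongod with an
# interleaved memory policy
numactl --interleave=all mongod -f /etc/mongod.conf

# ulimits for the mongod user: open files and user processes
ulimit -n 64000
ulimit -u 32000
```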
    33. Test Environment (private network): four JMeter servers driven over RMI by a JMeter master client, with RDP access from the tester's local machine.
    34. Test script:

        var ops = [];
        while (low_rand < high_id) {
            if (readTest) {
                ops.push({
                    op : "findOne",
                    ns : "test.foo",
                    query : { incrementing_id : { "#RAND_INT" : [ low_rand, low_rand + RAND_STEP ] } }
                });
            }
            if (updateTest) {
                ops.push({
                    op : "update",
                    ns : "test.foo",
                    query : { incrementing_id : { "#RAND_INT" : [ 0, high_id ] } },
                    update : { $inc : { counter : 1 } },
                    safe : true
                });
            }
            low_rand += RAND_STEP;
        }

        // columns/width were reserved for column padding (unused here)
        function printLine(tokens, columns, width) {
            var line = "";
            for (var i = 0; i < tokens.length; i++) {
                line += tokens[i];
                if (i != tokens.length - 1)
                    line += " , ";
            }
            return line;
        }
    35. Small Test
        Small MongoDB Server: single 4-core Intel 1270 CPU, 64-bit CentOS, 8GB RAM, 2 x 500GB SATA II – RAID1, 1Gb network
        Virtual Provider Instance: 4 virtual compute units, 64-bit CentOS, 7.5GB RAM, 2 x 500GB network storage – RAID1, 1Gb network
        Tests performed: small data set (8GB of .5MB documents); 200 iterations of 6:1 query-to-update operations; concurrent client connections exponentially increased from 1 to 32; test duration spanned 48 hours
    36. Small Test
        Small Bare Metal Cloud Instance • 64-bit CentOS • 8GB RAM • 2 x 500GB SATA II – RAID1 • 1Gb network
        Public Cloud Instance • 4 virtual compute units • 64-bit CentOS • 7.5GB RAM • 2 x 500GB network storage – RAID1 • 1Gb network
    37. Small Public Cloud (ops/second by concurrent clients): 1: 122, 2: 193, 4: 201, 8: 271, 16: 480, 32: 835
    38. Small Bare Metal (ops/second by concurrent clients): 1: 237, 2: 337, 4: 413, 8: 524, 16: 597, 32: 1112
    39. Medium Test
        Medium MongoDB Server: dual 6-core Intel 5670 CPUs, 64-bit CentOS, 36GB RAM, 2 x 64GB SSD – RAID1 (journal mount), 4 x 300GB 15K SAS – RAID10 (data mount), 1Gb network – bonded
        Virtual Provider Instance: 26 virtual compute units, 64-bit CentOS, 30GB RAM, 2 x 64GB network storage – RAID1 (journal mount), 4 x 300GB network storage – RAID10 (data mount), 1Gb network
        Tests performed: data set (32GB of .5MB documents); 200 iterations of 6:1 query-to-update operations; concurrent client connections exponentially increased from 1 to 128; test duration spanned 48 hours
    40. Medium Test
        Bare Metal Cloud Instance • dual 6-core Intel 5670 CPUs • 64-bit CentOS • 36GB RAM • 2 x 64GB SSD – RAID1 (journal mount) • 4 x 300GB 15K SAS – RAID10 (data mount) • 1Gb network – bonded
        Public Cloud Instance • 26 virtual compute units • 64-bit CentOS • 30GB RAM • 2 x 64GB network storage – RAID1 (journal mount) • 4 x 300GB network storage – RAID10 (data mount) • 1Gb network
    41. Medium Test
        Bare Metal Cloud Instance • dual 6-core Intel 5670 CPUs • 64-bit CentOS • 36GB RAM • 2 x 64GB SSD – RAID1 (journal mount) • 4 x 400GB SSD – RAID10 (data mount) • 1Gb network – bonded
        Public Cloud Instance • 26 virtual compute units • 64-bit CentOS • 30GB RAM • 2 x 64GB network storage – RAID1 (journal mount) • 4 x 400GB network storage – RAID10 (data mount) • 1Gb network
    42. Medium Test: Tests performed • Data set (32GB of .5MB documents) • 200 iterations of 6:1 query-to-update operations • Concurrent client connections exponentially increased from 1 to 128 • Test duration spanned 48 hours
    43. Medium Public Cloud (ops/second by concurrent clients): 1: 219, 2: 326, 4: 477, 8: 716, 16: 1298, 32: 1554, 64: 1483, 128: 1594
    44. Medium Bare Metal, 15k SAS (ops/second by concurrent clients): 1: 542, 2: 818, 4: 1042, 8: 1260, 16: 1643, 32: 3392, 64: 4120, 128: 5443
    45. Medium Bare Metal, SSD (ops/second by concurrent clients): 1: 1389, 2: 2115, 4: 2637, 8: 2995, 16: 3047, 32: 3161, 64: 3742, 128: 3846
    46. Large Test
        Large MongoDB Server: dual 8-core Intel E5-2620 CPUs, 64-bit CentOS, 128GB RAM, 2 x 64GB SSD – RAID1 (journal mount), 6 x 600GB 15K SAS – RAID10 (data mount), 1Gb network – bonded
        Virtual Provider Instance: 26 virtual compute units, 64-bit CentOS, 64GB RAM (maximum available on this provider), 2 x 64GB network storage – RAID1 (journal mount), 6 x 600GB network storage – RAID10 (data mount), 1Gb network
        Tests performed: data set (64GB of .5MB documents); 200 iterations of 6:1 query-to-update operations; concurrent client connections exponentially increased from 1 to 128; test duration spanned 48 hours
    47. Large Test
        Bare Metal Cloud Instance • dual 8-core Intel E5-2620 CPUs • 64-bit CentOS • 128GB RAM • 2 x 64GB SSD – RAID1 (journal mount) • 6 x 600GB 15K SAS – RAID10 (data mount) • 1Gb network – bonded
        Public Cloud Instance • 26 virtual compute units • 64-bit CentOS • 64GB RAM (maximum available on this provider) • 2 x 64GB network storage – RAID1 (journal mount) • 6 x 600GB network storage – RAID10 (data mount) • 1Gb network
    48. Large Test
        Bare Metal Cloud Instance • dual 8-core Intel E5-2620 CPUs • 64-bit CentOS • 128GB RAM • 2 x 64GB SSD – RAID1 (journal mount) • 6 x 400GB SSD – RAID10 (data mount) • 1Gb network – bonded
        Public Cloud Instance • 26 virtual compute units • 64-bit CentOS • 64GB RAM (maximum available on this provider) • 2 x 64GB network storage – RAID1 (journal mount) • 6 x 400GB network storage – RAID10 (data mount) • 1Gb network
    49. Large Test: Tests performed • Data set (64GB of .5MB documents) • 200 iterations of 6:1 query-to-update operations • Concurrent client connections exponentially increased from 1 to 128 • Test duration spanned 48 hours
    50. Large Public Cloud (ops/second by concurrent clients): 1: 105, 2: 409, 4: 943, 8: 636, 16: 1252, 32: 1733, 64: 1902, 128: 2044
    51. Large Bare Metal, 15k SAS (ops/second by concurrent clients): 1: 412, 2: 686, 4: 946, 8: 1123, 16: 1373, 32: 2353, 64: 5097, 128: 5572
    52. Large Bare Metal, SSD (ops/second by concurrent clients): 1: 1898, 2: 2919, 4: 3672, 8: 4351, 16: 3961, 32: 3629, 64: 3737, 128: 3864
    53. Superior Performance (bare metal average performance advantage over virtual, by deployment size and drive type): Small, SATA II: 70%; Medium, 15k SAS: 133%; Medium, SSD: 297%; Large, 15k SAS: 111%; Large, SSD: 446%
    54. Consistent Performance (RSD, relative standard deviation, by platform): Small: virtual instance 6-36%, bare metal instance 1-9%; Medium: virtual 8-43%, bare metal 1-8%; Large: virtual 8-93%, bare metal 1-9%
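RSD here is the standard deviation of the per-run ops/sec samples divided by their mean, expressed as a percentage. A small sketch of the calculation (the sample arrays are illustrative, not measured data):

```javascript
// Relative standard deviation: population std-dev / mean, as a percent.
function rsd(samples) {
    var mean = samples.reduce(function (a, b) { return a + b; }, 0) / samples.length;
    var variance = samples.reduce(function (a, x) {
        return a + (x - mean) * (x - mean);
    }, 0) / samples.length;
    return 100 * Math.sqrt(variance) / mean;
}

var steady = rsd([980, 1000, 1020]);   // low single digits: bare-metal-like
var noisy  = rsd([400, 1000, 1600]);   // tens of percent: shared-I/O-like
```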
    55. Requirements Reviewed: the cloud provider still checks only Rapidly Scalable and Easy to Deploy, while the bare metal instance now checks High Performance and Reliable, Predictable Performance. Not quite there yet……
    56. The "Marc-O-Meter": NOT SURE IF WANT
    57. The Dream
    58. The Reality: virtual instance with striped network-attached virtual volumes
    59. Cluster Deployment Complexity: three virtual instances, each with its own striped network-attached virtual volumes
    60. Deployment Serenity: The Solution Designer
    61. MongoDB Solutions • Preconfigured • Performance tuned • Bare metal, single tenant • Complex environment configurations
    62. Requirements Reviewed: the bare metal instance now checks all four requirements (High Performance; Reliable, Predictable Performance; Rapidly Scalable; Easy to Deploy), while the cloud provider checks only Rapidly Scalable and Easy to Deploy
    63. The "Marc-O-Meter": B+ FOR EFFORT
    64. Customer Feedback: "We have over two terabytes of raw event data coming in every day ... Struq has been able to process over 95 percent of requests in fewer than 30 milliseconds" – Aaron McKee, CTO, Struq
    65. The "Marc-O-Meter": WIN!!
    66. Summary • Bare Metal Cloud can be leveraged to simplify deployments • Bare metal has significant performance superiority and consistency over public cloud • Public cloud is best suited for Dev/POC, or when running data sets in memory only
    67. More information: www.softlayer.com; blog at http://sftlyr.com/bdperf
