
Puppet Availability and Performance at 100K Nodes - PuppetConf 2014

Puppet Availability and Performance at 100K Nodes - John Jawed, eBay/PayPal


  1. 1. puppet @ 100,000+ agents John Jawed (“JJ”) eBay/PayPal
  2. 2. but I don’t have 100,000 agents… the issues ahead were encountered at <1,000 agents
  3. 3. me: responsible for Puppet/Foreman @ eBay. how I got here: engineer -> engineer with root access -> system/infrastructure engineer
  4. 4. free time: PuppyConf
  5. 5. puppet @ eBay, quick facts -> perhaps the largest Puppet deployment -> more definitively the most diverse -> manages core security -> trying to solve the “p100k” problems
  6. 6. #’s • 100K+ agents – Solaris, Linux, and Windows – Production & QA – Cloud (openstack & VMware) + bare metal • 32 different OS versions, 43 hardware configurations – Over 300 permutations in production • Countless apps from C/C++ to Hadoop – Some applications over 15+ years old
  7. 7. currently • 3-4 puppet masters per data center • foreman for ENC, statistics, and fact collection • 150+ puppet runs per second • separate git repos per environment, common core modules – caching git daemon used by PPMs
  8. 8. nodes growing, sometimes violently (chart annotation: linear growth trendline)
  9. 9. setup puppetmasters: set up the puppet master (it’s the CA too). sign and run 400 agents concurrently; that’s less than half a percent of all the nodes you need to get through.
  10. 10. not exactly puppet issues: entropy unavailable; crypto is CPU heavy (heavier than you’d believe); passenger children are all busy
  11. 11. OK, let’s set up separate hosts which function only as a CA
  12. 12. multiple dedicated CAs much better: distributed the CPU and I/O load and helped the entropy problem. the PPMs can handle actual puppet agent runs because they aren’t tied up signing. Great!
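A minimal puppet.conf sketch of this split (hostnames are placeholders and the layout is an assumption, not the actual eBay config; Puppet 3.x settings):

      # on the dedicated CA hosts: act as the certificate authority
      [master]
      ca = true

      # on the catalog-compiling puppet masters (PPMs): disable the CA role
      [master]
      ca = false

      # on every agent: send certificate traffic to the CA pool instead of the compile masters
      [agent]
      ca_server = ca.example.com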
  13. 13. wait, how do the CAs know about each other’s certs? some sort of network file system (NFS sounds okay).
  14. 14. shared storage for CA cluster -> Get a list of pending signing requests (should be small!) # puppet cert list … wait … wait …
  15. 15. optimize CAs for a large # of certs Traversing a large # of certs is too slow over NFS. -> Profile -> Implement optimization -> Get patch accepted (PUP-1665, 8x improvement)
  16. 16. <3 puppetlabs team
  17. 17. optimizing foreman - read heavy is fine, DBs do it well. - read heavy in a write heavy environment is more challenging. - foreman writes a lot of log, fact, and report data post puppet run. - majority of requests are to get ENC data - use makara with PG read slaves (https://github.com/taskrabbit/makara) to scale ENC requests - Needs updates to foreigner (gem) - If ENC requests are slow, puppetmasters fall over.
  18. 18. optimizing foreman ENC requests load balanced to read slaves fact/report/host info write requests sent to master makara knows how to arbitrate the connection (great job TaskRabbit team!)
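Roughly, the makara side of foreman's database.yml looks like the sketch below (hostnames and tuning values are made up; see the makara README for the full option set):

      production:
        adapter: postgresql_makara
        database: foreman
        makara:
          sticky: true
          connections:
            - role: master
              host: pg-master.example.com     # fact/report/host writes go here
            - role: slave
              host: pg-replica1.example.com   # ENC reads load balance across replicas
            - role: slave
              host: pg-replica2.example.com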
  19. 19. more optimizations make sure the RoR cache is set to use dalli (config.cache_store = :dalli_store), see the foreman wiki. fact collection optimization (already in upstream); without this, reporting facts back to foreman can kill a busy puppetmaster! (if you care: https://github.com/theforeman/puppet-foreman/pull/145)
  20. 20. <3 the foreman team
  21. 21. let’s add more nodes Adding another 30,000 nodes (that’s 30% coverage). Agent setup: pretty standard stuff, puppet agent as a service.
  22. 22. results average puppet run: 29 seconds. not horrible. but the average is a lie, because it usually means the arithmetic mean (sum of all runs / N). the actual puppet run graph looks more like…
  23. 23. curve impossible No one in operations or infrastructure ever wants a service runtime graph like this. (graph label: mean average)
  24. 24. PPM running @ medium load
      PID   USER   PR NI VIRT RES SHR  S %CPU %MEM TIME+    COMMAND
      16765 puppet 20 0  341m 76m 3828 S 53.0  0.1 67:14.92 ruby
      17197 puppet 20 0  343m 75m 3828 S 40.7  0.1 62:50.01 ruby
      17174 puppet 20 0  353m 78m 3996 S 38.7  0.1 70:07.44 ruby
      16330 puppet 20 0  338m 74m 3828 S 33.8  0.1 66:08.81 ruby
      17231 puppet 20 0  344m 75m 3820 S 29.8  0.1 70:00.47 ruby
      17238 puppet 20 0  353m 76m 3996 S 29.8  0.1 69:11.94 ruby
      17187 puppet 20 0  343m 76m 3820 S 26.2  0.1 70:48.66 ruby
      17156 puppet 20 0  353m 75m 3984 S 25.8  0.1 64:44.62 ruby
      … system processes
  25. 25. 60 seconds later…idle
      PID   USER   PR NI VIRT RES  SHR  S %CPU %MEM TIME+    COMMAND
      17343 puppet 20 0  344m 77m  3828 S 11.6  0.1 74:47.23 ruby
      31152 puppet 20 0  203m 9048 2568 S 11.3  0.0  0:03.67 httpd
      29435 puppet 20 0  203m 9208 2668 S 10.9  0.0  0:05.46 httpd
      16220 puppet 20 0  337m 74m  3828 S 10.3  0.1 70:07.42 ruby
      16354 puppet 20 0  339m 75m  3816 S 10.3  0.1 62:11.71 ruby
      … system processes
  26. 26. 120 seconds later…thrashing
      PID   USER   PR NI VIRT RES SHR  S %CPU %MEM TIME+    COMMAND
      16765 puppet 20 0  341m 76m 3828 S 94.0  0.1 67:14.92 ruby
      17197 puppet 20 0  343m 75m 3828 S 93.7  0.1 62:50.01 ruby
      17174 puppet 20 0  353m 78m 3996 S 92.7  0.1 70:07.44 ruby
      16330 puppet 20 0  338m 74m 3828 S 90.8  0.1 66:08.81 ruby
      17231 puppet 20 0  344m 75m 3820 S 89.8  0.1 70:00.47 ruby
      17238 puppet 20 0  353m 76m 3996 S 89.8  0.1 69:11.94 ruby
      17187 puppet 20 0  343m 76m 3820 S 88.2  0.1 70:48.66 ruby
      17156 puppet 20 0  353m 75m 3984 S 87.8  0.1 64:44.62 ruby
      17152 puppet 20 0  353m 75m 3984 S 86.3  0.1 64:44.62 ruby
      17153 puppet 20 0  353m 75m 3984 S 85.3  0.1 64:44.62 ruby
      17151 puppet 20 0  353m 75m 3984 S 82.9  0.1 64:44.62 ruby
      … more ruby processes
  27. 27. what we really want A flat, consistent runtime curve; this is important for any production service. Without predictability there is no reliability!
  28. 28. consistency @ medium load
      PID   USER   PR NI VIRT RES SHR  S %CPU %MEM TIME+    COMMAND
      16765 puppet 20 0  341m 76m 3828 S 53.0  0.1 67:14.92 ruby
      17197 puppet 20 0  343m 75m 3828 S 40.7  0.1 62:50.01 ruby
      17174 puppet 20 0  353m 78m 3996 S 38.7  0.1 70:07.44 ruby
      16330 puppet 20 0  338m 74m 3828 S 33.8  0.1 66:08.81 ruby
      17231 puppet 20 0  344m 75m 3820 S 29.8  0.1 70:00.47 ruby
      17238 puppet 20 0  353m 76m 3996 S 29.8  0.1 69:11.94 ruby
      17187 puppet 20 0  343m 76m 3820 S 26.2  0.1 70:48.66 ruby
      17156 puppet 20 0  353m 75m 3984 S 25.8  0.1 64:44.62 ruby
      … system processes
  29. 29. hurdle: runinterval. near impossible to get a flat curve because of uneven and chaotic agent run distribution. runinterval is non-deterministic… even if you manage to sync up service start times, it eventually becomes nebulous.
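For reference, these are the agent settings in play; the values below are illustrative, not eBay's:

      [agent]
      runinterval = 1800   # seconds between runs, counted from the daemon's last run, not wall-clock time
      splay = true         # add a random sleep before each run
      splaylimit = 1800    # upper bound on that random sleep (defaults to runinterval)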
  30. 30. the puppet agent daemon approach is not going to work.
  31. 31. plan A: puppet via cron generate the run time from some deterministic agent data point (IP, MAC address, hostname, etc.). i.e., if you wanted a puppet run every 30 minutes, your crontab may look like: 08 * * * * puppet agent -t 38 * * * * puppet agent -t
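A sketch of how such a deterministic offset could be derived, hashing the hostname into a cron minute (the 30-minute interval and the /etc/cron.d path are assumptions):

      # derive a stable minute offset (0-29) from the FQDN
      OFFSET=$(( $(hostname -f | cksum | cut -d' ' -f1) % 30 ))
      # write a system crontab entry that fires at OFFSET and OFFSET+30 every hour
      printf '%s,%s * * * * root /usr/bin/puppet agent -t\n' "$OFFSET" "$((OFFSET + 30))" > /etc/cron.d/puppet

Puppet's built-in fqdn_rand() function gives the same stable per-host offset if the crontab is itself managed from a manifest.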
  32. 32. plan A yields Fewer and predictable spikes
  33. 33. Improved. But it does not scale: cronjobs make run times deterministic but lack even distribution.
  34. 34. eliminate all masters? masterless puppet kicking the can down the road; somewhere, infrastructure still has to serve the files and catalogs to agents. masterless puppet creates a whole host of other issues (file transfer channels, catalog compiler host).
  35. 35. eliminate all masters? masterless puppet …and the same issues exist, albeit in different forms. shifts the problems to “compile interval” and “manifest/module push interval”.
  36. 36. plan Z: increase your runinterval Z, the zombie apocalypse plan (do not do this!). delaying failure till you are no longer responsible for it (hopefully).
  37. 37. alternate setups SSL termination on the load balancer – expensive. LBs are difficult to deploy and cost more (you still need failover, otherwise it’s a SPoF!) caching – cache is meant to make things faster, not required for things to work. if cache is required to make services functional, you’re solving the wrong problem.
  38. 38. zen moment maybe the issue isn’t about timing the agent from the host. maybe the issue is that the agent doesn’t know when there’s enough capacity to reliably and predictably run puppet.
  39. 39. enforcing states is delayed runinterval/cronjob/masterless setups still render puppet a suboptimal solution in a state-sensitive environment (customer and financial data). the problem is not unique to puppet; salt, CoreOS, et al. are susceptible.
  40. 40. security trivia web service REST3DotOh just got compromised and allows a sensitive file managed by puppet to be manipulated. Q: how/when does puppet set the proper state?
  41. 41. the how; sounds awesome A: every puppet run ensures that a file is in its intended state and records the previous state if it was not.
  42. 42. the when; sounds far from awesome A: whenever puppet is scheduled to run next. up to runinterval minutes from the compromise, masterless push, or cronjob execution.
  43. 43. smaller intervals help but… all the strategies have one common issue: puppet masters do not scale with smaller intervals, which exacerbate spikes in the runtime curve.
  44. 44. this needs to change
  45. 45. pvc “pvc” – open source & lightweight process for a deterministic and evenly distributed puppet service curve… …and reactive state enforcement puppet runs.
  46. 46. pvc a different approach that executes puppet runs based on available capacity and local state changes. pings from an agent to check if it’s time to run puppet. file monitoring to force puppet runs when important files change outside of puppet (think /etc/shadow, /etc/sudoers).
  47. 47. pvc basic concepts: - Frequent pings to determine when to run puppet - Tied in to backend PPM health/capacity - Frequent fact collection without needing to run puppet - Sensitive files should be subject to monitoring - on changes or updates outside of puppet, immediately run puppet! - efficiency is an important factor.
  48. 48. pvc advantages -> variable puppet agent run timing - allows the flat and predictable service curve (what we want). - more frequent puppet runs when capacity is available, less frequent runs when less capacity is available.
  49. 49. pvc advantages -> improves security (kind of a big deal these days) - puppet runs when state changes rather than waiting for the next scheduled run. - efficient, uses inotify to monitor files. - if a file being monitored is changed, a puppet run is forced.
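The reactive idea is roughly the sketch below; this is illustrative only (using inotifywait), not pvc's actual implementation, and the file list would come from PVC_FILES_MONITORED:

      # watch sensitive files; on any change made outside of puppet, force a run
      inotifywait -m -e modify -e attrib -e close_write /etc/shadow /etc/sudoers |
      while read -r watched events file; do
          puppet agent -t
      done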
  50. 50. pvc advantages - orchestration between foreman & puppet - controlled rollout of changes - upload facts between puppet runs into foreman
  51. 51. pvc – backend 3 endpoints – all get the ?fqdn=<certname> parameter GET /host – should pvc run puppet or facter? POST /report – raw puppet run output, files monitored were changed POST /facts – facter output (puppet facts in JSON)
  52. 52. pvc – /host > curl http://hi.com./host?fqdn=jj.e.com < PVC_RETURN=0 < PVC_RUN=1 < PVC_PUPPET_MASTER=puppet.vip.e.com < PVC_FACT_RUN=0 < PVC_CHECK_INTERVAL=60 < PVC_FILES_MONITORED="/etc/security/access.conf /etc/passwd"
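The response is shell-friendly key=value pairs, so a hypothetical agent-side wrapper (not pvc's actual client code) could consume it like this:

      # ask the backend whether to run, then act on the answer
      eval "$(curl -s "http://hi.com./host?fqdn=$(hostname -f)")"
      if [ "$PVC_RUN" = "1" ]; then
          puppet agent -t --server "$PVC_PUPPET_MASTER"
      fi
      sleep "$PVC_CHECK_INTERVAL"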
  53. 53. pvc – /facts allows collection of facts outside of the normal puppet run, useful for monitoring. set PVC_FACT_RUN to report facts back to the pvc backend.
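A sketch of what a fact-only report could look like from the agent side, assuming the backend accepts raw facter JSON (the endpoint URL matches the earlier example):

      # ship current facts to the pvc backend without a full puppet run
      facter --json | curl -s -X POST -H 'Content-Type: application/json' \
          --data-binary @- "http://jj.e.com./facts?fqdn=$(hostname -f)"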
  54. 54. pvc – git for auditing push actual changes between runs into git - branch per host, parentless branches & commits are cheap. - easy to audit fact changes (fact blacklist to prevent spam) and changes between puppet runs. - keeping puppet reports between runs is not helpful.
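“Parentless branches” here means git orphan branches; a rough sketch of the per-host layout (the branch naming scheme and the facts path are made up):

      # one orphan (history-free) branch per host; commits stay cheap and isolated
      git checkout --orphan "hosts/$(hostname -f)"
      git rm -rf --cached . 2>/dev/null        # start from an empty index
      cp /var/lib/pvc/facts.json . && git add facts.json
      git commit -m "facts/changes for $(hostname -f) @ $(date -u +%FT%TZ)"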
  55. 55. pvc – incremental rollouts select candidate hosts based on your criteria and set an environment variable via the /host endpoint output: FACTER_UPDATE_FLAG=true in your manifest, check: if $::UPDATE_FLAG { … }
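One wrinkle worth noting: facter normally lowercases fact names set through FACTER_* environment variables, so inside a manifest the check would typically look like the sketch below (the fact name and class are hypothetical):

      # env-var facts arrive as strings, so compare against 'true'
      if $::update_flag == 'true' {
        include rollout::candidate_change
      }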
  56. 56. example pvc.conf host_endpoint=http://jj.e.com./host report_endpoint=http://jj.e.com./report facts_endpoint=http://jj.e.com./facts info=1 warnings=1
  57. 57. pvc – available on github $ git clone https://github.com/johnj/pvc make someone happy, achieve:
  58. 58. wishlist stuff pvc should probably have: • authentication of some sort • a more general backend, currently tightly integrated into internal PPM infrastructure health • whatever other users wish it had
  59. 59. misc. lessons learned your ENC has to be fast, or your puppetmasters fail without ever doing anything. upgrade ruby to 2.x for the performance improvements. serve static module files with a caching http server (nginx).
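For the last point, a very rough nginx sketch of caching static module file content in front of the masters (the upstream name, cache sizing, and URL pattern follow Puppet 3's file_content API; SSL and client-certificate handling are omitted entirely, so treat this as illustrative only):

      # in the http {} block: define a cache for puppet file content
      proxy_cache_path /var/cache/nginx/puppet keys_zone=puppet_files:16m max_size=1g;

      # in the server {} block that fronts the puppetmasters
      location ~ ^/[^/]+/file_content/ {
          proxy_pass https://puppet_masters;   # upstream pointing at the real PPMs
          proxy_cache puppet_files;
          proxy_cache_valid 200 2m;
      }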
  60. 60. contact @johnjawed https://github.com/johnj jj@x.com
