
Iteratively introducing Puppet technologies in the brownfield; Jeffrey Miller


Presented at Puppet Camp America West; 25 June 2020


  1. Iteratively introducing Puppet technologies in the brownfield. Jeffrey Miller, HPC Linux Systems Engineer, HPC Core Ops Group, National Center for Computational Sciences. (ORNL is managed by UT-Battelle, LLC for the US Department of Energy.)
  2. Citation and Disclaimer: This work was supported by the Oak Ridge Leadership Computing Facility (OLCF) and the Compute and Data Environment for Science (CADES) at Oak Ridge National Laboratory (ORNL) for the Department of Energy (DOE) under Prime Contract Number DE-AC05-00OR-22725. This presentation does not contain any proprietary or confidential information.
  3. Acknowledgements: Greg Shutt, CADES Task Lead; Cory Stargel, HPC Infrastructure Task Lead; Larry Orcutt, HPC Linux Systems Engineer; Michael Shute, HPC Linux Systems Engineer; James "Jake" Wynne, III, HPC Linux Systems Engineer.
  4. Contact Information: Jeffrey Miller. Email / LinkedIn / GitHub / Slack ID: millerjl1701
  5. What We Do: Infrastructure Team, HPC Core Ops Group. As part of the National Center for Computational Sciences (NCCS), the HPC Core Ops group provides all the necessary infrastructure services, networking support, security oversight, and monitoring analytics required to keep the OLCF leadership supercomputing systems healthy. The Infrastructure Team provides necessary external services for use by the OLCF HPC resources as well as other programs and projects supported by NCCS. CADES provides a compute and data infrastructure environment to enable the scientific discovery process for researchers at ORNL and their collaborators.
  6. Wouldn't this be great? (Image: Deer Standing, by Petr Kratochvil.)
  7. Brownfield Infrastructure:
     • Preexisting environment providing production services
     • Inventory? What inventory?
     • Documentation?
     • Compliance?
     • Conglomeration of configuration methods?
     • Disaster recovery? Backups?
     • etc.
     (Image: Dry Agricultural Brown Soil, by George Hodan.)
  8. Don't Touch Anything. (Image: Fire in Dumpster, by Ben Watts, 2009.)
  9. Where to start???
  10. Bolt: What is this? "An open source orchestration tool that automates the manual work it takes to maintain your infrastructure"
      - Works against local or remote targets
      - Runs scripts or commands
      - Code is organized into tasks and plans
      - Plans can be written in Puppet or YAML
      - Connects to remote targets over SSH or WinRM
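To make the bullet points above concrete, here is a minimal sketch of a Bolt plan written in the Puppet language. The module name `demo` and the plan name are hypothetical; any `TargetSpec` from your inventory can be passed in.

```puppet
# demo/plans/uptime.pp -- a minimal Bolt plan (Puppet language).
# The "demo" module name is illustrative.
plan demo::uptime (
  TargetSpec $targets,
) {
  # Run an arbitrary shell command over SSH/WinRM on each target
  $results = run_command('uptime', $targets)
  return $results
}
```

Such a plan would be invoked with something like `bolt plan run demo::uptime --targets servers`.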
  11. Bolt: What can you do? You probably have a desktop system and/or a management server...
      • Install Bolt and start writing a "laptop_config" plan
        - Install git and other tools
        - Run .dotfiles setup script
      • Keep your code in git and commit often
      • Start a habit of automating first
      • Read Ben Ford's April 2, 2020 blog post
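A "laptop_config" plan along the lines the slide suggests might look like the following sketch. The script path and default target are assumptions; the `package` task ships with Bolt's bundled module content.

```puppet
# laptop_config/plans/init.pp -- hypothetical sketch of the
# "laptop_config" plan described above.
plan laptop_config (
  TargetSpec $targets = 'localhost',
) {
  # Install git (and, by extension, other baseline tools) via the
  # package task bundled with Bolt
  run_task('package', $targets, name => 'git', action => 'install')

  # Run a personal .dotfiles setup script kept in this module's files/
  run_script('laptop_config/setup_dotfiles.sh', $targets)
}
```

Keeping this plan in a git repository, as the slide recommends, means the laptop setup itself becomes reviewable, versioned code.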
  12. Automate All the Repos. Consider setting up a GitLab instance using Bolt and the Vox Pupuli GitLab Puppet module on a system, if you don't have an instance already. GitLab and GitLab runners can enable:
      - A code review process
      - Infrastructure code deployment to a management server
      - Validation testing of Puppet code repositories and deployment to Puppet Servers
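One way to sketch the GitLab step is a Bolt plan that applies the Vox Pupuli `puppet/gitlab` module. The plan name and `external_url` value are placeholders for illustration.

```puppet
# profile/plans/gitlab_server.pp -- sketch only; assumes the
# puppet/gitlab module is on the modulepath.
plan profile::gitlab_server (
  TargetSpec $targets,
) {
  # Ensure the Puppet agent is present so apply() can run
  apply_prep($targets)

  apply($targets) {
    class { 'gitlab':
      external_url => 'https://gitlab.example.com',
    }
  }
}
```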
  13. Provisioning.
      Razor: "Advanced provisioning application that can deploy both bare metal and virtual systems" (Puppet)
      - PXE boot management
      - Hypervisor deployment
      - Automates the hand off to configuration management
      Terraform: "a tool for building, changing, and versioning infrastructure safely and efficiently" (HashiCorp)
      - Infrastructure as Code
      - Terraform creates the VM and Razor provisions it
  14. Puppet Agent and Facter.
      • Facter: Puppet's system profiling library, included with the Puppet agent package
        - Bolt leverages Facter to retrieve node facts
        - But for Bolt to use Facter, the Puppet agent needs to be installed
      • Enter apply_prep
        - A built-in Bolt function, like run_command, run_script, etc.
        - Installs the Puppet agent package if it isn't already installed
        - Collects facts from the node into the running inventory
        - This can be an expensive operation...
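The apply_prep pattern described above can be sketched as a small plan. The plan name is hypothetical; after `apply_prep` runs, facts are available on each Target object in the inventory.

```puppet
# demo/plans/gather_facts.pp -- illustrative only.
plan demo::gather_facts (
  TargetSpec $targets,
) {
  # Installs the Puppet agent if missing and collects facts into the
  # running inventory -- potentially expensive across many nodes.
  apply_prep($targets)

  # Facts gathered by apply_prep are now queryable per target
  get_targets($targets).each |$t| {
    out::message("${t.name}: ${t.facts['os']['name']}")
  }
}
```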
  15. Puppet Infrastructure.
      • Using Bolt:
        - Install the Puppet agent on several new VMs
        - Install and configure a Puppet CA (and optionally catalog compile servers)
        - Deploy PuppetDB and its backend PostgreSQL database
        - Reconfigure Puppet Server systems to use PuppetDB
      • Using Puppet or Bolt:
        - Deploy a Puppetboard (or alternative) dashboard for PuppetDB
      Then...
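The PuppetDB steps above could be sketched with the `puppetlabs-puppetdb` module, which can manage both PuppetDB with its backing PostgreSQL and the master-side connection settings. Plan name and hostname are placeholders.

```puppet
# profile/plans/puppetdb.pp -- sketch, assuming the
# puppetlabs-puppetdb module is on the modulepath.
plan profile::puppetdb (
  TargetSpec $db_host,
  TargetSpec $masters,
) {
  apply_prep([$db_host, $masters])

  # PuppetDB plus its backing PostgreSQL database on one node
  apply($db_host) {
    class { 'puppetdb': }
  }

  # Point the Puppet Servers at the new PuppetDB instance
  apply($masters) {
    class { 'puppetdb::master::config':
      puppetdb_server => 'puppetdb.example.com',
    }
  }
}
```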
  16. Puppet Agent Rollout.
      • Using Bolt:
        - Install the Puppet agent on each system (hardware or VM)
        - Configure the Puppet agent to register with the Puppet Server infrastructure
        - Then (this is key) have the Puppet agent configure absolutely nothing
      Yes... absolutely nothing. Null. Zero. Zilch. What you have now is a growing inventory that aids discovery of systems and services.
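The "configure absolutely nothing" step can be as simple as an empty default node definition in the main manifest, so agents check in and report facts but manage no resources:

```puppet
# site.pp on the Puppet Server: every agent registers and submits
# facts and reports, but nothing on the node is changed.
node default {
  # intentionally empty
}
```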
  17. --noop or noop(). When Puppet is run locally on a system, a "--noop" flag may be passed to report what would change without actually changing anything. A similar result comes from setting the "noop" parameter in puppet.conf. The noop() function, from the trlinkin-noop module, sets noop at a given scope. For examples of this working, see: "Puppet noop, no-noop, and the path to safe Puppet deployments", Alessandro Franceschi, 2017.
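As a sketch of the two mechanisms: the puppet.conf setting applies node-wide, while the trlinkin-noop `noop()` function scopes no-op mode to a class. The class name and resource here are illustrative.

```puppet
# Equivalent of --noop in puppet.conf:
#   [agent]
#   noop = true

# Scope-level noop with the trlinkin-noop module: everything declared
# in this class reports what would change but changes nothing.
class profile::risky_change {
  noop()
  service { 'firewalld':
    ensure => stopped,
  }
}
```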
  18. Systems Configuration, Iterative Style.
      • If you haven't already, `git init`
      • What do you want to configure today?
      • Leverage a git workflow for Puppet code:
        - Create a branch
        - Add Puppet code
        - Deploy the branch to the Puppet Servers
      • Use Bolt to test canary systems using noop
      • Find edge cases on the other nodes using noop
      • Merge and enforce (i.e. no-noop)
  19. Even better... a test environment:
      - Dedicate Puppet Servers to serve out test code
      - Test VMs for the types of nodes you support in production
      - Consider using a unique environment for testing instead of a branch against production
      - Code dev -> squash -> merge -> cherry pick
  20. Even better better... Watch: "Multi-node acceptance tests for fun and profit", Trevor Vaughan, 2019.
  21. Final thoughts...
      - Implementation takes a willing team.
      - Learn to trust the process and the tooling. Trust, but verify.
      - Be flexible. There will be landmines in the brownfield.
      - Focus on what can be done rather than on shortcomings.
      - Patience: the iterative process may take a long time...
      - Notice I didn't mention Ruby skillz... oops. #facepalm
      - The Puppet Community.
  22. Questions?