Puppet @Twitter
Andrew Hamilton
@ahamilton55
Who am I?
What we’ll cover
The Dev in DevOps
Version Control
Peer review of code
Review Board
http://www.reviewboard.org/
Define your style
OWNERS
OWNERS example
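A minimal, hypothetical OWNERS file (assuming it is just a newline-separated list of usernames, as described in the editor's notes; the names here are placeholders) might look like:

  # modules/<module>/OWNERS
  alice
  bob
  team-sre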
Moving from Puppet 2.7 to Puppet 3.x
Need to go from:
source => 'puppet:///<module>/<filename>'
to:
source => 'puppet:///modules/<module>/<filename>'
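As a concrete (hypothetical) illustration, assuming a module named ssh that serves an sshd_config file, the change looks roughly like this:

  # Puppet 2.7 era source path
  file { '/etc/ssh/sshd_config':
    ensure => file,
    source => 'puppet:///ssh/sshd_config',
  }

  # Puppet 3.x source path
  file { '/etc/ssh/sshd_config':
    ensure => file,
    source => 'puppet:///modules/ssh/sshd_config',
  }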
But of course…
You can automate all of this!
Review Board – git review tools
Style Guide – puppet-lint
OWNERS – Script a hook and git review tools
Version control hooks
And always iterate on this
Branching
Head
Our true testing branch
Where development happens
Systems don’t live on this branch
Testing
Works along with our canary system for deployments
Major changes can sit here for a couple of days to a few weeks
Production
99% of our systems are on the production branch
Testing on the head and testing branches should prevent errors from propagating to production
Workflow
Head: Develop -> Review Board -> Commit -> Test
Testing: Cherry pick -> Review Board -> Commit -> Test
Production: Cherry pick -> Review Board -> Commit
But it could be better
puppetmasterd
Each host runs 3 puppetmasterd processes
Each can serve between ~1k – 2k hosts
We use round-robin DNS
ENC
Audubon
Facts
Mostly deprecated, legacy
Pulled info from Facter
Attributes
Basic host information
Two levels
- server, role
Allows for inheritance
Groups
Allows for a logical grouping of hosts
Tells you things like role, rack, cluster, etc
Pulling in info from the ENC is easy
We have functions that pull in each type of ENC data in a manifest:
attribute('<attr_name>', '<default_value>')
group('<group_name>', '<default_value>')
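A rough sketch of how these functions might be used inside a manifest (the attribute and group names below are illustrative, not Twitter's actual ones):

  # Look up values from the ENC, falling back to the given defaults
  $jdk_version  = attribute('jdk_version', '7')
  $cluster_name = group('cluster', 'default')

  package { "jdk-${jdk_version}":
    ensure => installed,
  }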
Our ENC data store is also what controls the puppet branch our hosts use
We set a group called puppet_branch which is used by our client cronjob
Agents
1. Find the branch the host is using
2. Find the port that branch uses on the masters
3. Get a master from DNS
   1. Health check the master
   2. If healthy, continue; else repeat until timeout
4. Run the puppet agent against that master and port
Modules
Service Modules
Team Modules
Base Modules
Create a module for each system service
Chain these together in your service modules
If you do something often, create a module that every group can use
Build for self-service
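As a rough sketch of the layering (all module and class names here are hypothetical), a service module might simply chain the lower layers together and add its own pieces on top:

  class myteam::myservice {
    include base            # base modules: kernel, LDAP/Kerberos, package repos, ...
    include myteam::common  # team modules: MOTD, team cronjobs, ...

    # service-specific resources layered on top
    include myteam::myservice::install
    include myteam::myservice::config
    include myteam::myservice::monit
  }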
Looking forward

Editor's Notes

  1. At least from where I look at the community, I mainly see OPS and very little dev when it comes to implementing best practices. If you’re going to treat your infrastructure as code, make sure to treat that code like your developers treat their code (and maybe even better ;)). Developers have a lot of great practices (and of course some bad ones) that an Ops team can pull into their workflow to improve their own coding standards. Here I’ll talk about how Twitter’s ops teams have pulled in practices and tools that our developers use to make our code better.
  2. This should probably go without saying, but all of your puppet code should be in some form of version control. Currently Twitter is using SVN, possibly looking to move to git in the future. Standardizing commit messages can also help here. We require commit messages to mention who gave the code a “ship it” and what the RB number was that the ship it was in. SVN can be a good choice if you have the possibility of dropping some large files into your code base. Be careful with this though. If you’re dropping RPMs or debs onto a system you’d probably be better off just setting up a repo. They aren’t that hard to put together. Branches also allow for an easier way of dealing with testing and pushing code into production. I’ll get into how Twitter handles this a little later.
  3. Peer review of code can be very powerful. I know that I’m not the best person at Twitter when it comes to code in general, or even with Puppet. But with peer review, we have the ability to easily pull in expertise before our code goes out into production. Plus, everybody makes stupid little mistakes that go unnoticed after working on a set of modules for an hour but that someone with a fresh set of eyes might catch.
  4. The peer review tool we use at Twitter is Review Board. We use the same tool that our developers use for their code reviews on the rest of the code at Twitter. Using a system like this you’ll also get an easy place to look up the history of a change. We place the RB number in the commit message so if someone is looking through the commit log they can easily see what was added in each commit plus who shipped it (though this is supposed to already be in the commit message). Reviews aren’t hard to set up. - Make your changes. - Create a diff of your changes. - Create a new review request: upload your diff, add some people and groups as reviewers, maybe put in a JIRA ticket with the bug you’re fixing or the feature being added, and click publish. - People added as reviewers will get an email about the RB they need to check out. - Reviewers will review the code you’ve uploaded, make comments, and then when it looks good you’ll get a “Ship it”. - Once you have a “ship it”, push your changes into the branch you were moving your code into. - Profit.
  5. Have a style guide and make it useful. We had a style guide created by a group during a hackweek last year. PuppetLabs has a great style guide that you can use verbatim if it works for you. Or you can create an internal doc or wiki that takes what you like from the PuppetLabs style guide and adds your own tweaks to it. If you create a style guide, try to get as much buy-in as possible. If you use something like Review Board, try to get people to enforce the style guide (or at least the majority of it) before giving a “ship it”. Style guides only work if people are willing to work with them, but they can make your code much easier to read and work with for future engineers.
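For example, a file resource written to the PuppetLabs style guide (two-space indentation, quoted octal modes, aligned arrows) looks something like:

  file { '/etc/motd':
    ensure => file,
    owner  => 'root',
    group  => 'root',
    mode   => '0644',
    source => 'puppet:///modules/motd/motd',
  }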
  6. *NOTE*: This isn’t ownership with an iron fist that locks everyone else out! We have files in our source control systems that allow us to designate who owns certain modules. Just a simple list of usernames for the users who happen to know a set of modules. Owners simply know this code better than the majority of the people who might be working with it. These files can be either authoritative or informative. Where this can be helpful is if you’re making a change across a large set of modules. Let’s say you’re working on upgrading to Puppet 3 and you want to make sure that “source => puppet:///<module>/<filename>” is updated to “source => puppet:///modules/<module>/<filename>”.
  7. You might have one engineer that is working on an upgrade from Puppet 2.7 to Puppet 3.x. There are some formatting changes that need to happen for this, and one example is updating the source path for files served from the puppet master. In this example, the engineer updates the code, then sets up an RB using the OWNERS files for each module so that all of the known parties are notified. Another email could also be sent out about the RB stating that it’ll be left open for a set amount of time (a couple of days to a week) and then it will be pushed up and into production. This makes sure that you let everyone know about the change, whereas with just an email you might miss a list that contains a group, and then no one on that team sees the changes and something breaks for them on the merge.
  8. Of course all of this can be automated and it should be because it’ll allow for more and more of your engineers to follow the best practices.
  9. git review tools allow you to automatically create the diff of your changes and upload the changes into RB. You can also take the OWNERS files and automatically add them to the RB for you. puppet-lint can check the style of code and can be run by all of your users. You can even check the code that is being committed as a hook in your version control system. If it fails, then don’t commit and let the user know that they need to fix their style. If the style works then continue on. OWNERS can be read and uploaded with git review tools. If you want a more authoritative use of them, you can add a git hook that checks RB for ship its and makes sure that somebody in the OWNERS file for a module gave a ship it before it’s pushed to production systems. Script anything that you need to do if possible. Version control hooks can be great for this and help improve your code quality. You can also script a basic regex that checks for certain things in a commit message to make sure you get all the information you need.
  10. Figure out what you’re doing well and what isn’t working. If something isn’t working for you and your team, don’t keep doing it. Figure out what is not working, and either fix it or remove it. Don’t just do something because you “should”, do something because it’s what works. Also look at what’s missing from your workflow. For example, at Twitter we want to work on getting our puppet code tested. We have a side project that an engineer is working on that will work with rspec-puppet. We’re hoping that we can get some coverage on our code and make it even better. This can even be added as a CI job that runs before your code is committed, and then if it passes, your code is committed to the production branch.
  11. Because going directly to production is not always a smart idea. With our usage of SVN we don’t really have an easy way to branch a lot, so we’re currently using three branches. We’ve named them head (which is the master branch), puppet-testing and puppet-production. With this branch system and how we run the puppet agent (discussed later), we’re able to effectively work on our infrastructure without screwing everyone else up.
  12. Our head branch is where we do our development. This is the master branch of the repository so it’s the easiest branch to commit to. This branch can be rather volatile so we try not to leave hosts on this branch. With our puppet-util tool and ENC it’s easy to move hosts between branches. Normal workflow: - Make your changes. - Commit into head. - Put a machine on head in the role you’re working on and test out puppet. - If everything is good then continue on to pulling the changes into the puppet-testing and puppet-production branches. - Else go back to the first step.
  13. Our puppet-testing branch goes along with our canary system for deployments. This branch is used to allow for long-term testing in a production setting. We usually have a small subset of the hosts in a role set to this branch to make sure that any changes that happen can be tested and debugged across the fleet before making it to production and affecting all hosts. Allowing major changes to sit for days to weeks on a subset of systems really allows us to make sure that what we’re sending to production is working properly. This is usually done for large DNS changes, JVM updates, and kernel updates. Our kernel update process involves the OS team updating the kernel puppet module with the latest canary kernel, which is automatically installed on systems using this branch. Once the systems are rebooted they pick up the new changes. Usual workflow: - Changes made to head. - Find commit ID for head commit. - Use cherry-pick.sh utility to pull changes into the testing branch. - Get a ship it from somebody else who knows the code. - Commit the changes into the repo for the branch.
  14. Our puppet-testing branch goes along with our canary system for deployments. This branch is used to allow for long-term testing in a production setting. We usually have a small subset of the hosts in a role set to this branch to make sure that any changes that happen can be tested and debugged across the fleet before making it to production and affecting all hosts. Usual workflow: - Changes made to head. - Find commit ID for head commit. - Use cherry-pick.sh utility to pull changes into the production branch. - Get a ship it from somebody else who knows the code. - Commit the changes into the repo for the branch.
  15. Here is the basic workflow for our commits to our puppet repo. You start out working in head, where you develop, commit your changes and then test. If your changes are working as expected in your testing, you can begin promoting up into the other branches, or you just keep repeating the three steps until you have what you need. Cherry picking is done by a script that takes a commit ID and a puppet branch. The script updates all of the files in the commit that you’ve referenced. From this you use a special commit to create a Review Board entry. You get a “ship it” or two from a couple of other developers.
  16. This setup can be a lot for a new user to take in at first. But once it’s been done a few times it becomes quite easy. Sometimes you can have a few commits to head before you get it right, and if you don’t document those commits and get pulled into something else it can become difficult to find all of your changes that need to be cherry picked. Commits can get orphaned in head. We try to remedy this with an email that sends out diffs between head -> testing and testing -> production. Unfortunately, these emails aren’t usually acted upon, so these changes can sit in one or two branches but not all three, which can cause some confusion in the future.
  17. Since we’re using 3 different branches, we need a way for hosts to differentiate which code set they get. So, we run three puppetmasterd processes per host on 3 different ports. Each port is scaled differently depending on how much traffic it should expect to get. So the port hosting our production code will usually run about 26 passenger processes, while the canary and head branches, due to their lower traffic patterns, run about 4 passenger processes. We don’t run the puppet agent daemon but use a cronjob for clients. This tool, which will be discussed in depth later, uses round-robin DNS to determine which puppet master to access. Since we have multiple puppet masters, certificates can become an issue. So, we have autosigning turned on and when the puppet-util script accesses the puppet master it asks for a certificate no matter what.
  18. Audubon is an internal tool that was created for keeping inventory of hosts and roles at Twitter. At its most fundamental it’s a REST API that returns JSON blobs with information on a role, a host, an attribute, a fact or a group that you specify. It will also take wildcards, so if you’re not sure exactly what you’re looking for you can find it. It’s also all command line driven, which I find nice, but some might prefer a web GUI for this information as well. Audubon has three basic constructs: facts, attributes and groups. Since Audubon was being built to house all of our information on hosts and roles it became a perfect fit for our ENC.
  19. Facts have mainly been deprecated now; we have a script that refreshes the majority of this information upon boot of the host as an init script. This data really doesn’t change, so doing it upon reboot is enough and we don’t have to worry about grabbing the information during a puppet run.
  20. Attributes are the most used and useful of the three. Attributes are anything that describes a host or role, so they can include things such as ownership, access groups, information about services running on the host, as well as information about the host and its hardware. Attributes allow for settings to be made on either the server or role level. On the server level, this host is the only one that cares about this information and it’s unique to this host. We try to avoid these types of attributes for anything that describes the service, but they are useful for things like IP addresses, MAC addresses and other information like that. On the role level, this information is relevant across the entire role and can include information about the service used upon setup or instantiation. Attributes set on the role level are inherited, so as your role gets new machines you won’t need to update these attributes. Anything set as a server attribute though will need to be updated. There is also a hierarchy where server level attributes have priority over role level attributes and will be returned first. One example of a role level attribute is our kernel or jdk_version attributes. Since we have the canary setup described earlier, we can set server level attributes to use the canaried versions of these, but across the roles we use the kernel and JDK version that have been tested over the past month to 6 weeks.
  21. Groups can be used like attributes but they are strictly server level and are not inherited. So, you might want to use a group for something like cluster_name so that when a new host is added to the role it is almost completely set up, but does not join the rest until you tell it to, so you can watch it and make sure it’s working properly.
  22. Our ENC allows us to easily pull in values for whatever host the puppet agent is being run on. This allows us to easily make our manifests dynamic based on the information pulled from the ENC when doing the initial lookup for the role of the host.
  23. So both puppet masters and puppet clients use our ENC so that we have one central location for all the information that we need. This makes it convenient for finding the information you need about a host and what should be happening in a puppet manifest if something goes wrong.
  24. With our three branch setup, we can’t use the puppet daemon. So, we instead use a cronjob to run the puppet agent on a randomized minute every thirty minutes based on the hostname of the host. This helps us some with mitigating the possibility of a thundering herd upon the puppet masters. With our setup, there are a set of steps that each host must go through when contacting the puppet masters.
  25. The basic process for how puppet-util runs: First we check to make sure that we have loony. After that, get the branch that the host is using from the puppet_branch group in Audubon. If the branch name is head, canary, testing, prod or production then we’ll continue. If the branch name is off or something that we don’t recognize then we’ll kill the puppet run and let the user know. Take the branch name, and figure out which port to use against the puppet master. Get a puppet master from a well-known DNS name that is set up for round-robin. After getting the name, do some name-to-IP matching to make sure that we got back a good name. Next, use the name and the port to download the CA cert for the puppet master. If this is successful then we’ll continue on down the line. If this is unsuccessful then we’ll retry up to 5 times before quitting the puppet run. Finally, we’ll run the puppet agent command against the puppet master and port combo that we’ve found, download our catalog and apply it to the host (unless dryrun has been specified). This tool doesn’t only do the puppet runs, though. It’s also our general puppet tool for doing tasks such as setting the branch, turning off puppet runs, doing dry runs and even some basic checking for puppet manifests that are constantly updating.
  26. It’s a good idea to think of modules as a pyramid. You can subdivide the pyramid as much as you want, but for the most part three basic categories should suffice, and that’s basically what we do at Twitter. At the lowest level we have our base modules. The base modules are those that are imported by every system by default. Such things can include kernels, LDAP/Kerberos, YUM/APT repos, and any tools or packages that should be installed across the board. These are the files that make your hosts production ready; they just need the extras from the service to operate fully. Next you’ll tend to have team level modules. These can be modules that include a set of tools, packages or tasks that your team wants to apply across all of the hosts you run. This can include things such as a custom MOTD, cronjobs that check various aspects of the systems, etc. Finally we have service modules. These are the modules that set up the services on top of everything else that was defined in the lower two levels. These modules can include things such as setting up monit, creating the directory structure that your application is expecting, or setting up logrotated confs for your service. Really this is anything that needs to happen and is specific to your service.
  27. When you build a module for a system service, you gain the possibility to utilize it in more than one place. If you recreate your configs in a few different modules then you run the risk of missing something when a change occurs. Create abstract modules for things that most of your teams will use, such as logrotated, monit, and common release directories for the services you deploy (/usr/local/<service>, /var/log/<service>, /var/run/<service>, etc). If you have abstracted modules then everyone who is doing the same thing should be able to use your module to set up their services. E.g. the JVM module: We’re moving from our Ruby stack to a bunch of JVM services. Each service needs pretty much the same setup. Files created such as /usr/local/<service>, /var/log/<service>, /var/run/<parent>/<service>. Configs for supporting services such as monit, logrotated, JVM metrics. Creating a single module for this has allowed teams to have a common setup, so it’s easy for others to figure out what is going on if needed. The default settings for the module should give you a working service; if they need to tweak it, you should allow it.
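A minimal sketch of such an abstract module, written here as a hypothetical defined type (the name, parameters and helper defines are illustrative, not the actual Twitter module):

  define jvm_service ($owner = 'daemon') {
    # Standard directory layout every JVM service expects
    file { ["/usr/local/${name}", "/var/log/${name}", "/var/run/${name}"]:
      ensure => directory,
      owner  => $owner,
    }

    # Hook in the supporting services, assuming site-local defines exist for them
    logrotate::conf { $name: }
    monit::check    { $name: }
  }

A team would then get a working default setup with something like:

  jvm_service { 'search-frontend': }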
  28. At times developers are going to request software such as DBs to test out changes on. Make sure that it’s easy for them to set these up and work with them. For example, we have a great DBA team, but sometimes our devs want to try something in a DB in a non-production setting. So, our DBA team has created a module that will set up a MySQL DB in the same way they do it, on any machine you set the attribute on. This way, the developers can easily set up a DB to play with in less than 5 minutes and 1 puppet run, which makes them quite happy.
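A sketch of what that self-service hook might look like in a manifest (the attribute name and class are hypothetical):

  # Only set up MySQL on hosts where the team has opted in via the ENC
  if attribute('wants_mysql', 'false') == 'true' {
    class { 'dba::mysql':
      datadir => attribute('mysql_datadir', '/var/lib/mysql'),
    }
  }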
  29. Over the coming months, what are we looking to do with our puppet infrastructure? Our biggest goal will be to move towards Puppet 3.x. We have some updating of syntax and some canarying to do, but this has been in progress for a few months and is a Q1 goal for completion. Perhaps by PuppetConf I’ll have an update on this front and possibly a few new sections on what to look for in a puppet upgrade (but haven’t most people upgraded?). We also have the possibility of moving away from SVN and to git. Git is the version control system that we use for almost all of our other repos. We also usually don’t keep large files in our puppet repo, so git would be an interesting change. Using git could also allow us to more easily test out feature branches on systems without having to make multiple commits into HEAD like we do today.