1. CAPS: What’s best for deploying and managing OpenStack?
Chef vs. Ansible vs. Puppet vs. Salt
Animesh Singh
@AnimeshSingh
Daniel Krook
@DanielKrook
Paul Czarkowski
@PCzarkowski
2. Our goal is to help you make an informed decision about your configuration management tool
Why configuration management is critical for running OpenStack
What the four most popular configuration management projects are
To what degree each supports OpenStack deployments
How your role and organization influence the decision to adopt a particular tool
Where to go next when adopting the tool that’s right for you
3. We are not affiliated with any of the projects in this presentation
Animesh Singh
• Senior Software Engineer, Cloud and Open Source Technologies, IBM
• @AnimeshSingh
Paul Czarkowski
• Cloud Engineer at Blue Box, an IBM company
• @PCzarkowski
Daniel Krook
• Senior Software Engineer, Cloud and Open Source Technologies, IBM
• @DanielKrook
5. Configuration management is critical for running OpenStack
• Complexity: OpenStack is a large distributed cloud Infrastructure-as-a-Service system
• Change: OpenStack is an open source project with a rapid upgrade cycle
• Consistency: OpenStack clusters are often duplicated into multiple environments
• Compliance: OpenStack automation is critical for speed, reliability, and compliance
• Quality: OpenStack CM tools implement cloud management best practices
Any tool is better than no tool!
6. Your role and organization affect the decision to adopt a particular tool
The OpenStack operator is interested in stability, maintainability, and availability of large deployments.
The OpenStack innovator is interested in quick evaluations, standing up environments quickly, and evaluating new features such as containerization.
The OpenStack contributor is looking to quickly iterate on changes to a particular project.
10. Each tool has a strong community, clear mission, and scales well
Salt
• Motivation: Creators found existing solutions to be lacking, and wanted a very low latency, highly scalable remote execution and data collection framework
• Users: PayPal, Verizon, HP, Rackspace
• Enterprise offering: Yes
• License: Apache License v2
• GitHub activity: 1,041 contributors; 49,193 commits; 11 branches; 82 releases

Ansible
• Motivation: Disappointment that existing tools required an agent and made it difficult to accomplish tasks like rolling deployments
• Users: Blue Box, Red Hat
• Enterprise offering: Hosting/Consulting/Training
• License: GNU General Public License v3
• GitHub activity: 1,003 contributors; 13,527 commits; 33 branches; 57 releases

Puppet
• Motivation: Created “… out of fear and desperation, with the goal of producing better operations tools and changing how we manage systems”
• Users: PayPal, NYSE, ADP, Symantec, Sony
• Enterprise offering: Yes
• License: Apache License v2
• GitHub activity: 355 contributors; 19,595 commits; 9 branches; 291 releases

Chef
• Motivation: Began as an internal tool for Opscode, to build end-to-end server/deployment tools; soon its creators realized its broader use
• Users: Bloomberg, Ancestry.com, GE Capital, Digital Science, Nordstrom
• Enterprise offering: Yes
• License: Apache License v2
• GitHub activity: 369 contributors; 12,089 commits; 177 branches; 231 releases
12. Salt overview
A configuration management system, motivated by the idea of enabling high-speed communication
with large numbers of systems
Capable of maintaining remote nodes in defined states (for example, ensuring that specific packages
are installed and specific services are running)
Written in Python, Salt offers a push method and an SSH method of communicating with clients and
querying data on remote nodes.
Parallel execution of remote commands uses an AES-encrypted protocol.
The networking layer is built with the ZeroMQ distributed messaging library, using msgpack for
binary data serialization, enabling fast and light network traffic.
13. Salt characteristics and features
Highly scalable; vertical and horizontal scaling are easy as your needs change. For example, the
Syndic feature allows one master to manage multiple masters.
The Peer Interface allows minions to control other minions; an advantage for querying and
continuous code delivery.
The Reactor system resides on the event bus with the master, enabling reactions to downstream
events; useful in automatic code deployment.
15. Salt primary components
Salt Master – Controls Minions
Master Daemon – Runs tasks for the Master (authenticating minions, communicating with connected minions
and the 'salt' CLI)
Salt Client – Runs on the same machine as the Master; issues commands to the Master; users are able to
see results via the Client
Minion – Receives commands from the Master, runs jobs, and communicates results back to the master
Salt Modules – Collections of functions (patterns) which can be run from the Salt CLI
Halite – An optional web UI
16. Salt for OpenStack
Salt is gaining traction for OpenStack deployments.
No default standard formula for OpenStack deployment is provided, but the
community has quickly sprung up with various versions.
Salt OpenStack Formulas:
https://github.com/EntropyWorks/salt-openstack
https://github.com/CSSCorp/openstack-automation
https://github.com/cloudbase/salt-openstack
https://github.com/nmadhok/saltopenstack
https://github.com/Akilesh1597/salt-openstack
The ones which seem active recently are listed here:
https://github.com/cloudbase/salt-openstack
https://github.com/nmadhok/saltopenstack
17. Salt for OpenStack – Typical installation steps
Install salt-master on a machine to control the installation.
Install salt-minion on all machines that will host your OpenStack nodes.
Edit the salt-master OpenStack configuration file to provide information about the OpenStack Salt
formulas and pillar.
Configure Salt Grains, e.g. ‘ROLE’: controller, ‘ROLE’: network, ‘ROLE’: dashboard, etc.
Configure Salt Pillars with metadata that you want to store on the minion, e.g. credentials,
environment, networking, etc.
Configure Salt States for different roles to define end states for OpenStack controller, compute,
keystone etc.
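The states step above can be sketched as a minimal SLS file; the state ID, package and service names, and the target path here are illustrative placeholders for this sketch, not taken from any particular salt-openstack formula.

```shell
# Write a minimal, illustrative Salt state file. The /tmp path and the
# 'nova-api' package/service names are placeholders for this sketch.
mkdir -p /tmp/salt-demo
cat > /tmp/salt-demo/nova.sls <<'EOF'
nova-api:
  pkg.installed: []        # ensure the package is present
  service.running:         # ensure the service is up and enabled
    - enable: True
    - require:
      - pkg: nova-api
EOF
# On a real master, a state like this would be applied to minions
# matching a grain, e.g.:
#   salt -G 'ROLE:controller' state.sls nova
cat /tmp/salt-demo/nova.sls
```

Because states describe an end state rather than a command sequence, re-applying the same state is safe: Salt only installs or starts what is not already in the desired state.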
18. Salt for OpenStack – Typical installation steps
Establish connectivity between the salt-master and salt-minions. Configure each minion with the master's
address and an ID. The master identifies each minion by its ID, and the minion's key must then be
accepted by the master.
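Key acceptance is driven from the master with the salt-key utility; a typical sequence looks like the following (the minion ID is illustrative, and a running master with connected minions is assumed):

```shell
# List keys: accepted, rejected, and pending
salt-key -L
# Accept one pending minion key by its ID
salt-key -a compute01
# ...or accept all pending keys at once
salt-key -A
# Verify that the accepted minions respond
salt '*' test.ping
```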
Run commands to make the OpenStack parameters available and upload all of the custom state and
execution modules on the targeted minion(s).
Finally, run the installation, e.g.
sudo salt -C 'I@OpenStack:Cluster:dev_cluster' state.sls OpenStack.
Reference: https://github.com/nmadhok/saltopenstack, https://www.youtube.com/watch?v=vkB7vfeAv98&feature=youtu.be
19. Salt summary
Strengths
Greatest technical depth and degree of flexibility versus the other tools
Easier to start, install, deploy, and manage
Agent or agentless (via SSH)
Highly scalable architecture
Relatively easy to debug and solve problems
Python based language; this is a preference across the industry
Weaknesses
Documentation is challenging to understand at the introductory level.
OpenStack support is not mature, and there has not been enough community uptake.
The Web UI is newer and less complete than other tools’ interfaces in this space.
Support for non-Linux operating systems is weak.
21. Ansible overview
• A remote execution system used to orchestrate the execution of commands and query data, for the
purposes of orchestration and configuration management.
• Written in Python, Ansible performs tasks from YAML playbooks that are easy to read and write.
• Ansible offers multiple push methods; the primary and most commonly used is SSH-based.
• Ansible does not require an agent to be installed, but does expect SSH access and a Python
interpreter on systems that it manages.
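A minimal playbook gives a feel for the YAML format; the host group and the package managed here are illustrative placeholders, not taken from an OpenStack playbook.

```shell
# Write a small, illustrative Ansible playbook; 'webservers' and 'ntp'
# are placeholder names for this sketch.
mkdir -p /tmp/ansible-demo
cat > /tmp/ansible-demo/site.yml <<'EOF'
---
- hosts: webservers
  become: true
  tasks:
    - name: ensure ntp is installed
      apt:
        name: ntp
        state: present
    - name: ensure ntp is running and enabled
      service:
        name: ntp
        state: started
        enabled: true
EOF
# With Ansible installed, this runs over SSH against an inventory:
#   ansible-playbook -i hosts /tmp/ansible-demo/site.yml
cat /tmp/ansible-demo/site.yml
```

Tasks run top to bottom against every host in the group, which is what makes patterns like rolling deployments straightforward to express.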
24. Ansible primary components
Ansible – Python CLI and libraries
Playbooks – YAML files describing the series of tasks to be performed
Roles – Collections of Playbooks and Variables
Inventory – listing of servers and their group memberships
Tower – Paid ($$$) offering from Ansible providing enterprise features
25. Ansible for OpenStack Operators
• Popular in the operations community
– Ursula - https://github.com/blueboxgroup/ursula
– OSAD - https://github.com/openstack/openstack-ansible
– Kolla - https://github.com/openstack/kolla
– Ansible Galaxy - https://github.com/openstack-ansible-galaxy
26. Ursula
Open source
> 1,000 tasks to deploy and manage fully HA OpenStack cloud
Defcore certified for Juno
Install from source or BYO [giftwrap] packages.
Opinionated and curated, with a focus on stability and operability
Proven track record for in-place upgrades
Experimental support for Magnum and Nova-Docker
27. Ursula
Install both Ansible and OpenStack
with this one weird trick...
$ cd ~/development
$ git clone git@github.com:blueboxgroup/ursula.git
$ cd ursula
$ pip install -r requirements.txt
$ ursula --vagrant envs/example/allinone site.yml
28. Ansible for OpenStack Users
• OpenStack is a first class citizen in the Ansible module ecosystem
• Solid support for IAAS operations
• Uses native hooks into the “shade” library
• Orchestrate your cloud, instances, and applications with the same
tooling.
• https://github.com/ansible/ansible-modules-core/tree/devel/cloud/openstack
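As a sketch of those IaaS modules, here is a task using the os_server module (available in newer Ansible versions, backed by shade); the cloud, image, flavor, and key names are placeholders.

```shell
# Write an illustrative playbook that boots an instance with the
# os_server module; all names/values below are placeholders.
mkdir -p /tmp/ansible-demo
cat > /tmp/ansible-demo/boot.yml <<'EOF'
---
- hosts: localhost
  tasks:
    - name: boot an instance on an OpenStack cloud
      os_server:
        cloud: mycloud        # entry in clouds.yaml
        name: demo-instance
        image: ubuntu-14.04
        flavor: m1.small
        key_name: demo-key
EOF
cat /tmp/ansible-demo/boot.yml
```

The same playbook syntax thus drives both the cloud itself and the workloads on it, which is what "orchestrate your cloud, instances, and applications with the same tooling" means in practice.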
29. Ansible summary
• Strengths
– No central server
– Orchestration focus
– Very easy to get started
– Tasks are executed in the order written
– Easy to extend modules and create new ones
– Fairly easy to debug and diagnose issues.
– Python based, just like OpenStack.
• Weaknesses
– No central server
– CM features are secondary to orchestration features. (apt/yum vs. package)
– SSH based communications can be slow
– No agent, but requires Python, which is a problem on switches, CoreOS, etc.
– Effectively have to give remote root SSH access
– Different syntax across Playbooks, Templates, and Modules.
– JINJA2 :(
31. Puppet overview
An open source configuration management tool capable of automating system administration tasks
Deployed in a typical client/server fashion, in which clients periodically poll server for desired state,
and send back status reports to the server (master)
Works in a highly distributed fashion to quickly and efficiently provision, upgrade, and manage nodes
all throughout their lifecycle
Based on Ruby, with a custom DSL for writing manifests; utilizes ERB for templates.
32. Puppet overview
Fairly easy to add and remove nodes; Each cluster may also have multiple masters for HA /
Scalability reasons.
Tasks are idempotent and are executed only if a node state doesn’t match required configuration.
Resources are abstracted so users can ignore details such as command names, file formats and
locations, etc., making manifests OS agnostic.
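The resource abstraction described above can be sketched with a small manifest; the ntp package and service are placeholder examples for this sketch, not OpenStack-specific.

```shell
# Write an illustrative Puppet manifest; Puppet maps the abstract
# 'package' and 'service' resources to apt/yum/systemd/etc. per OS.
mkdir -p /tmp/puppet-demo
cat > /tmp/puppet-demo/ntp.pp <<'EOF'
package { 'ntp':
  ensure => installed,
}
service { 'ntp':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],   # start only after the package exists
}
EOF
# With Puppet installed, this can be applied locally:
#   puppet apply /tmp/puppet-demo/ntp.pp
cat /tmp/puppet-demo/ntp.pp
```

Because resources are declarative, applying the manifest repeatedly is idempotent: Puppet acts only when a node's state differs from the declared one.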
34. Puppet primary components
• Puppet Master – Receives queries and status reports from the Puppet Agents; provides commands to
Puppet Agents
• Puppet Agents – Query the Puppet Master; run Master commands as needed and report results back to
the master
• Reporting/Analytics – Visibility into Puppet agents, including configuration logs and metrics on timing,
resources, & changes
• Puppet Forge – Community-maintained modules, including approved Puppet modules
• Puppet DB – Holds information about every node within the infrastructure
35. Puppet for OpenStack
• There are currently multiple Puppet modules for nearly every OpenStack
component, available at
– https://forge.puppetlabs.com/modules?q=stackforge
– https://wiki.openstack.org/wiki/Puppet
• Can be deployed as a single node deployment or in an HA fashion
• Single node deployment is relatively simple
– https://wiki.openstack.org/wiki/Puppet/Deploy
36. Puppet for OpenStack – Typical installation steps
• Install puppet master on server and set up appropriate certs.
• Install/Configure puppet agent on servers to be managed by puppet
• Register the agent with the master
• Download or create manifests/modules to manage puppet agents based on role
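The registration step is typically a certificate exchange; a common sequence looks like the following (the hostnames are illustrative, and a live master/agent pair is assumed):

```shell
# On the managed node: contact the master once, which submits a
# certificate signing request (the server name is a placeholder)
puppet agent --test --server puppet.example.com
# On the master: list pending requests and sign the node's certificate
puppet cert list
puppet cert sign node01.example.com
# Subsequent agent runs fetch and apply the catalog for the node's role
puppet agent --test
```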
37. Puppet summary
Strengths:
– Automation of compliance across environment; high value to enterprise
– Native capabilities (like iptables) to work with shell-level constructs are more robust, leading to greater flexibility versus
competitor solutions like Chef.
– Web UI & Reporting Tools
Weaknesses:
– Steep learning curve for new users
– Can be difficult to scale
– Certificate management can be difficult, especially with multiple masters
39. Chef overview
• A systems and cloud infrastructure automation framework for installing software and applications to
bare metal, virtual machine, and container clouds.
• Configuration is in a Ruby DSL, formed around concepts of organizations, environments,
cookbooks, recipes, and resources – all driven by supplied or derived attributes.
• A logical Chef workstation is used to control the deployment of configurations from the Chef server
to Chef managed nodes. Nodes are bootstrapped with agents and pull configurations from the server.
• Chef the company provides a set of value-add Software-as-a-Service offerings to handle analytics and
hybrid delivery models.
40. Chef characteristics and features
• Developed in Erlang to provide scale to tens of thousands of servers. By default, a Chef node
contacts the server for configuration updates every 30 minutes, and while “converging” to the
required state it takes on the processing itself (pulling binaries, executing recipe logic).
• Designed around an infrastructure-as-code model with version control integral to the workstation
configuration setup, with a simple Ruby DSL, enabling advanced configuration logic and appealing to
developers.
• Key focus on being idempotent, predictable, and deterministic system configurations. That is,
directives are run top to bottom, and emphasis is on writing cookbooks that can be run 1 or 100 times
and achieve the same result.
• Recipes are highly dynamic, as the Ruby DSL contains logic driven by supplied attributes at 4 levels
of scope, real time node information from ohai, and existing state of installed software.
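Attribute-driven recipe logic can be sketched as follows; the recipe file, package, and attribute names are placeholders for this sketch, not taken from the OpenStack cookbooks.

```shell
# Write an illustrative Chef recipe; recipes are plain Ruby, so node
# attributes (from roles, environments, or ohai) can drive the logic.
mkdir -p /tmp/chef-demo
cat > /tmp/chef-demo/default.rb <<'EOF'
package 'ntp' do
  action :install
end

service 'ntp' do
  action [:enable, :start]
end

# Branch on a (placeholder) node attribute, as set by a role/environment
if node['demo'] && node['demo']['manage_config']
  template '/etc/ntp.conf' do
    source 'ntp.conf.erb'
    notifies :restart, 'service[ntp]'
  end
end
EOF
cat /tmp/chef-demo/default.rb
```

Because each resource declares an end state, running this recipe once or a hundred times converges the node to the same result.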
41. Chef primary components
• Workstation – Admin creates and
tests cookbooks to upload to Chef
Server
• Server – Hub for state, cookbooks,
and configuration from workstations,
controls Clients
• Client – Polls the server for state
changes (run lists) from the Server,
runs the job and communicates
results back.
• Analytics – Visibility into Chef
servers, changes, and compliance.
Real-time visibility into action logs;
can integrate with HipChat, allowing
collaboration and notification to
stakeholders and tools.
• Supermarket – Community
authored and maintained
cookbooks.
42. Chef architecture
[Diagram: the Chef Workstation and Chef Server connect to Chef Agents over SSH; agents report their capabilities and current state back to the Server.]
43. Chef for OpenStack
• The main OpenStack Chef resource is the wiki:
– https://wiki.openstack.org/wiki/Chef
• The Chef cookbooks for OpenStack are stable and maintained with branches
for each release, along with a separate repository for each cookbook:
– https://launchpad.net/openstack-chef
• Highly available configurations aren’t well documented, but there are options for
Vagrant, all-in-one, and single-controller roles that provide a foundation, with
instructions on how to extend those to bare metal.
– https://github.com/openstack/openstack-chef-repo
44. Chef for OpenStack – Typical installation steps
• Install and configure the open source Chef Server
• Install and configure Chef Workstation using the ChefDK (can be on the same machine
as server)
• Download and install the OpenStack cookbooks from GitHub, configure environments,
roles, and runlists for each target node.
• Bootstrap the nodes from the workstation by providing the IP address for SSH along with
roles and/or runlists.
• Alternatively, instead of the previous two steps, use chef-provisioning to manage
clusters of machines in parallel.
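The bootstrap step is typically a single knife command from the workstation; the address, SSH user, node name, and run list below are illustrative placeholders:

```shell
# Install the Chef client on a node over SSH and assign its run list
# (IP, SSH user, node name, and role are placeholders)
knife bootstrap 192.0.2.10 \
  --ssh-user ubuntu \
  --sudo \
  --node-name controller01 \
  --run-list 'role[controller]'
```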
45. Chef summary
• Strengths
– Strong incumbent with large community of cookbooks and development tools
– Excels at management of operating systems and middleware
– Strong business partner network
– Ability to handle physical, virtual, and container infrastructure in public and private deployments
– Provides an ecosystem of hosted services, including hosted Chef server and analytics
• Weaknesses
– Most complex to set up and requires understanding Ruby
– Documentation is fragmented given the long history of versions
– Containers are supported, but still sees infrastructure more as pets than cattle
– Requires an agent to be installed and pull configuration on a specified schedule
47. Summary matrix by tool and role
Operator
• Salt: Not as mature as the other options for production OpenStack deployments.
• Ansible: Ursula/OSAD are the most straightforward and consistent approach to installing OpenStack.
• Puppet: Oldest method to deploy OpenStack; managed through the community process in the Big Tent.
• Chef: Mature support for OpenStack; managed through the community process in the Big Tent.

Innovator
• Salt: Gaining in market share and easy to set up, but not effective at absorbing the upstream changes.
• Ansible: Lowest barrier to entry; fastest growing community.
• Puppet: Fairly difficult to set up; skills not as transferable to other cloud projects.
• Chef: Most difficult to set up, given the additional workstation components; documentation from older versions conflicts with the new.

Contributor
• Salt: Not integrated with the OpenStack development process (i.e., not a Big Tent project).
• Ansible: In the OpenStack Big Tent.
• Puppet: In the OpenStack Big Tent.
• Chef: In the OpenStack Big Tent.
48. Our goal was to help you make an informed decision about your configuration management tool
Configuration management is critical for running OpenStack
There are four mature, popular, and powerful configuration management options
However, each has a different degree of support for OpenStack deployments
Your role and organization culture influence your tool selection decision
The following page provides a set of other OpenStack Summit sessions to follow
49. Where to go from here
OpenStack Summit sessions
Automated OpenStack Deployment: A Comparison
This won’t hurt a bit… Best practices for TDD Ansible and OpenStack deployment
10 minutes to OpenStack using SaltStack!
NTT Communications - Automate Deployment & Benchmark for Your OpenStack With Chef, Cobbler and Rally
What's Cooking? Deployment Using the OpenStack Chef Cookbooks
Automated Installation and Configuration of Networking and Compute: A Complete OpenStack Deployment in Minutes
Ansible Collaboration Day: Ansible + OpenStack — State of the Universe
Other comparisons
Taste Test: Puppet, Chef, SaltStack, Ansible bit.ly/p-c-s-a
Review: Puppet vs. Chef vs. Ansible vs. Salt bit.ly/iw-caps
Puppet vs. Chef Revisited bit.ly/sr-pc