5. Common Challenges
• Too much fire fighting
• Slow deployments
• Scripting & manual processes aren’t cutting it
• Difficult to keep up with demands from the business
Key Initiatives
• Deliver value to business faster, more reliably
• Meet compliance requirements
• Adopt DevOps practices
• Adopt new technology while supporting & sun-setting old
6. Our software automates the provisioning, configuration & ongoing management of your machines & the applications, services & software running on them.
7. Deploy code 30x more frequently
60x fewer failures
50% higher business growth
Source: 2015 State of DevOps Report. 5,000 respondents across 6 continents.
8. Automation Best Practices
Model & Enforce Desired State
Across The Lifecycle
Across All Technologies
From Core Infrastructure Through Applications
14. Where To Start With Automation
Start With Core Infrastructure & Work Up
Core infrastructure configurations: Operating System · NTP · DNS · SSH · Firewall · Users · Groups
Provisioning: Bare Metal · VMs · Cloud · Containers
Application infrastructure: SQL Server · Tomcat · WebSphere · IIS · MySQL
Application orchestration: Custom Apps · COTS · Shared Services
16. Puppet Labs: The leader in IT Automation
EXPERIENCE: Founded in 2005
SCALE: Over 10 million nodes managed
ECOSYSTEM: Deep partnerships with datacenter titans
CUSTOMERS: 1000+ enterprise customers
COMMUNITY: 3,700+ community-contributed modules
USERS: 30,000+ organizations use Puppet
BACKING
18. Next Steps
• Download and try Puppet Enterprise:
puppetlabs.com/download
• Contact Us:
sales@puppetlabs.com
Editor's Notes
Sales Presentation Deck – 4Q FY2016 – v7
Puppet Labs exists to reduce the timeline: from the moment you have new technology, to the moment it’s in the hands of your users delivering value. That new technology comes in a few different flavors. First, new applications. Maybe you’re deploying a new application that your team built, tested and is ready to deliver to the business. It might be a new application you bought from a vendor and you’re preparing to roll out to users. Second, new infrastructure. For example, maybe you’re deploying an OpenStack environment or spinning up a new greenfield project in AWS. Third, updates to existing services. Maybe you’re adding a new set of capabilities to an application the business already relies on. And lastly, configuration updates. Maybe there have been key configuration settings that have drifted from the state they should be in and you need to bring those systems back into compliance. In any case, we help reduce the timeline of getting that update out to your users, and help you do so with the reliability, predictability and repeatability you demand.
What’s interesting though, is that each customer has different timelines at play. When I talk to Wal-Mart, out of the 40+k nodes they manage with Puppet, about 11k of them are on SLES 11, and they are trying to move another 6k from SLES 10 to SLES 11. Puppet helps them reliably reduce that timeline. At a very different looking user like Spotify, the notion of SLES is nearly laughable. We talk to them about how they are managing a sophisticated containerized environment. But as different as the technologies are, the common thread is that both organizations are cycling out older technology and cycling in newer tech and updates – and Puppet helps them do that.
As we work with organizations to accelerate the delivery of value to the business, we see a common set of challenges and critical initiatives organizations use Puppet Enterprise to help address.
[read through key challenges and initiatives that you’ve discovered they are trying to address].
Does this list make sense? Any that shouldn’t be on the list for you? Any that stick out? Any that aren’t on it but should be? [Use this line of questioning to tease out team dynamics and concerns that you should be aware of as you pursue the deal.]
Our software helps you automate the configuration and ongoing management of your machines and the software running on them, so you spend less time fighting fires and more time deploying great software.
We help you make rapid, repeatable changes and automatically enforce the consistency of systems and devices, across physical and virtual machines, on prem or in the cloud.
Before we talk about HOW we do that, let’s spend some time talking about WHY any of it matters. In short, it’s because automating for speed and reliability delivers results.
This is from the most recent State of DevOps survey, the world’s largest, most comprehensive and longest-running DevOps survey. Over the years tens of thousands of people have responded; the latest round includes data from 5,000 respondents across 6 continents. There is a whole set of corresponding data we can dig into, but in summary, the high-performing teams (that is to say, the teams that automate for speed & repeatability) see significant gains compared to the organizations that don’t.
First off, high-performing IT teams that adhere to these DevOps practices deploy code 30x more frequently than their lower-performing counterparts. It’s one thing to move fast, but what’s pretty amazing is that these high-performing teams didn’t sacrifice reliability. In fact, they showed that as they deployed more frequently they were able to do so with 60x fewer failures than their lower-performing counterparts. We’ll go into some of the tech practices in a little bit that contribute to these gains. And finally, one of the correlations we saw is that organizations with high-performing IT teams grew more over a 3-year period, to the tune of 50% higher growth (and were 1.5x more profitable than their lower-performing counterparts).
The takeaway here is that automation and DevOps practices deliver results in driving down the time it takes to get technology to your users in a more reliable way. Let’s talk about what we think are some of the critical best practices to see these sorts of gains.
First, we think it’s critical to adopt automation technology that helps you model and enforce the desired state of the services you deliver.
Second, we think that you should automate those processes (among others) from your core infrastructure up through your applications, all in one place for full enforcement, traceability and auditability.
Third, we think you should automate across the entire lifecycle, from initial provisioning of infrastructure through decommissioning.
Finally, you should do this across everything. If it has an IP address, you should automate the management of it.
Let’s dig into each of these.
Key points. [The minimum points that a rep/SE should make. However, this is a good time to dig into details or have the SE lead a whiteboard discussion about our approach if you know this is an area of interest].
We take a declarative, model-driven approach: you focus on defining the desired state of infrastructure, services and apps rather than the programmatic steps it takes to get there.
Once you’ve modeled your infrastructure/apps, we make it possible to test your code to see what happens when you deploy that app update, etc.
We also automate the deployment of that desired state to the infrastructure, and continually enforce that your infrastructure matches your desired state. When it doesn’t we let you know so you can remediate ASAP.
This approach is model once, use everywhere. Once you model your infrastructure and your applications, you can deploy those changes to dev, to test, to staging, to production. It’s the same set of Puppet code that defines your desired state and we make that state so across your deployment tiers – there is no need to rewrite a new set of runbooks to programmatically account for all the differences across environments.
And all along the way, you get reports, so whether you want full traceability and insight through your environment or you need to meet audit requirements, you have the data you need about the state of your environments at your fingertips.
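[For SEs: a minimal illustrative sketch, not from the deck, of what a declarative desired-state model looks like in Puppet. The class name and config-file source are placeholders.]

```puppet
# Desired state for NTP: Puppet converges every node to this model on each run.
class profile::ntp {
  package { 'ntp':
    ensure => installed,
  }

  # Hypothetical config source; in practice this comes from a module or template.
  file { '/etc/ntp.conf':
    ensure  => file,
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    source  => 'puppet:///modules/profile/ntp.conf',
    require => Package['ntp'],
  }

  service { 'ntp':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/ntp.conf'],
  }
}
```

Note the absence of procedural steps: if the config file drifts, Puppet restores it and, via the `subscribe` relationship, restarts the service to pick up the corrected state.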
Key points. [The minimum points that a rep/SE should make. However, this is a good time to dig into details or have the SE lead a whiteboard discussion about our approach if you know this is an area of interest].
We think that the desired states you define should not be limited to just an application model, or to just the infrastructure layer like they are with other technologies. Rather, you should bring automation to your entire stack: from your core infrastructure up through your applications, all in one place for full enforcement, traceability and auditability.
This gives you one solution to model, test, deploy, enforce, remediate and audit.
Plus, our granular access control makes it easy for you to give the proper access to the right teams at each layer so given any application, the right teams have the appropriate authority to change just the portions of the stack that they control. And again, you get full traceability across this so you always know who did what.
Key points. [The minimum points that a rep/SE should make. However, this is a good time to dig into details or have the SE lead a whiteboard discussion about our approach if you know this is an area of interest].
Provisioning is too often a slow process filled with manual steps. You should automate more than just the configuration management of your infrastructure or the orchestration of your apps, and extend automation to go across the entire lifecycle: from initial provisioning of infrastructure through decommissioning.
Over the last few releases we added new provisioning capabilities, making it easy to provision:
Bare metal and the OSs and hypervisors on those servers
Virtualized environments like spinning up VMs with vSphere
Public cloud infrastructure in AWS and Azure
And Docker, both the Docker engine as well as Docker containers.
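[For SEs: an illustrative sketch of the Docker point, assuming one of the community Docker modules that provides a `docker` class and a `docker::run` defined type. Module, class and parameter names here are indicative, not exact.]

```puppet
# Illustrative only: install the Docker engine, then run a container,
# all declared as desired state rather than scripted steps.
include docker

docker::run { 'webapp':
  image => 'nginx',
  ports => ['80:80'],
}
```

The same declarative pattern applies to the other provisioning targets on the slide (bare metal, vSphere VMs, AWS and Azure resources), each via its own module.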
Key points. [The minimum points that a rep/SE should make. However, this is a good time to dig into details or have the SE lead a whiteboard discussion about our approach if you know this is an area of interest].
Finally, you should do this across all of your infrastructure.
If it has an IP address you should automate the management of it.
This is just a small set of the technology we support, but it gives you a sense of the different types of infrastructure we help manage.
From public and hybrid cloud services to Windows and Linux servers.
From virtualized environments to containers
From network switches to storage devices.
We think you should have one consistent and repeatable way to model, test, enforce, remediate and audit across your datacenters.
So where do you start?
Start with something straightforward: automating the configurations of your core infrastructure. Think of laying down OSs and configuring core services like NTP, DNS and SSH, firewall rules, and users and groups.
After that, move to application infrastructure. Databases, web servers, app servers.
Then bring automation to your provisioning practices. Whether it’s laying down OSs on bare metal or spinning up new AWS environments, automate provisioning of infrastructure.
And then put all the pieces together and automate application orchestration by modeling and deploying your applications and the services they use.
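[For SEs: a starter sketch for the first rung of that ladder — core configuration such as groups, users and the SSH service. Account and group names are placeholders.]

```puppet
# Starter manifest for core infrastructure: a group, a user and the SSH daemon.
group { 'ops':
  ensure => present,
}

user { 'deploy':
  ensure     => present,
  gid        => 'ops',
  managehome => true,
  shell      => '/bin/bash',
  require    => Group['ops'],
}

# Keep the SSH service enabled and running; Puppet restores it if it stops.
service { 'sshd':
  ensure => running,
  enable => true,
}
```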
Puppet deploys and manages desired configuration using a client-server architecture. The code which defines the desired state is deployed to a central Puppet master server. It is a sort of master blueprint from which the individual configuration of any server in your environment can be derived.
Every server or device under Puppet management runs the Puppet agent software, which continuously monitors and enforces desired state as defined centrally at the master. If the code describing the desired configuration changes on the master, each Puppet agent will automatically update or make changes to its node as necessary to ensure that the enforced configuration on managed systems stays in sync with the central definition.
In the infrastructure as code theme, updating the Puppet code deployed to the master has the effect of updating the configuration of your entire infrastructure.
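[For SEs: to make the client-server flow concrete, a sketch of how the central blueprint might classify nodes in the master’s `site.pp`. The profile class names are hypothetical; each agent requests a catalog compiled from these definitions and enforces it locally on a regular interval.]

```puppet
# site.pp on the Puppet master: the central blueprint.
# Hostnames matching each pattern receive the corresponding classes.
node /^web\d+/ {
  include profile::base
  include profile::webserver   # hypothetical profile classes
}

node /^db\d+/ {
  include profile::base
  include profile::database
}
```

An admin can also trigger an immediate enforcement run on any node with `puppet agent -t`, rather than waiting for the next scheduled run.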