The document discusses the state of the Puppet community. It defines community to include all people working on Puppet projects, whether as employees or volunteers. It outlines the various components that make up the Puppet community, including mailing lists, IRC, Puppet Camps, user groups, Puppet Forge, and metrics for measuring community growth. It also discusses plans to improve open source contributions, the Forge, and making it easier for more people to contribute to Puppet projects.
Presentation of the first product by start-up Circles 23, Find Your Circles: a proof of concept of what Social Network Analysis can contribute to online profile organizations.
This document summarizes Gordon Rowell's talk about Puppet deployment at Google. Puppet is used to manage internal laptops, desktops, and servers but not customer-facing infrastructure. It manages "lots" of Mac/Ubuntu nodes and "tens" of Puppet servers deployed across globally distributed virtual IPs. Scaling Puppet at Google involves deploying redundant server clusters, with Anycast routing clients to the nearest cluster. Load-balancing challenges include ensuring enough capacity and rerouting if clusters fail. Thundering herds of nodes checking in simultaneously, and releases of new OS and add-on versions on different tracks, also create challenges for Puppet.
Puppet Enterprise is an automation platform that allows organizations to define their infrastructure as code and automatically enforce configurations across their environments. The document demonstrates how Puppet defines infrastructure using a common language and automates configuration through a workflow of defining the desired state, simulating changes, and enforcing configurations on nodes. Benefits shown include significant increases in deployment speed, reductions in outages and in time to apply security fixes, and improvements in successful audits. Puppet provides unique capabilities like abstraction and works across datacenters, clouds, and containers at enterprise scale.
The Puppet Community: Current State and Future Plans - Puppet
This session will start with a look at the community today. I will use our community metrics to take a look at all kinds of data about pull requests, bugs, mailing lists, IRC and more. In addition to the numbers, I'll also talk about some of our top contributors. We also have much to do to make the community better. I'll be presenting my plans for improvements that we'll be making to the Puppet community.
Dawn Foster
Community Lead, Puppet Labs
Dawn Foster is the Community Lead for the Puppet Community at Puppet Labs. She has more than 18 years of experience in business and technology with expertise in community building, community management, open source software, market research, RSS and more. She is passionate about bringing people together through a combination of online communities and real-world events. She has experience building new communities and managing existing ones, with a particular emphasis on developer and open source communities. Past jobs include work at Intel and Jive Software, among others.
The Puppet Community: Current State and Future Plans - Dawn Foster
This session starts with a look at the Puppet community today. I use our community metrics to take a look at all kinds of data about pull requests, bugs, mailing lists, IRC and more. In addition to the numbers, I'll also talk about some of our top contributors. We also have much to do to make the community better, so we'll talk about some plans for improvements that we'll be making to the Puppet community.
Science in the Open - Science Commons Pacific Northwest - Cameron Neylon
Slides from a talk given at the Science Commons Symposium Pacific Northwest. Includes new material on the Panton Principles and simple user interfaces for scientists.
This document summarizes research on online collaboration within the biodiversity research community. It discusses the challenges of cataloging Earth's species and how an online platform called Scratchpads aims to help researchers overcome barriers to collaboration. Analysis of one Scratchpad site called Livingcreatures.org found that while some researchers co-authored papers, the overlap in their co-author networks was limited. The research aims to better understand how working online impacts scientific practice and knowledge sharing.
The document provides information about Open Hack EU 3, a 34-hour hackathon event. It outlines the objectives of the event which are to learn what a hacker and open hack are, what will occur over the next 34 hours, and how to get the most out of the experience. It details the schedule of talks to be given, available resources like APIs and data, judging criteria, and speakers. It provides tips for participants such as forming a skilled team, practicing their pitch, finding subject matter experts, taking breaks, networking, and having fun.
The best thing about open source projects is that you have all of your community data in the public at your fingertips. You just need to know how to gather the data about your open source community so that you can hack it all together to get something interesting that you can really use. We’ll start with some general guidance for coming up with a set of metrics that makes sense for your project. The focus of the session will be on tips and techniques for collecting metrics from tools commonly used by open source projects: Bugzilla, MediaWiki, Mailman, IRC and more. It will include both general approaches and technical details about using various data collection tools, like mlstats. The final section of the presentation will talk about techniques for sharing this data with your community and highlighting contributions from key community members. For anyone who loves playing with data as much as I do, metrics can be a fun way to see what your community members are really doing in your open source project.
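As a rough sketch of the kind of collection this session describes, the snippet below counts posts per sender in a Mailman-style mbox archive using only Python's standard library (mlstats does this at much larger scale, with a database behind it). The sample messages and file handling here are invented for illustration.

```python
# Minimal sketch: count posts per sender in a Mailman-style mbox
# archive, one of the community-metrics data sources the talk covers.
# The sample archive below is hypothetical.
import mailbox
import tempfile
from collections import Counter

SAMPLE_MBOX = """\
From alice@example.org Mon Jan  7 10:00:00 2013
From: alice@example.org
Subject: [puppet-users] Question about modules

How do I structure a module?

From bob@example.org Mon Jan  7 11:00:00 2013
From: bob@example.org
Subject: Re: [puppet-users] Question about modules

Like this.

From alice@example.org Mon Jan  7 12:00:00 2013
From: alice@example.org
Subject: Re: [puppet-users] Question about modules

Thanks!
"""

def posts_per_sender(mbox_path):
    """Return a Counter mapping From: address to message count."""
    counts = Counter()
    for msg in mailbox.mbox(mbox_path):
        counts[msg.get("From", "unknown")] += 1
    return counts

# Write the sample archive to a temporary file and tally it.
with tempfile.NamedTemporaryFile("w", suffix=".mbox", delete=False) as f:
    f.write(SAMPLE_MBOX)
    path = f.name

counts = posts_per_sender(path)
print(counts.most_common())
```

The same tally-by-author shape applies to the other sources the session covers (Bugzilla reporters, wiki editors, IRC nicks); only the parsing step changes.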
Working Together on the Web, Working Well? Innovation of a Research Work Envi... - Vince Smith
Duin D, Smith VS, Rycroft S, Brake I, Roberts D & van den Besselaar P. Working Together on the Web, Working Well? Innovation of a Research Work Environment. Atlanta Conference on Science and Innovation Policy 2011, Georgia Tech Global Learning Center, Atlanta, Georgia, USA, 15-17 September 2011.
The document discusses the fundamental changes happening in scholarly communication due to the digital environment. It highlights the disruption of traditional systems like peer review, archiving, and rewarding. It emphasizes understanding different audiences like machines and humans. It provides advice on basic steps to engage with the new digital ecosystem through tools like analytics, identifiers, and new channels. It also recommends areas to monitor like open access initiatives and experimental journals.
Here are some key people and organizations in the sports technology space that could be worth connecting with to build your network:
- Sportscasters and sports journalists who cover emerging tech trends (follow on Twitter, connect on LinkedIn)
- Executives at sports tech startups and larger sports/media companies doing innovative work (NBA Digital, DAZN, Second Spectrum, etc.)
- Investors and venture capital firms focused on sports/entertainment tech (connect to learn about opportunities and stay top of mind)
- Industry events like SXSW Sports, Sports Innovation Lab, and Sports Tech conferences (attend and meet attendees)
- Academics and researchers developing new sports tech (universities with sports
This document discusses a presentation about using data science to predict the Oscars. The presentation was given by David Samuel, a data scientist and engineer, and Justin Ezor, an LA community manager at Thinkful. The presentation walked through code in a Jupyter notebook to predict the 2017 best picture winner at the Oscars using data science techniques. It also advertised a free two-week data science course from Thinkful.
Confluence at NASA: Where No Wiki Has Gone Before - Atlassian Summit 2010 - Atlassian
The document discusses using a wiki to organize information for a preliminary design review (PDR) of a spacecraft called Ares. Over 75 documents and a diverse review team necessitated coordinating logistics. As space administrators rather than system administrators, the author and information manager created and trained users on the wiki to facilitate the review. Surveys showed the wiki was overwhelmingly successful in organizing the large amount of information and coordination needed for the complex review.
The Seven Wastes of Software Development - Matt Stine
This document summarizes Matt Stine's presentation on the seven wastes of software development based on lean manufacturing principles. The seven wastes are: partially done work, extra processes, extra features, handoffs, delays, task switching, and defects. Stine provides examples of each waste and solutions to eliminate them, such as limiting work in progress, continuous integration, avoiding handoffs, minimizing task switching, and early defect detection. The goal is to reduce non-value adding activities and continuously improve productivity and quality.
This deck was part of the FailChat in San Francisco, Wed May 9, 6:30-9pm at Startup HQ.
“User Experience. We’ll worry about that once we have a product.” ~ Entrepreneur
NO, DON’T DO IT! Don’t wait! Your user experience work starts day one and helps make your product great through every stage of your company’s development. UX answers: Who is your user? What do they struggle with? How do you know if you're making a solution that works?
User Experience designer and entrepreneur Kate Rutter of LUXr shares her mistakes, as well as how experimentation, failure, learning, trying again and learning more helped her improve and win. You'll get the whole inside scoop on lessons learned along the way.
The document discusses failing fast and learning from failures in product development. It provides examples of products that failed after investing significant resources and identifies what went wrong. The key messages are that thinking big, planning big, and designing big can lead to big failures from which big lessons can be learned. Failures are reframed as opportunities to learn and improve. Successful products are also highlighted that achieved results with fewer resources by failing quickly and learning faster.
Engaging With The Puppet Community: From Noob to Guru* in Under a Year - Puppet
Puppet has a very active and very broad community. Learn about the various aspects and channels of this community, and how one member leveraged it to transform himself from a noob into enough of a guru to be "the Puppet Guy" at work and to be a consistent top contributor in multiple community channels. With the variety of community channels available, newcomers can easily get started with Puppet, and even those who have been using it a while can learn by teaching.
Lee Lowder
Support Engineer, Puppet Labs
Lee is currently a Support Engineer at Puppet Labs, where he troubleshoots and resolves issues for Puppet Enterprise customers. Before that, he was very active in the Puppet community and used Puppet extensively at his previous job. While his educational background is in accounting, specifically operational audit, his professional career has consisted of technical support, retail sales management and systems administration. The core goal of operational audit is to improve effectiveness and efficiency, and this is the philosophy that drives him. He currently resides in Springfield, MO. "Automate All the Things!"
Unlocking doors: recent initiatives in open and linked data at National Libra... - Gill Hamilton
Presentation given to the "Data publication and linked data in the humanities" workshop at the National Library of Wales, 12 November 2012. This presentation has developed from previous versions, as it explains how and why the Library modelled its database structure into RDF rather than using pre-existing schemas.
Unlocking doors: recent initiatives in open and linked data at National Libra... - Gill Hamilton
Presentation given on 21 September 2012 at the Cataloguing and Index Group (Scotland) seminar on "Opening Library Linked Data to National Heritage: Perspectives on International Practice" http://www.slainte.org.uk/events/EvntShow.cfm?uEventID=2999
How to design a social computing system that people want to use - Kurt Luther
Guest lecture by Kurt Luther in Prof. Leysia Palen's "Social Computing" course, Department of Computer Science, University of Colorado, Boulder, January 2014.
Open Source Community Metrics for FOSDEM - Dawn Foster
Presented in the Community DevRoom at FOSDEM 2013. A longer version of this presentation is available at http://fastwonderblog.com/2012/11/05/open-source-community-metrics-linuxcon-barcelona/
Immersive Recommendation incorporates cross-platform and diverse personal digital traces into recommendations. Our context-aware topic modeling algorithm systematically profiles users' interests based on their traces from different contexts, and our hybrid recommendation algorithm makes high-quality recommendations by fusing users' personal profiles, item profiles, and existing ratings. The proposed model showed significant improvement over the state-of-the-art algorithms, suggesting the value of using this new user-centric recommendation model to improve recommendation quality, including in cold-start situations.
Our #TopicTaster presentation at CIPD Conference 2012. Using Online Communities to Drive Engagement & Collaboration with the DPG Community as the backdrop for building the knowledge and capability to successfully use social technologies within organisations. What are the behaviours required by L&D and HR to become more social?
The Puppet Community: Current State and Future Plans - PuppetConf 2014 - Puppet
The document discusses ways to participate in the Puppet community both online and offline. It outlines how community members can contribute code, documentation, ask questions, and participate in events. The Puppet community recognizes top contributors through awards like Most Valuable Puppeteer. The community aims to be inclusive and help each other through online forums, meetups, conferences and more.
Puppet camp2021 testing modules and controlrepo - Puppet
This document discusses testing Puppet code when using modules versus a control repository. It recommends starting with simple syntax and unit tests using PDK or rspec-puppet for modules, and using OnceOver for testing control repositories, as it is specially designed for this purpose. OnceOver allows defining classes, nodes, and a test matrix to run syntax, unit, and acceptance tests across different configurations. Moving from simple to more complex testing approaches like acceptance tests is suggested. PDK and OnceOver both have limitations for testing across operating systems that may require customizing spec tests. Infrastructure for running acceptance tests in VMs or containers is also discussed.
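To make the control-repo approach above concrete: onceover is driven by a config file that lists the classes and node images to test and a matrix pairing them with test types. The sketch below follows the general shape of onceover's spec/onceover.yaml; the role and node names are hypothetical, so check the onceover README for the exact schema.

```yaml
# Hypothetical onceover config sketch (spec/onceover.yaml).
# Role and node names are invented for illustration.
classes:
  - 'role::webserver'
  - 'role::database'

nodes:
  - CentOS-7-x86_64
  - Ubuntu-20.04-x86_64

test_matrix:
  - all_nodes:
      classes: 'all_classes'
      tests: 'spec'        # fast syntax/unit tests for every class-node pair
  - CentOS-7-x86_64:
      classes: 'role::webserver'
      tests: 'acceptance'  # slower tests run in a VM or container
```

This matches the talk's suggested progression: run the cheap spec tests across everything, and reserve acceptance tests for the combinations where OS-level behavior actually matters.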
This document appears to be for a PuppetCamp 2021 presentation by Corey Osman of NWOPS, LLC. It includes information about Corey Osman and NWOPS, as well as sections on efficient development, presentation content, demo main points, Git strategies including single branch and environment branch strategies, and workflow improvements. Contact information is provided at the bottom.
Similar to State of the Puppet Community (Jan 2013)
The document discusses operational verification and how Puppet is working on a new module to provide more confidence in infrastructure health. It introduces the concept of adding check resources to catalogs to validate configurations and service health directly during Puppet runs. Examples are provided of how this could detect issues earlier than current methods. Next steps outlined include integrating checks into more resource types, fixing reporting, integrating into modules, and gathering feedback. This allows testing and monitoring to converge by embedding checks within configurations.
This document provides tips and tricks for using Puppet with VS Code, including links to settings examples and recommended extensions to install like Gitlens, Remote Development Pack, Puppet Extension, Ruby, YAML Extension, and PowerShell Extension. It also mentions there will be a demo.
- The document discusses various patterns and techniques the author has found useful when working with Puppet modules over 10+ years, including some that may be considered unorthodox or anti-patterns by some.
- Key topics covered include optimization of reusable modules, custom data types, Bolt tasks and plans, external facts, Hiera classification, ensuring resources for presence/absence, application abstraction with Tiny Puppet, and class-based noop management.
- The author argues that some established patterns like roles and profiles can evolve to be more flexible, and that running production nodes in noop mode with controls may be preferable to fully enforcing on all nodes.
Applying Roles and Profiles method to compliance code (Puppet)
This document discusses adapting the roles and profiles design pattern to writing compliance code in Puppet modules. It begins by noting the challenges of writing compliance code, such as it touching many parts of nodes and leading to sprawling code. It then provides an overview of the roles and profiles pattern, which uses simple "front-end" roles/interfaces and more complex "back-end" profiles/implementations. The rest of the document discusses how to apply this pattern when authoring Puppet modules for compliance - including creating interface and implementation classes, using Hiera for configuration, and tools for reducing boilerplate code. It aims to provide a maintainable structure and simplify adapting to new compliance frameworks or requirements.
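A minimal sketch of the interface/implementation split as applied to compliance code follows. Class and parameter names are illustrative; the `sshd_config` resource is in the style of the augeasproviders_ssh module:

```puppet
# "Front-end" interface class: the only class node classification touches.
# Hiera supplies the toggles, keeping classification data-driven.
class compliance (
  Boolean $manage_ssh    = true,
  Boolean $manage_auditd = true,
) {
  if $manage_ssh    { include compliance::ssh }
  if $manage_auditd { include compliance::auditd }
}

# "Back-end" implementation class: the detailed, sprawling resources
# for one area of the benchmark live here, out of the interface's way.
class compliance::ssh {
  sshd_config { 'PermitRootLogin':
    ensure => present,
    value  => 'no',
  }
}
```

Swapping compliance frameworks then means swapping back-end classes while the front-end interface stays stable.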
This document discusses Kinney Group's Puppet compliance framework for automating STIG compliance and reporting. It notes that customers often implement compliance Puppet code poorly or lack appropriate Puppet knowledge. The framework aims to standardize compliance modules that are data-driven and customizable. It addresses challenges like conflicting modules and keeping compliance current after implementation. The framework generates automated STIG checklists and plans future integration with Puppet Enterprise and Splunk for continued compliance reporting. Kinney Group cites practical experience implementing the framework for various military and government customers.
Enforce compliance policy with model-driven automation (Puppet)
This document discusses model-driven automation for enforcing compliance. It begins with an overview of compliance benchmarks and the CIS benchmarks. It then discusses implementing benchmarks, common challenges around configuration drift and lack of visibility, and how to define compliance policy as code. The key points are that automation is essential for compliance at scale; a model-driven approach defines how a system should be configured and uses desired-state enforcement to keep systems compliant; and defining compliance policy as code, managing it with source control, and automating it with CI/CD helps achieve continuous compliance.
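In the desired-state sense, "compliance policy as code" reduces to ordinary Puppet resources kept in source control and rolled out through CI/CD. The specific control below is an illustrative CIS-style example, not taken from the document:

```puppet
# CIS-style control: /etc/passwd must be owned root:root with mode 0644.
# Puppet converges the file back to this state on every run, so drift
# is corrected automatically rather than merely detected.
file { '/etc/passwd':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0644',
}
```

Each run re-asserts the model, which is what makes continuous compliance possible at scale: the same code is the policy, the remediation, and the audit evidence.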
This document discusses how organizations can move from a reactive approach to compliance to a proactive approach using automation. It notes that over 50% of CIOs cite security and compliance as a barrier to IT modernization. Puppet offers an end-to-end compliance solution that allows organizations to automatically eliminate configuration drift, enforce compliance at scale across operating systems and environments, and define policy as code. The solution helps organizations improve compliance from 50% to over 90% compliant. The document argues that taking a proactive automation approach to compliance can turn it into a competitive advantage by improving speed and innovation.
Automating IT management with Puppet + ServiceNow (Puppet)
As the leading IT Service Management and IT Operations Management platform in the marketplace, ServiceNow is used by many organizations to address everything from self service IT requests to Change, Incident and Problem Management. The strength of the platform is in the workflows and processes that are built around the shared data model, represented in the CMDB. This provides the ‘single source of truth’ for the organization.
Puppet Enterprise is a leading automation platform focused on the IT Configuration Management and Compliance space. Puppet Enterprise has a unique perspective on the state of systems being managed, constantly being updated and kept accurate as part of the regular Puppet operation. Puppet Enterprise is the automation engine ensuring that the environment stays consistent and in compliance.
In this webinar, we will explore how to maximize the value of both solutions, with Puppet Enterprise automating the actions required to drive a change, and ServiceNow governing the process around that change, from definition to approval. We will introduce and demonstrate several published integration points between the two solutions, in the areas of Self-Service Infrastructure, Enriched Change Management and Automated Incident Registration.
This document promotes Puppet as a tool for hardening Windows environments. It states that Puppet can be used to harden Windows with one line of code, detect drift from desired configurations, report on missing or changing requirements, reverse engineer existing configurations, secure IIS, and export configurations to the cloud. Benefits of Puppet mentioned include hardening Windows environments, finding drift for investigation, easily passing audits, compliance reporting, easy exceptions, and exporting configurations. It also directs users to Puppet Forge modules for securing Windows and IIS.
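The "one line of code" claim refers to including a hardening module from the Forge; the `secure_windows` class name below is a placeholder for whichever hardening module is chosen. The `registry_value` example uses the real puppetlabs-registry type to show how an individual control is expressed:

```puppet
# "One line" hardening: pull a Forge hardening baseline onto Windows nodes.
# (Class name is a placeholder; see the Forge modules the document points to.)
include secure_windows

# Individual controls are ordinary resources, e.g. a registry setting
# managed with the puppetlabs-registry module. Any drift from this value
# shows up in Puppet reports for investigation and audit evidence.
registry_value { 'HKLM\System\CurrentControlSet\Control\Lsa\LimitBlankPasswordUse':
  ensure => present,
  type   => dword,
  data   => 1,
}
```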
Simplified Patch Management with Puppet - Oct. 2020 (Puppet)
Does your company struggle with patching systems? If so, you’re not alone — most organizations have attempted to solve this issue by cobbling together multiple tools, processes, and different teams, which can make an already complicated issue worse.
Puppet helps keep hosts healthy, secure and compliant by replacing time-consuming and error-prone patching processes with Puppet's automated patching solution.
Join this webinar to learn how to do the following with Puppet:
Eliminate manual patching processes with pre-built patching automation for Windows and Linux systems.
Gain visibility into patching status across your estate regardless of OS with new patching solution from the PE console.
Ensure your systems are compliant and patched in a healthy state
How Puppet Enterprise makes patch management easy across your Windows and Linux operating systems.
Presented by: Margaret Lee, Product Manager, Puppet, and Ajay Sridhar, Sr. Sales Engineer, Puppet.
The document discusses how Puppet can be used to accelerate adoption of Microsoft Azure. It describes lift and shift migration of on-premises workloads to Azure virtual machines. It also covers infrastructure as code using Puppet and Terraform for provisioning, configuration management using Puppet Bolt, and implementing immutable infrastructure patterns on Azure. Integrations with Azure services like Key Vault, Blob Storage and metadata service are presented. Patch management and inventory of Azure resources with Puppet are also summarized.
This document discusses using Puppet Catalog Diff to analyze the impact of changes between Puppet environments or catalogs. It provides the command line usage and options for Puppet Catalog Diff. It also discusses how to integrate Puppet Catalog Diff into CI/CD pipelines for automated impact analysis when merging code changes. Additional resources like GitHub projects and Dev.to posts are provided for learning more about diffing Puppet environments and catalogs.
ServiceNow and Puppet - better together, Kevin Reeuwijk (Puppet)
ServiceNow and Puppet can be integrated in four key areas: 1) Self-service infrastructure allows non-Puppet experts to control infrastructure through a ServiceNow interface; 2) Enriched change management automatically generates ServiceNow change requests from Puppet changes and populates them with impact details; 3) Automated incident registration forwards details of configuration drift corrections in Puppet to ServiceNow to create incidents; and 4) Up-to-date asset management would periodically upload Puppet inventory data to ServiceNow to keep the CMDB accurate without disruptive discovery runs.
This document discusses how Puppet Relay uses Tekton pipelines to orchestrate containerized workflows. It provides an overview of how Tekton fits into the Relay architecture, with Tekton controllers managing taskrun pods to execute workflow steps defined in YAML. Triggers can initiate workflows based on events, with reusable and composable steps for tasks like provisioning infrastructure or clearing resources. Relay also includes features for parameters, secrets, outputs, and approvals to customize workflows. An ecosystem of open source integrations provides sample workflows and steps for common use cases.
100% Puppet Cloud Deployment of Legacy Software (Puppet)
This document discusses deploying legacy software into the AWS cloud using Puppet. It proposes modeling AWS resources like security groups, autoscaling groups, and launch configurations as Puppet resources. This would allow Puppet to provision the underlying AWS infrastructure and configure servers launched in autoscaling groups. It acknowledges challenges around server reboots but suggests they can be addressed. In summary, it argues custom Puppet resources can easily model AWS resources and using Puppet to configure autoscaling servers is possible despite some challenges around rebooting servers during deployment.
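Modeling AWS primitives as Puppet resources looks roughly like the types in the puppetlabs-aws module; the sketch below is in that style, with parameters and values that are illustrative assumptions rather than a verified manifest:

```puppet
# Sketch: AWS infrastructure expressed as Puppet resources, in the style
# of the puppetlabs-aws module types. AMI ID and sizes are placeholders.
ec2_securitygroup { 'app-sg':
  ensure      => present,
  region      => 'us-east-1',
  description => 'Security group for the legacy app tier',
}

ec2_launchconfiguration { 'app-lc':
  ensure          => present,
  region          => 'us-east-1',
  image_id        => 'ami-12345678',
  instance_type   => 'm4.large',
  security_groups => ['app-sg'],
}

ec2_autoscalinggroup { 'app-asg':
  ensure               => present,
  region               => 'us-east-1',
  launch_configuration => 'app-lc',
  min_size             => 2,
  max_size             => 6,
}
```

Servers launched by the autoscaling group then run the Puppet agent to configure themselves, which is how the document's two halves (provisioning and configuration) meet.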
This document discusses a partnership between Republic Polytechnic's School of Infocomm and Puppet to promote DevOps practices. It introduces several people involved with the partnership and outlines their mission to prepare more IT companies and individuals for jobs in the DevOps field through training courses. The document describes some short courses offered on DevOps topics and using the Puppet and Microsoft Azure platforms. It provides an example of how Republic Polytechnic has automated infrastructure configuration using Puppet to save time and reduce errors. There is a request at the end for readers to register their interest in DevOps by completing a survey.
This document discusses continuous compliance and DevSecOps best practices followed by financial services organizations.
Continuous compliance is defined as an ongoing process of proactive risk management that delivers predictable, transparent, and cost-effective compliance results. It involves continuously monitoring compliance controls, providing real-time alerts for failures and remediation recommendations, and maintaining up-to-date policies. Best practices for continuous compliance discussed include defining CIS controls and benchmarks, achieving transparent compliance dashboards and automated fixes for breaches.
DevSecOps is introduced as bringing security earlier in the application development lifecycle to minimize vulnerabilities. It aims to make everyone accountable for security. Challenges discussed include security teams struggling to keep up with the pace of DevOps.
The Dynamic Duo of Puppet and Vault tame SSL Certificates, Nick Maludy (Puppet)
The document discusses using Puppet and Vault together to dynamically manage SSL certificates. Puppet can use the vault_cert resource to request signed certificates from Vault and configure services to use the certificates. On Windows, some additional logic is needed to retrieve certificates' thumbprints and bind services to certificates using those thumbprints. This approach provides automated certificate renewal and distribution across platforms.
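The flow described can be sketched as follows. The `vault_cert` resource is the one named in the talk, but the parameter names here are assumptions made for illustration, not its documented interface:

```puppet
# Request a signed certificate from Vault's PKI backend and keep it
# renewed on disk. Parameter names are illustrative, not authoritative.
vault_cert { 'myservice.example.com':
  cert_path => '/etc/pki/tls/certs/myservice.crt',
  key_path  => '/etc/pki/tls/private/myservice.key',
}

# Restart the service whenever the certificate is reissued, so renewal
# and distribution happen automatically in the same Puppet run.
service { 'myservice':
  ensure    => running,
  subscribe => Vault_cert['myservice.example.com'],
}
```

On Windows, as the document notes, an extra step would look up the installed certificate's thumbprint and bind the service (e.g. an IIS site) to it.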
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also ran an enjoyable workshop with the participants, exploring different ways to think about quality and testing in different parts of the DevOps infinity loop.
How to Get CNIC Information System with Paksim Ga.pptx (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf (Paige Cruz)
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features available on those devices, but many of the features provide convenience and capability while sacrificing security. This best practices guide outlines steps users can take to better protect personal devices and information.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AI (Vladimir Iglovikov, Ph.D.)
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster and ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
State of the Puppet Community (Jan 2013)
1. State of the Puppet Community
Dawn M. Foster, Community Lead at Puppet Labs
@geekygirldawn
dawn@puppetlabs.com
puppetlabs.com/community
2. Agenda / Summary
• Defining community
• Guidelines / Code of Conduct
• Components of Community
• Make it easier to contribute
• Metrics
• Plans for 2013
3. Community Definition
• Community includes all of the people who work on the project
• Product contributors: developers, release managers, quality assurance, localization, etc.
• Other developers: writing applications, modules, extensions, etc.
• Users: people who run your software and provide feedback
• Vendors: companies with products / services based on your project
• Other contributors: promotion, moderation, documentation and more
Some people contribute as part of their employment at companies, while others contribute free time. The community includes all of the people who are working on Puppet projects.
4. Community Guidelines and Code of Conduct
• Be nice: Be courteous, respectful and polite: no regional, racial, gender, or other abuse will be tolerated. We like nice people way better than mean ones!
• Encourage diversity and participation: Make everyone in our community feel welcome, regardless of their background, and encourage participation.
• Keep it legal: Don't get us in trouble. Post only content you own, do not post private information, etc.
• Stay on topic: Make sure that you are posting to the correct channel and avoid off-topic discussions. Also remember that nobody likes spam.
• Specific guidelines for various tools, etc.
http://docs.puppetlabs.com/community/community_guidelines.html
7.
• CFPs open for many Puppet Camps – please submit!
• Completed (materials posted online) or WIP:
  – Silicon Valley: Jan 18, Sydney: Jan 24, Ghent
• Upcoming:
  – Stockholm: Feb 7, Melbourne: Feb 8, Oslo: Feb 13
  – LA/SCALE: Feb 22, Italy: Mar 1, Chicago: Mar 13
  – Barcelona: Mar 14, Baltimore: Mar 15, Atlanta: Mar 19
  – London: Mar 27, Amsterdam: Apr 5, Nuremberg: Apr 19
• Maybe soon? Paris? New York? San Francisco? Austin? Phoenix?
https://puppetlabs.com/community/puppet-camp
8. Puppet User Groups
United States: Bay Area (Mountain View), Chicago, Los Angeles, New York, San Francisco, Seattle, Atlanta
Europe & Australia: Barcelona, Italy, Oslo, Stockholm, Switzerland, Sydney
• Some more active than others
• Anyone can start a user group
• Learn more:
http://puppetlabs.com/community/user-groups-and-devops-groups/
http://puppetlabs.com/community/starting-a-user-group/
9. Puppet Forge: The Numbers
Stat              Jan 2012   Jan 2013
Modules           260        830+
User Accounts     930        2000+
Daily Downloads   500        2200+
10. Focusing on Puppet Forge in 2013
• Publishing API
• Tighter integration with other tools, like GitHub
• Better search tools
• More visibility and recognition for great modules
11. Focus on Open Source Contributions
• Several teams devoted to our open source projects
  – Puppet, Razor, MCollective, PuppetDB, Facter, etc.
• People dedicated to working with open source contributors
  – Jeff McCune focused on pull requests, better communication about status
  – Recent hire (starts next week): open source support engineer (bug triage, support, etc.)
  – Hiring an engineering lead for Facter
  – Others soon
12. Make it Easy to Contribute to Puppet
• Trivial Patch Exemption: No CLA required for patches that
  – are fewer than 10 lines and
  – introduce no new functionality
  – docs.puppetlabs.com/community/trivial_patch_exemption.html
• Better CLA App
  – Move it out of Redmine
  – Tie it to GitHub accounts
  – Make it easier to sign for individual or company
  – Coming March or April
14. Puppet Metrics
December 2012 Summary
• 5131 members and 887 messages in Puppet-Users
• 941 members and 108 messages in Puppet-Dev
• 919 nicks on #puppet IRC channel
• 1942 Puppet Forge accounts and 726 modules
• 3728 Redmine accounts
• 444 forks / 1082 watchers of Puppet
Six Month Comparison (July 2012)
• 4420 members and 1198 messages in Puppet-Users
• 830 members and 120 messages in Puppet-Dev
• 873 nicks on #puppet IRC channel
• 1405 Puppet Forge accounts and 442 modules
• 3064 Redmine accounts
• 342 forks / 904 watchers of Puppet
http://puppetlabs.com/community/metrics/
15. Mailing Lists: Top Participants for the Month
Puppet-Users Mailing List Puppet-Dev Mailing List
Rank User Posts Rank User Posts
1 Jakov Sosic 56 1 Andy Parker 15
2 jcbollinger 47 2 Matthaus Litteken 13
3 Pete 25 3 Alex Harvey 9
4 Gary Larizza 20 4 Jeff McCune 6
5 Ellison Marks 18 5 R.I.Pienaar 5
6 Matthaus Litteken 16 6 Dawn Foster 5
7 R.I. Pienaar 13 7 dcl...@redhat.com 4
8 Schofield 12 8 Gavin Williams 4
9 Jagga Soorma 12 9 Moses Mendoza 4
10 vioilly 12 10 James Polley 4
16. Mailing Lists: Top Participants for the Month
Puppet-Razor Mailing List            MCollective Mailing List
Rank User Posts                      Rank User Posts
1 Daniel Pittman 39                  1 R.I.Pienaar 30
2 Tom McSweeney 25                   2 Douglas Mauch 12
3 Antonio Xanxess 5                  3 sneha 8
4 Gavin Williams 5                   4 Jo Rhett 7
5 Drew Weaver 3                      5 Rajul Vora 3
6 Tim Bishop 3                       6 Isaac Smitley 3
7 Fletcher Nichol 2                  7 stefan.radu.munte...@gmail.com 2
8 pup...@razorsedge.org 2            8 Oded Ben Ozer 2
9 michael hancock 2                  9 brad diafe 1
10 Cody Bunch 2                      10 Matthew Ceroni 1
17. Contributors to Puppet: Past 1 Year
Commits Person               Commits Person
310 Daniel Pittman           41 Stefan Schulte
234 Patrick Carlisle         34 Kelsey Hightower
210 Andrew Parker            33 Jeff Weiss
178 Josh Cooper              32 Henrik Lindberg
118 Jeff McCune              28 Hailee Kenney
117 Matthaus Owens           27 Nick Lewis
97 Chris Price               19 Ken Barber
91 Rahul                     16 Gary Larizza
47 Moses Mendoza             15 Dominic Cleal
47 Nick Fagerlund            14 Matt Robinson
                             14 Eric Sorenson
Thanks to Jeff Weiss for awesome data
21. #puppet IRC User Activity for the Month
Rank  IRC Nick  Num of Lines  Random IRC Quote
1 bluefoxxx 517 "apache should be running as puppet"
2 binford2k 444 "jamescarr how you do that… that's entirely up to you to define"
3 Randm 418 "waszi: what device are you using?"
4 Eduard_Munteanu 284 "Mantiss: it's running in the background"
5 Volcane 253 "and you're talking about auto generating those certs"
6 brendan_ 200 "jlambert121: which hiera thing?"
7 fubada 178 "im trying to set up a main filebucket in my masters site.pp"
8 vrillusions 138 "or don't have your editors setup properly :)"
9 zipkid 121 "and do all the steps you specified AFTER your code cleanup..."
10 sonne 73 "so that's why 3.0 was released so sooner than i expected"
11 jkyle 73 "I think that was it, binford2k"
12 ken_barber 71 "it drops the agent rss from like 95mb to 40mb or some such"
13 agaffney 71 "Randm: been there, done that, eh?"
14 robinbowes 67 "Templating can deal with that"
15 scwizard 64 "gives me Error: execution expired"
16 jeremyb 56 "ken_barber: yeah. he's drupal too i think"
17 ohadlevy 56 "Randm: well, you dont need to show them that"
18 wamarler 55 "yes, so far the load on our puppetmaster is practically nothing"
19 jeremy_carroll 54 "Randm: Something like that."
20 dblessing 53 "Volcane and FriedBob-work: oh didn't know that. neat"
34. Community Plans 2013
• Improve metrics
• Launch new CLA App
• Work on unified login and profile
• Better recognition for community members
• Lots of Puppet Camps (25+ in 2013)
• Get more people starting Puppet user groups
• Grow ask.puppetlabs.com Q&A site
35. Puppet Labs is Hiring!*
* Portland is a great place to live: great beer, amazing coffee, fantastic food, snowy mountains, ocean & more
36. Ways to Contribute
• Docs
• Ask / Mailing Lists
• Bug Triage
• Contribute code to projects
• Contribute modules to Forge
• Note: we've hired a lot of people from the community :)
37. Learn More
• Community
  – puppetlabs.com/community
  – puppetlabs.com/community/puppet-camp
  – puppetlabs.com/community/starting-a-user-group/
  – docs.puppetlabs.com/#community
• Metrics for every month:
  – puppetlabs.com/community/metrics/
  – Blog post
• Contact: Dawn Foster
  – dawn@puppetlabs.com
  – @geekygirldawn
  – IRC: DawnFoster
38. Books and T-Shirts
New Book! Other Books! Did you get a T-shirt? We have more!