This document discusses achieving continuous delivery with Puppet. It notes that currently, development cycles are long, integration is painful, and deployments are difficult. It proposes that continuous integration, continuous delivery, DevOps practices, and an agile infrastructure using automation can help address these issues. Puppet is presented as a tool that can be used to help achieve an agile infrastructure and automate application deployments, though some challenges with its use are also discussed. The document advocates for changing the relationship between development and operations teams to one of more shared responsibility.
This document discusses how Puppet can be used to enable continuous integration (CI). Puppet allows infrastructure to be automated and treated as code, facilitating CI workflows where development, test, and production environments can be duplicated and changes deployed in a controlled manner. The document outlines a CI process where developers and operations teams commit code to a source control management system, a build server compiles and tests the application and Puppet code, simulates changes, and instructs the Puppet master to deploy updates to target environments according to a specific tag. This allows full environments rather than just applications to be compiled, packaged, and deployed through CI.
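To make "infrastructure as code" concrete (this sketch is mine, not from the document; the class and file names are illustrative), a Puppet manifest declaring the desired state of a web server might look like:

```puppet
# Hypothetical example: declare the desired state of a web tier.
# Names (nginx, profile::webserver) are illustrative, not from the talk.
class profile::webserver {
  package { 'nginx':
    ensure => installed,
  }

  file { '/etc/nginx/nginx.conf':
    ensure  => file,
    source  => 'puppet:///modules/profile/nginx.conf',
    require => Package['nginx'],
    notify  => Service['nginx'],
  }

  service { 'nginx':
    ensure => running,
    enable => true,
  }
}
```

Because a manifest like this lives in source control alongside the application, a build server can validate and test it (for example with `puppet parser validate`) before the Puppet master rolls it out to a target environment, which is what enables the CI workflow described above.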
PuppetConf 2016: Keynote: Pulling the Strings to Containerize Your Life - Sco... (Puppet)

Scott Coulton is a Platform Engineering Lead at Autopilot who discusses how his company used Docker and Puppet to improve their CI/CD processes and speed up deployments to production while maintaining compliance. He explains how they had development teams deploy themselves by treating infrastructure as code that is automated, built, and tested. This allowed them to break down barriers and usher in a new wave of infrastructure development. Puppet was used for configuration management to containerize systems and help spread DevOps practices to other teams.
The document discusses automating build and deployment pipelines using infrastructure as code. It recommends:
1. Treating development environments like production by making them automated, disposable, and recreated from code.
2. Not sharing secrets between environments and making credentials, keys, and other sensitive data unique to each automated environment.
3. Automating the creation of all infrastructure components including VMs, containers, Kubernetes clusters from configuration files to ensure they can be recreated identically on any cloud provider.
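As a sketch of point 3 (the manifest below is my own illustration; the image name and replica count are invented), a declarative Kubernetes manifest lets the same cluster state be recreated on any conformant provider:

```yaml
# Hypothetical Deployment manifest; image name and replica count are examples.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: registry.example.com/example-app:1.0.0
          ports:
            - containerPort: 8080
```

Applying this file with `kubectl apply -f deployment.yaml` produces the same result on any cloud provider's cluster, which is what makes the environment disposable and identically recreatable.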
JUC Europe 2015: Continuous Integration and Distribution in the Cloud with DE... (CloudBees)
By Mark Galpin, JFrog
Correct this if it's wrong, but as a software developer you have two main dreams - to enjoy your coding and to not have to care about anything else but code. Setting up an environment and maintaining a CI/CD cycle for your software can be complicated and painful. The good news is, it doesn't have to be! In this talk, Mark will demo some of the most popular alternatives for a cloud-based development life cycle: from CI builds with DEV@cloud, through artifact deployment to a binary repository and finally, rolling out your release on a truly modern distribution platform.
JUC Europe 2015: Bringing CD at Cloud-Scale with Jenkins, Docker and "Tiger" (CloudBees)
By Kohsuke Kawaguchi and Harpreet Singh, CloudBees, Inc.
Continuous delivery (CD) is a competitive differentiator and development and operations teams are under pressure to deliver software faster. The DevOps world is going through a storm of changes - Docker being the key one. This session by Kohsuke and Harpreet will introduce a set of plugins that address various aspects of CD with Docker.
In this session, we will learn what Docker is and why it was needed. We will also take a look at the benefits of Docker and the concept behind containerization.
We will learn core Docker concepts such as images, Dockerfiles, Docker Hub, etc.
I will also show Docker commands in action in the terminal, and we will look at an actual Dockerfile used in an open-source project.
Finally, we will take a high-level look at the Docker architecture to understand how things work in Docker and how commands flow through it.
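To illustrate the concepts the session covers (this example is mine, not from the talk; the app and file names are invented), a minimal Dockerfile builds an image layer by layer:

```dockerfile
# Hypothetical example: package a small Python web app.
# The base image is pulled from Docker Hub on first build.
FROM python:3.11-slim
WORKDIR /app
# Dependencies are copied and installed first so Docker can cache this layer.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

The usual command flow then matches the architecture described above: `docker build -t myapp .` creates the image, and `docker run -p 8000:8000 myapp` starts a container from it.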
This document describes eBay's use of Fluo for continuous integration and deployment using OpenStack. Fluo provides a single interface for configuring, building, testing, and deploying code changes. It provisions instances on OpenStack to run tasks defined in a configuration file like running tests, building packages, and deploying code. Fluo replicates code, packages, and configuration management code across regions and datacenters. It supports common workflows from code review through integration testing, releases, and periodic jobs. Fluo aims to provide a fully automated and scalable continuous delivery system to deploy code changes to eBay's global infrastructure on OpenStack.
JUC Europe 2015: Plugin Development with Gradle and Groovy (CloudBees)
By Daniel Spilker, CoreMedia
Learn how to use the Gradle JPI plugin to enable a 100% Groovy plugin development environment. We will delve into Groovy as the primary programming language, Spock for writing tests and Gradle as the build system.
This presentation was given at the Chicago DevOps Meetup by Doug Campbell, a DevOps Engineer at Gogo. The slides go over what DevOps means for Gogo, what our continuous delivery workflow looks like, and why you should be interested in Spinnaker and Foremast.
Most of the presentation was Spinnaker and Foremast demos and I will update this description with a link to the videos once published.
These slides are about my personal experience of creating a continuous delivery process over the last two years.
The main focus is on the tools I used and my experience with them.
Win Spinnaker with Winnaker - Open Source North Conf 2017 (Medya Ghazizadeh)
Spinnaker is an open source tool for deploying software releases to multiple cloud providers. Winnaker is a tool built by Target that helps automate common tasks when using Spinnaker like starting pipelines, getting stage details, integrating with chat tools, and troubleshooting errors. It removes company-specific code so others can contribute. Winnaker is distributed as a Docker container and makes it easy to pressure test Spinnaker and cloud environments by running multiple pipelines.
.Net OSS CI & CD with Jenkins - JUC ISRAEL 2013 (Tikal Knowledge)
This document discusses using Jenkins for continuous integration (CI) and continuous delivery (CD) of .NET open source projects. It covers how to achieve CI using Jenkins by automating builds, testing on each commit, and more. It also discusses using NuGet for dependency management and Sonar for code quality analysis. Finally, it provides examples of using Jenkins to deploy builds to platforms like AWS Elastic Beanstalk for CD after builds pass testing.
JUC Europe 2015: Jenkins-Based Continuous Integration for Heterogeneous Hardw... (CloudBees)
By Oleg Nenashev, CloudBees, Inc.
This talk will address Jenkins-based continuous integration (CI) in the area of embedded systems, which include both hardware and software components. An overview of common automation cases, challenges and their solutions based on Jenkins CI services will be presented. The specifics of Jenkins usage in the hardware area (available plugins and workarounds, environment and desired high availability features) will also be discussed. The session will cover several automation examples and case studies.
Pimp your Continuous Delivery Pipeline with Jenkins workflow (W-JAX 14) (CloudBees)
Continuous delivery pipelines are, by definition, workflows with parallel job executions, join points, retries of jobs (Selenium tests are fragile) and manual steps (validation by a QA team). Come and discover how the new workflow engine of Jenkins CI and its Groovy-based DSL will give another dimension to your continuous delivery pipelines and greatly simplify your life.
Sample workflow groovy script used in this presentation: https://gist.github.com/cyrille-leclerc/796085e19d9cec4a71ef
Jenkins workflow syntax reference card: https://github.com/cyrille-leclerc/workflow-plugin/blob/master/SYNTAX-REFERENCE-CARD.md
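The features listed above map directly onto DSL steps. As a sketch in the early workflow-plugin Groovy DSL (stage names and shell commands are my own illustrations, not from the linked gist):

```groovy
// Hypothetical pipeline sketch in the Jenkins workflow Groovy DSL;
// stage names and build commands are illustrative.
node {
    stage 'Build'
    sh 'mvn clean package'

    stage 'Test'
    // Parallel branches join before the pipeline continues.
    parallel(
        unit:     { sh 'mvn test' },
        selenium: { retry(3) { sh 'mvn verify -Pselenium' } }  // retry fragile UI tests
    )

    stage 'QA approval'
    input 'Deploy to production?'   // manual step: waits for the QA team

    stage 'Deploy'
    sh './deploy.sh production'
}
```

This shows the three workflow shapes the abstract calls out: parallel execution with a join point, automatic retries for fragile Selenium jobs, and a manual validation gate.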
By Nobuaki Ogawa, EFI DirectSmile
In this session, you will learn how you can easily adopt continuous delivery practices with Jenkins CI. From build and deploy to test, maintenance and monitoring, many processes can be orchestrated with Jenkins. Nobuaki will cover a very simple case for which he implemented continuous delivery with Jenkins CI, Azure and Selenium. This project is a basic case of continuous delivery. Especially noteworthy: it is for a Windows program. :-)
This document discusses Octopus Deploy, a deployment automation tool. It describes Octopus Deploy's architecture and 7 step deployment process. The process includes declaring environments, creating application packages, defining projects, creating deployment processes with steps and variables, releasing packages, and deploying releases to environments. Octopus Deploy supports features like automated deployments, rollbacks, configuration transformations, and integration with build pipelines. It provides visibility through audit logs and manages deployments across development, test, and production environments.
JUC Europe 2015: Enabling Continuous Delivery for Major Retailers (CloudBees)
By Masood Jan, Mazataz
Masood will illustrate the achievements and challenges faced whilst implementing a continuous delivery (CD) framework for a major retailer by using a rigorous but simple development process, integrated with Jenkins build pipelines. The pipelines have been carefully architected to orchestrate the various build, deployment, testing and release stages of e-commerce applications. The presenter will conclude with future goals regarding a cloud-based CD process using Jenkins.
JUC Europe 2015: Scaling of Jenkins Pipeline Creation and Maintenance (CloudBees)
By Damien Coraboeuf, Clear2Pay
In a large company where several dozen projects and branches each need their own pipelines, you cannot afford to maintain all the jobs manually. For security reasons and knowledge limitations, Clear2Pay does not want to open Jenkins job configurations in the centralised master. Instead, the Clear2Pay team offers project teams a "shopping list" that they can use to automatically generate their own pipelines for all branches, without requiring the Jenkins administration team to intervene. Projects just update a jenkins.properties file in the SCM branch and the pipeline for that branch is updated accordingly. This allows the number of projects to scale, each getting its own pipeline in the Jenkins master, without anyone having to administer hundreds of jobs by hand.
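The "shopping list" could be as small as a properties file committed on each branch. The keys below are entirely invented for illustration; the document does not describe Clear2Pay's actual property names:

```properties
# Hypothetical per-branch "shopping list"; all key names are invented.
pipeline.name = payments-service
pipeline.branch = release/2.1
pipeline.stages = build,unit-test,integration-test,deploy-uat
pipeline.deploy.target = uat
```

On each commit to this file, a seed job would regenerate that branch's pipeline jobs from the declared values, so no one ever edits job configurations on the Jenkins master directly.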
Continuous integration involves developers committing code changes daily which are then automatically built and tested. Continuous delivery takes this further by automatically deploying code changes that pass testing to production environments. The document outlines how Jenkins can be used to implement continuous integration and continuous delivery through automating builds, testing, and deployments to keep the process fast, repeatable and ensure quality.
The document outlines Julien Pivotto's presentation on building pipelines at scale using Jenkins and Puppet. It discusses how Puppet can be used to define Jenkins job configurations and pipelines for applications and infrastructure to allow easy deployment of new pipelines. It also covers alternative approaches using Jenkins plugins to define pipelines through Groovy scripts to reduce complexity compared to Puppet management.
This document provides an overview of Spinnaker, an open source tool for continuous delivery. It discusses the traditional software delivery lifecycle and issues with manual processes. Continuous delivery is presented as a better approach using automation to deliver software frequently with automated testing and feedback. Spinnaker is introduced as a tool that provides features like pipelines, cloud drivers, and image deployments to help enable continuous delivery. The document demonstrates Spinnaker's capabilities through a multi-cloud deployment demo.
Continuous delivery is a powerful concept, but hard to achieve. One of the challenges is automating the setup of environments and the deployment of Java EE applications. We have looked at and used quite a few tools, for instance Chef, Puppet, Vagrant and Nolio. All the tools had one thing in common: we had never used them before. Why should we invest time in mastering those tools? There is a perfect alternative in Jenkins, a tool most developers are familiar with. Besides the basic Jenkins build-server capabilities, it offers quite a few useful plugins, such as the Build Pipeline plugin. To set up environments, the popular Docker project is used. Docker allows you to create containers from any application. Only a little knowledge is required to set up the containers; the rest of the configuration is done through commands most people are already familiar with.
Webinar - Continuous Integration with GitLab (OlinData)
The document is a presentation about continuous integration with GitLab. It discusses what continuous integration is, why it is important, and how to set up continuous integration builds using GitLab. Specifically, it defines continuous integration as integrating code regularly to prevent problems and identify issues early. It recommends gradually adopting continuous integration practices like writing test cases whenever bugs are fixed. The presentation also provides instructions on setting up a GitLab runner to enable continuous integration builds and adding a .gitlab-ci.yml file to configure builds.
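A minimal `.gitlab-ci.yml`, picked up by a registered runner on every push, could look like the following (job names and commands are placeholders of my own, not from the webinar):

```yaml
# Minimal hypothetical .gitlab-ci.yml; job names and commands are examples.
stages:
  - build
  - test

build:
  stage: build
  script:
    - make build

test:
  stage: test
  script:
    - make test
```

Committing this file to the repository root is enough for GitLab to run both jobs on each push, which is the "integrate regularly to identify issues early" practice the presentation recommends.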
Containers provide an efficient application delivery mechanism where applications can be built once and run anywhere. The document discusses using containers with a continuous integration and continuous delivery (CI/CD) workflow where source code is built into container images using tools like Jenkins and Docker, and the images are deployed to environments like AWS, Azure, or bare metal using Calm.io. It also describes setting up a microservices architecture with services, backends, and monitoring containers, and automatically scaling infrastructure using tools like Docker swarm based on monitoring information to ensure high availability with zero connection drops during maintenance. The key takeaways are to automate everything, use small container images, be cloud agnostic, and quickly recover from failures.
Drupal Continuous Integration (European Drupal Days 2015) (Eugenio Minardi)
The document discusses continuous integration for Drupal projects. It covers tools like Jenkins, Drush, Phing, and Features that allow automating build, test, and deployment processes. Continuous integration helps catch errors early, keep code changes integrated frequently, and make it easier to deploy updates. The document provides overviews and examples of setting up continuous integration for Drupal projects.
The document discusses Camunda's transition from a traditional Jenkins setup with virtual machines to a containerized continuous integration infrastructure using Docker and Jenkins. Some of the key problems with the previous setup included a lack of isolation between jobs, limited scalability, and difficulties maintaining the infrastructure. The new system achieves isolated and reproducible jobs through one-off Docker containers, scalability through Docker Swarm on commodity hardware, and infrastructure maintenance through immutable Docker images and infrastructure as code definitions. Lessons learned include automating as much as possible, designing for scale, testing all aspects of the new system, and controlling dependencies.
The document outlines seven habits that DevOps teams can adopt to increase application security. The habits are: 1) Increase trust and transparency between development, security, and operations teams. 2) Understand the probability and impact of specific security risks. 3) Discard detailed security roadmaps in favor of incremental improvements. 4) Use continuous delivery pipelines to incrementally improve security practices. 5) Standardize and continuously update third-party software. 6) Govern with automated audit trails. 7) Test security preparedness through "security games". Adopting these habits helps integrate security across the development lifecycle to reduce vulnerability discovery and remediation time.
Why we are migrating to Chef from Puppet (Yu Yamanaka)
This document discusses the author's experience using Puppet and their reasons for switching to Chef. It notes that they started using Puppet in 2012, when it was more popular than Chef. While Puppet worked well initially, they found it inflexible when applying partial manifests and easy to forget resource dependencies. In contrast, Chef let them order resources intuitively and write recipes in Ruby, a language they were already learning. The overall message is that Chef enabled them to "Stay Creative" and "Live Comfortably" in their work.
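The "intuitive ordering" point refers to Chef evaluating resources top to bottom in the order they appear in a recipe. A sketch (my own, with illustrative names, not taken from the slides):

```ruby
# Hypothetical Chef recipe; resources converge in the order written,
# so the package is installed before its config is rendered.
package 'nginx'

template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'
  notifies :reload, 'service[nginx]'
end

service 'nginx' do
  action [:enable, :start]
end
```

In Puppet the equivalent ordering must be declared explicitly (with `require`, `before`, or chaining arrows), which is the dependency bookkeeping the author found easy to forget.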
This presentation was given at the Chicago DevOps Meetup by Doug Campbell, a DevOps Engineer at Gogo. The slides go over what DevOps means for Gogo, what our continuous delivery workflow looks like, and why you should be interested in Spinnaker and Foremast.
Most of the presentation was Spinnaker and Foremast demos and I will update this description with a link to the videos once published.
These slides are about my personal experience from creating a continuous delivery process in the last 2 years.
The main focus lies in the tools I used and my experience with them.
Win Spinnaker with Winnaker - Open Source North Conf 2017Medya Ghazizadeh
Spinnaker is an open source tool for deploying software releases to multiple cloud providers. Winnaker is a tool built by Target that helps automate common tasks when using Spinnaker like starting pipelines, getting stage details, integrating with chat tools, and troubleshooting errors. It removes company-specific code so others can contribute. Winnaker is distributed as a Docker container and makes it easy to pressure test Spinnaker and cloud environments by running multiple pipelines.
.Net OSS Ci & CD with Jenkins - JUC ISRAEL 2013 Tikal Knowledge
This document discusses using Jenkins for continuous integration (CI) and continuous delivery (CD) of .NET open source projects. It covers how to achieve CI using Jenkins by automating builds, testing on each commit, and more. It also discusses using NuGet for dependency management and Sonar for code quality analysis. Finally, it provides examples of using Jenkins to deploy builds to platforms like AWS Elastic Beanstalk for CD after builds pass testing.
JUC Europe 2015: Jenkins-Based Continuous Integration for Heterogeneous Hardw...CloudBees
By Oleg Nenashev, CloudBees, Inc.
This talk will address Jenkins-based continuous integration (CI) in the area of embedded systems, which include both hardware and software components. An overview of common automation cases, challenges and their solutions based on Jenkins CI services will be presented. The specifics of Jenkins usage in the hardware area (available plugins and workarounds, environment and desired high availability features) will also be discussed. The session will cover several automation examples and case studies.
Pimp your Continuous Delivery Pipeline with Jenkins workflow (W-JAX 14)CloudBees
Continuous delivery pipelines are, by definition, workflows with parallel job executions, join points, retries of jobs (Selenium tests are fragile) and manual steps (validation by a QA team). Come and discover how the new workflow engine of Jenkins CI and its Groovy-based DSL will give another dimension to your continuous delivery pipelines and greatly simplify your life.
Sample workflow groovy script used in this presentation: https://gist.github.com/cyrille-leclerc/796085e19d9cec4a71ef
Jenkins workflow syntax reference card: https://github.com/cyrille-leclerc/workflow-plugin/blob/master/SYNTAX-REFERENCE-CARD.md
By Nobuaki Ogawa, EFI DirectSmile
In this session, you will learn how you can easily utilize continuous delivery practices with Jenkins CI. From build, deploy, test, maintenance and monitoring, lots of processes can be easily orchestrated with Jenkins. Nobuaki will cover a very simple case for which he implemented continuous delivery with Jenkins CI, Azure and Selenium. This project is a basic case of continuous delivery. Especially noteworthy, for a Windows program. :-)
This document discusses Octopus Deploy, a deployment automation tool. It describes Octopus Deploy's architecture and 7 step deployment process. The process includes declaring environments, creating application packages, defining projects, creating deployment processes with steps and variables, releasing packages, and deploying releases to environments. Octopus Deploy supports features like automated deployments, rollbacks, configuration transformations, and integration with build pipelines. It provides visibility through audit logs and manages deployments across development, test, and production environments.
JUC Europe 2015: Enabling Continuous Delivery for Major RetailersCloudBees
By Masood Jan, Mazataz
Masood will illustrate the achievements and challenges faced whilst implementing a continuous delivery (CD) framework for a major retailer by using a rigorous but simple development process, integrated with Jenkins build pipelines. The pipelines have been carefully architected to orchestrate the various build, deployment, testing and release stages of e-commerce applications. The presenter will conclude with future goals regarding a cloud-based CD process using Jenkins.
JUC Europe 2015: Scaling of Jenkins Pipeline Creation and MaintenanceCloudBees
By Damien Coraboeuf, Clear2Pay
In a large company where several dozens of projects and branches will need their own pipelines, you cannot afford to maintain all the jobs, manually. For security reasons and knowledge limitations, Clear2Pay does not want to open Jenkins job configurations in the centralised master. Instead, the Clear2Pay team offers project teams a "shopping list" that they can use to automatically generate their own pipelines for all branches, without requiring the Jenkins administration team to intervene. Projects just update a jenkins.properties in the SCM branch and the pipeline for this branch is updated accordingly. This allows the number of projects to scale, each getting their own pipeline in the Jenkins master without having to worry about administering hundreds of jobs.
Continuous integration involves developers committing code changes daily which are then automatically built and tested. Continuous delivery takes this further by automatically deploying code changes that pass testing to production environments. The document outlines how Jenkins can be used to implement continuous integration and continuous delivery through automating builds, testing, and deployments to keep the process fast, repeatable and ensure quality.
The document outlines Julien Pivotto's presentation on building pipelines at scale using Jenkins and Puppet. It discusses how Puppet can be used to define Jenkins job configurations and pipelines for applications and infrastructure to allow easy deployment of new pipelines. It also covers alternative approaches using Jenkins plugins to define pipelines through Groovy scripts to reduce complexity compared to Puppet management.
This document provides an overview of Spinnaker, an open source tool for continuous delivery. It discusses the traditional software delivery lifecycle and issues with manual processes. Continuous delivery is presented as a better approach using automation to deliver software frequently with automated testing and feedback. Spinnaker is introduced as a tool that provides features like pipelines, cloud drivers, and image deployments to help enable continuous delivery. The document demonstrates Spinnaker's capabilities through a multi-cloud deployment demo.
Continuous delivery is a powerful concept, but hard to achieve. One of the challenges is automating the setup of environments and the deployment of the Java EE applications. We have looked at and used quite some tools like for instance Chef, Puppet, Vagrant and Nolio. All tools had one thing in common: we had never used them. Why should we invest time in mastering those tools? There is a perfect alternative in Jenkins, a tool most developers are familiar with. Besides the basic Jenkins buildserver capabilities it offers quite some useful plugins like the Build Pipeline plugin. To setup environments the popular Docker project is used. Docker allows you to create containers from any application. Only some knowledge is required for the setup of the containers. The rest of the configuration is done through commands most people are quite familiar with.
Webinar - Continuous Integration with GitLabOlinData
The document is a presentation about continuous integration with GitLab. It discusses what continuous integration is, why it is important, and how to set up continuous integration builds using GitLab. Specifically, it defines continuous integration as integrating code regularly to prevent problems and identify issues early. It recommends gradually adopting continuous integration practices like writing test cases whenever bugs are fixed. The presentation also provides instructions on setting up a GitLab runner to enable continuous integration builds and adding a .gitlab-ci.yml file to configure builds.
Containers provide an efficient application delivery mechanism where applications can be built once and run anywhere. The document discusses using containers with a continuous integration and continuous delivery (CI/CD) workflow where source code is built into container images using tools like Jenkins and Docker, and the images are deployed to environments like AWS, Azure, or bare metal using Calm.io. It also describes setting up a microservices architecture with services, backends, and monitoring containers, and automatically scaling infrastructure using tools like Docker swarm based on monitoring information to ensure high availability with zero connection drops during maintenance. The key takeaways are to automate everything, use small container images, be cloud agnostic, and quickly recover from failures.
Drupal Continuous Integration (European Drupal Days 2015)Eugenio Minardi
The document discusses continuous integration for Drupal projects. It covers tools like Jenkins, Drush, Phing, and Features that allow automating build, test, and deployment processes. Continuous integration helps catch errors early, keep code changes integrated frequently, and make it easier to deploy updates. The document provides overviews and examples of setting up continuous integration for Drupal projects.
The document discusses Camunda's transition from a traditional Jenkins setup with virtual machines to a containerized continuous integration infrastructure using Docker and Jenkins. Some of the key problems with the previous setup included a lack of isolation between jobs, limited scalability, and difficulties maintaining the infrastructure. The new system achieves isolated and reproducible jobs through one-off Docker containers, scalability through Docker Swarm on commodity hardware, and infrastructure maintenance through immutable Docker images and infrastructure as code definitions. Lessons learned include automating as much as possible, designing for scale, testing all aspects of the new system, and controlling dependencies.
The document outlines seven habits that DevOps teams can adopt to increase application security. The habits are: 1) Increase trust and transparency between development, security, and operations teams. 2) Understand the probability and impact of specific security risks. 3) Discard detailed security roadmaps in favor of incremental improvements. 4) Use continuous delivery pipelines to incrementally improve security practices. 5) Standardize and continuously update third-party software. 6) Govern with automated audit trails. 7) Test security preparedness through "security games". Adopting these habits helps integrate security across the development lifecycle to reduce vulnerability discovery and remediation time.
Why we are migrating to Chef from Puppet - Yu Yamanaka
This document discusses the author's experience using Puppet and their reasons for switching to Chef. They started using Puppet in 2012, when it was more popular than Chef. While Puppet worked well initially, they found it inflexible for applying partial manifests and easy to get resource dependencies wrong. In contrast, Chef let them order resources intuitively and write recipes in Ruby, a language they were already learning. The overall message is that Chef enabled them to "Stay Creative" and "Live Comfortably" in their work.
DevOps: What is This Puppet You Speak Of? - Rob Reynolds
This document provides an overview of DevOps and Puppet for automating infrastructure management. It discusses how DevOps involves collaboration between development, QA, and operations teams. Puppet is introduced as an infrastructure-as-code tool that defines and enforces a machine's state through code. It works by installing agents on nodes that fetch configuration files (manifests) and enforce the specified configuration. The document highlights Puppet's capabilities for Windows systems and provides examples of common resource types and modules for managing Windows servers through Puppet.
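The manifest-based workflow described above can be sketched with a few core resource types. This is a minimal illustration, not code from the presentation; the node name, service, and file contents are assumptions:

```puppet
# Hypothetical node definition for a Windows server.
node 'winserver01.example.com' {
  # Ensure the IIS web service is running and starts automatically
  service { 'w3svc':
    ensure => running,
    enable => true,
  }

  # Manage a file; forward slashes work for Windows paths in Puppet
  file { 'C:/inetpub/wwwroot/index.html':
    ensure  => file,
    content => '<h1>Managed by Puppet</h1>',
  }
}
```

On each agent run, Puppet compares the actual state of these resources to the declared state and corrects any drift.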
A book for learning Puppet by real example and by building code. The second chapter takes you through all the basics of Puppet and enough Ruby to work with it.
Deliver on DevOps with Puppet Application Orchestration Webinar 11/19/15 - Puppet
DevOps seeks to align business goals with the goals of development and operations teams. Technology is quickly becoming a strategic business differentiator, bringing IT closer to the business, as customers take advantage of the new applications and services coming online. One challenge IT teams face is a timely way to deploy, configure and manage these critical business applications. A DevOps approach offers enterprises a way to accelerate application delivery.
Join Jeremy Adams, Chris Barker, and Michael Olson from the Puppet Labs team, as they discuss how Puppet Labs’ new Application Orchestration solution helps IT teams deliver on DevOps. You’ll also learn how you can:
-Easily model your application infrastructure to make installations, upgrades and ongoing management repeatable and reliable.
-Configure, deploy and update critical applications faster and without downtime.
-Quickly cycle in new technology, while maintaining and/or cycling out old technology.
Puppet and Chef are both popular configuration management tools, but they differ in their approach - Puppet uses a model-driven approach that is easier for sysadmins to learn, while Chef uses a procedural approach in Ruby that provides more power and flexibility but a steeper learning curve. Both tools are cross-platform but Puppet supports more operating systems officially. While Puppet has a larger user community currently, Chef is growing rapidly as well. Their documentation has both improved significantly over time. Pricing models include free open source versions as well as paid versions for Puppet Enterprise and Chef.
A book for learning Puppet by real example and by building code. The third chapter shows a basic use case: installing Tomcat and creating a module to do the same.
This document discusses the DevOps mindset and how it aims to transform organizational culture from siloed functional teams focused on control to collaborative cross-functional teams focused on enabling business goals. It outlines how traditional IT groups are separated and have competing goals around things like development, operations, quality assurance, and security. DevOps seeks to remove these silos by promoting a collaborative mindset across all functions. The document then discusses what DevOps involves in terms of people, processes, and technology changes needed for effective implementation.
Fifteen Years of DevOps -- LISA 2012 keynote - Geoff Halprin
There has been a lot of hullabaloo over the past few years around a concept called “DevOps.” The idea is that we need to break down the barriers between development and operations teams, and treat infrastructure as code, in order to move towards better software, more reliable and scalable systems, and continuous deployment.
For some of us who have been around a while, this is just a new label for something we’ve always done.
They say those that don’t learn from history are destined to repeat it. In this talk, we will look back at how the DevOps movement evolved, what it advocates, what it doesn’t address, and what you should take away from the movement that will help you in your professional life. We will also use this opportunity to look back over the past decade or two of system administration, and see how our challenges have changed, and how they have remained the same.
Les Frost, Senior Technical Architect - Capgemini
The DevOps movement is establishing itself within many organisations. Many people are asking “How do I do DevOps?” or “Can you tell me the recommended DevOps tool stack?”
By the time many organisations adopt DevOps, will the rise of serverless computing such as AWS Lambda mean that they are already out of date? The debate is ongoing as to whether serverless computing (i.e. outsourcing the management of servers so you can focus on building critical business functionality) will move organisations from DevOps to NoOps. AWS Lambda enables users to run code without provisioning or managing servers. With Lambda, users can run code for virtually any type of application or back-end service, all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale it with high availability. You can set up your code to trigger automatically from other AWS services or call it directly from any web or mobile app.
This document discusses GitFlow, SourceTree, and GitLab for software development workflows. It provides an overview of main and supporting branch types in GitFlow like develop, master, feature, release and hotfix. It also summarizes the key features and uses of SourceTree for visualizing Git repositories and GitLab for hosting Git repositories and providing features like activity streams, code review, issues and more.
I have evidence that using Git and GitHub for documentation, together with community doc techniques, can yield 300 doc changes in a month. I've bet my career on these methods and I want to share them with you.
The document discusses best practices for using Git including basic commands, branches, tags, and collaboration using GitHub. It covers Git fundamentals like committing, pushing, pulling and branching as well as more advanced topics such as rebasing, cherry-picking, stashing and using Git hooks for continuous integration. The presentation aims to help users learn to use Git more efficiently.
Introduction to Git/Github - A beginner's guide - Rohit Arora
Agenda:
Installing Git
Introduction to Version Control
Git Basics
Creating a new local Git repository
Cloning a Git repository
Making use of Git commit history
Reverting files to previous states
Creating a Github Repository
Adding, Committing & Pushing changes
Branching
Merging Branches
Sending Pull Requests
Conflict Resolution
and 3 Exercises
Puppet Enterprise is an automation platform that allows organizations to define their infrastructure in code and automatically enforce that configuration. The document demonstrates how Puppet defines infrastructure using a common language and automates configuration across any environment. Benefits shown include significant increases in deployment speed, reductions in outages and security fix time, and more frequent deployments. Suggested next steps include trying Puppet Enterprise or exploring learning resources to see how infrastructure automation can help deliver better software faster.
DevOps and Continuous Delivery Reference Architectures (including Nexus and o... - Sonatype
There are numerous examples of DevOps and Continuous Delivery reference architectures available, and each of them varies in level of detail, tools highlighted, and processes followed. Yet, there is a constant theme among the tool sets: Jenkins, Maven, Sonatype Nexus, Subversion, Git, Docker, Puppet/Chef, Rundeck, ServiceNow, and Sonar seem to show up time and again.
Cloud Academy Webinar: Recipe for DevOps Success: Capital One Style - Mark Andersen
Capital One transitioned to a DevOps model to improve speed of delivery and reduce handoffs between teams. They started with a SWAT team that automated builds, deployments, and infrastructure for two applications. This proved successful and they expanded automation to more applications. Challenges included trying to automate everything at once and handoffs when automation was returned to application teams. Key lessons included focusing on automation, removing handoffs, training application teams on automation, and delivering working solutions incrementally rather than waiting for perfection.
Capital One transitioned to DevOps by starting with a SWAT team that automated builds, deployments, and infrastructure for two applications. This improved speed and removed handoffs. Challenges included trying to automate everything at once and handoffs when automation was returned to application teams. Key lessons included focusing on automation and APIs, reducing handoffs, avoiding silos, and delivering working solutions over perfection.
This webinar discusses managing databases like codebases. It starts at different times for different time zones; attendees are muted, with questions submitted through a text box, and recordings will be available online. Presenters from Delphix and DBmaestro discuss how databases are not as agile as code and pose challenges to continuous delivery. Traditional databases require costly setup and lack automation. With DBmaestro's and Delphix's tools, databases can be version controlled and deployed like code, enabling parallel development and faster provisioning through self-service environments and deployment automation.
Agile & DevOps - It's all about project success - Adam Stephensen
The document provides information on DevOps practices and tools from Microsoft. It discusses how DevOps enables continuous delivery of value through integrating people, processes, and tools. Benefits of DevOps include more frequent and stable releases, lower change failure rates, and empowered development teams. The document provides examples of DevOps scenarios and recommends discussing solutions and migration plans with Microsoft.
[Webinar] Test First, Fail Fast - Simplifying the Tester's Transition to DevOps - KMS Technology
DevOps is a spectacular mish-mash of development and operations processes and practices that has been growing increasingly popular in recent years. With the upward trending rate in adoption comes the need for organizations to fully understand the key practices as well as thoroughly integrating team members, especially testers, throughout the delivery pipeline. Getting started with DevOps practices can be a little tricky when choosing the right tools, people, and processes. In this webinar, we’ll focus on helping you make the switch without diminishing the team’s delivered product quality, so that the transition meets the enterprise objectives of speed and reliability.
Tune in to learn:
The biggest concern when moving to DevOps - and how to handle it
Why you need ‘Coding Testers’
The best tools for the job
The process of failing fast, and its significance to testers
Measuring the transition - recommended metrics
The value of DevOps long-term - efficiency, repeatability & reliability
Don’t worry about failing - it’s a part of the process!
This document provides an agenda and overview for a webinar on managing databases like codebases. The webinar will take place at various times for different time zones. Attendees will be muted and can ask questions in the chat. There will be presentations on industry trends in coding and databases, challenges with traditional database management, and how Delphix and DBmaestro address these challenges through automation and treating databases like code.
DevOps is the combination of cultural philosophies, practices, and tools that increases an organization's ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes.
DevOps on AWS: Deep Dive on Continuous Delivery and the AWS Developer Tools - Amazon Web Services
Today’s cutting-edge companies have software release cycles measured in days instead of months. This agility is enabled by the DevOps practice of continuous delivery, which automates building, testing, and deploying all code changes. This automation helps you catch bugs sooner and accelerates developer productivity. In this session, we’ll share the processes that Amazon’s engineers use to practice DevOps and discuss how you can bring these processes to your company by using a new set of AWS tools (AWS CodePipeline and AWS CodeDeploy). These services were inspired by Amazon's own internal developer tools and DevOps culture.
Manchester ITExpo Talk: DevOps large and small - Cambridge Satchel - Jwooldridge
This document summarizes the experience of the author in leading large and small DevOps projects. For the large project at Marks & Spencer, the author introduced Continuous Integration, DevOps and Behavior Driven Development practices to a team of 650 working on 65 applications over 4 years. Challenges included legacy systems, complex test environments and vendors without DevOps practices. For the small project at Cambridge Satchel, the author helped launch a new website on a redesigned technology and team model based on DevOps principles in under 2 months, leading to increased sales. The document provides tips for starting a DevOps transformation including defining independently deployable layers, focusing investment where it matters most, and setting a high bar for new initiatives.
Continuous Testing: A Key to DevOps Success - TechWell
As IT organizations adopt a DevOps strategy, continuous testing (CT) becomes a key ingredient of the DevOps ecosystem. CT enables faster release cycles, more changes per release, upfront isolation of risks, and reduced operations costs. The approach to scale the traditional automation testing infrastructure, test environments, and test data management requires a culture shift using new tools and techniques. Sujay Honnamane discusses a CT strategy for aspiring and already implemented DevOps organizations. Sujay shares examples of tools, techniques, and practical solutions that include continuous integration using the Jenkins CI server, service virtualization through CA Lisa tools, automated code coverage analysis to create impact-based tests, automated test script load balancing for effective use of test environments, and faster test cycles, providing a holistic approach/workflow for CT. Sujay and his teams have successfully implemented CT for several clients in their DevOps journey to achieve a repeatable and highly predictable software delivery process.
Key takeaways
- Continuous "everything" is at the heart of agile and DevOps
- Continuous activities result in faster delivery and higher quality
- Rapid feedback and practice are essential for confidence in your delivery process
View webinar recording - http://testhuddle.com/resource/continuous-everything/
Jonny Wooldridge: DevOps Large and Small - Jwooldridge
This document provides details about the speaker's experience leading a large DevOps transformation project at Marks & Spencer, a large UK retailer. Some key points:
- He introduced practices like continuous integration, DevOps, and behavior driven development to a 650-person project team working on a £150 million project.
- Among the successes were establishing a software factory for efficient code management and regular release trains. Challenges included integrating legacy systems and complex test environments.
- The document discusses where teams fall on a scale from "Legacy Zone" to "Cool Zone" based on their agile practices and independently deployable software. Moving more teams to continuous delivery is an ongoing effort.
Innovate Better Through Machine Data Analytics - Hal Rottenberg
This talk was presented at IP Expo Manchester in May, 2016. the themes discussed are:
- how does machine data relate to devops?
- how can tracking this data lead to better outcomes?
- what types of data are important to track?
AMIS 25: DevOps Best Practice for Oracle SOA and BPM - Matt Wright
DevOps and Cloud are transforming the software release process, one which spans multiple teams across development and operations (including testing, infrastructure management), into a collaborative process, with all teams working together to deliver solutions into production faster.
This session details how to implement a continuous delivery process for Oracle SOA/BPM projects, both on-premises and in the cloud, transforming the release process into an automated, reliable, high-quality delivery pipeline that delivers projects faster, with less risk and at lower cost.
It details the processes and best practices that need to be established, how to use tools to automate and govern the build, deployment and configuration of code from our first initial environment through to production.
1. Learn how DevOps and Continuous Delivery can streamline the delivery of integration/BPM projects into production.
2. Learn how DevOps plus cloud services can accelerate the implementation of on-premises Oracle SOA.
3. Learn best practices for implementing DevOps and Continuous Delivery for Oracle SOA projects in the cloud and on-premises.
4. How to use tools to automate and govern the build, deployment and configuration of code from dev through to production
5. How to leverage the Cloud for Dev and Test, and the benefits this provides.
My understanding of the fundamentals of DevOps and how it relates conceptually to Agile, Scrum, Kanban, etc.
SlideShare does not allow uploading a new version of an existing presentation, hence the re-upload.
Go to https://www.slideshare.net/nitinbhide/devops-understanding-core-concepts for the latest version.
Puppet Camp 2021: Testing modules and control repo - Puppet
This document discusses testing Puppet code when using modules versus a control repository. It recommends starting with simple syntax and unit tests using PDK or rspec-puppet for modules, and using OnceOver for testing control repositories, as it is specially designed for this purpose. OnceOver allows defining classes, nodes, and a test matrix to run syntax, unit, and acceptance tests across different configurations. Moving from simple to more complex testing approaches like acceptance tests is suggested. PDK and OnceOver both have limitations for testing across operating systems that may require customizing spec tests. Infrastructure for running acceptance tests in VMs or containers is also discussed.
This document appears to be for a PuppetCamp 2021 presentation by Corey Osman of NWOPS, LLC. It includes information about Corey Osman and NWOPS, as well as sections on efficient development, presentation content, demo main points, Git strategies including single branch and environment branch strategies, and workflow improvements. Contact information is provided at the bottom.
The document discusses operational verification and how Puppet is working on a new module to provide more confidence in infrastructure health. It introduces the concept of adding check resources to catalogs to validate configurations and service health directly during Puppet runs. Examples are provided of how this could detect issues earlier than current methods. Next steps outlined include integrating checks into more resource types, fixing reporting, integrating into modules, and gathering feedback. This allows testing and monitoring to converge by embedding checks within configurations.
This document provides tips and tricks for using Puppet with VS Code, including links to settings examples and recommended extensions to install like Gitlens, Remote Development Pack, Puppet Extension, Ruby, YAML Extension, and PowerShell Extension. It also mentions there will be a demo.
- The document discusses various patterns and techniques the author has found useful when working with Puppet modules over 10+ years, including some that may be considered unorthodox or anti-patterns by some.
- Key topics covered include optimization of reusable modules, custom data types, Bolt tasks and plans, external facts, Hiera classification, ensuring resources for presence/absence, application abstraction with Tiny Puppet, and class-based noop management.
- The author argues that some established patterns like roles and profiles can evolve to be more flexible, and that running production nodes in noop mode with controls may be preferable to fully enforcing on all nodes.
Applying the Roles and Profiles method to compliance code - Puppet
This document discusses adapting the roles and profiles design pattern to writing compliance code in Puppet modules. It begins by noting the challenges of writing compliance code, such as it touching many parts of nodes and leading to sprawling code. It then provides an overview of the roles and profiles pattern, which uses simple "front-end" roles/interfaces and more complex "back-end" profiles/implementations. The rest of the document discusses how to apply this pattern when authoring Puppet modules for compliance - including creating interface and implementation classes, using Hiera for configuration, and tools for reducing boilerplate code. It aims to provide a maintainable structure and simplify adapting to new compliance frameworks or requirements.
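The front-end/back-end split described above can be sketched as a pair of Puppet classes. The class and parameter names below are illustrative assumptions, not taken from the presentation:

```puppet
# "Front-end" role: the only class assigned to a node; it simply
# composes profiles and carries no implementation detail itself.
class role::secure_webserver {
  include profile::base_hardening
  include profile::apache
}

# "Back-end" profile: wraps the compliance implementation.
# The parameter default can be overridden per-node via Hiera.
class profile::base_hardening (
  Boolean $remove_telnet = true,
) {
  if $remove_telnet {
    package { 'telnet':
      ensure => absent,
    }
  }
}
```

Because nodes only ever reference roles, swapping in a new compliance framework means changing the profile implementations, not every node's classification.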
This document discusses Kinney Group's Puppet compliance framework for automating STIG compliance and reporting. It notes that customers often implement compliance Puppet code poorly or lack appropriate Puppet knowledge. The framework aims to standardize compliance modules that are data-driven and customizable. It addresses challenges like conflicting modules and keeping compliance current after implementation. The framework generates automated STIG checklists and plans future integration with Puppet Enterprise and Splunk for continued compliance reporting. Kinney Group cites practical experience implementing the framework for various military and government customers.
Enforce compliance policy with model-driven automation - Puppet
This document discusses model-driven automation for enforcing compliance. It begins with an overview of compliance benchmarks and the CIS benchmarks. It then discusses implementing benchmarks, common challenges around configuration drift and lack of visibility, and how to define compliance policy as code. The key points are that automation is essential for compliance at scale; a model-driven approach defines how a system should be configured and uses desired-state enforcement to keep systems compliant; and defining compliance policy as code, managing it with source control, and automating it with CI/CD helps achieve continuous compliance.
This document discusses how organizations can move from a reactive approach to compliance to a proactive approach using automation. It notes that over 50% of CIOs cite security and compliance as a barrier to IT modernization. Puppet offers an end-to-end compliance solution that allows organizations to automatically eliminate configuration drift, enforce compliance at scale across operating systems and environments, and define policy as code. The solution helps organizations improve compliance from 50% to over 90% compliant. The document argues that taking a proactive automation approach to compliance can turn it into a competitive advantage by improving speed and innovation.
Automating IT management with Puppet + ServiceNow - Puppet
As the leading IT Service Management and IT Operations Management platform in the marketplace, ServiceNow is used by many organizations to address everything from self service IT requests to Change, Incident and Problem Management. The strength of the platform is in the workflows and processes that are built around the shared data model, represented in the CMDB. This provides the ‘single source of truth’ for the organization.
Puppet Enterprise is a leading automation platform focused on the IT Configuration Management and Compliance space. Puppet Enterprise has a unique perspective on the state of systems being managed, constantly being updated and kept accurate as part of the regular Puppet operation. Puppet Enterprise is the automation engine ensuring that the environment stays consistent and in compliance.
In this webinar, we will explore how to maximize the value of both solutions, with Puppet Enterprise automating the actions required to drive a change, and ServiceNow governing the process around that change, from definition to approval. We will introduce and demonstrate several published integration points between the two solutions, in the areas of Self-Service Infrastructure, Enriched Change Management and Automated Incident Registration.
This document promotes Puppet as a tool for hardening Windows environments. It states that Puppet can be used to harden Windows with one line of code, detect drift from desired configurations, report on missing or changing requirements, reverse engineer existing configurations, secure IIS, and export configurations to the cloud. Benefits of Puppet mentioned include hardening Windows environments, finding drift for investigation, easily passing audits, compliance reporting, easy exceptions, and exporting configurations. It also directs users to Puppet Forge modules for securing Windows and IIS.
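The "one line of code" claim above typically refers to including a hardening class from a Forge module. A hedged sketch, where the module name is an assumption rather than the specific module the document points to:

```puppet
# Apply a baseline Windows hardening profile to a node.
# 'secure_windows' is a placeholder for a Forge hardening module.
include secure_windows
```

Once the class is applied, subsequent agent runs report any drift from the hardened baseline, which is what enables the drift detection and compliance reporting the document describes.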
Simplified Patch Management with Puppet, Oct. 2020 - Puppet
Does your company struggle with patching systems? If so, you’re not alone — most organizations have attempted to solve this issue by cobbling together multiple tools, processes, and different teams, which can make an already complicated issue worse.
Puppet helps keep hosts healthy, secure and compliant by replacing time-consuming and error prone patching processes with Puppet’s automated patching solution.
Join this webinar to learn how to do the following with Puppet:
Eliminate manual patching processes with pre-built patching automation for Windows and Linux systems.
Gain visibility into patching status across your estate regardless of OS with new patching solution from the PE console.
Ensure your systems are compliant and patched in a healthy state
How Puppet Enterprise makes patch management easy across your Windows and Linux operating systems.
Presented by: Margaret Lee, Product Manager, Puppet, and Ajay Sridhar, Sr. Sales Engineer, Puppet.
The document discusses how Puppet can be used to accelerate adoption of Microsoft Azure. It describes lift and shift migration of on-premises workloads to Azure virtual machines. It also covers infrastructure as code using Puppet and Terraform for provisioning, configuration management using Puppet Bolt, and implementing immutable infrastructure patterns on Azure. Integrations with Azure services like Key Vault, Blob Storage and metadata service are presented. Patch management and inventory of Azure resources with Puppet are also summarized.
This document discusses using Puppet Catalog Diff to analyze the impact of changes between Puppet environments or catalogs. It provides the command line usage and options for Puppet Catalog Diff. It also discusses how to integrate Puppet Catalog Diff into CI/CD pipelines for automated impact analysis when merging code changes. Additional resources like GitHub projects and Dev.to posts are provided for learning more about diffing Puppet environments and catalogs.
ServiceNow and Puppet - better together, Kevin Reeuwijk - Puppet
ServiceNow and Puppet can be integrated in four key areas: 1) Self-service infrastructure allows non-Puppet experts to control infrastructure through a ServiceNow interface; 2) Enriched change management automatically generates ServiceNow change requests from Puppet changes and populates them with impact details; 3) Automated incident registration forwards details of configuration drift corrections in Puppet to ServiceNow to create incidents; and 4) Up-to-date asset management would periodically upload Puppet inventory data to ServiceNow to keep the CMDB accurate without disruptive discovery runs.
This document discusses how Puppet Relay uses Tekton pipelines to orchestrate containerized workflows. It provides an overview of how Tekton fits into the Relay architecture, with Tekton controllers managing taskrun pods to execute workflow steps defined in YAML. Triggers can initiate workflows based on events, with reusable and composable steps for tasks like provisioning infrastructure or clearing resources. Relay also includes features for parameters, secrets, outputs, and approvals to customize workflows. An ecosystem of open source integrations provides sample workflows and steps for common use cases.
100% Puppet Cloud Deployment of Legacy Software - Puppet
This document discusses deploying legacy software into the AWS cloud using Puppet. It proposes modeling AWS resources like security groups, autoscaling groups, and launch configurations as Puppet resources. This would allow Puppet to provision the underlying AWS infrastructure and configure servers launched in autoscaling groups. It acknowledges challenges around server reboots but suggests they can be addressed. In summary, it argues custom Puppet resources can easily model AWS resources and using Puppet to configure autoscaling servers is possible despite some challenges around rebooting servers during deployment.
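Modeling AWS resources as Puppet resources can be sketched in the style of the puppetlabs-aws module; the type name, parameters, and values below are assumptions for illustration, not code from the presentation:

```puppet
# Declare an AWS security group as a Puppet resource, so the same
# desired-state model covers infrastructure as well as servers.
ec2_securitygroup { 'legacy-app-sg':
  ensure      => present,
  region      => 'us-east-1',
  description => 'Security group for the legacy application',
  ingress     => [{
    protocol => 'tcp',
    port     => 443,
    cidr     => '0.0.0.0/0',
  }],
}
```

Servers launched by an autoscaling group would then run the Puppet agent on boot to pick up their application configuration, which is the pattern the summary describes.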
This document discusses a partnership between Republic Polytechnic's School of Infocomm and Puppet to promote DevOps practices. It introduces several people involved with the partnership and outlines their mission to prepare more IT companies and individuals for jobs in the DevOps field through training courses. The document describes some short courses offered on DevOps topics and using the Puppet and Microsoft Azure platforms. It provides an example of how Republic Polytechnic has automated infrastructure configuration using Puppet to save time and reduce errors. There is a request at the end for readers to register their interest in DevOps by completing a survey.
This document discusses continuous compliance and DevSecOps best practices followed by financial services organizations.
Continuous compliance is defined as an ongoing process of proactive risk management that delivers predictable, transparent, and cost-effective compliance results. It involves continuously monitoring compliance controls, providing real-time alerts for failures and remediation recommendations, and maintaining up-to-date policies. Best practices for continuous compliance discussed include defining CIS controls and benchmarks, achieving transparent compliance dashboards and automated fixes for breaches.
DevSecOps is introduced as bringing security earlier in the application development lifecycle to minimize vulnerabilities. It aims to make everyone accountable for security. Challenges discussed include security teams struggling to keep up with DevOps pace and
The Dynamic Duo of Puppet and Vault tame SSL Certificates, Nick MaludyPuppet
The document discusses using Puppet and Vault together to dynamically manage SSL certificates. Puppet can use the vault_cert resource to request signed certificates from Vault and configure services to use the certificates. On Windows, some additional logic is needed to retrieve certificates' thumbprints and bind services to certificates using those thumbprints. This approach provides automated certificate renewal and distribution across platforms.
Webinar: Designing a schema for a Data WarehouseFederico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, which means, denormalised databases where each table represents a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
3. Applications do not answer business needs well
Long development cycle
Difficult to get clear specifications
Market can shift quickly
What the end user says he needs in January
What the dev team delivers in June
What the end user needs in June
8. Development processes are inefficient
Bugs are detected too late
Integration hell: Code → Other work → Start integration → TEST → Fix
A lot of wasted time
Request → Approve & Prioritize → Technical Assessment → Code & Test → Verify & Fix → Deploy
Processing times: 20 min, 2 min, 15 min, 2 h, 4 h, 3 min
Waiting times between steps: ½ week, 2 weeks, 2 weeks, 1 week, ½ week
Processing Time = 6 h 40 min
Waiting Time = 6 weeks
Adapted from Implementing Lean Software Development: From Concept to Cash, Mary & Tom Poppendieck.
11. The relationship between DEV and OPS can be “difficult”
Performance is not only related to hardware
« Make my website faster in Asia »
Application deployment is a nightmare
« Our application is too slow because of your servers »
Identical servers are always “slightly” different
OPS always say “no”
Standards do not evolve
10 deploys per day, Dev & ops cooperation at Flickr
John Allspaw & Paul Hammond (Velocity 2009)
16. Infrastructure is not very agile
6+ months to set up a new environment
Server “hoarding”
Resources are heavily shared
Most environments are underutilized
Production: infra setup, then deploy after deploy (utilization: 100%)
Preproduction (utilization: 10%)
Test (utilization: 40%)
22. Summary of the issues (Business, Development, Operations)
• Applications do not answer business needs well
• Too long to get new features
• Integration and bug fixing is painful
• A lot of wasted time
• Deployments are very painful
• A lot of misunderstanding
• Environment setup is too slow
• No on-demand resources
WHAT?
24. Agile Development
Agile Manifesto, 2001:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
(Diagram: AGILE connects Business and Development)
27. Continuous Integration
Maintain a single source repository
Automate the build
Make your build self-testing
Every commit should build on an integration machine
Keep the build fast
Test in a clone of the production environment
Make it easy for anyone to get the latest executable
Everyone can see what’s happening
Detect problems early and solve them quickly
(Diagram: CI connects Business and Development)
29. DevOps: bring the wall down
(Diagram: DEVOPS spans Business, Development, and Operations)
Measure, Analyze & Describe
Constraints (from DEV and OPS)
Best practices, methods
Automation: automate application delivery
Measure
Share
Culture: align objectives on business needs, innovate
31. Agile Infrastructure
Control resources: create, delete, start, stop servers (physical, virtual, in the cloud), storage volumes, networks
Configure resources: define system states when possible, verify system states, reconfigure systems when necessary
Deploy applications: provide a service to deploy applications, automated, with rollbacks
Everything exposed through automation APIs
36. Use case 1: core OS configuration
The server team uses Puppet as a configuration tool:
• Resolvers, time servers, standard packages
• Authentication, security
• Monitoring, …
1: Create modules, define variables (Hiera), assign classes to nodes (Console / ENC / API)
4: Puppet agents apply and test the catalog
Mcollective: get info on nodes, run the agent on a subset of nodes
This is the most common Puppet usage, proven for large-scale deployment, but very “infra oriented”: not opened to applications.
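To make use case 1 concrete, here is a minimal sketch of the kind of core-OS class the server team might maintain. The class name, default values, and package choices are invented for illustration:

```puppet
# Hypothetical core-OS class: resolvers, time service, standard packages.
# Class name, defaults, and package list are illustrative only.
class core_os (
  String $dns_server = '10.0.0.53',
  String $ntp_server = 'ntp.example.com',
) {
  # Resolvers
  file { '/etc/resolv.conf':
    ensure  => file,
    content => "nameserver ${dns_server}\n",
  }

  # Time service
  package { 'ntp': ensure => installed }
  file { '/etc/ntp.conf':
    ensure  => file,
    content => "server ${ntp_server}\n",
    require => Package['ntp'],
    notify  => Service['ntpd'],
  }
  service { 'ntpd':
    ensure => running,
    enable => true,
  }

  # Standard packages
  package { ['curl', 'vim-enhanced']: ensure => installed }
}
```

Hiera would supply the actual values per environment, and the console/ENC would assign the class to nodes.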
40. Use case 2: deploy applications
Developers supply:
• Binaries of the application
• Puppet manifests and modules describing deployments
OPS team:
• Chooses the servers (env) where the deployment should happen
• Runs Puppet and gathers reports; if a run fails, forwards it to DEV
Flow: 1: put binaries, manifests, and modules in a repository; 2: dev, test, and other environments get them; 4: run
Much more efficient than written deployment processes, and much easier to understand what fails.
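A developer-supplied deployment manifest for use case 2 could look roughly like the sketch below. The application name, version, repository URL, and paths are all made up:

```puppet
# Hypothetical deployment class shipped by a DEV team with its binaries.
# Application name, version, repository URL, and paths are illustrative.
class myapp::deploy (
  String $version = '1.4.2',
) {
  # Fetch the released binary from the artifact repository
  file { '/opt/myapp/myapp.jar':
    ensure => file,
    source => "http://repo.example.com/myapp/myapp-${version}.jar",
    notify => Service['myapp'],
  }

  # Environment-specific settings resolved from Hiera on the OPS-run master
  file { '/opt/myapp/app.conf':
    ensure  => file,
    content => "db_host=${lookup('myapp::db_host')}\n",
    notify  => Service['myapp'],
  }

  service { 'myapp':
    ensure => running,
    enable => true,
  }
}
```

Because the manifest is declarative, rolling back is mostly a matter of pinning $version to the previous release and re-running the agent.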
44. Using a “configuration service”
1: Give application teams the possibility to configure servers:
• Associate “profiles” to nodes, define variables
• Run configuration and get reports (via an API or a GUI)
2: Different levels of configuration, different responsibilities:
Base OS configuration → standard middleware → application middleware → application
46. Can we do that with Puppet?
SURE, but tricky with the classic DEV / OPS model: DEV cannot execute anything as root.
Some options:
1: Tool separation (base OS and standard middleware vs. application middleware and application): a second Puppet master, puppet apply (non-root), or another tool
2: OK to run as root, but under full control:
• Custom “profile” facts (facts.d) and Hiera
• Run with Mcollective (limited to some tags)
• Read-only console access
3: Many other ways
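One way to picture option 2, with every name invented for illustration: an external fact in facts.d tags the node with its application profile, and classification is driven purely by Hiera data that OPS control:

```puppet
# Hypothetical sketch of option 2: OPS keep root, DEV influence only data.
# All file names and keys below are illustrative.

# /etc/facter/facts.d/app_profile.txt (plain-text external fact):
#   app_profile=wordpress
#
# hiera.yaml could then contain a hierarchy level such as:
#   - "profiles/%{::app_profile}"

# site.pp: nodes are classified from Hiera data alone, so what actually
# runs as root stays under OPS control even when DEV edit their own level.
node default {
  hiera_include('classes')
}
```

DEV teams then change what gets applied to “their” nodes by editing the data for their profile level, without ever pushing code that the master executes unvetted.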
51. What if DEV need custom modules? (They will.)
Approach 1: OPS write all modules
DEV: “I need a mongodb module” → OPS write it → Version Control System → CI → Puppet master
Not efficient, impossible to scale.
Approach 2: Pull request
DEV write the mongodb module in their own Version Control System → pull request → OPS validate the module → Version Control System → CI → Puppet master
Very limited scalability.
Approach 3: DEV can push to some repositories
DEV push the mongodb module directly → Version Control System → CI → Puppet master
Complex permissions; DEV are still basically root.
62. From separation and control to shared responsibilities
Strict separation of roles:
• OPS provide env (storage / network, servers) and run production
• DEV provide the application and ask for env
Shared responsibilities:
• OPS provide programmable resources (storage, servers, network), provide advice, and delegate some production responsibility
• DEV provide the application, consume environments, and share responsibility
67. What it could look like with the profile/role pattern
“Designing Puppet: Roles/Profiles Design Pattern”, Puppet Camp Stockholm, Feb 2013 (Craig Dunn, Puppet Labs)
OPS provide core OS modules (ssh, ntp, dns, ldap)
OPS provide middleware modules (mysql, apache)
OPS provide a base profile (OS Base)
DEV create profiles using modules (Wordpress)
DEV create some custom modules (wordpress)
DEV & OPS define roles (Wordpress-server)
DEV & OPS define variables (Hiera)
DEV & OPS associate roles to nodes (classifier)
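Sketched as Puppet code, the layering above might look like this. The class bodies are invented; only the roles/profiles structure follows the pattern:

```puppet
# Illustrative roles/profiles layering. Module internals are hypothetical.

# OPS-owned base profile wrapping the core OS modules
class profile::base {
  include ssh
  include ntp
  include dns
  include ldap
}

# DEV-owned application profile, built on OPS middleware modules
# plus DEV's custom wordpress module
class profile::wordpress {
  include apache
  include mysql
  include wordpress
}

# Role defined jointly by DEV & OPS: one role per kind of server,
# and a node gets exactly one role
class role::wordpress_server {
  include profile::base
  include profile::wordpress
}

# The classifier (or site.pp) assigns the role to nodes
node 'web01.example.com' {
  include role::wordpress_server
}
```

The point of the extra layers is ownership: OPS can change profile::base without touching DEV code, and DEV can evolve profile::wordpress without root-level access to anything outside it.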
77. • Automate configuration
• Declare state, keep configuration on track
• Puppet syntax is very expressive
• Variable management with hiera is very efficient
Our feedback on puppet
Puppet is an amazing tool
78. • Automate configuration
• Declare state, keep configuration on track
• Puppet syntax is very expressive
• Variable management with hiera is very efficient
Time
Expectations
79-87. Our feedback on puppet

Puppet is an amazing tool:
• Automate configuration
• Declare state, keep configuration on track
• Puppet syntax is very expressive
• Variable management with Hiera is very efficient

[Chart: expectations plotted over time, tracing the adoption journey]
Puppet???
OK, looks interesting
First puppet apply
What the hell are:
* Modules (and classes)
* Hiera
* ERB
* Spaceships??
First modules
Wow, this is big
OK, not that simple
Too big! We are lost:
* Variables?
* Classification
* Module conflicts
Best practices:
* Roles / Profiles
* Variable location
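The concepts that surprised us early on (modules and classes, Hiera, ERB templates, and the "spaceship" resource collector) can be illustrated in one small sketch. This is a hypothetical module, not code from the talk; the `ntp` name, file paths, and Hiera key are all illustrative:

```puppet
# Assumed module layout:
#   modules/ntp/manifests/init.pp      <- this class
#   modules/ntp/templates/ntp.conf.erb <- ERB template
class ntp (
  # Resolved from Hiera via automatic parameter lookup (key: ntp::servers)
  Array[String] $servers = ['0.pool.ntp.org'],
) {
  package { 'ntp':
    ensure => installed,
  }

  file { '/etc/ntp.conf':
    ensure  => file,
    content => template('ntp/ntp.conf.erb'), # ERB sees $servers in scope
    require => Package['ntp'],               # explicit ordering
  }

  service { 'ntp':
    ensure    => running,
    subscribe => File['/etc/ntp.conf'],      # restart when the config changes
  }

  # The "spaceship" <| |> is Puppet's resource collector: it realizes
  # resources matching a query, e.g. any virtual user tagged 'ntp'.
  User <| tag == 'ntp' |>
}
```

Declaring state rather than scripting steps is what keeps configuration on track: every agent run converges the node back to this description.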
Our feedback on puppet

Puppet is an amazing tool.
You can do (almost) anything with Puppet, but:
• Setups can be complex
• There are many solutions to a single problem
• Use it for what it does best: try adapting your processes first
• Look for best practices
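The roles/profiles pattern mentioned among the best practices can be sketched as follows. Class and module names here are illustrative, not from the talk:

```puppet
# Profiles wrap and configure component modules.
class profile::webserver {
  class { 'nginx':             # assumes an nginx module is installed
    worker_processes => 4,
  }
}

class profile::monitoring {
  include zabbix_agent         # illustrative monitoring module
}

# A role composes profiles; each node gets exactly one role,
# which keeps classification trivial.
class role::frontend {
  include profile::webserver
  include profile::monitoring
}

# site.pp: node classification reduces to picking a role.
node /^web\d+/ {
  include role::frontend
}
```

The point of the pattern is exactly the problem listed above: it answers "where do variables live?" (in profiles and Hiera) and "how do we classify nodes?" (one role per node) before the codebase grows too big to navigate.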
88-90. Conclusion

The pace of innovation in IT is accelerating.
New time-to-market challenges will require continuous delivery.
We will not get continuous delivery without DevOps.
Puppet is an amazing DevOps tool and will help you.

But tools cannot do everything: Puppet is not a magic solution.
• Finding the best way to use Puppet for you will take time
• Providing a configuration service will be a challenge
• Processes will need to change

DEV and OPS roles are evolving, and organizations will need to adapt.