Puppet modules are a great way to reuse code, share your work with other people, and take advantage of the hundreds of modules already available in the community. But how can you create, test and publish them as easily as possible? Now that infrastructure is defined as code, we need to apply development best practices to build, test, deploy and use Puppet modules themselves. Three steps, sketched below, give a fully automated process:
* Continuous Integration of Puppet Modules
* Automatic release and upload to the Puppet Forge
* Deploy to Puppet master
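A minimal sketch of how these three steps can be wired together with Rake, assuming the puppetlabs_spec_helper and puppet-blacksmith gems are used (task names may differ between versions):

# Rakefile -- CI checks plus an automated release to the Puppet Forge
require 'puppetlabs_spec_helper/rake_tasks'  # provides the validate, lint and spec tasks
require 'puppet_blacksmith/rake_tasks'       # provides module:bump, module:tag, module:push

desc 'Checks the CI server runs on every commit'
task :ci => [:validate, :lint, :spec]

desc 'Bump the module version, tag the repository and push the module to the Forge'
task :release => ['module:bump', 'module:tag', 'module:push']

The CI server runs rake ci on every commit; once a build is green a release job can run rake release, and the Puppet master picks up the new version with whatever module deployment tool is in use (for example librarian-puppet or r10k).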
Puppet for Java developers - JavaZone NO 2012, by Carlos Sanchez
Example code at https://github.com/carlossg/puppet-for-java-devs
More info at http://blog.carlossanchez.eu/tag/devops
Video at http://vimeo.com/49483627
Puppet is an infrastructure-as-code tool that allows easy and automated provisioning of servers, defining packages, configuration, services and more in code. Enabling a DevOps culture, tools like Puppet help drive Agile development all the way to operations and systems administration; along with continuous integration tools like Jenkins, Puppet is a key piece in accomplishing repeatability and continuous delivery, automating the operations side during development, QA and production, and enabling testing of systems configuration.
Traditionally a field for system administrators, Puppet can also empower developers, allowing both groups to collaborate on coding the infrastructure needed for their projects, whether it runs on hardware, virtual machines or in the cloud. Developers and sysadmins can define which JDK version must be installed, which application server and version, configuration files, WAR and JAR files, and more, and easily make changes that propagate across all nodes.
Using Vagrant, a command-line automation layer for VirtualBox, they can also spin up virtual machines on their local machine, built from scratch with the same configuration as production servers, do development or testing, and tear them down afterwards.
We'll show how to install and manage Puppet nodes with a JDK, multiple application server instances with installed web applications, a database, configuration files and all the supporting services, including getting up and running with Vagrant and VirtualBox for quickstarts and Puppet experiments, as well as setting up automated testing of the Puppet code.
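For illustration, a minimal Vagrantfile sketch that provisions a local VirtualBox VM with the same Puppet manifests used for real nodes; the box name and paths are placeholders, not taken from the talk:

# Vagrantfile -- local VM provisioned by the Puppet apply provisioner
Vagrant.configure('2') do |config|
  config.vm.box = 'precise64'                 # any base box that matches the production OS
  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = 'manifests'       # the same manifests applied to production nodes
    puppet.manifest_file  = 'site.pp'
    puppet.module_path    = 'modules'
  end
end

vagrant up builds the machine from scratch and vagrant destroy tears it down when the experiment is over.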
More info at http://blog.carlossanchez.eu/tag/devops
The DevOps movement aims to improve communication between developers and operations teams to solve critical issues such as fear of change and risky deployments. But just as Agile development would likely fail without continuous integration tools, the DevOps principles need tools to make them real and to provide the automation required to actually implement them. Most of the so-called DevOps tools focus on the operations side, but there should be more than that: the automation must cover the full process, from Dev to QA to Ops, and be as automated and agile as possible. Tools in each part of the workflow have evolved in their own silos, with the support of their own target teams. But a true DevOps mentality requires a seamless process from the start of development to production deployment and maintenance, and for a process to be successful there must be tools that take the burden off humans.
Apache Maven has arguably been the most successful tool for development, project standardization and automation introduced in recent years. On the operations side we have open source tools like Puppet or Chef that are becoming increasingly popular for automating infrastructure maintenance and server provisioning.
In this presentation we will introduce an end-to-end development-to-production process that takes advantage of Maven and Puppet, each at its strong points, and of open source tools to automate the handover between them, providing continuous build and deployment, and continuous delivery, from source code to any number of application servers managed with Puppet, running either on physical hardware or in the cloud, and handling new continuous integration builds and releases automatically through several stages and environments such as development, QA and production.
More info at http://blog.carlossanchez.eu/tag/devops
Video en español: http://youtu.be/E_OE4l3t5BA
More info at http://blog.carlossanchez.eu/2011/11/15/from-dev-to-devops-slides-from-apachecon-na-vancouver-2011/
From Dev to DevOps - Apache Barcamp Spain 2011, by Carlos Sanchez
UPDATE: updated slides at http://www.slideshare.net/carlossg/from-dev-to-devops-conferencia-agile-spain-2011
Continuous Delivery with Maven, Puppet and Tomcat - ApacheCon NA 2013, by Carlos Sanchez
Continuous Integration, with Apache Continuum or Jenkins, can be extended to fully manage deployments and production environments, running in Tomcat for instance, in a full Continuous Delivery cycle using infrastructure-as-code tools like Puppet, which make it possible to manage multiple servers and their configurations.
Puppet is an infrastructure-as-code tool that allows easy and automated provisioning of servers, defining packages, configuration, services and more in code. Enabling a DevOps culture, tools like Puppet help drive Agile development all the way to operations and systems administration; along with continuous integration tools like Apache Continuum or Jenkins, Puppet is a key piece in accomplishing repeatability and continuous delivery, automating the operations side during development, QA and production, and enabling testing of systems configuration.
Traditionally a field for system administrators, Puppet can also empower developers, allowing both groups to collaborate on coding the infrastructure needed for their projects, whether it runs on hardware, virtual machines or in the cloud. Developers and sysadmins can define which JDK version must be installed, which application server and version, configuration files, WAR and JAR files, and more, and easily make changes that propagate across all nodes.
Using Vagrant, a command-line automation layer for VirtualBox, they can also spin up virtual machines on their local machine, built from scratch with the same configuration as production servers, do development or testing, and tear them down afterwards.
We will show how to install and manage Puppet nodes with a JDK, multiple Tomcat instances with installed web applications, a database, configuration files and all the supporting services, including getting up and running with Vagrant and VirtualBox for quickstarts and Puppet experiments, as well as setting up automated testing of the Puppet code.
Code testing and Continuous Integration are just the first step in a source-code-to-production process. Combined with infrastructure-as-code tools such as Puppet, the whole process can be automated, and tested!
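As a taste of what testing the Puppet code looks like, here is a minimal rspec-puppet sketch; the profile::tomcat class and its instances parameter are hypothetical names used only for illustration:

# spec/classes/tomcat_spec.rb -- hypothetical example, not from the talk
require 'spec_helper'

describe 'profile::tomcat' do
  let(:params) { { :instances => 2 } }   # hypothetical class parameter

  it { should contain_package('tomcat') }
  it { should contain_service('tomcat').with('ensure' => 'running') }
end

The spec compiles the catalog for the class and asserts on the resources it should contain, without touching a real server.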
Preparation study for a Docker event
Mulodo Open Study Group (MOSG) @ Ho Chi Minh City, Vietnam
http://www.meetup.com/Open-Study-Group-Saigon/events/229781420/
Raphaël Pinson's talk on "Configuration surgery with Augeas" at PuppetCamp Geneva '12. Video at http://youtu.be/H0MJaIv4bgk
Learn more: www.puppetlabs.com
PuppetCamp SEA 1 - Puppet Deployment at OnApp, by Walter Heck
Wai Keen Woon, CTO of the CDN Division at OnApp Malaysia, gave an interesting overview of what the Puppet architecture at OnApp looks like. The CDN division at OnApp is a large provider of CDN services, and as such makes a very interesting candidate for a case study.
Walter Heck, founder of OlinData, presented a step-by-step guide on how to set up a proper Puppet repository, complete with the brand-new PuppetDB, exported resources and use of open source modules.
Test-Driven Infrastructure with Ansible, Test Kitchen, Serverspec and RSpec, by Martin Etmajer
The goal of Continuous Delivery is, briefly, to get features into your users' or customers' hands as quickly and confidently as possible. In order to succeed, Development and Operations teams need to align and come up with both working and deployable software in short, regular intervals. Chef, Puppet, Ansible & Co. enable teams to code up application runtime environments, but alone do not allow for building quality into their processes. In this presentation I will show how you can apply the "Red, Green, Refactor Cycle" of Test-Driven Development and combine it with your configuration management or orchestration tool of choice in order to come up with better infrastructure that can automatically be tested using Ansible, Test Kitchen, Docker, Serverspec and RSpec.
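For context, a minimal Serverspec sketch of the kind of check such a red-green-refactor loop drives; the package and port used here are illustrative, not taken from the talk:

# spec/default/webserver_spec.rb -- assumed example target, any service works the same way
require 'serverspec'
set :backend, :exec        # run the expectations directly on the machine under test

describe package('nginx') do
  it { should be_installed }
end

describe service('nginx') do
  it { should be_enabled }
  it { should be_running }
end

describe port(80) do
  it { should be_listening }
end

Write the failing spec first, let the configuration management tool converge the machine, then re-run the spec until it is green.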
Presentation on how Puppet has been introduced at Seat Pagine Gialle to automate system administration tasks and ease the cooperation between Ops and other teams.
Infrastructure testing with Jenkins, Puppet and Vagrant - Agile Testing Days ..., by Carlos Sanchez
Extend Continuous Integration to automatically test your infrastructure.
Continuous Integration can be extended to test deployments and production environments, in a Continuous Delivery cycle, using infrastructure-as-code tools like Puppet, which make it possible to manage multiple servers and their configurations and to test the infrastructure the same way continuous integration tools test developers' code.
Puppet is an infrastructure-as-code tool that allows easy and automated provisioning of servers, defining packages, configuration, services and more in code. Enabling a DevOps culture, tools like Puppet help drive Agile development all the way to operations and systems administration; along with continuous integration tools like Jenkins, Puppet is a key piece in accomplishing repeatability and continuous delivery, automating the operations side during development, QA and production, and enabling testing of systems configuration.
Using Vagrant, a command-line automation layer for VirtualBox, we can easily spin up virtual machines with the same configuration as production servers, run our test suite, and tear them down afterwards.
We will show how to set up automated testing of an application and associated infrastructure and configurations, creating on demand virtual machines for testing, as part of your continuous integration process.
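One way to wire this into a CI job is a small Rake task that brings up a clean Vagrant VM, runs the test suite against it and always tears it down; the task and commands below are a sketch under those assumptions, not the exact setup shown in the talk:

# Rakefile -- acceptance test against a throwaway Vagrant VM
desc 'Provision a clean VM, run the tests, then destroy it'
task :acceptance do
  sh 'vagrant up --provision'     # apply the same Puppet manifests as production
  begin
    sh 'rake spec'                # placeholder for the project test suite
  ensure
    sh 'vagrant destroy -f'       # always remove the VM, even if the tests fail
  end
end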
We all love infrastructure as code, we automate everything™, but how many of us can really say we could destroy and recreate our core infrastructure without human intervention? Can you be sure there isn't a DNS problem, or that all the things™ are done in the right order? This talk walks the audience through a greenfield exercise that sets up service discovery using Consul and infrastructure as code using Terraform, with images built with Packer and configured using Puppet.
Through the magic of virtualization technology (Vagrant) and Puppet, a companion enterprise-grade provisioning technology, we explore how to make the complex configuration game a walk in the park. Bring new team members up to speed in minutes, eliminate variances in configurations, and make integration issues a thing of the past.
Welcome to the new age of team development!
Similar to How to Develop Puppet Modules: From Source to the Forge With Zero Clicks (20)
Using Containers for Continuous Integration and Continuous Delivery. KubeCon ..., by Carlos Sanchez
Building and testing is a great use case for containers, both due to the dynamic and isolation aspects, but it increases complexity when scaling to multiple nodes and clusters.
Jenkins is an example of an application that can take advantage of Kubernetes technology to run Continuous Integration and Continuous Delivery workloads. Jenkins and Kubernetes can be integrated to transparently use on demand containers to run build agents and jobs, and isolate job execution. It also supports CI/CD-as-code using Jenkins Pipelines and automated deployments to Kubernetes clusters. The presentation will allow a better understanding of how to use Jenkins on Kubernetes for container based, totally dynamic, large scale CI and CD.
Using Kubernetes for Continuous Integration and Continuous Delivery, by Carlos Sanchez
Learn how to scale your Continuous Integration and Continuous Delivery environment using containers. The Kubernetes project provides a container orchestration solution that greatly simplifies app deployments in large clusters and you can use Jenkins and Kubernetes together to run jobs on-demand.
Building and testing is a great use case for containers, both due to the dynamic and isolation aspects, but it increases complexity when scaling to multiple nodes and clusters.
Jenkins is an example of an application that can take advantage of Kubernetes technology to run Continuous Integration and Continuous Delivery workloads. Jenkins and Kubernetes can be integrated to transparently use on demand containers to run build agents and jobs, and isolate job execution. It also supports CI/CD-as-code using Jenkins Pipelines and automated deployments to Kubernetes clusters. The presentation will allow a better understanding of how to use Jenkins on Kubernetes for container based, totally dynamic, large scale CI and CD.
Divide and Conquer: Easier Continuous Delivery using Micro-Services, by Carlos Sanchez
Docker has revolutionized the way people think about applications and deployments. It provides a simple way to run and distribute Linux containers for a variety of use cases, from lightweight virtual machines to complex distributed micro-services architectures.
Containers make it possible to run services in isolation with a minimal performance penalty, increased speed, easier configuration and less complexity, making them ideal for continuous integration and continuous delivery workloads.
But testing a distributed micro-services architecture is no easy task, requiring a shift in mindset and tooling to accommodate the new architecture.
We will provide insight from our experience creating a Jenkins platform based on distributed Docker containers running on Apache Mesos and Marathon, applicable to all types of applications, but especially Java and JVM-based ones.
Using Kubernetes for Continuous Integration and Continuous Delivery. Java2days, by Carlos Sanchez
Learn how to scale your Continuous Integration and Continuous Delivery environment using containers. The Kubernetes project provides a container orchestration solution that greatly simplifies app deployments in large clusters and you can use Jenkins and Kubernetes together to run jobs on-demand.
Building and testing is a great use case for containers, both due to the dynamic and isolation aspects, but it increases complexity when scaling to multiple nodes and clusters.
Jenkins is an example of an application that can take advantage of Kubernetes technology to run Continuous Integration and Continuous Delivery workloads. Jenkins and Kubernetes can be integrated to transparently use on demand containers to run build agents and jobs, and isolate job execution. It also supports CI/CD-as-code using Jenkins Pipelines and automated deployments to Kubernetes clusters. The presentation will allow a better understanding of how to use Jenkins on Kubernetes for container based, totally dynamic, large scale CI and CD.
Using Containers for Continuous Integration and Continuous Delivery, by Carlos Sanchez
Building and testing is a great use case for containers, both due to the dynamic and isolation aspects, but it increases complexity when scaling to multiple nodes and clusters.
However, the Kubernetes project provides a container orchestration solution that greatly simplifies app deployments in large clusters and makes it possible to dynamically run any containerized workload. Jenkins is an example of an application that can take advantage of such technology to run Continuous Integration and Continuous Delivery workloads.
The Jenkins Kubernetes plugin can transparently use on demand containers to run build agents and jobs, and isolate job execution. It also supports CI/CD-as-code using Jenkins Pipelines.
The presentation will allow a better understanding of Kubernetes and of how to use Jenkins on Kubernetes for container-based, large-scale CI/CD, also showing the challenges of running distributed applications (particularly JVM apps).
Divide and Conquer: Easier Continuous Delivery using Micro-Services, by Carlos Sanchez
Docker has revolutionized the way people think about applications and deployments. It provides a simple way to run and distribute Linux containers for a variety of use cases, from lightweight virtual machines to complex distributed micro-services architectures.
Containers make it possible to run services in isolation with a minimal performance penalty, increased speed, easier configuration and less complexity, making them ideal for continuous integration and continuous delivery workloads. But testing a distributed micro-services architecture is no easy task, requiring a shift in mindset and tooling to accommodate the new architecture.
We will provide insight from our experience creating a Jenkins platform based on distributed Docker containers running on Apache Mesos and Marathon, applicable to all types of applications, but especially Java and JVM-based ones.
Using Containers for Building and Testing: Docker, Kubernetes and Mesos. FOSD..., by Carlos Sanchez
Building and testing is a great use case for containers, both due to the dynamic and isolation aspects, but running on just one machine is not enough, and it quickly needs to scale to a clustered setup. But which cluster technology should be used? Docker Swarm? Apache Mesos? Kubernetes? How do they compare? All of them can be used to dynamically run a cluster of containers.
The Jenkins platform is an example of scaling dynamically by using several Docker cluster and orchestration platforms, using containers to run build agents and jobs, and also to isolate job execution.
This talk will cover these main container clusters, outlining the pros and cons, the current state of the art of the technologies and Jenkins support.
The presentation will allow a better understanding of using Docker in the main Docker cluster/orchestration platforms out there (Docker Swarm, Apache Mesos, Kubernetes), sharing my experience and helping people decide which one to use, going through Jenkins examples and current support.
Testing Distributed Micro Services. Agile Testing Days 2017, by Carlos Sanchez
Docker is revolutionizing the way people think about applications and deployments. It provides a simple way to run and distribute Linux containers for a variety of use cases, from lightweight virtual machines to complex distributed micro-services architectures.
Containers make it possible to run services in isolation with a minimal performance penalty, increased speed, easier configuration and less complexity, making them ideal for continuous integration and continuous delivery workloads.
But testing a distributed micro-services architecture is no easy task, requiring a shift in mindset and tooling to accommodate the new architecture.
We will provide insight from our experience creating a Jenkins platform based on distributed Docker containers running on Apache Mesos and Marathon, applicable to all types of applications, but especially Java and JVM-based ones.
CI and CD at Scale: Scaling Jenkins with Docker and Apache Mesos, by Carlos Sanchez
In this presentation Carlos Sanchez will share his experience running Jenkins at scale, using Docker and Apache Mesos to create one of the biggest (if not the biggest) Jenkins clusters to date.
By taking advantage of Apache Mesos, the Jenkins platform is dynamically scaled to run jobs across hundreds of Jenkins masters, on Docker containers distributed across the Mesos cluster. Jenkins slaves are dynamically created based on load, using the Jenkins Mesos and Docker plugins, running in containers distributed across multiple hosts, and isolating job execution.
This presentation will allow a better understanding of Apache Mesos and the challenges of running Docker containerized and distributed applications, particularly JVM ones, by sharing a real world use case, including good and bad decisions and how they affected the development.
From Monolith to Docker Distributed Applications, by Carlos Sanchez
Docker is revolutionizing the way people think about applications and deployments. It provides a simple way to run and distribute Linux containers for a variety of use cases, from lightweight virtual machines to complex distributed microservice architectures. But migrating an existing Java application to a distributed microservice architecture is no easy task, requiring a shift in the software development, networking, and storage to accommodate the new architecture. This presentation provides insights into the experience of the speaker and his colleagues in creating a Jenkins platform based on distributed Docker containers running on Apache Mesos and Marathon and applicable to all types of applications, especially Java- and JVM-based ones.
From Monolith to Docker Distributed Applications. JavaOne, by Carlos Sanchez
Docker is revolutionizing the way people think about applications and deployments. It provides a simple way to run and distribute Linux containers for a variety of use cases, from lightweight virtual machines to complex distributed microservice architectures. But migrating an existing Java application to a distributed microservice architecture is no easy task, requiring a shift in the software development, networking, and storage to accommodate the new architecture. This presentation provides insights into the experience of the speaker and his colleagues in creating a Jenkins platform based on distributed Docker containers running on Apache Mesos and Marathon and applicable to all types of applications, especially Java- and JVM-based ones.
Scaling Jenkins with Docker: Swarm, Kubernetes or Mesos?, by Carlos Sanchez
The Jenkins platform can be dynamically scaled by using several Docker cluster and orchestration platforms, using containers to run slaves and jobs and also isolating job execution. But which cluster technology should be used? Docker Swarm? Apache Mesos? Kubernetes? How do they compare? All of them can be used to dynamically run jobs inside containers. This talk will cover these main container clusters, outlining the pros and cons of each, the current state of the art of the technologies and Jenkins support.
From Monolith to Docker Distributed Applications, by Carlos Sanchez
Docker is revolutionizing the way people think about applications and deployments. It provides a simple way to run and distribute Linux containers for a variety of use cases, from lightweight virtual machines to complex distributed micro-services architectures.
Containers make it possible to run services in isolation with a minimal performance penalty, increased speed, easier configuration and less complexity, making them ideal for continuous integration and continuous delivery workloads. But migrating an existing application to a distributed microservices architecture is no easy task, requiring a shift in software development, networking and storage to accommodate the new architecture.
We will provide insight from our experience creating a Jenkins platform based on distributed Docker containers running on Apache Mesos and Marathon, applicable to all types of applications, but especially Java and JVM-based ones.
Scaling Jenkins with Docker and Kubernetes, by Carlos Sanchez
Docker is revolutionizing the way people think about applications and deployments. It provides a simple way to run and distribute Linux containers for a variety of use cases, from lightweight virtual machines to complex distributed micro-services architectures. Kubernetes is an open source project to manage a cluster of Linux containers as a single system, managing and running Docker containers across multiple Docker hosts, offering co-location of containers, service discovery and replication control. It was started by Google and now it is supported by Microsoft, RedHat, IBM and Docker Inc amongst others. Jenkins Continuous Integration environment can be dynamically scaled by using the Kubernetes and Docker plugins, using containers to run slaves and jobs, and also isolate job execution.
Linux containers, and Docker specifically, have revolutionized the way applications are run at scale, but testing can greatly benefit from those technologies too. Containers make it possible to run tests in isolation with a minimal performance penalty, increased speed compared to virtual-machine-based tests, easier configuration and less complexity for integration testing. Testing with containers allows running tests in a new, clean environment for each execution, minimizing false positives and environment corruption. At the same time it allows reusing container clusters to run development, testing and production workloads. You will learn to effectively use Jenkins with Docker and Kubernetes, a multi-host Docker clustering technology, to run your Jenkins jobs in isolated containers for each execution at scale.
http://www.agiletestingdays.com/session/using-docker-for-testing/
XP Days Ukraine 2015 Talk http://xpdays.com.ua/programs/scaling-docker-with-kubernetes/
Kubernetes is an open source project to manage a cluster of Linux containers as a single system, managing and running Docker containers across multiple Docker hosts, offering co-location of containers, service discovery and replication control. It was started by Google and now it is supported by Microsoft, RedHat, IBM and Docker Inc amongst others.
Once you are using Docker containers, the next question is how to scale and start containers across multiple Docker hosts, balancing the containers across them. Kubernetes also adds a higher-level API to define how containers are logically grouped, making it possible to define pools of containers, load balancing and affinity.
Scaling Jenkins with Docker and Kubernetes, by Carlos Sanchez
Docker is revolutionizing the way people think about applications and deployments. It provides a simple way to run and distribute Linux containers for a variety of use cases, from lightweight virtual machines to complex distributed micro-services architectures. Kubernetes is an open source project to manage a cluster of Linux containers as a single system, managing and running Docker containers across multiple Docker hosts, offering co-location of containers, service discovery and replication control. It was started by Google and now it is supported by Microsoft, RedHat, IBM and Docker Inc amongst others. Jenkins Continuous Integration environment can be dynamically scaled by using the Kubernetes and Docker plugins, using containers to run slaves and jobs, and also isolate job execution.
DevoxxFR 2015 Talk http://cfp.devoxx.fr/2015/talk/WXY-1157/Scaling_Docker_with_Kubernetes
Kubernetes is an open source project to manage a cluster of Linux containers as a single system, managing and running Docker containers across multiple Docker hosts, offering co-location of containers, service discovery and replication control. It was started by Google and now it is supported by Microsoft, RedHat, IBM and Docker Inc amongst others.
Once you are using Docker containers, the next question is how to scale and start containers across multiple Docker hosts, balancing the containers across them. Kubernetes also adds a higher-level API to define how containers are logically grouped, making it possible to define pools of containers, load balancing and affinity.
Enterprise Build And Test In The Cloud
JavaOne 2009 San Francisco
http://www.carlossanchez.eu
Building and testing software can be a time- and resource-consuming task. Cloud computing/on-demand services such as Amazon EC2 provide a cost-effective way to scale applications and, for building and testing software, can reduce the time needed to find and correct problems, meaning a cost reduction as well.
Properly configuring your build tools (Maven, Ant, ...), continuous integration servers (Continuum, CruiseControl, ...) and testing tools (TestNG, Selenium, ...) can enable you to run the whole build and test process in a cloud environment, simulating high-load environments, distributing long-running tests to reduce their execution time, using different environments for client or server applications, and so on -- and in the case of on-demand services such as Amazon EC2, paying only for the time you use it.
In this presentation we will introduce a development process and architecture using popular open source tools for the build and test process, such as Apache Maven or Ant for building, Apache Continuum as the continuous integration server, and TestNG and Selenium for testing, and show how to configure them to achieve the best results and performance in several typical use cases (long-running testing processes, different client platforms, ...) by using the Amazon Elastic Compute Cloud (EC2), thereby reducing time and costs compared to other solutions.
16. rspec-puppet
require 'spec_helper'

describe 'maven::maven' do
  context "when downloading maven from another repo" do
    let(:params) { { :repo => {
      'url'      => 'http://repo1.maven.org/maven2',
      'username' => 'u',
      'password' => 'p'
    } } }
    it 'should fetch maven with username and password' do
      should contain_wget__authfetch('fetch-maven').with(
        'source'   => 'http://repo1.maven.org/...ven-3.0.5-bin.tar.gz',
        'user'     => 'u',
        'password' => 'p')
    end
  end
end
17. hosts
node 'agent' inherits 'parent' {
  include wget
  include maestro::test::dependencies
  include maestro_nodes::agentrvm
}
18. hosts
require 'spec_helper'

describe 'agent' do
  it do should contain_class('maestro::agent').with(
    'agent_name' => 'agent-01',
    'stomp_host' => 'maestro.maestrodev.net')
  end
  it { should_not contain_service('maestro') }
  it { should_not contain_service('activemq') }
  it { should_not contain_service('jenkins') }
  it { should_not contain_service('postgresqld') }
  it { should_not contain_service('maestro-test-hub') }
  it { should_not contain_service('sonar') }
  it { should_not contain_service('archiva') }
end
19. rspec-puppet with facts
require 'spec_helper'

describe 'wget' do
  context 'running on OS X' do
    let(:facts) { {:operatingsystem => 'Darwin'} }
    it { should_not contain_package('wget') }
  end
  context 'running on CentOS' do
    let(:facts) { {:operatingsystem => 'CentOS'} }
    it { should contain_package('wget') }
  end
  context 'no version specified' do
    it { should contain_package('wget').with_ensure('installed') }
  end
  context 'version is 1.2.3' do
    let(:params) { {:version => '1.2.3'} }
    it { should contain_package('wget').with_ensure('1.2.3') }
  end
end
21. extending for reuse
describe 'maestro::maestro' do
  include_context :centos
  let(:facts) { super().merge({
    :operatingsystem => 'RedHat'
  })}
end
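The :centos shared context referenced above has to be defined somewhere the specs can see it, typically in spec_helper.rb. A minimal sketch of what it could look like (the fact names and values here are only illustrative) is:

# spec/spec_helper.rb -- hypothetical shared context providing base CentOS facts
shared_context :centos do
  let(:facts) { {
    :operatingsystem        => 'CentOS',
    :osfamily               => 'RedHat',
    :operatingsystemrelease => '6.4'
  } }
end

Any spec that does include_context :centos inherits these facts and can override individual entries with super().merge, as in the example above.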
22. shared_examples
require 'spec_helper'

describe 'nginx::package' do
  shared_examples 'redhat' do |operatingsystem|
    let(:facts) {{ :operatingsystem => operatingsystem }}
    it { should contain_package('nginx') }
    it { should contain_package('gd') }
    it { should contain_package('libXpm') }
    it { should contain_package('libxslt') }
    it { should contain_yumrepo('nginx-release').with_enabled('1') }
  end
  shared_examples 'debian' do |operatingsystem|
    let(:facts) {{ :operatingsystem => operatingsystem }}
    it { should contain_file('/etc/apt/sources.list.d/nginx.list') }
  end
23. shared_examples (cont.)
  context 'RedHat' do
    it_behaves_like 'redhat', 'centos'
    it_behaves_like 'redhat', 'fedora'
    it_behaves_like 'redhat', 'rhel'
    it_behaves_like 'redhat', 'redhat'
    it_behaves_like 'redhat', 'scientific'
  end
  context 'debian' do
    it_behaves_like 'debian', 'debian'
    it_behaves_like 'debian', 'ubuntu'
  end
  context 'other' do
    let(:facts) {{ :operatingsystem => 'xxx' }}
    it { expect { subject }.to raise_error(Puppet::Error,
      /Module nginx is not supported on xxx/) }
  end
end
26. Rakefile
require 'puppetlabs_spec_helper/rake_tasks'

build            # Build puppet module package
clean            # Clean a built module package
coverage         # Generate code coverage information
lint             # Check puppet manifests with puppet-lint
spec             # Run spec tests in a clean fixtures directory
spec_clean       # Clean up the fixtures directory
spec_prep        # Create the fixtures directory
spec_standalone  # Run spec tests on an existing fixtures directory
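Those tasks only need the gem on the load path; a minimal Gemfile sketch to go with this Rakefile (the gem list is illustrative) could be:

# Gemfile -- pulls in the helper that defines the rake tasks above
source 'https://rubygems.org'

gem 'rake'
gem 'puppet'
gem 'rspec-puppet'
gem 'puppetlabs_spec_helper'
gem 'puppet-lint'

With that in place, bundle install && bundle exec rake spec runs the whole suite in a clean fixtures directory.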
31. Puppetfile
forge 'http://forge.puppetlabs.com'
mod 'maestrodev/activemq', '>=1.0'
mod 'saz/limits', ">=2.0.1"
mod 'maestrodev/maestro_nodes', '>=1.1.0'
mod 'maestrodev/maestro_demo', '>=1.0.2'
mod 'maestrodev', :path => './private_modules/maestrodev'
mod 'nginx', :git => 'https://github.com/jfryman/puppet-nginx.git'
34. librarian-puppet
clean     # Cleans out the cache and install paths.
init      # Initializes the current directory
install   # Resolves and installs all of the dependencies you specify
outdated  # Lists outdated dependencies.
package   # Cache the puppet modules in vendor/puppet/cache
show      # Shows dependencies
update    # Updates and installs the dependencies you specify
35. librarian-puppet for fixtures
# use librarian-puppet to manage fixtures instead of .fixtures.yml.
# Offers more possibilities like explicit version management,
# forge downloads,...
task :librarian_spec_prep do
  sh "librarian-puppet install --path=spec/fixtures/modules/"
end
task :spec_prep => :librarian_spec_prep
39. Vagrantfile
Vagrant.configure("2") do |config|
config.vm.synced_folder ".", "/etc/puppet/modules/rvm"
# install the epel module needed for rvm in CentOS
config.vm.provision :shell, :inline => "test -d /etc/puppet/modules/epel || puppet
module install stahnma/epel -v 0.0.3"
config.vm.provision :puppet do |puppet|
puppet.manifests_path = "tests"
puppet.manifest_file = "init.pp"
end
config.vm.define :centos63 do |config|
config.vm.box = "CentOS-6.3-x86_64-minimal"
config.vm.box_url = "https://repo.maestrodev.com/archiva/repository/public-releases/
com/maestrodev/vagrant/CentOS/6.3/CentOS-6.3-x86_64-minimal.box"
end
config.vm.define :centos64 do |config|
config.vm.box = "CentOS-6.4-x86_64-minimal"
config.vm.box_url = "https://repo.maestrodev.com/archiva/repository/public-releases/
com/maestrodev/vagrant/CentOS/6.4/CentOS-6.4-x86_64-minimal.box"
end
end
40. Rakefile
desc "Integration test with Vagrant"
task :integration do
sh %{vagrant destroy --force}
sh %{vagrant up}
sh %{vagrant destroy --force}
end
41. Rakefile
# start one at a time
desc "Integration test with Vagrant"
task :integration do
  sh %{vagrant destroy --force}
  ["centos63", "centos64"].each do |vm|
    sh %{vagrant up #{vm}}
    sh %{vagrant destroy --force #{vm}}
  end
  sh %{vagrant destroy --force}
end
47. Rake
module:bump         # Bump module version to the next minor
module:bump_commit  # Bump version and git commit
module:clean        # Runs clean again
module:push         # Push module to the Puppet Forge
module:release      # Release the Puppet module, doing a clean, build,
                    # tag, push, bump_commit and git push
module:tag          # Git tag with the current module version
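These module:* tasks are not part of puppetlabs_spec_helper; assuming they come from the puppet-blacksmith gem (a gem that provides Forge release tasks), the Rakefile only needs one extra require. A sketch:

# Rakefile -- a sketch, assuming the module:* tasks come from puppet-blacksmith
require 'puppetlabs_spec_helper/rake_tasks'
require 'puppet_blacksmith/rake_tasks'

# Forge credentials typically live in ~/.puppetforge.yml (url, username, password)

rake module:release then chains clean, build, tag, push, bump_commit and a git push, as listed above.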
55. Rakefile (cont.)
desc "Integration test with Vagrant"
task :integration do
sh %{vagrant destroy --force}
failed = []
["centos64", "debian6"].each do |vm|
sh %{vagrant up #{vm}} do |ok|
if ok
sh %{vagrant destroy --force #{vm}}
else
failed << vm
end
end
end
fail("Machines failed to start: #{failed.join(', ')}")
end
70. Auto update
Automatically update all the modules and tell me if it’s broken.
Bonus point: automatically edit the Gemfile, Puppetfile and Modulefile constraints.
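A minimal sketch of what such a job could look like as a rake task run from CI (the task name and wiring are hypothetical; it simply chains librarian-puppet with the existing spec task):

# Rakefile -- hypothetical "auto update" task for a CI job
desc "Update all module dependencies and run the tests"
task :auto_update do
  sh %{librarian-puppet update}   # bump Puppetfile.lock to the newest allowed versions
  Rake::Task[:spec].invoke        # fail the build if any module broke
end

Anything beyond that, such as rewriting the version constraints themselves in the Gemfile, Puppetfile or Modulefile, would need extra scripting.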
72. Photo Credits
Brick wall - Luis Argerich
http://www.flickr.com/photos/lrargerich/4353397797/
Agile vs. Iterative flow - Christopher Little
http://en.wikipedia.org/wiki/File:Agile-vs-iterative-flow.jpg
DevOps - Rajiv.Pant
http://en.wikipedia.org/wiki/File:Devops.png
Pimientos de Padron - Howard Walfish
http://www.flickr.com/photos/h-bomb/4868400647/
Compiling - XKCD
http://xkcd.com/303/
Printer in 1568 - Meggs, Philip B
http://en.wikipedia.org/wiki/File:Printer_in_1568-ce.png
Relativity - M. C. Escher
http://en.wikipedia.org/wiki/File:Escher%27s_Relativity.jpg
Teacher and class - Herald Post
http://www.flickr.com/photos/heraldpost/5169295832/