Walter Heck, founder of OlinData, presented a step-by-step guide on how to set up a proper puppet repository, complete with the brand new PuppetDB, exported resources and usage of open source modules.
PuppetCamp SEA 1 - Puppet Deployment at OnApp (Walter Heck)
Wai Keen Woon, CTO CDN Division OnApp Malaysia, gave an interesting overview of what the Puppet architecture at OnApp looks like. The CDN division at OnApp is a large provider of CDN services, and as such makes a very interesting candidate for a case study.
A bit of history, frustration-driven development, and why and how we started looking into Puppet at Opera Software. What we're doing, successes, pain points and what we're going to do with Puppet and Config Management next.
This is a new version of a talk I presented at a Varnish Users Group meeting in Paris in 2012. We've added a few useful tools and improved our Puppet module since then.
Presented at the Devops Norway meetup in Oslo on 17th of September 2014.
“warpdrive”, making Python web application deployment magically easy (Graham Dumpleton)
Ask a beginner to deploy a Python web application and they will often complain that it is too hard. Although we have standards for how a Python web application should interface with a web server, the web servers for Python all work differently, each with a myriad of options, and all are difficult to set up properly.
In this talk you will be given a preview of a project called 'warpdrive', a project being developed to simplify the process of deploying a Python web application.
The 'warpdrive' project makes it easy to run your Python web application on your own system, but it can also create a Docker image for your application, providing you with an easy path to deploying it on a Docker service.
How 'warpdrive' works is also compatible with next generation Platform as a Service (PaaS) offerings such as the latest OpenShift, which has been reimplemented around Docker and Kubernetes.
See how working on and deploying your Python web application could be made so much easier using 'warpdrive'.
Using Puppet to Create a Dynamic Network - PuppetConf 2013 (Puppet)
"Using Puppet to Create a Dynamic Network" by Thomas Uphill, Infrastructure Analyst, Costco Wholesale.
Presentation Overview: Complex networks often need complex configurations and a lot of care and attention to individual servers. Using hiera, exported resources, custom facts, defined types, augeas and some forge modules, we will explore the possibilities for having puppet take care of the complex configuration. We'll start with a few simple examples of exported resources and scale up to having hiera key off custom facts and having exported augeas resources build configurations.
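As a rough illustration of the exported-resource pattern this talk builds on (a hedged sketch, not taken from the slides; the tag name is made up), each node can export a resource describing itself, which every other node then collects. This requires storeconfigs/PuppetDB to be enabled:

```puppet
# Each node exports a host entry describing itself.
# Fact names follow the classic $::fqdn style of the Puppet 3.x era.
@@host { $::fqdn:
  ip  => $::ipaddress,
  tag => 'internal-lan',  # illustrative tag
}

# Every node collects the entries exported by all other nodes,
# building /etc/hosts dynamically as machines are added.
Host <<| tag == 'internal-lan' |>>
```

The same collect/export mechanism extends to exported augeas resources keyed off custom facts, as the abstract describes.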
Speaker Bio: An early adopter of puppet, Thomas has been using puppet since 0.24. He started with puppet for workstation management at the Institute for Advanced Study, where he also helped develop the Springdale Linux distribution. He currently works with puppet at Costco Wholesale headquarters in Issaquah, Washington. He has been working with Red Hat systems since 7.3 and currently holds an RHCA.
More info at http://blog.carlossanchez.eu/2011/11/15/from-dev-to-devops-slides-from-apachecon-na-vancouver-2011/
The DevOps movement aims to improve communication between developers and operations teams to solve critical issues such as fear of change and risky deployments. But just as Agile development would likely fail without continuous integration tools, the DevOps principles need tools to make them real and to provide the automation required to actually implement them. Most of the so-called DevOps tools focus on the operations side, but there should be more than that: the automation must cover the full process, Dev to QA to Ops, and be as automated and agile as possible. Tools in each part of the workflow have evolved in their own silos, with the support of their own target teams. But a true DevOps mentality requires a seamless process from the start of development to production deployment and maintenance, and for such a process to succeed there must be tools that take the burden off humans.
Apache Maven has arguably been the most successful tool for development, project standardization and automation introduced in recent years. On the operations side, open source tools like Puppet and Chef are becoming increasingly popular for automating infrastructure maintenance and server provisioning.
In this presentation we will introduce an end-to-end development-to-production process that takes advantage of Maven and Puppet, each at its strong points, with open source tools to automate the handover between them: continuous build and deployment, continuous delivery, from source code to any number of application servers managed with Puppet, running on physical hardware or in the cloud, handling new continuous integration builds and releases automatically through several stages and environments such as development, QA, and production.
How we use Varnish at Opera Software, from the beginning (2009) to now.
Presentation held at the 5th Varnish Users Group meeting (VUG5) in Paris on March 22nd, 2012.
From Dev to DevOps - Apache Barcamp Spain 2011 (Carlos Sanchez)
UPDATE: updated slides at http://www.slideshare.net/carlossg/from-dev-to-devops-conferencia-agile-spain-2011
The DevOps movement aims to improve communication between developers and operations teams to solve critical issues such as fear of change and risky deployments. But just as Agile development would likely fail without continuous integration tools, the DevOps principles need tools to make them real and to provide the automation required to actually implement them. Most of the so-called DevOps tools focus on the operations side, but there should be more than that: the automation must cover the full process, Dev to QA to Ops, and be as automated and agile as possible. Tools in each part of the workflow have evolved in their own silos, with the support of their own target teams. But a true DevOps mentality requires a seamless process from the start of development to production deployment and maintenance, and for such a process to succeed there must be tools that take the burden off humans.
Apache Maven has arguably been the most successful tool for development, project standardization and automation introduced in recent years. On the operations side, open source tools like Puppet and Chef are becoming increasingly popular for automating infrastructure maintenance and server provisioning.
In this presentation we will introduce an end-to-end development-to-production process that takes advantage of Maven and Puppet, each at its strong points, with open source tools to automate the handover between them: continuous build and deployment, continuous delivery, from source code to any number of application servers managed with Puppet, running on physical hardware or in the cloud, handling new continuous integration builds and releases automatically through several stages and environments such as development, QA, and production.
More info at http://blog.carlossanchez.eu/tag/devops
Nathan Vonnahme's presentation on writing custom plugins for Nagios.
The presentation was given during the Nagios World Conference North America held Sept 25-28th, 2012 in Saint Paul, MN. For more information on the conference (including photos and videos), visit: http://go.nagios.com/nwcna
Puppet for Java developers - JavaZone NO 2012 (Carlos Sanchez)
Example code at https://github.com/carlossg/puppet-for-java-devs
More info at http://blog.carlossanchez.eu/tag/devops
Video at http://vimeo.com/49483627
Puppet is an infrastructure-as-code tool that allows easy, automated provisioning of servers, defining the packages, configuration, services and more in code. Enabling a DevOps culture, tools like Puppet help drive Agile development all the way to operations and systems administration; along with continuous integration tools like Jenkins, Puppet is a key piece in accomplishing repeatability and continuous delivery, automating the operations side during development, QA and production, and enabling testing of systems configuration.
Traditionally a field for system administrators, Puppet can also empower developers, allowing both to collaborate on coding the infrastructure needed for their developments, whether it runs on hardware, virtual machines or the cloud. Developers and sysadmins can define which JDK version must be installed, the application server and its version, configuration files, war and jar files and more, and easily make changes that propagate across all nodes.
Using Vagrant, a command-line automation layer for VirtualBox, they can also spin up virtual machines on their local box, built from scratch with the same configuration as the production servers, do development or testing, and tear them down afterwards.
We'll show how to install and manage Puppet nodes with a JDK, multiple application server instances with installed web applications, a database, configuration files and all the supporting services, including getting up and running with Vagrant and VirtualBox for quickstarts and Puppet experiments, as well as setting up automated testing of the Puppet code.
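To give a flavor of the kind of manifest this talk describes (a hedged sketch, not code from the slides; the Debian/Ubuntu package names are era-typical and `myapp` is a placeholder), a node definition can pin a JDK, run an application server and deploy a CI-built artifact:

```puppet
# Install the JDK first, then the application server that needs it.
package { 'openjdk-7-jdk':
  ensure => installed,
}

package { 'tomcat7':
  ensure  => installed,
  require => Package['openjdk-7-jdk'],
}

service { 'tomcat7':
  ensure  => running,
  enable  => true,
  require => Package['tomcat7'],
}

# Deploy the war produced by the build; changing the file
# automatically restarts the service on every node.
file { '/var/lib/tomcat7/webapps/myapp.war':
  source => 'puppet:///modules/myapp/myapp.war',
  notify => Service['tomcat7'],
}
```

The `require`/`notify` relationships are what make a change (say, a new war file) propagate safely across all nodes, as the abstract promises.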
How to Develop Puppet Modules: From Source to the Forge With Zero Clicks (Carlos Sanchez)
Puppet modules are a great way to reuse code, share your development with other people and take advantage of the hundreds of modules already available in the community. But how can you create, test and publish them as easily as possible? Now that infrastructure is defined as code, we need to use development best practices to build, test, deploy and use Puppet modules themselves. Three steps for a fully automated process:
* Continuous Integration of Puppet Modules
* Automatic release and upload to the Puppet Forge
* Deploy to Puppet master
Dennis Matotek, Technical Lead Platforms at Experian Hitwise Australia, gave an excellent presentation on setting up Puppet with Vagrant and on testing, including a full demo of rspec-puppet and Jenkins.
Continuous Delivery with Maven, Puppet and Tomcat - ApacheCon NA 2013 (Carlos Sanchez)
Continuous Integration, with Apache Continuum or Jenkins, can be extended to fully manage deployments and production environments, running in Tomcat for instance, in a full Continuous Delivery cycle using infrastructure-as-code tools like Puppet, making it possible to manage multiple servers and their configurations.
Puppet is an infrastructure-as-code tool that allows easy, automated provisioning of servers, defining the packages, configuration, services and more in code. Enabling a DevOps culture, tools like Puppet help drive Agile development all the way to operations and systems administration; along with continuous integration tools like Apache Continuum or Jenkins, Puppet is a key piece in accomplishing repeatability and continuous delivery, automating the operations side during development, QA and production, and enabling testing of systems configuration.
Traditionally a field for system administrators, Puppet can also empower developers, allowing both to collaborate on coding the infrastructure needed for their developments, whether it runs on hardware, virtual machines or the cloud. Developers and sysadmins can define which JDK version must be installed, the application server and its version, configuration files, war and jar files and more, and easily make changes that propagate across all nodes.
Using Vagrant, a command-line automation layer for VirtualBox, they can also spin up virtual machines on their local box, built from scratch with the same configuration as the production servers, do development or testing, and tear them down afterwards.
We will show how to install and manage Puppet nodes with a JDK, multiple Tomcat instances with installed web applications, a database, configuration files and all the supporting services, including getting up and running with Vagrant and VirtualBox for quickstarts and Puppet experiments, as well as setting up automated testing of the Puppet code.
This is the story of our journey from SaltStack to Puppet and beyond. This talk will answer the following questions:
- why we moved away from SaltStack
- why Puppet was chosen
- how to use open source Puppet in a painless way
- which orchestration tool to use with Puppet
- what is next
One-Man Ops with Puppet & Friends.
If you're getting started with Amazon AWS, here are 7 tools that will help you be successful, a few tips to make your life easier, and some common pitfalls to avoid.
OpenNebula Conf 2014 | Puppet and OpenNebula - David Lutterkort (NETWAYS)
Many facets of using an IaaS cloud like OpenNebula can be greatly simplified by using a configuration management tool such as Puppet. This includes the management of hosts as well as the management of cloud resources such as virtual machines and networks. Of course, Puppet can also play an important role in the management of the actual workload of virtual machine instances. Besides using it in the traditional, purely agent-based way, it is also possible to use Puppet during the building of machine images. This serves two purposes: firstly, it speeds up the initial Puppet run when an instance is launched off that image, sometimes quite dramatically. Secondly, it supports operating immutable infrastructure without losing Puppet's benefits in organizing and simplifying the description of the entire infrastructure.
This talk will show how Puppet can be used by administrators to manage OpenNebula hosts and by users to manage their infrastructure, as well as how to use Puppet during image builds.
Puppet Getting Started will show the different components used in Puppet environments, from Facter and Puppet itself to web interfaces like the Puppet Enterprise console and Foreman. It will also cover an exemplary design for scaling the Puppet master and for the development lifecycle of modules. Furthermore, an example of module design will be given.
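A common starting point for the kind of module design this session covers is the classic package-file-service pattern. The following is a minimal hedged sketch (NTP chosen as an illustrative example; not code from the talk):

```puppet
# modules/ntp/manifests/init.pp
class ntp {
  package { 'ntp':
    ensure => installed,
  }

  # The config file is installed after the package and, when it
  # changes, tells the service to restart.
  file { '/etc/ntp.conf':
    source  => 'puppet:///modules/ntp/ntp.conf',
    require => Package['ntp'],
    notify  => Service['ntp'],
  }

  service { 'ntp':
    ensure => running,
    enable => true,
  }
}
```

The `require`/`notify` metaparameters encode the package → file → service ordering, which is the core idea the pattern teaches.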
You hate certificates? Struggling with the Puppet PKI? Would you prefer to get rid of security just to avoid such trouble? Or have you had no problems, enjoying the benefits of Puppet Enterprise, but are still curious to find out what's going on behind the scenes?
This talk invites you to dive into the beautiful world of X.509 PKI infrastructures. Certificates are like pets: they are cute and lovely as long as you care about them, and grumpy as soon as they get the feeling that you don't.
So let's find out what your pets need to feel comfortable. After a jumpstart introduction into the X.509 wilderness we are going to inspect different ways of handling your whole Puppet (and MCollective) certificate lifecycle.
Security matters!
Bulletproof Networks provides managed hosting services to some of the largest companies in Australia. Bulletproof implements strong isolation of customer environments, and this can present unique challenges when re-using Puppet code across our customer base. Additionally, the environments range in size from small to very large, and our tools and processes need to be able to handle both use cases equally well.
In this talk Lindsay and Mick will cover how Bulletproof's approach to these problems has evolved over the last 4 years, and some of the tools Bulletproof has developed and built upon to provide an awesome service to our customers.
A guide through where to look for errors when they happen in the various parts of Puppet Enterprise ( the console, Live Management, puppet master, Activemq, MCollective, agent), what some of those errors mean, and what warnings and errors are red herrings/normally occurring.
Celia Cottle
Support Engineer, Puppet Labs
Celia Cottle is a Support Engineer at Puppet Labs, where she troubleshoots and resolves issues for Puppet Enterprise customers. She comes from Portland State University, where she worked for the College of Engineering and Computer Science doing technical support, while getting her degree in Communication. She’s been working in IT for over five years and enjoys problem solving, working with a wide range of OSes and software, and the variety of challenges that supporting Puppet Enterprise brings. She currently resides in Portland, Oregon.
OSDC 2016 - Continuous Integration in Data Centers - Further 3 Years later by ...NETWAYS
I gave a talk titled "Continuous Integration in data centers" at OSDC in 2013, presenting ways to realize continuous integration/delivery with Jenkins and related tools. Three years later we have gained new tools in our continuous delivery pipeline, including Docker, Gerrit and Goss. Over the years we also had to deal with various problems caused by faster release cycles, a growing team and new projects. We therefore established code review in our pipeline, improved our test infrastructure and invested in our infrastructure automation. In this talk I will discuss the lessons we learned over the last years, demonstrate how a proper continuous delivery pipeline can improve your life, and show how open source tools like Jenkins, Docker and Gerrit can be leveraged to set up such an environment.
Paul gave a very insightful presentation on how Puppet can help manage the Cloud and specifically, how it helps Nubefy to manage their Cloud product even better.
PuppetCamp SEA @ Blk 71 - What's New in Puppet DBWalter Heck
Nick Lewis, who came down to Singapore all the way from the Puppet Labs headquarters in Portland, Oregon, is one of the first developers at Puppet Labs and actively develops PuppetDB. He gave a very interesting talk and demonstration about how PuppetDB works, as well as its latest updates.
PuppetCamp SEA @ Blk 71 - Puppet: The Year That WasWalter Heck
Nigel Kersten started off the day with a very interesting and informative talk about the past, present and future of Puppet. He showed Puppet's link with the worldwide tech community and how they plan to make the Puppet experience even better. He also gave updates on what Puppet Labs has done recently, and elaborated on the improvements in Puppet 3.0, PuppetDB and Puppet Enterprise. Nigel also mentioned that Puppet Labs remains dedicated to fixing any issues that updates may introduce or that the community raises, and that the company hopes to keep improving things moving towards the future.
Edward Tan gave a great presentation (slides in vim!) on using puppet on FreeBSD. He introduced FreeBSD and showed us how puppet interacts with the system.
PuppetCamp SEA 1 - Version Control with PuppetWalter Heck
Choon Ming Goh, System Administrator at OnApp Malaysia, gave a presentation on how OnApp implements version control. Since they have quite a few repositories, this is all puppetised and that is quite a nice way of doing version control.
James Turnbull, VP of Tech Operations at Puppetlabs, started off the day with a very interesting and informative talk about the past, present and future of Puppet. He showed that they have a strong link to their community and plan to keep it that way. He explained that they grew from very small to 70+ people over the last year, and that brings some issues with it. They are very dedicated to fixing those issues though, and hope to improve things moving towards the future.
1. Hands-on: getting your feet wet
with puppet
PuppetDB, Exported Resources, 3rd party open source modules,
git submodules, inventory service
June 5th, 2012
Puppet Camp Southeast Asia
Kuala Lumpur, Malaysia
Walter Heck, OlinData
2. Overview
• Introduction OlinData
• Checkup
• Set up puppet & puppetdb
• Set up a 2nd node
• Add an open source puppet module
• Implement it and show exported resources usage
• Future of Puppet in South East Asia
3. Introduction OlinData
• OlinData
▫ MySQL Consulting
▫ Tribily Server Monitoring as a Service (http://tribily.com)
▫ Puppet training and consulting
• Founded in 2008
▫ Setup to be run remotely and location independent
• Started using Puppet in 2010
▫ Official puppetlabs partner since 02-2012
▫ Experience with large, medium and small
infrastructures
4. Checkup
• Who is using puppet? Who's going to?
Haven't decided yet?
• Who is using puppet in production?
▫ Stored configs? Open source
modules? Exported resources?
Inventory service?
5. Prerequisites
• Good mood for tinkering
• VirtualBox Debian 6.0.4 64bit VM
• Internet connection (preferably > 28k8)
6. Doing the minimum prep
• Get repository .deb package and
install it
▫ This should be automated into your bootstrapping of course!
# wget http://apt.puppetlabs.com/puppetlabs-release_1.0-3_all.deb
# dpkg -i puppetlabs-release_1.0-3_all.deb
# aptitude update
# aptitude install puppetmaster-passenger puppet puppetdb puppetdb-terminus
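Installing puppetdb-terminus is only half of the wiring: the master also needs to be told to use PuppetDB as its storeconfigs backend. A minimal sketch of the two config files involved (paths and values follow the PuppetDB documentation of that era; substitute your own master's certname for the server value):

```ini
# /etc/puppet/puppet.conf (on the master)
[master]
storeconfigs = true
storeconfigs_backend = puppetdb

# /etc/puppet/puppetdb.conf
[main]
server = debian-puppetcamp.example.com
port = 8081
```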
8. Add permissions for inventory service
• Add permissions to auth.conf
#NOTE: refine this on a production server!
path /facts
auth any
method find, search
allow *
9. Set up SSL certs
• Run the ssl generating script
# /usr/sbin/puppetdb-ssl-setup
• Set the generated password in jetty config file
# cat /etc/puppetdb/ssl/puppetdb_keystore_pw.txt
# vim /etc/puppetdb/conf.d/jetty.ini
[..]
key-password=tP35htAMH8PUcYVtCAmSVhYbf
trust-password=tP35htAMH8PUcYVtCAmSVhYbf
• Set ownership for /etc/puppetdb/ssl
# chown -R puppetdb:puppetdb /etc/puppetdb/ssl
10. Check ssl certs
• Check ssl certs for puppetdb against puppet
# keytool -list -keystore /etc/puppetdb/ssl/keystore.jks
Enter keystore password:
Keystore type: JKS
Keystore provider: SUN
Your keystore contains 1 entry
debian-puppetcamp.example.com, Jun 4, 2012,
PrivateKeyEntry,
Certificate fingerprint (MD5):
D7:F1:03:5F:E0:1A:C3:DB:E1:23:C4:CE:43:FA:24:24
# puppet cert fingerprint debian-puppetcamp.example.com --digest=md5
debian-puppetcamp.example.com
D7:F1:03:5F:E0:1A:C3:DB:E1:23:C4:CE:43:FA:24:24
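Comparing the two MD5 fingerprints by eye is error-prone; a small shell sketch that does the comparison mechanically (the values here are the ones from the slide; in practice you would paste in the output of the two commands above):

```shell
# Fingerprint from the puppetdb keystore (keytool output)
keystore_fp='D7:F1:03:5F:E0:1A:C3:DB:E1:23:C4:CE:43:FA:24:24'
# Fingerprint from 'puppet cert fingerprint ... --digest=md5'
puppet_fp='D7:F1:03:5F:E0:1A:C3:DB:E1:23:C4:CE:43:FA:24:24'

if [ "$keystore_fp" = "$puppet_fp" ]; then
  echo "fingerprints match"
else
  echo "MISMATCH: puppetdb keystore does not hold the master's cert" >&2
  exit 1
fi
```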
11. Restart
• Restart apache/passenger & puppetdb
# /etc/init.d/puppetdb restart && apache2ctl restart
• Sit back and watch puppetdb log
2012-06-04 18:02:22,154 WARN [main] [bonecp.BoneCPConfig] JDBC username was not set in
config!
2012-06-04 18:02:22,154 WARN [main] [bonecp.BoneCPConfig] JDBC password was not set in
config!
2012-06-04 18:02:23,050 INFO [BoneCP-pool-watch-thread] [HSQLDB37B6BA305B.ENGINE]
checkpointClose start
2012-06-04 18:02:23,109 INFO [BoneCP-pool-watch-thread] [HSQLDB37B6BA305B.ENGINE]
checkpointClose end
2012-06-04 18:02:23,160 INFO [main] [cli.services] Starting broker
2012-06-04 18:02:24,890 INFO [main] [journal.Journal] ignoring zero length, partially
initialised journal data file: db-1.log number = 1 , length = 0
2012-06-04 18:02:25,051 INFO [main] [cli.services] Starting 1 command processor threads
2012-06-04 18:02:25,063 INFO [main] [cli.services] Starting query server
2012-06-04 18:02:25,064 INFO [main] [cli.services] Starting database compactor (60 minute
interval)
2012-06-04 18:02:25,087 INFO [clojure-agent-send-off-pool-1] [mortbay.log] Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2012-06-04 18:02:25,090 INFO [clojure-agent-send-off-pool-1] [mortbay.log] jetty-6.1.x
2012-06-04 18:02:25,140 INFO [clojure-agent-send-off-pool-1] [mortbay.log] Started
SocketConnector@debian-puppetcamp.example.com:8080
2012-06-04 18:02:25,885 INFO [clojure-agent-send-off-pool-1] [mortbay.log] Started
SslSocketConnector@debian-puppetcamp.example.com:8081
12. Test run!
• Check for listening connections
#netstat -ln | grep 808
tcp6 0 0 127.0.1.1:8080 :::* LISTEN
tcp6 0 0 127.0.1.1:8081 :::* LISTEN
• Run puppet
# puppet agent -t
No LSB modules are available.
info: Caching catalog for debian-puppetcamp.example.com
info: Applying configuration version '1338804503'
notice: Finished catalog run in 0.09 seconds
14. The first beginnings of a new world
• Add 2 nodes to /etc/puppet/manifests/site.pp
node 'debian-puppetcamp.example.com' {
file { '/tmp/puppet.txt':
ensure => present,
content => "This is host ${::hostname}\n"
}
}
node 'debian-node.example.com' {
file { '/tmp/puppet.txt':
ensure => present,
content => "This is host ${::hostname}\n"
}
}
15. Adding a node
• Install puppet
# aptitude install puppet
• Point to puppetmaster
# vim /etc/hosts
<ip_of_puppetmaster> puppet
16. Signing the node
• Run puppet once to generate cert request
# puppetd -t
info: Creating a new SSL key for debian-node.example.com
warning: peer certificate won't be verified in this SSL session
info: Caching certificate for ca
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
info: Creating a new SSL certificate request for debian-node.example.com
info: Certificate Request fingerprint (md5): 17:E0:87:45:F7:05:44:EE:F2:65:89:7B:56:62:CA:A9
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
Exiting; no certificate found and waitforcert is disabled
• Sign the request on the master
# puppet cert --list --all
debian-node.example.com (17:E0:87:45:F7:05:44:EE:F2:65:89:7B:56:62:CA:A9)
+ debian-puppetcamp.example.com (64:A6:C8:9F:FC:50:3E:79:9D:0D:19:04:4B:29:68:D1) (alt names:
DNS:debian-puppetcamp.example.com, DNS:puppet, DNS:puppet.example.com)
# puppet cert --sign debian-node.example.com
notice: Signed certificate request for debian-node.example.com
notice: Removing file Puppet::SSL::CertificateRequest debian-node.example.com at '/var/lib/puppet/
ssl/ca/requests/debian-node.example.com.pem'
17. Run puppet and check result
• Run puppet on node
# puppetd -t
warning: peer certificate won't be verified in this SSL session
info: Caching certificate for debian-node.example.com
No LSB modules are available.
info: Caching certificate_revocation_list for ca
info: Caching catalog for debian-node.example.com
info: Applying configuration version '1338822174'
notice: /Stage[main]//Node[debian-node.example.com]/File[/tmp/puppet.txt]/ensure: created
info: Creating state file /var/lib/puppet/state/state.yaml
notice: Finished catalog run in 0.06 seconds
• Check result
# cat /tmp/puppet.txt
This is host debian-node
• Say “YEAH!”
18. Adding a git submodule
• Clone the firewall submodule from github
# git submodule add https://github.com/puppetlabs/puppetlabs-firewall.git modules/firewall
Cloning into modules/firewall...
remote: Counting objects: 1065, done.
remote: Compressing objects: 100% (560/560), done.
remote: Total 1065 (delta 384), reused 1012 (delta 341)
Receiving objects: 100% (1065/1065), 158.69 KiB | 117 KiB/s,
done.
Resolving deltas: 100% (384/384), done.
• Commit it to the main repo
# git add * && git commit -m 'Added 2 node defs and firewall submodule'
[master d0bab6f] Added 2 node defs and firewall submodule
Committer: root <root@debian-puppetcamp.example.com>
3 files changed, 17 insertions(+), 0 deletions(-)
create mode 100644 .gitmodules
create mode 100644 manifests/site.pp
create mode 160000 modules/firewall
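One gotcha the slide skips: on any other machine, a fresh clone of the puppet repo will have an empty modules/firewall directory until the submodule is initialised. A sketch of that workflow, demonstrated against throwaway local repos so it runs without network access (all paths here are made up for the demo; on a real checkout the only command you need is the final `git submodule update --init`):

```shell
set -e
# Identity for the demo commits only
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

work=$(mktemp -d)
# Stand-in for the upstream module repo (e.g. puppetlabs-firewall)
git init -q "$work/firewall-upstream"
git -C "$work/firewall-upstream" commit -q --allow-empty -m 'initial'

# Stand-in for the puppet repo that embeds it as a submodule
git init -q "$work/puppet-repo"
cd "$work/puppet-repo"
git -c protocol.file.allow=always submodule --quiet add "$work/firewall-upstream" modules/firewall
git commit -q -m 'Add firewall submodule'

# A fresh clone elsewhere has an empty modules/firewall until you run
# 'git submodule update --init':
clone="$work/checkout"
git clone -q "$work/puppet-repo" "$clone"
git -C "$clone" -c protocol.file.allow=always submodule update --init
```

(`protocol.file.allow=always` is only needed because the demo submodule lives on the local filesystem; real https:// submodules don't require it.)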
19. Using the new firewall submodule
• Adjust manifests/site.pp
node 'basenode' {
@@firewall { "200 allow conns to the puppetmaster from ${::fqdn}":
chain => 'INPUT',
action => 'accept',
proto => 'tcp',
dport => 8140,
source => $::ipaddress_eth1,
tag => 'role:puppetmaster'
}
}
#Our puppet master
node 'debian-puppetcamp.example.com' inherits basenode {
# Gather all Firewall rules here
Firewall<<| tag == 'role:puppetmaster' |>>
}
# Our sample node
node 'debian-node.example.com' inherits basenode {
}
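One thing the manifest above does not show: by default the firewall module leaves iptables rules it did not create alone. If you want Puppet to own the whole ruleset, puppetlabs-firewall supports purging unmanaged rules via the resources metatype; a sketch (check the module's README for the exact behaviour in your version before enabling this on a production box):

```puppet
# On the puppetmaster node: remove any iptables rule not managed by Puppet.
resources { 'firewall':
  purge => true,
}
```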
20. Running puppet agent
• Execute puppet runs on both nodes
root@debian-puppetcamp:/etc/puppet# puppetd -t
info: Loading facts in /etc/puppet/modules/firewall/lib/facter/iptables.rb
No LSB modules are available.
info: Caching catalog for debian-puppetcamp.example.com
info: Applying configuration version '1338825096'
notice: /Firewall[200 allow conns to the puppetmaster from debian-
puppetcamp.example.com]/ensure: created
notice: Finished catalog run in 0.47 seconds
root@debian-node:~# puppetd -t
No LSB modules are available.
info: Caching catalog for debian-node.example.com
info: Applying configuration version '1338825096'
notice: Finished catalog run in 0.03 seconds
root@debian-puppetcamp:/etc/puppet# puppetd -t
info: Loading facts in /etc/puppet/modules/firewall/lib/facter/iptables.rb
No LSB modules are available.
info: Caching catalog for debian-puppetcamp.example.com
info: Applying configuration version '1338825096'
notice: /Firewall[200 allow conns to the puppetmaster from debian-
node.example.com]/ensure: created
notice: Finished catalog run in 0.22 seconds
21. Checking results
• Iptables on puppetmaster
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- 192.168.0.111 anywhere multiport dports
8140 /* 200 allow conns to the puppetmaster from debian-node.example.com */
ACCEPT tcp -- 192.168.0.109 anywhere multiport dports
8140 /* 200 allow conns to the puppetmaster from debian-puppetcamp.example.com */
[..]
22. Inventory service
• Query for all nodes having debian squeeze
root@debian-puppetcamp:/etc/puppet# curl -k -H "Accept: yaml" 'https://puppet:8140/production/facts_search/search?facts.lsbdistcodename=squeeze&facts.operatingsystem=Debian'
---
- debian-puppetcamp.example.com
- debian-node.example.com
• Query for facts about a certain node
root@debian-puppetcamp:/etc/puppet# curl -k -H "Accept: yaml" https://puppet:8140/production/facts/debian-puppetcamp.example.com
--- !ruby/object:Puppet::Node::Facts
expiration: 2012-06-04 18:38:21.174542 +08:00
name: debian-puppetcamp.example.com
values:
productname: VirtualBox
kernelmajversion: "2.6"
ipaddress_eth0: 10.0.2.15
kernelversion: 2.6.32
[..]
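The YAML returned by the facts endpoint is easy to post-process with standard tools. A small shell sketch that pulls a single fact out of it (`get_fact` is a hypothetical helper, and the sample output is embedded here so the snippet is self-contained; in practice you would pipe the curl output in):

```shell
# Sample of the 'values:' section from the inventory-service response
facts_yaml='productname: VirtualBox
kernelmajversion: "2.6"
ipaddress_eth0: 10.0.2.15
kernelversion: 2.6.32'

# Print the value of one fact by name
get_fact() {
  printf '%s\n' "$facts_yaml" | awk -F': ' -v f="$1" '$1 == f { print $2 }'
}

get_fact kernelversion   # → 2.6.32
```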
23.
24. OlinData and Puppet
• Training
▫ Upcoming trainings:
– Singapore – August 6-8
– Hyderabad – July 11-14
▫ Cheaper than in the West (50% or more discount!)
▫ Expanding to 5 countries in 5 months
• Consulting
▫ Remote consulting worldwide
▫ Ongoing hands-on engineering
▫ Start from scratch or improve existing environment