Reliable and scalable applications need repeatable, automated application deployment. Configuration management tools like Chef, Puppet and others make it easy to deploy an entire application stack, but support for Perl applications has lagged behind other popular, dynamic languages.
The Perl community has responded to these challenges with tools like perlbrew, local::lib, carton and others to make it easier to manage an application and its dependencies in isolation. This presentation will show you how to make those tools work with Chef for complete automation of Perl application deployment.
Cooking Perl with Chef: Real World Tutorial with Jitterbug - David Golden
This tutorial provides a command-by-command walk-through for deploying the Jitterbug continuous integration application using the Chef configuration management tool.
More info at http://blog.carlossanchez.eu/tag/devops
The DevOps movement aims to improve communication between developers and operations teams, addressing critical issues such as fear of change and risky deployments. But just as Agile development would likely fail without continuous integration tools, DevOps principles need tools to make them real and to provide the automation required to actually implement them. Most so-called DevOps tools focus on the operations side, but automation must cover the full process, from Dev to QA to Ops, and be as automated and agile as possible. Tools in each part of the workflow have evolved in their own silos, with the support of their own target teams. A true DevOps mentality, however, requires a seamless process from the start of development through production deployment and maintenance, and for that process to succeed there must be tools that take the burden off humans.
Apache Maven has arguably been the most successful tool for development, project standardization, and automation introduced in recent years. On the operations side, open source tools like Puppet and Chef are becoming increasingly popular for automating infrastructure maintenance and server provisioning.
In this presentation we will introduce an end-to-end development-to-production process that takes advantage of Maven and Puppet, each at its strong points, with open source tools automating the handover between them. The result is continuous build, deployment, and delivery, from source code to any number of application servers managed with Puppet, running on physical hardware or in the cloud, with new continuous integration builds and releases promoted automatically through stages and environments such as development, QA, and production.
Continuous Delivery with Maven, Puppet and Tomcat - ApacheCon NA 2013 - Carlos Sanchez
Continuous Integration, with Apache Continuum or Jenkins, can be extended to fully manage deployments and production environments, running on Tomcat for instance, in a full Continuous Delivery cycle using infrastructure-as-code tools like Puppet, which make it possible to manage multiple servers and their configurations.
Puppet is an infrastructure-as-code tool that enables easy, automated provisioning of servers, defining packages, configuration, services, and more in code. Enabling a DevOps culture, tools like Puppet help drive Agile development all the way to operations and systems administration. Together with continuous integration tools like Apache Continuum or Jenkins, Puppet is a key piece for achieving repeatability and continuous delivery, automating the operations side across development, QA, and production, and enabling testing of system configurations.
Traditionally a field for system administrators, Puppet can also empower developers, letting both collaborate on coding the infrastructure their projects need, whether it runs on hardware, virtual machines, or the cloud. Developers and sysadmins can define which JDK version must be installed, which application server and version, configuration files, WAR and JAR files, and so on, and easily make changes that propagate across all nodes.
Using Vagrant, a command-line automation layer for VirtualBox, they can also spin up virtual machines on their local box from scratch with the same configuration as production servers, do development or testing, and tear them down afterwards.
We will show how to install and manage Puppet nodes with a JDK, multiple Tomcat instances with installed web applications, a database, configuration files, and all the supporting services, including getting up and running with Vagrant and VirtualBox for quick-start Puppet experiments and setting up automated testing of the Puppet code.
From Dev to DevOps - Apache Barcamp Spain 2011 - Carlos Sanchez
UPDATE: updated slides at http://www.slideshare.net/carlossg/from-dev-to-devops-conferencia-agile-spain-2011
More info at http://blog.carlossanchez.eu/2011/11/15/from-dev-to-devops-slides-from-apachecon-na-vancouver-2011/
Puppet for Java developers - JavaZone NO 2012 - Carlos Sanchez
Example code at https://github.com/carlossg/puppet-for-java-devs
More info at http://blog.carlossanchez.eu/tag/devops
Video at http://vimeo.com/49483627
Puppet is an infrastructure-as-code tool that enables easy, automated provisioning of servers, defining packages, configuration, services, and more in code. Enabling a DevOps culture, tools like Puppet help drive Agile development all the way to operations and systems administration. Together with continuous integration tools like Jenkins, Puppet is a key piece for achieving repeatability and continuous delivery, automating the operations side across development, QA, and production, and enabling testing of system configurations.
Traditionally a field for system administrators, Puppet can also empower developers, letting both collaborate on coding the infrastructure their projects need, whether it runs on hardware, virtual machines, or the cloud. Developers and sysadmins can define which JDK version must be installed, which application server and version, configuration files, WAR and JAR files, and so on, and easily make changes that propagate across all nodes.
Using Vagrant, a command-line automation layer for VirtualBox, they can also spin up virtual machines on their local box from scratch with the same configuration as production servers, do development or testing, and tear them down afterwards.
We’ll show how to install and manage Puppet nodes with a JDK, multiple application server instances with installed web applications, a database, configuration files, and all the supporting services, including getting up and running with Vagrant and VirtualBox for quick-start Puppet experiments and setting up automated testing of the Puppet code.
"Puppet at GitHub / ChatOps" from PuppetConf 2012, by Jesse Newland
Video of "Puppet at GitHub": http://bit.ly/WVS3vQ
Learn more about Puppet: http://bit.ly/QQoAP1
Abstract: Ops at GitHub has a unique challenge - keeping up with the rapid pace of features and products that the GitHub team develops. In this talk, we'll focus on tools and techniques we use to rapidly and confidently ship infrastructure changes and features with Puppet using Puppet-Rspec, CI, Puppet-Lint, branch puppet deploys, and Hubot.
Speaker Bio: Jesse Newland does Ops at GitHub. His favorite hobby is SPOF whack-a-mole, followed closely by guitar and piano. Prior to GitHub, Jesse was the CTO at Rails Machine, where he ran a large private cloud and managed several hundred production Ruby on Rails applications using Puppet. To the delight and/or chagrin of the Puppet community, Jesse is to blame for Moonshine, the Ruby DSL for Puppet before Puppet had a Ruby DSL.
"Puppet at Pinterest", by Ryan Park, Operations Engineer at Pinterest. Talk from PuppetConf 2012.
Video of "Puppet at Pinterest": http://youtu.be/aU-bCbBq8zs
Learn more about Puppet: http://bit.ly/QQoAP1
Abstract: A case study of how Pinterest uses Puppet to manage its infrastructure. Pinterest has hundreds of Amazon EC2 virtual servers and uses Puppet Dashboard as the “source of truth” about its server inventory. Pinterest built a REST API for this database, which powers tools and automated scripts that integrate Puppet with internal systems and with Amazon Web Services.
Speaker Bio: Ryan Park leads operations and infrastructure at Pinterest, one of 2012’s fastest growing web sites. Pinterest’s entire infrastructure is in the cloud, built atop hundreds of Amazon EC2 virtual server instances. Ryan introduced Puppet to their infrastructure as soon as he joined the company, and they now use Puppet as the primary tool for managing their infrastructure. Prior to joining Pinterest, Ryan was the Head of Operations at PBworks, an online team collaboration service.
Dennis Matotek, Technical Lead Platforms at Experian Hitwise Australia, gave an excellent presentation on setting up Puppet with Vagrant and on testing Puppet code, including a full demo of rspec-puppet and Jenkins.
Building kubectl plugins with Quarkus | DevNation Tech Talk - Red Hat Developers
We all know how flexible Kubernetes extensions can be; Tekton and Knative are examples. But did you know it's also pretty easy to extend kubectl, the Kubernetes superstar CLI? In this session we'll see how a kubectl plugin is designed and then build our own plugin from scratch using Quarkus. That will give us the opportunity to discover the command mode of Quarkus, rediscover how native compilation can create super-fast binaries, and see how the Kubernetes-client extensions make it easy to interact with a Kubernetes cluster.
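A useful piece of context on the kubectl side: kubectl treats any executable named `kubectl-<name>` found on the PATH as a plugin and simply invokes it with the remaining command-line arguments. Here is an illustrative sketch of that contract (in Python rather than Quarkus, and with a hypothetical `hello` plugin name):

```python
import sys

def plugin_response(args):
    """Build the reply for a hypothetical `kubectl hello` plugin.

    kubectl discovers any executable named kubectl-<name> on the PATH
    and runs it with the remaining arguments, so a plugin is just an
    ordinary program reading its argument list.
    """
    target = args[0] if args else "world"
    return f"hello, {target}"

if __name__ == "__main__":
    # Installed as an executable called `kubectl-hello`, this runs
    # whenever a user types `kubectl hello [target]`.
    print(plugin_response(sys.argv[1:]))
```

The same discovery mechanism is what makes it possible to write plugins in any language, including a Quarkus native binary as shown in the session.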
In recent years there has been a tremendous amount of progress and innovation in the tools and applications available to web developers, improving the quality, efficiency, and speed of our applications, and it is hard to keep up with it all.
PuppetCamp SEA 1 - Puppet Deployment at OnApp - Walter Heck
Wai Keen Woon, CTO of the CDN Division at OnApp Malaysia, gave an interesting overview of what the Puppet architecture at OnApp looks like. The CDN division at OnApp is a large provider of CDN services, and as such makes a compelling candidate for a case study.
Scaling Puppet Enterprise with Compile Masters requires you to provision new machines and manually configure them, as well as your Puppet Master server.
Learn how you can automatically provision and configure new Compile Master nodes for your AWS OpsWorks for Puppet Enterprise server by leveraging AWS Systems Manager.
A story of how we went about packaging Perl and all of our project's dependencies: where we were before, the path we chose, and the end result, including the pitfalls along the way and a look at the pros and cons of the previous state of affairs versus those of the end result.
On Friday 5 June 2015 I gave a talk called Cluster Management with Kubernetes to a general audience at the University of Edinburgh. The talk includes an example of a music store system with a Kibana front-end UI and an Elasticsearch-based back end, which helps make concrete concepts like pods, replication controllers, and services.
Presented at AI NEXTCon Seattle 1/17-20, 2018
http://aisea18.xnextcon.com
Join our free online AI group with 50,000+ tech engineers to learn and practice AI technology, including the latest AI news, tech articles and blogs, tech talks, tutorial videos, and hands-on workshops and codelabs on machine learning, deep learning, data science, and more.
By Rafael Benevides and Edson Yanaga
Yes, Docker is great. We are all very aware of that, but now it’s time to take the next step: wrapping it all and deploying to a production environment. For this scenario, we need something more. For that “more,” we have Kubernetes by Google, a container platform based on the same technology used to deploy billions of containers per month on Google’s infrastructure. Ready to leverage your Docker skills and package your current Java app (WAR, EAR, or JAR)? Come to this session to see how your current Docker skill set can be easily mapped to Kubernetes concepts and commands. And get ready to deploy your containers in production.
Ed Seymour
Containerisation Lead – Red Hat
Ed has over 20 years' experience in software development and IT automation. His career started with a small software start-up, where working efficiently and with agility was a necessity, and through his time at a global IT services company he gained valuable experience in promoting and effecting organisational change, adopting agile methods, and automating the software development life-cycle. At Red Hat, Ed's role has focused on enabling customers as they embrace new organisational behaviours and structures such as DevOps, and on developing new IT services through the adoption of emerging technologies such as Cloud Management and OpenStack. Ed specialises in solutions based on containers through Docker, Kubernetes and OpenShift.
Kubernetes intro public - Kubernetes user group 4-21-2015 - reallavalamp
Kubernetes Introduction - talk given by Daniel Smith at the Kubernetes User Group meetup #2 in Mountain View on 4/21/2015.
Explains the basic concepts and principles of the Kubernetes container orchestration system.
Adopt DevOps philosophy on your Symfony projects (Symfony Live 2011) - Fabrice Bernhard
This is the presentation given at the Symfony Live 2011 conference. It is an introduction to DevOps, the new agile movement spreading in the technical operations community, and how to adopt it on web development projects, in particular Symfony projects.
Plan of the slides:
- Configuration Management
- Development VM
- Scripted deployment
- Continuous deployment
Tools presented in the slides:
- Puppet
- Vagrant
- Fabric
- Jenkins / Hudson
In this talk John Zaccone will present tips and best practices for developing dockerized applications. We will start with the simple question: "Why Docker?", then dive into practical knowledge for developers to apply on their own. John will cover best practices concerning Dockerfiles and the best tools to use for developing. We will also talk about the "hand-off" between developer and operations and how the two roles can work together to address broad issues such as CI/CD and security. After John's talk, stay tuned for Scott Coulton's talk that will dive deeper into Docker for Ops.
DockerCon EU 2015: Stop Being Lazy and Test Your Software! - Docker, Inc.
Presented by Laura Frank, Engineer, Codeship
Testing software is necessary, no matter the size or status of your company. Introducing Docker to your development workflow can help you write and run your testing frameworks more efficiently, so that you can always deliver your best product to your customers; there are no more excuses for not writing tests. You'll walk away from this talk with practical advice for using Docker to run your test frameworks more efficiently, as well as some solid knowledge of software testing principles.
Slice Recycling Performance and Pitfalls - David Golden
When you drink a soda, do you recycle the can? When you allocate a slice, do you recycle the memory? Recycling cans is good for the planet and recycling slices can be good for your Go program. But how? The garbage collector? A sync.Pool? Something else? You’ll be surprised what a difference it makes!
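As a language-neutral illustration of the recycling idea, here is a toy free list in Python; this is my own sketch, not the talk's code - the talk itself is about Go slices, the garbage collector, and sync.Pool:

```python
class BufferPool:
    """A toy free list: hand buffers back instead of discarding them.

    Recycling a buffer avoids paying for a fresh allocation (and, in a
    garbage-collected language, avoids creating work for the collector).
    """
    def __init__(self, size):
        self.size = size
        self._free = []

    def get(self):
        # Reuse a recycled buffer when one is available...
        if self._free:
            return self._free.pop()
        # ...otherwise pay for a fresh allocation.
        return bytearray(self.size)

    def put(self, buf):
        # Like sync.Pool.Put in Go: make the buffer available for reuse.
        self._free.append(buf)
```

The pitfalls the title alludes to come from exactly this pattern: a recycled buffer may still hold old data, and holding pooled memory too long can cost more than it saves.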
Did you know that CPAN comes with a free QA team? CPAN Testers is a distributed, grass-roots project with over 6.5 million test reports. This talk describes how the project benefits Perl developers and offers four important practices for any large-scale, volunteer QA effort
Eversion 101: An Introduction to Inside-Out ObjectsDavid Golden
Inside-out objects offer intriguing advantages over traditional Perl
objects, but at the cost of substantial complexity. This talk reviews pros and cons of inside-out objects and teaches the basics of how they work. It includes three core concepts, four ways to make them and five pitfalls to avoid. Familiarity
with traditional object-oriented Perl will be assumed.
I talk about my work on "BSON", a Perl module (https://metacpan.org/pod/BSON) for the binary encoded data format (http://bsonspec.org/) used by MongoDB, and the challenge of serializing strongly-typed data into and out of Perl.
What makes your code slow? How do you make it faster? And how do you prove it?
This talk will describe my adventures benchmarking and optimizing ordered hashes in Perl, culminating in the release of Hash::Ordered (http://p3rl.org/Hash::Ordered) — which outperforms all other CPAN alternatives, often by a substantial margin. We will cover:
* How to customize Benchmark.pm
* How and why to benchmark at different scales
* Why tied anything in Perl is a horrible idea
* How ordered hashes got faster from a simple algorithm change
State of the Velociraptor Mini-Keynote: Perl ToolchainDavid Golden
In 2015, the Perl "State of the Velociraptor" keynote was delivered as a series of lightning talks by community members. I was asked to speak about the Perl Toolchain and the Perl QA Hackathon. I covered the CPAN River (http://neilb.org/tag/cpan-river/) and the Berlin Consensus (http://cpan.io/ref/toolchain/berlin-consensus.html) recommendations for CPAN Standards of Care.
Distributed database consistency is a jargon-filled tarpit - of great interest to theorists but misunderstood or ignored by developers. But it doesn't have to be. What if you had a simple mental model for reasoning about consistency? What if you had simple rules of thumb for making the right tradeoffs in your applications? MongoDB staff engineer David Golden will share ideas for practical consistency and demonstrate how to achieve it with the MongoDB Perl driver.
This case study gives an inside look at optimization of the MongoDB Perl driver, including custom benchmarking tools, step-by-step changes and results that will surprise and amaze. If you ever needed to optimize some Perl and wondered how people go about it, this talk is for you.
Safer Chainsaw Juggling (Lightning Talk)David Golden
At YAPC::NA 2015, I gave a talk comparing MongoDB to Perl as another Swiss-Army chainsaw. This year, I'll give a chronology of changes that make MongoDB less likely to take your leg off.
Taking Perl to Eleven with Higher-Order FunctionsDavid Golden
Sometimes, you just need your Perl to go one higher. This talk will teach you how to use functions that return functions for powerful, succinct solutions to some repetitive coding problems. Along the way, you’ll see concrete examples using higher-order Perl to generate declarative, structured “fake” data for testing.
Perl and MongoDB both embody the twin ideals of whipuptitude and manipulexity. Both have wildly enthusiastic communities. Both are regularly reviled by outsiders. What happens when we bring them together? No children, trees or animals were be harmed during this talk.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
11. CHI
DateTime
DBI
JSON
App = perl + CPAN + your code
Moose
Plack
POE
Try::Tiny
...
12. CHI
DateTime
DBI
JSON
App = perl + CPAN + your code
Moose
Plack
POE
Try::Tiny
...
your application is the versioned set of all its components
13. CHI
DateTime
DBI
JSON
App
v1.0.0 = perl + CPAN + your code
Moose
Plack
POE
Try::Tiny
...
your application is the versioned set of all its components
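That versioned set of CPAN components is typically declared in a dependency file the toolchain can read; depending on the carton version this is a `cpanfile` or a `Makefile.PL`. A minimal `cpanfile` sketch (module list taken from the slide; no version pins shown, since carton records exact versions in its lock file):

```perl
# cpanfile — declares the CPAN components of the app
requires 'CHI';
requires 'DateTime';
requires 'DBI';
requires 'JSON';
requires 'Moose';
requires 'Plack';
requires 'POE';
requires 'Try::Tiny';
```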
14. CHI
DateTime
DBI
JSON
App
v1.0.0 = Perl + CPAN + your code
v5.14.2 Moose
Plack
POE
Try::Tiny
...
your application is the versioned set of all its components
15. 0.55
0.76
1.622
2.53
App
v1.0.0 = Perl + CPAN + your code
v5.14.2 2.0603
0.9989
1.354
0.11
...
your application is the versioned set of all its components
16. 0.55
0.76
1.622
2.53
App
v1.0.0 = Perl + CPAN + your code
v5.14.2 v1.0 2.0603
0.9989
1.354
0.11
...
your application is the versioned set of all its components
19. 0.55
0.76
1.622
2.53
App
v1.0.1 = Perl + CPAN + your code
v5.16.0 v1.0 2.0603
0.9989
1.354
0.11
...
… and you have a new version of your application
48. Repeatable deployment in five parts
application-specific Perl
application-specific @INC path
versioned application code
versioned module dependencies
automate the previous four
49. Repeatable deployment in five parts
perlbrew
application-specific @INC path
versioned application code
versioned module dependencies
automate the previous four
50. Repeatable deployment in five parts
perlbrew
local::lib
versioned application code
versioned module dependencies
automate the previous four
51. Repeatable deployment in five parts
perlbrew
local::lib
git
versioned module dependencies
automate the previous four
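Done by hand, the first four parts map to commands roughly like the following sketch (paths, versions and the repository tag are all hypothetical; Chef will automate these steps later in the deck):

```shell
# Manual version of the four pieces Chef will automate (hypothetical paths)
# 1. application-specific perl:       perlbrew install perl-5.14.2
# 2. application-specific @INC path:  a local::lib under the app directory
APP_DIR=/opt/hello-world                   # hypothetical install dir
export PERL5LIB="$APP_DIR/local/lib/perl5"
# 3. versioned application code:      git clone --branch v1.0.0 <repo> "$APP_DIR"
# 4. versioned module dependencies:   carton install (run inside $APP_DIR)
echo "modules resolve from $PERL5LIB"
```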
59. Now...
Chef ❤ Perl
(perlbrew; local::lib; carton)
60. Time for a quick Chef glossary...
(see http://wiki.opscode.com/)
61. “Cookbook”
A collection of components to configure
a particular application
Typically includes recipes, providers,
templates, etc.
(CPAN analogy → “distribution”)
62. “Recipe”
A component that, when applied, deploys an
application or service
Typically declarative, specifying desired
resources and associated configuration
65. “Node”
A host computer managed with Chef
Often refers to the configuration file listing
the recipes, attributes and roles that define
the target state of a host
66. “Attribute”
A variable used in a recipe and/or provider
that customizes the configuration of a
resource
Attributes have defaults, but can be
customized for nodes or roles
67. “Role”
A collection of recipes and attributes used to
apply common configuration across multiple
nodes
68. Summary...
cookbooks include recipes and providers
roles, recipes and attributes get applied to nodes
recipes specify desired resources and customize
them with attributes
providers do the work of deploying resources
69. I wrote two Perl Chef cookbooks
for the Chef community repository
(which is like CPAN circa 1996 or so)
http://community.opscode.com/
70. 1. perlbrew – for managing perls
2. carton – for deploying apps
Also available here: https://github.com/dagolden/perl-chef
71. perlbrew cookbook resources:
perlbrew_perl – install a perl
perlbrew_lib – create a local::lib
perlbrew_cpanm – install modules to perl or lib
perlbrew_run – run shell commands under a
particular perlbrew and/or lib
72. carton cookbook resource:
carton_app – deploy an app with carton
– start in directory with the app source
– configure for a specific perlbrew perl
– install versioned dependencies with carton
– create a runit service for the app
– start the app
73. Time for an example:
Deploying a “Hello World” Plack app
https://github.com/dagolden/zzz-hello-world
74. Steps for creating Hello World
1. Write the application
2. Use carton to create a carton.lock file with
versioned dependency info
3. Write a simple cookbook for the application
4. Check it all into git
5. Deploy the application with Chef
79. use 5.008001;
use strict;
use warnings;
package ZZZ::Hello::World;
our $VERSION = "1.0";
use Plack::Request;
sub run_psgi {
my $self = shift;
my $req = Plack::Request->new(shift);
my $res = $req->new_response(200);
$res->content_type('text/html');
$res->body(<<"HERE");
<html>
<head><title>Hello World</title></head>
<body>
<p>Hello World. It is @{[scalar localtime]}</p>
...
</body>
</html>
HERE
return $res->finalize;
}
1;
(the module just returns some dynamic HTML)
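The module needs a small PSGI entry point for Starman to serve; the slides don't show it, so here is a hypothetical `app.psgi` (filename and wiring assumed) that hands each PSGI environment to the module:

```perl
#!/usr/bin/env perl
# app.psgi — hypothetical entry point for the Hello World app
use strict;
use warnings;
use lib 'lib';
use ZZZ::Hello::World;

# each request's PSGI env hash is passed through to run_psgi
my $app = sub { ZZZ::Hello::World->run_psgi(shift) };
$app;
```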
82. During development, carton installs
dependencies locally and creates a versioned
dependency file called carton.lock
$ carton install
# installs dependencies into a local directory
# creates carton.lock if it doesn't exist
# carton.lock is a JSON file of dependency info
83. During deployment, carton installs dependencies
from carton.lock and runs the app with them
$ carton install
# installs dependencies into a local directory
$ carton exec -Ilib -- starman -p 8080 app.psgi
# runs the app using carton installed deps
87. # perlbrew to execute with
default['hello-world']['perl_version'] = 'perl-5.16.0'
# Install directory, repo and tag
default['hello-world']['deploy_dir'] = '/opt/hello-world'
default['hello-world']['deploy_repo'] =
'https://github.com/dagolden/zzz-hello-world.git'
default['hello-world']['deploy_tag'] = 'master'
# Service user/group/port
default['hello-world']['user'] = "nobody"
default['hello-world']['group'] = "nogroup"
default['hello-world']['port'] = 8080
(attributes are variables used in the recipe; can be customized per-node during deployment)
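With chef-solo, per-node customization of those attributes might look like this node JSON (values illustrative):

```json
{
  "run_list": [ "recipe[hello-world]" ],
  "hello-world": {
    "port": 9000,
    "deploy_tag": "v1.0.0"
  }
}
```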
89. include_recipe 'carton'
package 'git-core'
git node['hello-world']['deploy_dir'] do
repository node['hello-world']['deploy_repo']
reference node['hello-world']['deploy_tag']
notifies :restart, "carton_app[hello-world]"
end
carton_app "hello-world" do
perlbrew node['hello-world']['perl_version']
command "starman -p #{node['hello-world']['port']} app.psgi"
cwd node['hello-world']['deploy_dir']
user node['hello-world']['user']
group node['hello-world']['group']
end
carton_app "hello-world" do
action :start
end
(recipe ensures carton and git are available...)
90. include_recipe 'carton'
package 'git-core'
git node['hello-world']['deploy_dir'] do
repository node['hello-world']['deploy_repo']
reference node['hello-world']['deploy_tag']
notifies :restart, "carton_app[hello-world]"
end
carton_app "hello-world" do
perlbrew node['hello-world']['perl_version']
command "starman -p #{node['hello-world']['port']} app.psgi"
cwd node['hello-world']['deploy_dir']
user node['hello-world']['user']
group node['hello-world']['group']
end
carton_app "hello-world" do
action :start
end
(git resource specifies where application code goes...)
91. include_recipe 'carton'
package 'git-core'
git node['hello-world']['deploy_dir'] do
repository node['hello-world']['deploy_repo']
reference node['hello-world']['deploy_tag']
notifies :restart, "carton_app[hello-world]"
end
carton_app "hello-world" do
perlbrew node['hello-world']['perl_version']
command "starman -p #{node['hello-world']['port']} app.psgi"
cwd node['hello-world']['deploy_dir']
user node['hello-world']['user']
group node['hello-world']['group']
end
carton_app "hello-world" do
action :start
end
(attributes parameterize the resource statement...)
92. include_recipe 'carton'
package 'git-core'
git node['hello-world']['deploy_dir'] do
repository node['hello-world']['deploy_repo']
reference node['hello-world']['deploy_tag']
notifies :restart, "carton_app[hello-world]"
end
carton_app "hello-world" do
perlbrew node['hello-world']['perl_version']
command "starman -p #{node['hello-world']['port']} app.psgi"
cwd node['hello-world']['deploy_dir']
user node['hello-world']['user']
group node['hello-world']['group']
end
carton_app "hello-world" do
action :start
end
(carton_app resource installs deps and sets up a runit service...)
93. include_recipe 'carton'
package 'git-core'
git node['hello-world']['deploy_dir'] do
repository node['hello-world']['deploy_repo']
reference node['hello-world']['deploy_tag']
notifies :restart, "carton_app[hello-world]"
end
carton_app "hello-world" do
perlbrew node['hello-world']['perl_version']
command "starman -p #{node['hello-world']['port']} app.psgi"
cwd node['hello-world']['deploy_dir']
user node['hello-world']['user']
group node['hello-world']['group']
end
carton_app "hello-world" do
action :start
end
(again, attributes parameterize the resource...)
94. include_recipe 'carton'
package 'git-core'
git node['hello-world']['deploy_dir'] do
repository node['hello-world']['deploy_repo']
reference node['hello-world']['deploy_tag']
notifies :restart, "carton_app[hello-world]"
end
carton_app "hello-world" do
perlbrew node['hello-world']['perl_version']
command "starman -p #{node['hello-world']['port']} app.psgi"
cwd node['hello-world']['deploy_dir']
user node['hello-world']['user']
group node['hello-world']['group']
end
carton_app "hello-world" do
action :start
end
(finally, the resource is idempotently started...)
95. These files – and the Perl Chef
cookbooks – are all you need
97. Steps for deployment of Hello World
1. Set up a Vagrant virtual machine
2. Prepare Pantry to manage Chef Solo
3. Get Hello World cookbook and dependencies
4. Configure virtual machine for Hello World
5. Deploy
100. Vagrant is a tool for managing virtual machines
“Can I have a VirtualBox now, please?”
101. Vagrant is a tool for managing virtual machines
$ vagrant box add base
http://files.vagrantup.com/lucid32.box
$ vagrant init
$ vagrant up
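`vagrant init` writes a Vagrantfile describing the VM; a minimal sketch in current Vagrant syntax (the 2012-era syntax differed), forwarding the app's port 8080 so the deployed Hello World app is reachable from the host:

```ruby
# Vagrantfile sketch — current syntax; box name from the slide's example
Vagrant.configure("2") do |config|
  config.vm.box = "base"
  # expose the Hello World app (Starman on 8080) to the host
  config.vm.network :forwarded_port, guest: 8080, host: 8080
end
```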
102. Vagrant is great for testing Chef deployment
(and other things, besides)
103. Steps for deployment of Hello World
1. Set up a Vagrant virtual machine
2. Prepare Pantry to manage Chef Solo
3. Get Hello World cookbook and dependencies
4. Configure virtual machine for Hello World
5. Deploy
104. Chef Solo is Chef without a central
configuration server
(good for demos and smaller deployments)
105. Chef – you push config data to Chef Server
– nodes run Chef Client to pull config
from Chef Server and execute it
Chef Solo – you push config data to nodes
– you run Chef Solo remotely
107. One advantage of Chef Solo...
Your config repo is canonical
(i.e. you don't have to track what you've pushed to the central server)
109. Steps for deployment of Hello World
1. Set up a Vagrant virtual machine
2. Prepare Pantry to manage Chef Solo
3. Get Hello World cookbook and dependencies
4. Configure virtual machine for Hello World
5. Deploy
111. Pantry is a tool for automating Chef Solo
$ pantry create node server.example.com
$ pantry apply node server.example.com
--role web --recipe myapp
$ pantry sync node server.example.com
112. Pantry is written in Perl and available on CPAN
(Similar to pocketknife [Ruby] and littlechef [Python])
114. Steps for deployment of Hello World
1. Set up a Vagrant virtual machine
2. Prepare Pantry to manage Chef Solo
3. Get Hello World cookbook and dependencies
4. Configure virtual machine for Hello World
5. Deploy
119. Four cookbooks must be downloaded
and copied to the 'cookbooks' directory
– hello-world
– carton
– perlbrew
– runit
120. Steps for deployment of Hello World
1. Set up a Vagrant virtual machine
2. Prepare Pantry to manage Chef Solo
3. Get Hello World cookbook and dependencies
4. Configure virtual machine for Hello World
5. Deploy