R.I. Pienaar gives an introduction to MCollective at the San Francisco and Silicon Valley joint Puppet User Group, May 2013.
Silicon Valley PUG: http://www.meetup.com/SiPMUG/
SF PUG: http://www.meetup.com/SFPMUG/#past
MCollective is a framework for system management and orchestration that allows users to execute tasks across many servers simultaneously. It uses message queuing as middleware to facilitate communication between a client and servers. Users can write custom agents in Ruby to perform actions on servers in response to messages. MCollective provides features for system discovery, inventory collection, task execution, and configuration management across thousands of nodes.
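As an illustration of the agent model described above, here is a plain-Ruby sketch of the "named action invoked by a message" pattern. This is not real MCollective code (actual agents subclass `MCollective::RPC::Agent` and are dispatched by the `mcollectived` daemon); the class and field names are invented for illustration.

```ruby
# Illustrative sketch of MCollective's agent/action pattern in plain Ruby.
# A real agent subclasses MCollective::RPC::Agent and runs inside the daemon;
# here we just model "a named action invoked by a request message".
class UptimeAgent
  # Each action receives a request hash and returns a reply hash.
  def handle(request)
    case request[:action]
    when "uptime"
      { statuscode: 0, data: { uptime_seconds: uptime_seconds } }
    else
      { statuscode: 2, statusmsg: "unknown action #{request[:action]}" }
    end
  end

  def uptime_seconds
    # Linux-specific source; fall back to 0.0 on other platforms.
    File.exist?("/proc/uptime") ? File.read("/proc/uptime").split.first.to_f : 0.0
  end
end

reply = UptimeAgent.new.handle(action: "uptime")
puts reply[:statuscode]  # 0 on success
```

In the real framework the request arrives over the message bus, and discovery filters decide which nodes run the action at all.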
Using MCollective with Chef - cfgmgmtcamp.eu 2014, by Zachary Stevens
It's time to move beyond SSH for infrastructure management.
MCollective is an awesome orchestration framework.
Chef is an awesome configuration management tool.
Contrary to popular belief, they work great together.
Speaker notes available here: https://dl.dropboxusercontent.com/u/369373/cfgmgmtcamp.eu%202014%20-%20Chef%20%26%20MCollective.pdf
MCollective is an open source framework for server orchestration and parallel job execution. It provides asynchronous, event-driven communication between nodes using a message broker such as RabbitMQ. Nodes can be targeted based on facts, classes, or other criteria. Plugins allow MCollective to manage configurations, run Puppet, install packages, manage firewall rules, and more across large server fleets. It provides a scalable, decentralized alternative to SSH loops for orchestrating infrastructure changes and operations.
Introduction to orchestration using MCollective, by Puppet
"Introduction to orchestration using MCollective" by Pieter Loubser at Puppet Camp London 2013. Find the video here: http://puppetlabs.com/community/puppet-camp
The document discusses profiling Puppet performance, including profiling Facter facts, catalog compilation, and the agent run. It shows how to use tools like 'facter --timing', 'puppet apply --profile', and processing reports to identify slow areas. Specific optimizations discussed include caching external facts, avoiding repeated Hiera lookups, and profiling resources by type and time.
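The "processing reports" step above can be sketched in a few lines of Ruby. The field names mirror Puppet's report format (`resource_statuses`, `evaluation_time`), but the data here is inline sample data rather than a real report file, so treat it as an assumption-laden illustration.

```ruby
require "yaml"

# Hedged sketch: rank resources by evaluation time from a Puppet-style report.
# Field names follow Puppet's report format; the data is invented sample data.
report_yaml = <<~YAML
  resource_statuses:
    "Package[nginx]":  { evaluation_time: 4.21 }
    "File[/etc/motd]": { evaluation_time: 0.03 }
    "Service[nginx]":  { evaluation_time: 1.87 }
YAML

report = YAML.safe_load(report_yaml)
slowest = report["resource_statuses"]
            .sort_by { |_name, status| -status["evaluation_time"] }
            .first(2)

slowest.each { |name, s| puts format("%-20s %6.2fs", name, s["evaluation_time"]) }
```

Against a real agent run you would load the report YAML that Puppet stores on the master instead of the inline sample.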
Testing your infrastructure with Litmus, by Bram Vogelaar
We have been able to test our Puppet modules using rspec-puppet and serverspec for a while now, and the quality of our code is improving because of it. This talk introduces the new kid on the block, Litmus, and shows how to use it to test Puppet modules and how to convert your existing modules to make use of it.
PuppetCamp SEA 1 - Puppet Deployment at OnApp, by Walter Heck
Wai Keen Woon, CTO CDN Division OnApp Malaysia, gave an interesting overview of what the Puppet architecture at OnApp looks like. The CDN division at OnApp is a large provider of CDN services, and as such makes a very interesting candidate for a case study.
This document discusses using Puppet for scalable systems management. It begins with challenges faced by system administrators and an introduction to Puppet. It covers installing and configuring Puppet, including certificate signing. It also discusses managing infrastructure with Puppet through classes, modules, and templates. Examples of Puppet configuration are provided.
Puppet is a configuration management tool that allows easy deployment and configuration of anywhere from one to a thousand servers (and even more). Even though it's common knowledge for devops, Puppet is still a strange piece of software for developers. How does it work, and what can it do for you as a developer?
Dennis Matotek, Technical Lead Platforms at Experian Hitwise Australia, gave an excellent presentation on setting up puppet using vagrant, puppet and testing, including a full demo of rspec-puppet and Jenkins.
Walter Heck, founder of OlinData, presented a step-by-step guide on how to set up a proper puppet repository, complete with the brand new PuppetDB, exported resources and usage of open source modules.
Things like Infrastructure as Code, Service Discovery and Config Management can and have helped us to quickly build and rebuild infrastructure, but we haven't spent nearly enough time training ourselves to review, monitor and respond to outages. Does our platform degrade gracefully, and what does a high CPU load really mean? What can we learn from level 1 outages so we can run our platforms more reliably?
We all love infrastructure as code; we automate everything™. However, making sure all of our infrastructure assets are monitored effectively can be a slow and resource-intensive multi-stage process. During this talk we will investigate how to set up a Nomad cluster that can automatically scale our infrastructure both horizontally and vertically to cope with increased user demand.
This talk will focus on configuring Nomad and its new autoscaler component to make data-driven decisions about scaling Nomad jobs in or out to fit current customer usage.
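The "data-driven decision about scaling in or out" above boils down to comparing an observed metric against target bounds. Here is a hedged Ruby sketch of that logic; the thresholds and metric are invented for illustration, and the real Nomad Autoscaler expresses this as a scaling policy in HCL, not Ruby.

```ruby
# Hedged sketch of an autoscaler's scale-in/scale-out decision: compare an
# observed metric (here, a made-up CPU percentage) against target bounds and
# return the new allocation count, clamped to [min, max].
def desired_count(current:, cpu_percent:, high: 75.0, low: 25.0, min: 1, max: 10)
  return [current + 1, max].min if cpu_percent > high  # scale out under load
  return [current - 1, min].max if cpu_percent < low   # scale in when idle
  current                                              # within the target band
end

puts desired_count(current: 3, cpu_percent: 90.0)  # scales out to 4
```

A production policy would also add cooldowns and smoothing so one noisy sample doesn't flap the job count.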
A gentle introduction to observability and how to set up a highly available monitoring platform across multiple datacenters.
During this talk we will investigate how to set up and run a monitoring platform across two DCs using Prometheus, Loki, Tempo, Alertmanager and Grafana, monitoring some services along the way and sharing lessons learned.
Puppet is an open source configuration management tool that can be used to automate the configuration and management of infrastructure and applications. It uses a client-server architecture and declarative language to define and enforce the desired state of systems. Other HashiCorp tools like Packer, Terraform, Vault and Nomad can integrate with Puppet for tasks like infrastructure provisioning, secrets management and workload orchestration. Bolt is a task orchestration tool from Puppet that can be used to automate operational tasks across infrastructure defined by tools like Terraform. Consul provides service discovery and configuration for the Puppet infrastructure.
How Danga::Socket handles asynchronous processing and how to write asynchrono..., by Gosuke Miyashita
The document discusses asynchronous programming in Perl and how to write asynchronous plugins for Perlbal using Danga::Socket. Key points include:
1) Danga::Socket provides asynchronous I/O event handling using its main event loop and allows adding I/O watchers and timers.
2) To write asynchronous Perlbal plugins, the main process should be based on Danga::Socket's event loop and use callbacks. The plugin must return 1 and a callback must restart processing in ClientProxy.
3) Perlbal itself may also need patching to work with asynchronous plugins by checking an async completion flag in ClientProxy before running plugins.
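The event-loop model described in point 1 can be sketched in any language. Below is a toy single-threaded loop in Ruby illustrating the same idea — register I/O read watchers and timers, then let one `select` loop dispatch callbacks. This is an illustration of the pattern only, not Danga::Socket itself (which is Perl).

```ruby
# Toy event loop sketching Danga::Socket's model: IO read watchers plus
# timers, dispatched from a single IO.select loop.
class TinyLoop
  def initialize
    @watchers = {}   # IO => callback, fired when the IO is readable
    @timers   = []   # [fire_at, callback] pairs
  end

  def watch_read(io, &blk)
    @watchers[io] = blk
  end

  def add_timer(delay, &blk)
    @timers << [Time.now + delay, blk]
  end

  # One turn of the loop: wait for readable IO or the next timer, dispatch.
  def run_once
    timeout = @timers.map { |at, _| at - Time.now }.min
    timeout = 0 if timeout && timeout < 0
    ready, = IO.select(@watchers.keys, nil, nil, timeout)
    (ready || []).each { |io| @watchers[io].call(io) }
    due, @timers = @timers.partition { |at, _| at <= Time.now }
    due.each { |_, blk| blk.call }
  end
end

r, w = IO.pipe
events = []
el = TinyLoop.new
el.watch_read(r) { |io| events << io.read_nonblock(100) }
el.add_timer(0) { events << "timer" }
w.write("hello")
el.run_once
```

The asynchronous-plugin constraint in points 2 and 3 follows from this shape: a callback must never block, or it stalls every other watcher and timer in the loop.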
This session guides you through Vagrant, shows some tips and tricks, and is targeted at software developers.
Practical activities can be found here: https://github.com/akranga/devops-hackathon-1
IT Infrastructure Through The Public Network: Challenges And Solutions, by Martin Jackson
Identifying the challenges that companies face when they wish to adopt Infrastructure as a Service like those from Amazon and Rackspace and possible solutions to those problems. This presentation seeks to provide insight and possible solutions, covering the areas of security, availability, cloud standards, interoperability, vendor lock in and performance management.
Design Summit - Migrating to Ruby 2 - Joe Rafaniello, by ManageIQ
ManageIQ currently runs on Ruby 1.9.3. This presentation is about the effort to move ManageIQ to Ruby 2.x to take advantage of new features and performance in the language and runtime engine.
For more on ManageIQ, see http://manageiq.org/
This document provides an introduction to using Ansible in a top-down approach. It discusses using Ansible to provision infrastructure including load balancers, application servers, and databases. It covers using ad-hoc commands and playbooks to configure systems. Playbooks can target groups of hosts, apply roles to automate common tasks, and allow variables to customize configurations. Selective execution allows running only certain parts of a playbook. Overall the document demonstrates how Ansible can be used to deploy and manage infrastructure and applications in a centralized, automated way.
DevOps hackathon Session 2: Basics of Chef, by Antons Kranga
The document discusses infrastructure provisioning using Chef. It explains that Chef uses a declarative approach where you describe the desired state rather than how to achieve it. Cookbooks contain recipes that describe resources to bring a VM to the specified state. Cookbooks are repeatable, testable units that can install packages, configure services, create users and templates. Vagrant and Chef are often used together, with Vagrant managing VMs and triggering Chef provisioning to install software inside VMs.
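The declarative "describe the desired state, not how to achieve it" idea above can be shown with a toy resource DSL in plain Ruby. Real Chef resources (`package`, `service`, `template`, ...) act on the actual system; this toy only manages an in-memory "filesystem" hash so it stays self-contained, and all names in it are invented.

```ruby
# Toy illustration of Chef's declarative model: a resource declares desired
# state, and converge makes "reality" (here, an in-memory hash) match it.
FILES = {}  # fake filesystem: path => content

class FileResource
  def initialize(path, &blk)
    @path = path
    instance_eval(&blk) if blk   # run the declaration block against self
  end

  def content(value = nil)
    value ? @content = value : @content
  end

  # Idempotent converge: only "write" when state differs from the declaration.
  def converge
    return :up_to_date if FILES[@path] == @content
    FILES[@path] = @content
    :updated
  end
end

def file(path, &blk)
  FileResource.new(path, &blk).converge
end

first  = file("/etc/motd") { content "welcome" }  # changes state
second = file("/etc/motd") { content "welcome" }  # no-op second run
```

Idempotence is the key property: running the same recipe twice converges to the same state, which is what makes repeated Chef runs safe.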
Ansible is an open source automation platform, written in Python, that can be used for configuration management, application deployment, cloud provisioning, ad-hoc task execution, multinode orchestration and so on. This talk is an introduction to Ansible for beginners, including tips such as how to use containers to mimic multiple machines while iteratively automating tasks or testing.
Things like Infrastructure as Code, Service Discovery and Config Management can and have helped us to quickly build and rebuild infrastructure, but we haven't spent nearly enough time training ourselves to review, monitor and respond to outages. Does our platform degrade gracefully, and what does a high CPU load really mean? What can we learn from level 1 outages so we can run our platforms more reliably?
We all love infrastructure as code; we automate everything™. However, making sure all of our infrastructure assets are monitored effectively can be a slow and resource-intensive multi-stage process. During this talk we will investigate how to set up and observe a service mesh platform built on HashiCorp's Consul Connect by recording its metrics, logs and traces.
This talk will focus on configuring and analysing the metrics, logs and traces Consul Connect produces using Prometheus, Loki, Tempo and Grafana.
This document proposes using RPM packages to deploy Java applications to Red Hat Linux systems in a more automated and standardized way. Currently, deployment is a manual multi-step process that is slow, error-prone, and requires detailed application knowledge. The proposal suggests using Maven and Jenkins to build Java applications into RPM packages. These packages can then be installed, upgraded, and rolled back easily using common Linux tools like YUM. This approach simplifies deployment, improves speed, enables easy auditing of versions, and allows for faster rollbacks compared to the current process.
A story of a Ruby programmer having to understand that learning Erlang is more than just syntax. Learn differences in paradigms, pitfalls and applied use cases for this incredibly powerful language
The document discusses building internal tooling for large-scale continuous delivery. It describes implementing continuous delivery for over 100,000 nodes by developing an overlay manager framework that uses a common package format, configuration repository, and CLI tool. The framework allows configuring and deploying components across large fleets within minutes with a 98%+ success rate.
Puppet Performance Profiling - CM Camp 2015, by ripienaar
This talk will cover the basic life cycle of a Puppet catalog from compilation request to report processing. It will explore the performance of some of the life cycle steps and show how you might instrument these steps using tools Puppet makes available.
Along the way it will provide hints and tips on how to write performant facts and manifests.
The document introduces the Marionette Collective (MCollective), a Ruby-based orchestration framework that performs parallel execution over middleware. It allows reusable, versioned automation code to be run ad hoc or via APIs, scripts, and REST. The framework complements configuration management and allows flexible, pluggable extensions. It also features Puppet resource discovery, authentication, authorization, auditing, and reporting.
This document summarizes a presentation about new features in Puppet 4. It discusses Puppet 4's major internal rewrite and formal language specification. Some key features covered include native data types and merging, improved iteration capabilities, resource defaults, and using facts as native data structures. It also overviewed updated functions, native modules, lookup strategies, and the all-in-one packaging of Puppet 4 agents.
Sneha discusses using Prometheus for observability and product release at DigitalOcean. She explains how they instrument services with metrics, logs, and traces from the beginning of development. They perform load testing and chaos experiments to identify issues and tune systems and alerts before release. Examples are provided for how DHCP and VPC services were monitored throughout their development cycles.
London Puppet Camp 2015: Service Discovery and Puppet, by Puppet
This document discusses service discovery and how it can be implemented using Puppet and Consul. It begins with an introduction to the presenter and overview of service discovery challenges. The main points are:
- Consul is an open source service discovery tool that provides service registration, health checking, and key-value storage. It uses a distributed consensus protocol for strongly consistent data.
- Consul can be integrated with Puppet in several ways, including using Puppet modules to install and configure Consul, accessing Consul's key-value store from Hiera, and defining service checks and registrations with Puppet code.
- Security features like Hiera eyaml and GPG can encrypt sensitive data when using Consul's key-value store.
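The "service registration and health checking" points above can be made concrete by building the JSON service definition Consul expects. The field names below follow Consul's service definition format (`service`, `port`, `check`, `interval`); the service name, port, and health path are example values.

```ruby
require "json"

# Build a Consul-style service definition with an HTTP health check.
# Field names follow Consul's service definition format; the concrete
# service name, port, and /health path are illustrative assumptions.
def consul_service(name:, port:, health_path: "/health", interval: "10s")
  {
    service: {
      name: name,
      port: port,
      check: {
        http: "http://localhost:#{port}#{health_path}",
        interval: interval
      }
    }
  }
end

puts JSON.pretty_generate(consul_service(name: "web", port: 8080))
```

In the Puppet integration the talk describes, a module would typically template this JSON into the agent's config directory rather than generate it at runtime.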
Puppet Camp London Fall 2015 - Service Discovery and Puppet, by Marc Cluet
This document discusses service discovery and how it can be implemented using Consul. It begins with an introduction to the presenter and overview of service discovery challenges. The main points are:
- Consul is a service discovery tool that allows services to register themselves and discover other services via API or DNS queries. It supports health checking and secure key-value storage.
- Consul uses agents running on each node that register services and perform health checks. Services can be discovered via the REST API or DNS queries. It provides a strongly consistent key-value store.
- Puppet can integrate with Consul for service discovery via Puppet modules, a Hiera backend, or direct API access. This allows dynamically generating configurations from service information in Consul.
This document describes a simple call center platform built with PHP and Asterisk. It includes:
1) Configuring Asterisk for agents and callers with dialplans, queues, and extensions.
2) Creating a PHP server to communicate with Asterisk and a Webpage using libraries like Ratchet and ReactPHP. It keeps call center state in Redis.
3) Building a simple dashboard in Vue.js to display calls, agents, connections and statistics over a WebSocket connection.
4) Deploying the solution to an Ubuntu VM using Ansible playbooks.
The biggest challenge in performance tuning is identifying the root cause of the bottleneck. Once you find it, the fix often becomes trivial. However, this detective work takes patience, skills, and effort, so we often attempt to guess the cause, by trying out tentative fixes. The result: messy code, waste of time and money, and frustration. During this talk you will learn how to correctly zoom in on the bottleneck using three levels of profiling: distributed tracing with Zipkin, metrics with Micrometer, and profiling with the Java Flight Recorder already built into your JVM. We’ll focus on the latter and learn how to read a flame graph to trace some common issues of backend systems like connection/thread pool starvation, time-consuming aspects, hot methods, and lock contention, even if these occur in library code you did not write.
Chris Swan ONUG Academy - Container Networks Tutorial, by Cohesive Networks
Slides from Chris Swan's ONUG Academy "Hands-On Container Networks" on May 12, 2015
This hands on session will begin by looking at how Docker modifies a Linux host to enable containers to be connected to a network. It will then go through how applications running in containers can be connected together, and the different options for interconnectivity on a host and between hosts. Finally we will take a look at running network application services inside of containers.
Syllabus
Learn what Docker does to your Linux host on installation.
Connect applications running across multiple containers using configuration metadata and compositing tools.
Understand the different Docker networking modes (host, container, none).
Using Pipework to customise network configuration.
Connecting containers across VMs using Open vSwitch.
Using containers for application network services such as proxies, load balancers, and TLS termination.
Learning Objective 1: Understand how containers relate to the host network, and the consequences that has for services running within containers
Learning Objective 2: Understand the different ways that containers can be networked and internetworked.
Learning Objective 3: Use containers to run network application services.
About the topic:
Containers aren’t a new thing, but the Docker project has made them a hot topic as organisations look at new ways to build, ship and run their applications. This brings new challenges for the network as containers are likely to be ten times as numerous as virtual machines. At the same time there is regulatory pressure to move away from the flat LAN model and deliver greater separation and segregation. This presentation will look at how these two forces are coming together, firstly by examining how containers are networked and some of the new approaches and challenges that come with that. This will be followed by a look at how overlay networks are being deployed to achieve ‘microsegmentation’, and ultimately drive a shift towards application centric networking. Of course these forces will collide, bringing us to contained networks of containers.
In this talk, Carlos de la Guardia shows how a Pyramid application can be deployed using a front end web server, like Apache or Nginx. He also covers how to automate deployment using buildout and a PyPI clone, and post-deployment creation of a variety of maintenance scripts and cron jobs that perform application specific tasks through Pyramid.
A link to audio of the presentation is here: http://2011ploneconference.sched.org/event/29a2f357905e4ab0fe3048c53bc1c94c
Puppet Camp Charlotte 2015: Exporting Resources: There and Back Again, by Puppet
The document discusses different approaches taken by the author's organization, WFU, to manage firewall rules using Puppet. It begins by providing context about WFU's Puppet infrastructure. It then describes three attempts made to manage firewall rules: 1) hardcoding all values; 2) exporting rules explicitly and collecting them; and 3) storing rule data in Hiera and applying rules directly without exporting/collecting. Each attempt is evaluated in terms of its results. The document aims to share lessons learned in managing firewall rules with exported resources and recommends planning naming conventions and environments carefully.
This is a talk on how you can monitor your microservices architecture using Prometheus and Grafana. This has easy to execute steps to get a local monitoring stack running on your local machine using docker.
Rhebok, High Performance Rack Handler / RubyKaigi 2015, by Masahiro Nagano
This document discusses Rhebok, a high performance Rack handler written in Ruby. Rhebok uses a prefork architecture for concurrency and achieves 1.5-2x better performance than Unicorn. It implements efficient network I/O using techniques like IO timeouts, TCP_NODELAY, and writev(). Rhebok also uses the ultra-fast PicoHTTPParser for HTTP request parsing. The document provides an overview of Rhebok, benchmarks showing its performance, and details on its internals and architecture.
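The prefork architecture described above can be illustrated with a toy Ruby sketch: a parent forks workers up front and each worker serves requests independently. Real Rhebok and Unicorn workers accept on a shared listening socket; to stay self-contained this toy gives each worker its own socketpair, so it shows the process model rather than the actual network I/O tricks (TCP_NODELAY, writev) the talk covers.

```ruby
require "socket"

# Toy prefork sketch: fork two workers up front; each echoes requests
# (upcased) over its own socketpair until the parent closes its end.
workers = 2.times.map do
  parent_io, child_io = UNIXSocket.pair
  pid = fork do
    parent_io.close
    # Worker loop: one request per line, one reply per line.
    while (line = child_io.gets)
      child_io.puts(line.chomp.upcase)
    end
  end
  child_io.close
  [pid, parent_io]
end

# Send one request to each worker and collect the replies.
replies = workers.map do |_pid, io|
  io.puts("hello")
  io.gets.chomp
end

# Closing the parent ends makes each worker's gets return nil; then reap.
workers.each { |pid, io| io.close; Process.wait(pid) }
```

Because each worker is a separate process, a slow or crashed request only affects that worker, which is a large part of why prefork servers are robust under load.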
This document discusses Remote Method Invocation (RMI) in Java. It describes the basic concepts and architecture of RMI, including how remote objects are represented by stubs and skeletons. It then provides step-by-step instructions for building a simple RMI application with four required classes: an interface for the remote object, the client, the remote object implementation, and the server. It also gives an example of building a more complex RMI application for performing numerical integration remotely.
This document outlines an introduction to Docker for Java developers, including running Java microservices and applications in Docker containers. It discusses building Docker images with Maven, interacting with the Docker API in Java, continuous delivery with Jenkins and Docker, and deploying Java applications to production using Docker clusters like Kubernetes and Docker Swarm.
Session: A Reference Architecture for Running Modern APIs with NGINX Unit and...NGINX, Inc.
Building and deploying cloud native APIs is a complex operation, and can require a multitude of components. In this workshop we focus on the fundamentals of deploying the runtime API code and publishing the API through an API gateway. To achieve this we use NGINX Unit as a polyglot application server and NGINX web server as an API gateway. With this combination we deliver a solution lightweight enough for dev and strong enough for production.
You will learn how to use NGINX Unit to run one or more apps and APIs in a variety of languages, including seamlessly deploying new versions. You will then see the best practices for how to configure NGINX to perform the common API gateway functions of request routing, rate limiting, and authentication for multiple APIs. We will also touch on advanced use cases such as HTTP method enforcement, and JSON validation.
No previous experience of NGINX or NGINX Unit is required, but a basic knowledge of HTTP and JSON/REST APIs is valuable.
Web Application Development with Quasar Framework
In this tutorial, You can see a rough development process with Quasar Framework which is known as front-end framework with VueJS components.
- Frontend : Quasar (based on Vue.js)
- Backend : Google firebase
- Result
* Web Page : https://checkin.wonyong.net
* Play Store : https://play.google.com/store/apps/details?id=org.kopochecker.app
- Youtube (Korean) : https://www.youtube.com/watch?v=HEttw-RSXxg&list=PLlWoe5hcgrk4qQVIBxDA3d-5ZRfYuITxb
Similar to Introduction to MCollective - SF PUG (20)
Puppet camp2021 testing modules and controlrepoPuppet
This document discusses testing Puppet code when using modules versus a control repository. It recommends starting with simple syntax and unit tests using PDK or rspec-puppet for modules, and using OnceOver for testing control repositories, as it is specially designed for this purpose. OnceOver allows defining classes, nodes, and a test matrix to run syntax, unit, and acceptance tests across different configurations. Moving from simple to more complex testing approaches like acceptance tests is suggested. PDK and OnceOver both have limitations for testing across operating systems that may require customizing spec tests. Infrastructure for running acceptance tests in VMs or containers is also discussed.
This document appears to be for a PuppetCamp 2021 presentation by Corey Osman of NWOPS, LLC. It includes information about Corey Osman and NWOPS, as well as sections on efficient development, presentation content, demo main points, Git strategies including single branch and environment branch strategies, and workflow improvements. Contact information is provided at the bottom.
The document discusses operational verification and how Puppet is working on a new module to provide more confidence in infrastructure health. It introduces the concept of adding check resources to catalogs to validate configurations and service health directly during Puppet runs. Examples are provided of how this could detect issues earlier than current methods. Next steps outlined include integrating checks into more resource types, fixing reporting, integrating into modules, and gathering feedback. This allows testing and monitoring to converge by embedding checks within configurations.
This document provides tips and tricks for using Puppet with VS Code, including links to settings examples and recommended extensions to install like Gitlens, Remote Development Pack, Puppet Extension, Ruby, YAML Extension, and PowerShell Extension. It also mentions there will be a demo.
- The document discusses various patterns and techniques the author has found useful when working with Puppet modules over 10+ years, including some that may be considered unorthodox or anti-patterns by some.
- Key topics covered include optimization of reusable modules, custom data types, Bolt tasks and plans, external facts, Hiera classification, ensuring resources for presence/absence, application abstraction with Tiny Puppet, and class-based noop management.
- The author argues that some established patterns like roles and profiles can evolve to be more flexible, and that running production nodes in noop mode with controls may be preferable to fully enforcing on all nodes.
Applying Roles and Profiles method to compliance codePuppet
This document discusses adapting the roles and profiles design pattern to writing compliance code in Puppet modules. It begins by noting the challenges of writing compliance code, such as it touching many parts of nodes and leading to sprawling code. It then provides an overview of the roles and profiles pattern, which uses simple "front-end" roles/interfaces and more complex "back-end" profiles/implementations. The rest of the document discusses how to apply this pattern when authoring Puppet modules for compliance - including creating interface and implementation classes, using Hiera for configuration, and tools for reducing boilerplate code. It aims to provide a maintainable structure and simplify adapting to new compliance frameworks or requirements.
This document discusses Kinney Group's Puppet compliance framework for automating STIG compliance and reporting. It notes that customers often implement compliance Puppet code poorly or lack appropriate Puppet knowledge. The framework aims to standardize compliance modules that are data-driven and customizable. It addresses challenges like conflicting modules and keeping compliance current after implementation. The framework generates automated STIG checklists and plans future integration with Puppet Enterprise and Splunk for continued compliance reporting. Kinney Group cites practical experience implementing the framework for various military and government customers.
Enforce compliance policy with model-driven automationPuppet
This document discusses model-driven automation for enforcing compliance. It begins with an overview of compliance benchmarks and the CIS benchmarks. It then discusses implementing benchmarks, common challenges around configuration drift and lack of visibility, and how to define compliance policy as code. The key points are that automation is essential for compliance at scale; a model-driven approach defines how a system should be configured and uses desired-state enforcement to keep systems compliant; and defining compliance policy as code, managing it with source control, and automating it with CI/CD helps achieve continuous compliance.
This document discusses how organizations can move from a reactive approach to compliance to a proactive approach using automation. It notes that over 50% of CIOs cite security and compliance as a barrier to IT modernization. Puppet offers an end-to-end compliance solution that allows organizations to automatically eliminate configuration drift, enforce compliance at scale across operating systems and environments, and define policy as code. The solution helps organizations improve compliance from 50% to over 90% compliant. The document argues that taking a proactive automation approach to compliance can turn it into a competitive advantage by improving speed and innovation.
Automating it management with Puppet + ServiceNowPuppet
As the leading IT Service Management and IT Operations Management platform in the marketplace, ServiceNow is used by many organizations to address everything from self service IT requests to Change, Incident and Problem Management. The strength of the platform is in the workflows and processes that are built around the shared data model, represented in the CMDB. This provides the ‘single source of truth’ for the organization.
Puppet Enterprise is a leading automation platform focused on the IT Configuration Management and Compliance space. Puppet Enterprise has a unique perspective on the state of systems being managed, constantly being updated and kept accurate as part of the regular Puppet operation. Puppet Enterprise is the automation engine ensuring that the environment stays consistent and in compliance.
In this webinar, we will explore how to maximize the value of both solutions, with Puppet Enterprise automating the actions required to drive a change, and ServiceNow governing the process around that change, from definition to approval. We will introduce and demonstrate several published integration points between the two solutions, in the areas of Self-Service Infrastructure, Enriched Change Management and Automated Incident Registration.
This document promotes Puppet as a tool for hardening Windows environments. It states that Puppet can be used to harden Windows with one line of code, detect drift from desired configurations, report on missing or changing requirements, reverse engineer existing configurations, secure IIS, and export configurations to the cloud. Benefits of Puppet mentioned include hardening Windows environments, finding drift for investigation, easily passing audits, compliance reporting, easy exceptions, and exporting configurations. It also directs users to Puppet Forge modules for securing Windows and IIS.
Simplified Patch Management with Puppet - Oct. 2020Puppet
Does your company struggle with patching systems? If so, you’re not alone — most organizations have attempted to solve this issue by cobbling together multiple tools, processes, and different teams, which can make an already complicated issue worse.
Puppet helps keep hosts healthy, secure and compliant by replacing time-consuming and error prone patching processes with Puppet’s automated patching solution.
Join this webinar to learn how to do the following with Puppet:
Eliminate manual patching processes with pre-built patching automation for Windows and Linux systems.
Gain visibility into patching status across your estate regardless of OS with new patching solution from the PE console.
Ensure your systems are compliant and patched in a healthy state
How Puppet Enterprise makes patch management easy across your Windows and Linux operating systems.
Presented by: Margaret Lee, Product Manager, Puppet, and Ajay Sridhar, Sr. Sales Engineer, Puppet.
The document discusses how Puppet can be used to accelerate adoption of Microsoft Azure. It describes lift and shift migration of on-premises workloads to Azure virtual machines. It also covers infrastructure as code using Puppet and Terraform for provisioning, configuration management using Puppet Bolt, and implementing immutable infrastructure patterns on Azure. Integrations with Azure services like Key Vault, Blob Storage and metadata service are presented. Patch management and inventory of Azure resources with Puppet are also summarized.
This document discusses using Puppet Catalog Diff to analyze the impact of changes between Puppet environments or catalogs. It provides the command line usage and options for Puppet Catalog Diff. It also discusses how to integrate Puppet Catalog Diff into CI/CD pipelines for automated impact analysis when merging code changes. Additional resources like GitHub projects and Dev.to posts are provided for learning more about diffing Puppet environments and catalogs.
ServiceNow and Puppet- better together, Kevin ReeuwijkPuppet
ServiceNow and Puppet can be integrated in four key areas: 1) Self-service infrastructure allows non-Puppet experts to control infrastructure through a ServiceNow interface; 2) Enriched change management automatically generates ServiceNow change requests from Puppet changes and populates them with impact details; 3) Automated incident registration forwards details of configuration drift corrections in Puppet to ServiceNow to create incidents; and 4) Up-to-date asset management would periodically upload Puppet inventory data to ServiceNow to keep the CMDB accurate without disruptive discovery runs.
This document discusses how Puppet Relay uses Tekton pipelines to orchestrate containerized workflows. It provides an overview of how Tekton fits into the Relay architecture, with Tekton controllers managing taskrun pods to execute workflow steps defined in YAML. Triggers can initiate workflows based on events, with reusable and composable steps for tasks like provisioning infrastructure or clearing resources. Relay also includes features for parameters, secrets, outputs, and approvals to customize workflows. An ecosystem of open source integrations provides sample workflows and steps for common use cases.
100% Puppet Cloud Deployment of Legacy SoftwarePuppet
This document discusses deploying legacy software into the AWS cloud using Puppet. It proposes modeling AWS resources like security groups, autoscaling groups, and launch configurations as Puppet resources. This would allow Puppet to provision the underlying AWS infrastructure and configure servers launched in autoscaling groups. It acknowledges challenges around server reboots but suggests they can be addressed. In summary, it argues custom Puppet resources can easily model AWS resources and using Puppet to configure autoscaling servers is possible despite some challenges around rebooting servers during deployment.
This document discusses a partnership between Republic Polytechnic's School of Infocomm and Puppet to promote DevOps practices. It introduces several people involved with the partnership and outlines their mission to prepare more IT companies and individuals for jobs in the DevOps field through training courses. The document describes some short courses offered on DevOps topics and using the Puppet and Microsoft Azure platforms. It provides an example of how Republic Polytechnic has automated infrastructure configuration using Puppet to save time and reduce errors. There is a request at the end for readers to register their interest in DevOps by completing a survey.
This document discusses continuous compliance and DevSecOps best practices followed by financial services organizations.
Continuous compliance is defined as an ongoing process of proactive risk management that delivers predictable, transparent, and cost-effective compliance results. It involves continuously monitoring compliance controls, providing real-time alerts for failures and remediation recommendations, and maintaining up-to-date policies. Best practices for continuous compliance discussed include defining CIS controls and benchmarks, achieving transparent compliance dashboards and automated fixes for breaches.
DevSecOps is introduced as bringing security earlier in the application development lifecycle to minimize vulnerabilities. It aims to make everyone accountable for security. Challenges discussed include security teams struggling to keep up with DevOps pace and
The Dynamic Duo of Puppet and Vault tame SSL Certificates, Nick MaludyPuppet
The document discusses using Puppet and Vault together to dynamically manage SSL certificates. Puppet can use the vault_cert resource to request signed certificates from Vault and configure services to use the certificates. On Windows, some additional logic is needed to retrieve certificates' thumbprints and bind services to certificates using those thumbprints. This approach provides automated certificate renewal and distribution across platforms.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
2. R.I.Pienaar | rip@devco.net | http://devco.net | @ripienaar
Who am I?
• Puppet user since 0.22.x
• Architect of MCollective
• Author of Extlookup and Hiera
• Developer at Puppet Labs London
• Blog at http://devco.net
• Tweets at @ripienaar
• Volcane on IRC
3. What is MCollective?
Framework for building server orchestration and parallel job execution systems
24. Class Filters
$ mco package status httpd -C /apache/
Server Addressing
Nodes with Puppet Classes /apache/ applied
25. Fact Filters
$ mco package status httpd -F country=uk
Server Addressing
Nodes with the custom fact “country” set to “uk”
26. Identity Filters
$ mco package status httpd -I devco.net
Server Addressing
One specific node called “devco.net”
27. Simple Combined Fact and Class Filters
$ mco package .... -W "country=uk /apache/"
Server Addressing
Nodes in the UK with Puppet Classes matching /apache/
28. Compound Statements for facts, classes and data
$ mco .. -S "((country=uk and /apache/) or customer=acme) and puppet().config_retrieval_time > 30"
Server Addressing
UK nodes with Apache, in addition to all nodes for “customer=acme” where Puppet compiles are slow
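The compound selector reads as an ordinary boolean expression over each node's metadata. The following is an illustration only, not MCollective's discovery implementation; the node hashes are invented sample data:

```ruby
# Illustration only: evaluate the -S compound statement
#   ((country=uk and /apache/) or customer=acme)
#     and puppet().config_retrieval_time > 30
# against invented node metadata.
nodes = [
  { identity: "web1.uk",  facts: { "country" => "uk" },
    classes: ["apache"], config_retrieval_time: 12 },
  { identity: "db1.acme", facts: { "customer" => "acme" },
    classes: ["mysql"],  config_retrieval_time: 45 },
]

matches = nodes.select do |n|
  ((n[:facts]["country"] == "uk" && n[:classes].any? { |c| c =~ /apache/ }) ||
    n[:facts]["customer"] == "acme") &&
    n[:config_retrieval_time] > 30
end

puts matches.map { |n| n[:identity] }   # only db1.acme clears every clause
```

web1.uk satisfies the country/class clause but its compile time is fast, so only db1.acme is selected.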
36. MCollective Agents
• Units of addressable logic
• An agent has actions - the package agent has update, status, install, uninstall etc
• Written in Ruby; actions can also be written in Python, PHP, Perl and others
• Deployable for interactive commands, background daemons, SOA style services, etc
• Optional custom user interfaces
37. Writing an Agent: generate a skeleton agent
$ mco plugin generate agent nrpe actions=runcommand
Created plugin directory : nrpe
Created DDL file : nrpe/agent/nrpe.ddl
Created Agent file : nrpe/agent/nrpe.rb
38. R.I.Pienaar | rip@devco.net | http://devco.net | @ripienaar
DDL File describes the agent for UI
generation and input validation
metadata :name => "nrpe",
:description => "NRPE Agent",
:author => "R.I.Pienaar <rip@devco.net>",
:license => "ASL2.0",
:version => "0.1",
:url => "http://devco.net",
:timeout => 10
action "runcommand", :description => "Run a preconfigured NRPE command" do
input :command,
:prompt => "Command",
:description => "NRPE command to run",
:type => :string,
:validation => 'A[a-zA-Z0-9_-]+z',
:optional => false,
:maxlength => 50
output :exitcode,
:description => "Exit Code from the Nagios plugin",
:display_as => "Exit Code",
:default => 3
# ...
end
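The `:validation` regex is anchored with `\A` and `\z`, so the whole command string must match; anything containing shell metacharacters is rejected before the request reaches a node. A minimal sketch of that check in plain Ruby (the helper name is made up for illustration):

```ruby
# The same pattern the DDL declares: only letters, digits, underscores
# and hyphens, anchored to the entire string.
NRPE_COMMAND = /\A[a-zA-Z0-9_-]+\z/

# Hypothetical helper mirroring what DDL validation enforces.
def valid_nrpe_command?(command)
  command.match?(NRPE_COMMAND)
end

p valid_nrpe_command?("check_load")            # => true
p valid_nrpe_command?("check_load; rm -rf /")  # => false
```

Because the anchors cover the whole string, an attacker cannot smuggle a payload after a valid prefix, which is why anchored validation matters for anything eventually passed to a shell.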
39. Agent logic
module MCollective
module Agent
class Nrpe < RPC::Agent
action "runcommand" do
reply[:exitcode] = run(get_nrpe_command(request[:command]),
:stdout => :output,
:stderr => :output,
:chomp => true)
end
def get_nrpe_command(command)
# not shown
end
end
end
end
40. Ready to deploy...
$ mco plugin package
Created RPM and SRPM packages for mcollective-nrpe-agent
Created RPM and SRPM packages for mcollective-nrpe-common
$ ls -l *rpm
mcollective-nrpe-agent-0.1-1.noarch.rpm
mcollective-nrpe-agent-0.1-1.src.rpm
mcollective-nrpe-common-0.1-1.noarch.rpm
mcollective-nrpe-common-0.1-1.src.rpm
41. ...but test your code first with the mcollective-test gem
describe "nrpe agent" do
describe "#runcommand" do
before do
@agent = MCollective::Test::LocalAgentTest.new("nrpe",
:agent_file => "agent/nrpe.rb").plugin
end
it "should return correct status" do
@agent.expects(:get_nrpe_command).returns("/bin/true")
result = @agent.call(:runcommand, :command => "rspec")
result.should be_successful
result.should have_data_items(:exitcode=>0)
end
end
end
Custom matchers and helpers
42. After deployment, interact using the standard RPC user interface
$ mco rpc nrpe runcommand command=check_load
Discovering hosts using the mongo method .... 28
* [ ============================================================> ]
28 / 28
devco.net Request Aborted
UNKNOWN
Summary of Exit Code:
OK : 27
WARNING : 1
UNKNOWN : 0
CRITICAL : 0
Finished processing 28 / 28 hosts in 418.42 ms
Agents can provide custom aggregation plugins
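The "Summary of Exit Code" block above is produced by such an aggregation plugin. What it computes can be sketched in plain Ruby (the reply data below is invented; the real plugin API lives in the mcollective gem):

```ruby
# Nagios plugin exit codes and their conventional labels.
STATUS = { 0 => "OK", 1 => "WARNING", 2 => "CRITICAL", 3 => "UNKNOWN" }.freeze

# Hypothetical exit codes collected from 28 node replies.
exitcodes = [0] * 27 + [1]

# Tally replies per status label, like the summary aggregate in the RPC output.
summary = Hash.new(0)
exitcodes.each { |code| summary[STATUS.fetch(code, "UNKNOWN")] += 1 }

STATUS.each_value { |label| puts format("%-8s : %d", label, summary[label]) }
```

The client runs this tally over every reply as results stream in, so the operator sees a fleet-wide summary rather than 28 individual lines.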
43. ...or web interfaces with auto-generated user interfaces based on the DDL
45. New in MCollective 2
• Entirely rewritten messaging layer
• Asynchronous mode
• Additional non-broadcast based comms
• Reliable messaging with TTLs and Queues
• Batched mode to affect nodes in groups
• Improved RabbitMQ and ActiveMQ support
• Pluggable discovery against your own source of truth
• Data plugins for discovery, data query and ACLs
• Plugin Generators and Packaging
• Improved Security
• MS Windows Support
• DDL-based pluggable validation on clients and servers