Nagios-Plugins is the official plugin distribution for Nagios. It includes over 50 plugins written in C and Perl for most basic monitoring tasks. It is maintained mostly by volunteers; the currently active developers are Ton Voon (project lead), Holger Weiss, Matthias Elbe and Thomas Guyot-Sionnest.
In this talk Thomas will look into some noteworthy features added recently to the Nagios-Plugins distribution and show how they can be useful in real-life situations. He will put special emphasis on the Extra-Opts addition, which allows moving plugin parameters into one or more .ini files. In the second part Thomas will introduce the current and upcoming projects for Nagios-Plugins.
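As an illustration of the Extra-Opts mechanism (the section name, host, and credentials below are invented examples), plugin parameters move into an .ini file and are pulled in from the command line:

```ini
; /etc/nagios/plugins.ini -- example path
[check_mysql]
username=nagios
password=secret
hostname=db1.example.com
```

The plugin would then be invoked as `check_mysql --extra-opts=check_mysql@/etc/nagios/plugins.ini`, which keeps credentials out of the Nagios command definitions.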
RPM packaging provides advantages for software deployment including being an open standard format that is well integrated with many environments. The document discusses how RPM builds work and examples of RPM packaging pipelines. It also explains how RPMs can be used effectively within Docker containers and environments, providing cleaner Dockerfiles and images while allowing software to be deployed both with or without Docker.
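An RPM build pipeline ultimately revolves around a spec file; a minimal illustrative sketch (package name, paths, and contents are invented):

```
Name:           myapp
Version:        1.0.0
Release:        1%{?dist}
Summary:        Example application
License:        MIT
Source0:        myapp-1.0.0.tar.gz

%description
Example application packaged as an RPM.

%prep
%setup -q

%install
mkdir -p %{buildroot}/opt/myapp
install -m 0755 myapp %{buildroot}/opt/myapp/myapp

%files
/opt/myapp/myapp
```

Running `rpmbuild -bb myapp.spec` would then produce the binary RPM, whether the target is a bare host or a Docker image.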
PostgreSQL is a well-known relational database. But in the last few years, it has gained capabilities that previously belonged only to "NoSQL" databases. In this talk, I describe several features of PostgreSQL that give it such capabilities.
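The abstract doesn't list the specific features, but JSONB document storage (PostgreSQL 9.4+) is a typical example of the "NoSQL" capabilities referred to; a sketch with an invented table:

```sql
-- Store schemaless documents inside a relational table
CREATE TABLE events (
    id      serial PRIMARY KEY,
    payload jsonb NOT NULL
);

INSERT INTO events (payload)
VALUES ('{"user": "alice", "tags": ["login", "mobile"]}');

-- A GIN index makes containment queries fast
CREATE INDEX events_payload_idx ON events USING gin (payload);

-- Find events whose document contains the given fragment
SELECT payload->>'user'
FROM events
WHERE payload @> '{"tags": ["login"]}';
```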
Puppet Virtual Bolt Workshop - 23 April 2020, Singapore (Puppet)
Bolt can be used to execute agentless automation against remote hosts. It allows running commands, scripts, tasks, and plans on targets via SSH, WinRM, or PCP without requiring any agents. The workshop covers using Bolt commands, scripts, tasks, and plans. It teaches converting scripts to tasks and tasks to plans. Participants learn to use bolt.yaml for configuration, inventory files for targets, and Puppetfiles to manage dependencies. Later labs cover applying Puppet manifests with Bolt and building cross-platform plans. The recap emphasizes the progression from interactive tools to reusable automation and leveraging existing modules and Puppet Enterprise.
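The progression described above might look like the following Bolt invocations (hostnames, module, and task names are examples; the `webservers` group is assumed to be defined in an inventory file):

```
# Ad-hoc command over SSH -- no agent on the target
bolt command run 'systemctl is-active nginx' --targets web1.example.com

# Reuse an existing script, then graduate to a task and a plan
bolt script run ./restart_nginx.sh --targets webservers
bolt task run package action=status name=nginx --targets webservers
bolt plan run mymodule::deploy --targets webservers
```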
This document provides an introduction to the Logstash log processing tool. It begins by defining what a log is, then describes the theoretical and actual life cycle of a log. It notes common problems with log management. The document introduces Logstash as an open source log processing solution and describes its architecture and components, including inputs, filters, outputs, and how events flow through the system. It provides examples of using Logstash plugins and processing logs.
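The input, filter, and output flow can be shown with a minimal pipeline configuration (the file path and Elasticsearch host are examples):

```conf
input {
  file {
    path => "/var/log/nginx/access.log"   # example source
    start_position => "beginning"
  }
}

filter {
  grok {
    # parse each raw line into structured fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }  # example destination
  stdout { codec => rubydebug }                  # echo events for debugging
}
```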
JRuby allows developers to write plugins for data processing systems like Norikra and Embulk in Ruby while taking advantage of Java libraries and the JVM. Norikra is a stream processing system that allows SQL queries over data streams. It is written in JRuby and uses the Java Esper library. Embulk is an open-source ETL tool that loads data between databases and file formats using plugins. Both systems use a plugin architecture where plugins can be written in JRuby or Java and are distributed as Ruby gems. This allows for a pluggable ecosystem that benefits from Ruby's productivity while utilizing Java libraries and the JVM's performance.
This document provides an overview of shellcode mastering techniques. It discusses the basics of shellcode including features, types, and development tasks. It covers basic shellcode techniques like call/ret algorithms and delta offset approaches. Optimization techniques are explored like instruction format, opcode maps, and common rules. Examples of optimized shellcode from a past competition are analyzed to extract the optimization changes between versions. Practice tasks are provided to write shellcode that performs a reverse connect and executes a second stage payload. Questions from attendees are solicited at the end.
This document discusses using Ruby for distributed storage systems. It describes components like Bigdam, which is Treasure Data's new data ingestion pipeline. Bigdam uses microservices and a distributed key-value store called Bigdam-pool to buffer data. The document discusses designing and testing Bigdam using mocking, interfaces, and integration tests in Ruby. It also explores porting Bigdam-pool from Java to Ruby and investigating Ruby's suitability for tasks like asynchronous I/O, threading, and serialization/deserialization.
Does the SPL still have any relevance in the Brave New World of PHP7? (Mark Baker)
The Standard PHP Library (SPL) provides core PHP functionality like autoloading, exceptions, and file handling. However, its iterators and data structures may have limited relevance in PHP7 due to performance issues and the availability of alternatives like Rudi Theunissen's PHP7 data structures extension which offers faster and more memory efficient implementations of common data structures like stacks and queues. While SPL interfaces are still useful, many of its other components could be replaced by more optimized third party extensions for modern PHP applications.
This document summarizes lessons learned from building the Dutch public broadcasting company's website omroep.nl. Key points include:
- The site was built using Ruby on Rails with 6 developers over 6 months to handle 30,000-40,000 daily pageviews and traffic spikes.
- Extensive testing was done including over 2,000 RSpec tests and 410 Cucumber scenarios to help ensure quality.
- Caching was heavily used to improve performance including caching pages, fragments, and external data from feeds.
- Resilience was important given the large amounts of external data from various sources, and errors were rescued and logged.
- Ongoing monitoring and optimization were needed to maintain performance over time.
This document provides lessons learned from building the Dutch public broadcasting company's website omroep.nl. Key points include using Ruby on Rails, BDD with RSpec and Cucumber, caching everything possible, rescuing errors, testing extensively, and handling large amounts of external data from various XML/RSS feeds and APIs. Performance was optimized through techniques like moving static assets to a front proxy, page caching, fragment caching, and using Memcache. The team of 6 people built the CMS from scratch over 6 months.
Property Based Testing is a process for building robust systems.
It facilitates a deeper understanding of the system under test. It can be used on any testing level: unit, integration or functional.
The presentation introduces how Property Based Testing works, how to use it with PHPUnit, and how it differs from example-based tests.
It also talks about strategies for finding good properties to check.
This presentation was built for the Meet-Magento conference 2020 in Mumbai.
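The idea can be sketched without any framework: a property-based test draws many random inputs and asserts an invariant over all of them. Shown here in Python rather than PHP for brevity; the generator and property are toy examples:

```python
import random

def run_property(prop, gen, runs=200, seed=42):
    """Minimal property-based check: generate many random inputs
    and assert the property holds for every one of them."""
    rng = random.Random(seed)
    for _ in range(runs):
        value = gen(rng)
        assert prop(value), f"property failed for {value!r}"
    return True

def gen_int_list(rng):
    """Random-length list of random integers (the 'generator')."""
    return [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 50))]

def prop_sort_idempotent(xs):
    """Property: sorting an already-sorted list changes nothing."""
    return sorted(sorted(xs)) == sorted(xs)

print(run_property(prop_sort_idempotent, gen_int_list))  # True
```

A real framework adds shrinking (reducing a failing input to a minimal counterexample), which this sketch omits.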
The document discusses asynchronous and non-blocking I/O with JRuby. It explains that asynchronous operations are better than synchronous operations because they use fewer resources and allow for parallelism. It provides an example of building a JRuby application with the Ratpack framework that makes asynchronous HTTP requests to eBay's API in a non-blocking way using promises. It also discusses using RxJava and Hystrix with Ratpack to build a book management application that handles data and API requests asynchronously.
Logstash is a tool for managing logs that allows for input, filter, and output plugins to collect, parse, and deliver logs and log data. It works by treating logs as events that are passed through the input, filter, and output phases, with popular plugins including file, redis, grok, elasticsearch and more. The document also provides guidance on using Logstash in a clustered configuration with an agent and server model to optimize log collection, processing, and storage.
Logging logs with Logstash - Devops MK 10-02-2016 (Steve Howe)
The document discusses Logging logs with Logstash. It provides an overview of the components of the ELK (Elasticsearch, Logstash, Kibana) stack including Logstash, which is used to collect, parse, and store logs. Logstash uses Logstash-forwarder as a shipper to collect logs from multiple sources and outputs to Elasticsearch for storage and analysis. Kibana is used for visualization and searching logs stored in Elasticsearch. The document also discusses configuration, scaling, and some tricks and gotchas when using the ELK stack.
A share about Git's internal mechanisms: how commands such as git init, git add, git commit, git branch, etc. work!
These are also my reading notes on two books --- <<git>> and <<pro>>.
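As a taste of those internals: Git names a blob object by hashing a small header plus the raw file content. This stand-alone sketch reproduces what `git hash-object` computes for a blob (the well-known "test content" example from the Pro Git book):

```python
import hashlib

def git_blob_sha(content: bytes) -> str:
    """Compute the SHA-1 Git uses to name a blob object:
    the hash of a 'blob <size>\\0' header followed by the content."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Same object ID as: echo 'test content' | git hash-object --stdin
print(git_blob_sha(b"test content\n"))
```

The same header-plus-content scheme (with `tree` and `commit` headers) names every other object in the repository, which is why `git add` and `git commit` are cheap, content-addressed writes.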
Elasticsearch, Logstash, Kibana. Cool search, analytics, data mining and more... (Oleksiy Panchenko)
In the age of information and big data, the ability to quickly and easily find a needle in a haystack is extremely important. Elasticsearch is a distributed and scalable search engine which provides rich and flexible search capabilities. Social networks (Facebook, LinkedIn), media services (Netflix, SoundCloud), Q&A sites (StackOverflow, Quora, StackExchange) and even GitHub all find data for you using Elasticsearch. In conjunction with Logstash and Kibana, Elasticsearch becomes a powerful log engine which lets you process, store, analyze, search through and visualize your logs.
Video: https://www.youtube.com/watch?v=GL7xC5kpb-c
Scripts for the Demo: https://github.com/opanchenko/morning-at-lohika-ELK
Data Analytics Service Company and Its Ruby Usage (Satoshi Tagomori)
Treasure Data is a data analytics service company that makes heavy use of Ruby in its platform and services. It uses Ruby for components like Fluentd (log collection), Embulk (data loading), scheduling, and its Rails-based API and console. Java and JRuby are also used for components involving Hadoop and Presto processing. The company's architecture includes collectors that ingest data, a PlazmaDB for storage, workers that process jobs on Hadoop and Presto clusters, and schedulers that queue and schedule those jobs using technologies like PerfectSched and PerfectQueue which are written in Ruby. Hive jobs are built programmatically using Ruby to generate configurations and submit the jobs to underlying Hadoop clusters.
This document provides an overview and instructions for setting up Elasticsearch. It discusses:
- How to set up the Elasticsearch workshop environment by installing required software and cloning the GitHub repository.
- Key concepts about Elasticsearch including its distributed and schema-free nature, how it is document oriented, and how indexes, types, documents, and fields relate to a relational database.
- Core components like clusters, nodes, shards, and replicas. It also distinguishes between filters and queries.
- Steps for connecting to Elasticsearch, inserting, searching, updating and deleting data.
- Advanced search techniques including filters, multi-field search, and human language processing using analyzers, stop words, synonyms and normalization.
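A minimal session against a local node (the index and document are invented; assumes Elasticsearch is listening on localhost:9200) might look like:

```
# Index a document -- Elasticsearch creates the index on the fly
curl -X PUT 'localhost:9200/library/book/1' \
     -H 'Content-Type: application/json' \
     -d '{"title": "Elasticsearch Basics", "tags": ["search"]}'

# Full-text search across the index
curl 'localhost:9200/library/_search?q=title:elasticsearch'
```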
A story of how we went about packaging Perl and all of the dependencies that our project has.
Where we were before, the chosen path, and the end result.
The pitfalls, and a view of the pros and cons of the previous state of affairs versus those of the end result.
We're talking about serious log crunching and intelligence gathering with Elastic, Logstash, and Kibana.
ELK is an end-to-end stack for gathering structured and unstructured data from servers. It delivers insights in real time using the Kibana dashboard, giving unprecedented horizontal visibility. The visualization and search tools will make your day-to-day hunting a breeze.
During this brief walkthrough of the setup, configuration, and use of the toolset, we will show you how to find the trees in the forest in today's modern cloud environments and beyond.
- The document discusses using the ELK stack (Elasticsearch, Logstash, Kibana) to perform real-time log search, analysis, and monitoring. It provides examples of using Logstash and Elasticsearch for parsing and indexing application logs, and using Kibana for visualization and analysis.
- The document identifies several performance and stability issues with Logstash and Elasticsearch including high CPU usage from grok filtering, GeoIP filtering performance, and Elasticsearch relocation and recovery times. It proposes solutions like custom filtering plugins, tuning Elasticsearch configuration, and optimizing mappings.
- Rsyslog is presented as an alternative to Logstash for log collection with better performance. Examples are given of using Rsyslog plugins and RainerScript for efficient log processing.
The document provides an overview of NoSQL databases and discusses various types including document databases, column-family stores, and key-value pairs. It provides examples of MongoDB, CouchDB, Redis, HBase and their data models, query operations, and architectures.
Originally delivered as Lightning Talk at Lucene Eurocon 2011 in Barcelona, this quick presentation shows how to use Sematext's SPM service to monitor Solr, OS, JVM, and more.
H-Hypermap - Heatmap Analytics at Scale: Presented by David Smiley, D W Smile... (Lucidworks)
This document provides an agenda and overview for a presentation on H-Hypermap, a project to build a search platform called the Billion Object Platform (BOP) to index and search over billions of geo-tagged tweets in near real-time. The presentation will cover the architecture using Apache Kafka, Solr sharding, and techniques for fast geo-spatial queries and heatmaps. It will also discuss experiences using technologies like Kotlin, Dropwizard, Docker and Kontena.
The document provides an overview of a hackathon being led by Simon Bennetts on extending the OWASP Zed Attack Proxy (ZAP) tool. The plan is to give an overview of how to extend ZAP, discuss potential topics to cover such as implementing scripts, scan rules, and extensions, and then have hands-on hacking sessions with assistance from Simon. Simon outlines many possible topics for discussion, including the ZAP project structure, development environment, documentation, scripting, active and passive scan rules, extensions, and features or fixes to work on.
SymfonyCon Madrid 2014 - Rock Solid Deployment of Symfony Apps (Pablo Godel)
Web applications are becoming increasingly more complex, so deployment is not just transferring files with FTP anymore. We will go over the different challenges and how to deploy our PHP applications effectively, safely and consistently with the latest tools and techniques. We will also look at tools that complement deployment with management, configuration and monitoring.
This document discusses Python tools for testing, including py.test, tox, and TravisCI. It provides information about each tool: py.test is an improved unit testing library compared to nose, with readable test results, mocking features, and JUnit XML output; tox runs tests across multiple Python versions using virtualenv; and TravisCI is a continuous integration service that runs tests automatically on GitHub code commits for open source projects in various languages including Python. Examples of configuration files for tox and TravisCI are also included.
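A minimal tox configuration of the kind mentioned (the environment list and test path are examples; the interpreters reflect the era of the talk):

```ini
# tox.ini -- run the test suite against several interpreters
[tox]
envlist = py27, py34

[testenv]
deps = pytest
commands = pytest tests/
```

A TravisCI setup would then simply install tox and run it on every push.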
Spring Roo Add-On Development & Distribution (Stefan Schmidt)
This document provides an overview of creating and distributing Spring Roo add-ons. It discusses the architectural journey that led to Roo's design, including decisions to use Java and AspectJ rather than creating a new runtime. It also covers getting started with a new add-on using the Add-on Creator, implementation details like using common services and file monitoring, and how to develop add-ons that integrate with the Roo shell and OSGi container. The document concludes with pointers for starters, like reviewing example add-ons and Spring Roo source code.
Open Source Tools for Leveling Up Operations FOSSET 2014 (Mandi Walls)
This document discusses using open source tools to improve operations workflows and processes. It introduces various tools including Git for version control, packaging tools like FPM, and testing tools like Nagios plugins. The document advocates applying principles from development like testing, version control, and automation to make operations processes more reliable and transparent and to reduce risk.
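As an example of the packaging tools mentioned, FPM can turn a directory tree into an installable RPM in a single command (the name, version, and paths below are invented):

```
fpm -s dir -t rpm \
    -n myapp -v 1.0.0 \
    --prefix /opt/myapp \
    ./build/
```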
The document provides an overview of the Node Package Manager (npm). It discusses how npm works to reduce friction in the software development process by making it easy for developers to install packages and dependencies without conflicts. It describes npm's vision of avoiding "dependency hell" and its strategies for achieving this like ensuring consistent interfaces and reducing excessive metadata requirements. The document also summarizes key npm commands, how installations work, and future plans like binary distributions, an automated testing system called npat, and build farms to test packages on multiple platforms.
Nathan Vonnahme's presentation on writing custom plugins for Nagios.
The presentation was given during the Nagios World Conference North America held Sept 25-28th, 2012 in Saint Paul, MN. For more information on the conference (including photos and videos), visit: http://go.nagios.com/nwcna
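A custom plugin boils down to printing one status line and exiting with a conventional code (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN). A minimal sketch in Python, with invented thresholds and metric name; the exit codes and the `|perfdata` suffix follow the standard plugin conventions:

```python
# Standard Nagios plugin exit codes
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check_threshold(value, warn, crit, label="load"):
    """Return (exit_code, status_line): a one-line status message
    plus '|' performance data, per the plugin guidelines."""
    if value >= crit:
        status = CRITICAL
    elif value >= warn:
        status = WARNING
    else:
        status = OK
    text = ["OK", "WARNING", "CRITICAL", "UNKNOWN"][status]
    return status, f"{text} - {label} is {value}|{label}={value};{warn};{crit}"

code, line = check_threshold(0.7, warn=1.0, crit=2.0)
print(line)  # OK - load is 0.7|load=0.7;1.0;2.0
# A real plugin would finish with: sys.exit(code)
```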
At Tuenti, we do two code pushes per week, sometimes modifying thousands of files and running thousands of automated tests and build operations before, to ensure not only that the code works but also that proper localization is applied, bundles are generated and files get deployed to hundreds of servers as fast and reliable as possible.
We use opensource tools like Mercurial, MySQL, Jenkins, Selenium, PHPUnit and Rsync among our own in-house ones, and have different development, testing, staging and production environments.
We had to fight with problems like statics bundling and versioning, syntax errors and of course the fact that we have +100 engineers working on the codebase, sometimes merging and releasing more than a dozen branches the same day. We also switched from Subversion to Mercurial to obtain more flexibility and faster branching operations.
With this talk we will explain the process of how code changes in ourcode repository end up in live code, detailing some practices and tips that we apply.
Using Nagios to monitor your WO systemsWO Community
Nagios is an open source monitoring tool that has been available since 1999. It is commonly used to monitor servers, services, and applications. The document discusses how to install and configure Nagios on various platforms like CentOS, Ubuntu, and Mac OS X. It also provides examples of how to monitor common services like HTTP, MySQL, disk space, and custom applications using Nagios plugins. Graphing and alerting capabilities are discussed as well. The presentation concludes with a demonstration and Q&A section.
This lecture is the first part of an introduction to SVC tools with a focus on Git and GitHub. This Lecture discusses the basic concepts as well as Installation and initial configuration of Git
OSDC 2016 - Continous Integration in Data Centers - Further 3 Years later by ...NETWAYS
I gave a talk titled "Continuous Integration in data centers“ at OSDC in 2013, presenting ways how to realize continuous integration/delivery with Jenkins and related tools.Three years later we gained new tools in our continuous delivery pipeline, including Docker, Gerrit and Goss. Over the years we also had to deal with different problems caused by faster release cycles, a growing team and gaining new projects. We therefore established code review in our pipeline, improved our test infrastructure and invested in our infrastructure automation.In this talk I will discuss the lessons we learned over the last years, demonstrate how a proper continuous delivery pipeline can improve your life and how open source tools like Jenkins, Docker and Gerrit can be leveraged for setting up such an environment.
This document provides an overview of Git and its features. Git is a distributed version control system that allows users to track changes to files. It keeps track of file versions, allows multiple developers to work independently and merge changes together, and is faster than other version control systems. The document discusses Git's history and architecture, how to install and configure Git, basic commands like add, commit and log, branching, and more advanced topics.
CSE 390 Lecture 9 - Version Control with GITPouriaQashqai1
Version control systems like Git allow developers to track changes to files over time. Git stores snapshots of files in a local repository and remote repositories can be used for collaboration. The basic Git workflow involves modifying files, staging changed files, and committing snapshots of the staged files to the local repository. Status and diff commands allow viewing changes between the working directory, staging area, and repository. Good commit messages are important for documenting changes over time.
The Art of Database Experiments – PostgresConf Silicon Valley 2018 / San JoseNikolay Samokhvalov
Future database administration will be highly automated. Until then, we still live in a world where extensive manual interactions are required from a skilled DBA. This will change soon as more "autonomous databases" reach maturity and enter the production environment.
Postgres-specific monitoring tools and systems continue to improve, detecting and analyzing performance issues and bottlenecks in production databases. However, while these tools can detect current issues, they require highly-experienced DBAs to analyze and recommend mitigations.
In this session, the speaker will present the initial results of the POSTGRES.AI project – Nancy CLI, a unified way to manage automated database experiments. Nancy CLI is an automated database management framework based on well-known open-source projects and incorporating major open-source tools and Postgres modules: pgBadger, pg_stat_kcache, auto_explain, pgreplay, and others.
Originally developed with the goal to simulate various SQL query use cases in various environments and collect data to train ML models, Nancy CLI turned out to be very a universal framework that can play a crucial role in CI/CD pipelines in any company.
Using Nancy CLI, casual DBAs and any engineers can easily conduct automated experiments today, either on AWS EC2 Spot instances or on any other servers. All you need is to tell Nancy which database to use, specify workload (synthetic or "real", generated based on the Postgres logs), and what you want to test – say, check how a new index will affect all most expensive query groups from pg_stat_statements, or compare various values of "default_statistics_target". All the collected information with a very high level of confidence will give you understanding, how various queries and overall Postgres performance will be affected when you apply this change to production.
This document provides an overview of version control with Git. It explains what version control and Git are, how to install and configure Git, how to perform basic tasks like initializing a repository and making commits, and how to collaborate using features like branching and pushing/pulling from remote repositories. Key points covered include allowing the tracking of changes, maintaining file history, and enabling multiple people to work on the same project simultaneously without conflicts.
The document discusses the OpenNTF Domino API (ODA), an open source project that provides additional capabilities for working with Java and Domino. It was started in 2013 and fills gaps for Java developers working with Domino. The ODA makes common tasks like session handling, view handling, document handling and transactions easier. It also introduces new capabilities like improved date/time functions and Xots for executing multi-threaded tasks. The document provides an overview of the ODA and examples of how it can simplify and enhance Java code that interacts with Domino.
The document discusses continuous feature development. It defines a feature as a set of expected functional behaviors from a client. Continuous feature development involves incrementally building these expected behaviors. This approach is needed because clients' expectations, business needs, user perceptions, and competitive advantages are continually changing. Managing continuous feature development presents challenges like integrating new features, maintaining stability, seamless integration, and managing trust. The document recommends practices like acceptance test-driven development, test-driven development, behavior-driven development, continuous integration, coding in feature branches, code reviews, maintaining a production branch, using staging servers, and continuous integration to help address these challenges.
Similar to OSMC 2009 | Nagios Plugins: New features and future projects by Thomas Guyot-Sionnest (20)
Malibou Pitch Deck For Its €3M Seed Roundsjcobrien
French start-up Malibou raised a €3 million Seed Round to develop its payroll and human resources
management platform for VSEs and SMEs. The financing round was led by investors Breega, Y Combinator, and FCVC.
UI5con 2024 - Boost Your Development Experience with UI5 Tooling ExtensionsPeter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can be easily extended by your needs. This session will showcase various tooling extensions which can boost your development experience by far so that you can really work offline, transpile your code in your project to use even newer versions of EcmaScript (than 2022 which is supported right now by the UI5 tooling), consume any npm package of your choice in your project, using different kind of proxies, and even stitching UI5 projects during development together to mimic your target environment.
8 Best Automated Android App Testing Tool and Framework in 2024.pdfkalichargn70th171
Regarding mobile operating systems, two major players dominate our thoughts: Android and iPhone. With Android leading the market, software development companies are focused on delivering apps compatible with this OS. Ensuring an app's functionality across various Android devices, OS versions, and hardware specifications is critical, making Android app testing essential.
Everything You Need to Know About X-Sign: The eSign Functionality of XfilesPr...XfilesPro
Wondering how X-Sign gained popularity in a quick time span? This eSign functionality of XfilesPro DocuPrime has many advancements to offer for Salesforce users. Explore them now!
What to do when you have a perfect model for your software but you are constrained by an imperfect business model?
This talk explores the challenges of bringing modelling rigour to the business and strategy levels, and talking to your non-technical counterparts in the process.
E-Invoicing Implementation: A Step-by-Step Guide for Saudi Arabian CompaniesQuickdice ERP
Explore the seamless transition to e-invoicing with this comprehensive guide tailored for Saudi Arabian businesses. Navigate the process effortlessly with step-by-step instructions designed to streamline implementation and enhance efficiency.
How Can Hiring A Mobile App Development Company Help Your Business Grow?ToXSL Technologies
ToXSL Technologies is an award-winning Mobile App Development Company in Dubai that helps businesses reshape their digital possibilities with custom app services. As a top app development company in Dubai, we offer highly engaging iOS & Android app solutions. https://rb.gy/necdnt
14 th Edition of International conference on computer visionShulagnaSarkar2
About the event
14th Edition of International conference on computer vision
Computer conferences organized by ScienceFather group. ScienceFather takes the privilege to invite speakers participants students delegates and exhibitors from across the globe to its International Conference on computer conferences to be held in the Various Beautiful cites of the world. computer conferences are a discussion of common Inventions-related issues and additionally trade information share proof thoughts and insight into advanced developments in the science inventions service system. New technology may create many materials and devices with a vast range of applications such as in Science medicine electronics biomaterials energy production and consumer products.
Nomination are Open!! Don't Miss it
Visit: computer.scifat.com
Award Nomination: https://x-i.me/ishnom
Conference Submission: https://x-i.me/anicon
For Enquiry: Computer@scifat.com
Using Query Store in Azure PostgreSQL to Understand Query PerformanceGrant Fritchey
Microsoft has added an excellent new extension in PostgreSQL on their Azure Platform. This session, presented at Posette 2024, covers what Query Store is and the types of information you can get out of it.
WWDC 2024 Keynote Review: For CocoaCoders AustinPatrick Weigel
Overview of WWDC 2024 Keynote Address.
Covers: Apple Intelligence, iOS18, macOS Sequoia, iPadOS, watchOS, visionOS, and Apple TV+.
Understandable dialogue on Apple TV+
On-device app controlling AI.
Access to ChatGPT with a guest appearance by Chief Data Thief Sam Altman!
App Locking! iPhone Mirroring! And a Calculator!!
INTRODUCTION TO AI CLASSICAL THEORY TARGETED EXAMPLESanfaltahir1010
Image: Include an image that represents the concept of precision, such as a AI helix or a futuristic healthcare
setting.
Objective: Provide a foundational understanding of precision medicine and its departure from traditional
approaches
Role of theory: Discuss how genomics, the study of an organism's complete set of AI ,
plays a crucial role in precision medicine.
Customizing treatment plans: Highlight how genetic information is used to customize
treatment plans based on an individual's genetic makeup.
Examples: Provide real-world examples of successful application of AI such as genetic
therapies or targeted treatments.
Importance of molecular diagnostics: Explain the role of molecular diagnostics in identifying
molecular and genetic markers associated with diseases.
Biomarker testing: Showcase how biomarker testing aids in creating personalized treatment plans.
Content:
• Ethical issues: Examine ethical concerns related to precision medicine, such as privacy, consent, and
potential misuse of genetic information.
• Regulations and guidelines: Present examples of ethical guidelines and regulations in place to safeguard
patient rights.
• Visuals: Include images or icons representing ethical considerations.
Content:
• Ethical issues: Examine ethical concerns related to precision medicine, such as privacy, consent, and
potential misuse of genetic information.
• Regulations and guidelines: Present examples of ethical guidelines and regulations in place to safeguard
patient rights.
• Visuals: Include images or icons representing ethical considerations.
Content:
• Ethical issues: Examine ethical concerns related to precision medicine, such as privacy, consent, and
potential misuse of genetic information.
• Regulations and guidelines: Present examples of ethical guidelines and regulations in place to safeguard
patient rights.
• Visuals: Include images or icons representing ethical considerations.
Real-world case study: Present a detailed case study showcasing the success of precision
medicine in a specific medical scenario.
Patient's journey: Discuss the patient's journey, treatment plan, and outcomes.
Impact: Emphasize the transformative effect of precision medicine on the individual's
health.
Objective: Ground the presentation in a real-world example, highlighting the practical
application and success of precision medicine.
Data challenges: Address the challenges associated with managing large sets of patient data in precision
medicine.
Technological solutions: Discuss technological innovations and solutions for handling and analyzing vast
datasets.
Visuals: Include graphics representing data management challenges and technological solutions.
Objective: Acknowledge the data-related challenges in precision medicine and highlight innovative solutions.
Data challenges: Address the challenges associated with managing large sets of patient data in precision
medicine.
Technological solutions: Discuss technological innovations and solutions
Odoo releases a new update every year. The latest version, Odoo 17, came out in October 2023. It brought many improvements to the user interface and user experience, along with new features in modules like accounting, marketing, manufacturing, websites, and more.
The Odoo 17 update has been a hot topic among startups, mid-sized businesses, large enterprises, and Odoo developers aiming to grow their businesses. Since it is now already the first quarter of 2024, you must have a clear idea of what Odoo 17 entails and what it can offer your business if you are still not aware of it.
This blog covers the features and functionalities. Explore the entire blog and get in touch with expert Odoo ERP consultants to leverage Odoo 17 and its features for your business too.
An Overview of Odoo ERP
Odoo ERP was first released as OpenERP software in February 2005. It is a suite of business applications used for ERP, CRM, eCommerce, websites, and project management. Ten years ago, the Odoo Enterprise edition was launched to help fund the Odoo Community version.
When you compare Odoo Community and Enterprise, the Enterprise edition offers exclusive features like mobile app access, Odoo Studio customisation, Odoo hosting, and unlimited functional support.
Today, Odoo is a well-known name used by companies of all sizes across various industries, including manufacturing, retail, accounting, marketing, healthcare, IT consulting, and R&D.
The latest version, Odoo 17, has been available since October 2023. Key highlights of this update include:
Enhanced user experience with improvements to the command bar, faster backend page loading, and multiple dashboard views.
Instant report generation, credit limit alerts for sales and invoices, separate OCR settings for invoice creation, and an auto-complete feature for forms in the accounting module.
Improved image handling and global attribute changes for mailing lists in email marketing.
A default auto-signature option and a refuse-to-sign option in HR modules.
Options to divide and merge manufacturing orders, track the status of manufacturing orders, and more in the MRP module.
Dark mode in Odoo 17.
Now that the Odoo 17 announcement is official, let’s look at what’s new in Odoo 17!
What is Odoo ERP 17?
Odoo 17 is the latest version of one of the world’s leading open-source enterprise ERPs. This version has come up with significant improvements explained here in this blog. Also, this new version aims to introduce features that enhance time-saving, efficiency, and productivity for users across various organisations.
Odoo 17, released at the Odoo Experience 2023, brought notable improvements to the user interface and added new functionalities with enhancements in performance, accessibility, data analysis, and management, further expanding its reach in the market.
Unlock the Secrets to Effortless Video Creation with Invideo: Your Ultimate G...The Third Creative Media
"Navigating Invideo: A Comprehensive Guide" is an essential resource for anyone looking to master Invideo, an AI-powered video creation tool. This guide provides step-by-step instructions, helpful tips, and comparisons with other AI video creators. Whether you're a beginner or an experienced video editor, you'll find valuable insights to enhance your video projects and bring your creative ideas to life.
Top Benefits of Using Salesforce Healthcare CRM for Patient Management.pdfVALiNTRY360
Salesforce Healthcare CRM, implemented by VALiNTRY360, revolutionizes patient management by enhancing patient engagement, streamlining administrative processes, and improving care coordination. Its advanced analytics, robust security, and seamless integration with telehealth services ensure that healthcare providers can deliver personalized, efficient, and secure patient care. By automating routine tasks and providing actionable insights, Salesforce Healthcare CRM enables healthcare providers to focus on delivering high-quality care, leading to better patient outcomes and higher satisfaction. VALiNTRY360's expertise ensures a tailored solution that meets the unique needs of any healthcare practice, from small clinics to large hospital systems.
For more info visit us https://valintry360.com/solutions/health-life-sciences
Top Benefits of Using Salesforce Healthcare CRM for Patient Management.pdf
OSMC 2009 | Nagios Plugins: New features and future projects by Thomas Guyot-Sionnest
1. Nagios Plugins: New features and future projects
Thomas Guyot-Sionnest
October 2009
Copyright Thomas Guyot-Sionnest. Released under Creative Commons, Attribution-Noncommercial
2. New features and future projects
I. State of the Plugins
II. Extra-Opts
III. Future Projects
3. Part I – State of the Plugins
The Nagios Plugins Development Team
What we have been doing
New features
Project statistics
4. The Nagios Plugins Team
• Project Lead:
• Ton Voon
• Active Developers:
• Holger Weiß
• Matthias Eble
• Thomas Guyot-Sionnest
5. The Nagios Plugins Team (cont.)
• Infrastructure:
• SourceForge Tracker, Mailing lists, Web, FRS…
• Tinderbox builds on team and community computers
• Team server (courtesy of Opsera)
6. What Are We Doing?
• Trying to do a lot with limited resources
→ We all have real jobs!
• Many bugfixes and patches from the trackers and mailing lists
→ We don’t want to let them accumulate
• Coding new features if there’s any time left
• Website, tinderboxes, repositories maintenance
• Translations… Anyone???
7. New Features
• The big one:
• Extra-opts (more on this shortly)
• check_http:
• SNI support (SSL Server Name Indication)
• Sticky redirections
• Better support for doing many checks together
• check_disk:
• Better support for automount environment
• Support for +2TB filesystems
8. New Features (cont.)
• Miscellaneous:
• check_icmp returns perfdata
• negate can “negate” the status text as well
• check_snmp supports standard thresholds, including double values from strings
• Some plugins updated to use utils_cmd instead of popen
• Developers:
• Moved to Git
• Self-serve test suites (http, snmp…)
16. Part II – Extra-Opts
History
Roadmap
How it works
Examples
17. Extra-Opts History
• First designed as a feature to hide passwords
• Initial C ini-parsing routines by Sean Finney
→ Feb. 2007
• Initial Perl (N::P) implementation by Gavin Carr
→ March 2007
• Extra-opts implemented for C plugins by Thomas Guyot-Sionnest
→ March 2008
18. Extra-Opts Roadmap
• Implement for Perl plugins
→ Implies migrating them to Nagios::Plugin
• Shell script plugin support (maybe)
• Building extra-opts by default
19. How Extra-Opts Works
• Activated as soon as --extra-opts is present
• ini-file syntax:
[section]
; This is a comment
option = [value]
• Options can be repeated (!)
• Options are mapped to plugin’s arguments
→ Both short and long options are supported
→ Options without argument: bare equal sign
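As an illustration of that mapping, here is a hypothetical section for check_tcp (the option names mirror check_tcp’s actual long options, but the file contents are invented, not taken from the slides):

```ini
[check_tcp]
; maps to: --hostname=db1.example.com
hostname = db1.example.com
; maps to: --port=5432
port = 5432
; option without an argument (bare equal sign): maps to --verbose
verbose =
```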
20. How Extra-Opts Works (cont.)
• Extra-opts are always parsed first, in order of appearance
• Unless specified otherwise, repeated options override previous ones (plugin-specific)
• Therefore, extra-opts can often serve as default values, overridden by command-line options
• An error is triggered on a missing file/section
• Optional parameter:
--extra-opts=[section][@file]
• Default section is the plugin’s name
→ i.e. check_tcp will look for section [check_tcp]
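A sketch of that precedence, with invented hostnames: given the section below, `check_tcp --extra-opts` checks db1.example.com on port 5432, while `check_tcp --extra-opts -H db2.example.com` keeps the port from the file but overrides the host on the command line.

```ini
[check_tcp]
hostname = db1.example.com
port = 5432
```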
21. How Extra-Opts Works (cont.)
• Looks for default files in standard locations if no file argument is given (locations are subject to change)
1. plugins.ini
a. /etc/nagios
b. /usr/local/nagios/etc
c. /usr/local/etc/nagios
d. /etc/opt/nagios
2. nagios-plugins.ini
a. /etc
b. /usr/local/etc
c. /etc/opt
• Location can be overridden with NAGIOS_CONFIG_PATH
→ Will look for plugins.ini, then nagios-plugins.ini, and finally the default locations above
22. Simple Usage
• Default ini file
[check_stuff]
warning = 10
critical = 20
[some_stuff]
warning = 1
critical = 2
• Command Definition
define command {
command_name check_stuff_extra
command_line $USER1$/check_stuff --extra-opts --extra-opts=$ARG1$
}
• Service Definition
define service {
use local-service
host_name localhost
service_description Some Stuff
check_command check_stuff_extra!some_stuff
}
25. Part III – Future Projects
Nagios Plugin Library
New Thresholds Format
Plugin State Retention
Nagios v3 Output
26. Nagios Plugins Library
• Why a library?
• Make it easy to contribute new and forked C plugins
• Standardization of current plugins
• Create an official API
27. Plugins Library: Design
• Design goals:
• C is not object-oriented; we can’t just do “like” Nagios::Plugin
• Will try to stay as close as possible to the N::P API (same function names…)
• Ease the parsing of arguments, use of thresholds and output of status, i.e. simpler code
• Include “cmd” and “tcp” helpers?
28. Plugins Library: Status
• Mostly sparse attempts at drawing up an API
• RFC at: http://nagiosplugins.org/rfc/nagiosplugins-c-library (under construction)
• Join the discussion at the team meetings tomorrow, in the #nagios-devel IRC channel or on the mailing list
29. New Thresholds Format
• Rationale:
• Make thresholds more intuitive
→ Can anyone explain this?
-j @10:100 -k ~:200
• Ease definition of multiple thresholds
→ No more -w -c -W -C -j -k -x -y -z…
• Give more flexibility to plugins
→ Dynamic metrics, custom options, multiple ranges…
• Based on getsubopt()
→ Like mount options – widespread
30. New Thresholds: Proposal
• Command-line option:
--threshold={threshold_definition}, --th={...}
• Definition is like a mount option:
metric=cpu,warn={range},crit={range}
metric=usedspace,ok={range},prefix=Mi
• Simple ranges
0..10
15..inf
inf..-10
• Complex ranges
(10..20)
^0..10
^(0..10)
31. New Thresholds: Advantages
• Doesn’t clutter the argument namespace
• Explicit ranges make them more obvious
• Dynamic metrics
→ metric=sql_{col},... where “col” is a SQL column name in a SQL-based check
• Extensible
• Support hysteresis?
→ warn_upper={range},warn_lower={range}
• Add new or custom parameters
33. New Thresholds: Status
• RFC stage. See:
http://nagiosplugins.org/rfc/new_threshold_syntax
• Need to start coding the C and Perl functions.
• Probably written as part of the C library
34. Plugin State Retention
• Rationale:
• Plugins sometimes need to store data between runs
• The plugins API could provide a standardized way of storing data
• Very early stage of design (we’re still debating the possible methods…)
35. Plugin State Retention (cont.)
• Performance data
• Pros:
→ Bound to service
→ Simple to implement
→ No Nagios modifications needed
• Cons:
→ Only small numeric values can be passed
→ Requires remote executor to pass arguments
→ May pollute performance data with state-retention key/value pairs
36. Plugin State Retention (cont.)
• File-descriptor passing
• Pros:
→ Bound to service in recommended setup
→ Large/binary storage possible
→ Allows flexibility in how data is stored regardless of plugin language
• Cons:
→ Requires wrapper or Nagios/remote-executor modifications
→ Less intuitive - users are not familiar with this
→ May require remote executor to pass FD data or implement local storage
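The file-descriptor approach could look roughly like this in-process sketch: a wrapper opens a per-service state file and hands only the open descriptor to the plugin, which never needs to know the path. Function names and file layout are illustrative assumptions:

```python
import os
import tempfile

# Sketch of state retention via file-descriptor passing: the wrapper
# (or a modified Nagios) owns the state file; the plugin reads and
# writes through the inherited descriptor only.

def wrapper_open_state(path):
    """Wrapper side: open (or create) the state file, return its fd."""
    return os.open(path, os.O_RDWR | os.O_CREAT, 0o600)

def plugin_run(state_fd, new_state):
    """Plugin side: read the previous state from the fd, then replace it.
    The plugin can store anything here, including large/binary data."""
    os.lseek(state_fd, 0, os.SEEK_SET)
    previous = os.read(state_fd, 4096).decode() or None
    os.lseek(state_fd, 0, os.SEEK_SET)
    os.ftruncate(state_fd, 0)
    os.write(state_fd, new_state.encode())
    return previous

path = os.path.join(tempfile.mkdtemp(), "svc.state")
fd = wrapper_open_state(path)
first = plugin_run(fd, "run1")   # no previous state yet
second = plugin_run(fd, "run2")  # sees the state left by the first run
os.close(fd)
```

In a real deployment the two functions would live in separate processes, which is exactly why this option requires wrapper or executor modifications.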
37. Plugin State Retention (cont.)
• Plugin-based with unique service ID
• Pros:
→ Simple concept
→ Bound to service in recommended setup
→ Large/binary storage possible
• Cons:
→ May require Nagios modifications to create unique identifiers
→ May require remote executor to pass arguments
→ Different data storage requires modifications to the plugin
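A sketch of the unique-service-ID approach, assuming Nagios passes an identifier such as `host!service` down to the plugin. The ID format, the hashed filename, and the JSON storage are illustrative choices, not a settled design:

```python
import hashlib
import json
import os
import tempfile

# Sketch of plugin-managed state keyed on a unique service ID: the
# plugin derives a filename from the identifier and stores arbitrary
# (including large) data there itself.

STATE_DIR = tempfile.mkdtemp()  # in practice something like /var/lib/nagios

def state_path(service_id):
    """Hash the ID so any characters in host/service names are filename-safe."""
    digest = hashlib.sha1(service_id.encode()).hexdigest()
    return os.path.join(STATE_DIR, digest)

def load_state(service_id):
    """Return the previously saved state, or None on the first run."""
    try:
        with open(state_path(service_id)) as f:
            return json.load(f)
    except FileNotFoundError:
        return None

def save_state(service_id, data):
    with open(state_path(service_id), "w") as f:
        json.dump(data, f)

sid = "web01!check_http"
save_state(sid, {"last_bytes": 4096})
restored = load_state(sid)
```

The JSON layer is what the last con refers to: switching to a different storage format means changing the plugin itself.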
38. Nagios v3 Output
• Since Nagios v3 it is possible to send much more detailed data to Nagios
• No official plugin uses this functionality yet
• The C library should include a v3 output function that can be used by plugins
• Still unclear how it should be enabled (command line or by default) and which plugins should take advantage of it
• Give us your thoughts!
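For reference, the extra detail Nagios v3 accepts is multi-line plugin output: free-form "long output" lines after the first status line, with performance data continuing after a second `|`. A small sketch that assembles such output (`v3_output` is a hypothetical helper, not an existing library function):

```python
# Sketch of building Nagios v3 multi-line output:
#   SHORT OUTPUT|first perfdata
#   long output line 1
#   long output line N|second perfdata
#   remaining perfdata lines
# Nagios v2 only read the first line; v3 reads all of it.

def v3_output(short, long_lines=(), perfdata=()):
    perf = list(perfdata)
    first = short
    if perf:
        first += "|" + perf.pop(0)   # perfdata on the status line
    lines = [first] + list(long_lines)
    if perf and long_lines:
        lines[-1] += "|" + perf.pop(0)  # perfdata resumes after the long output
        lines.extend(perf)              # any remainder, one item per line
    return "\n".join(lines)

out = v3_output("DISK OK - all filesystems healthy",
                long_lines=["/ 40% used", "/var 55% used"],
                perfdata=["root=40%;80;90", "var=55%;80;90"])
```

A v3-aware output function like this in the C library would let every plugin emit long output consistently, which is the proposal on this slide.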
40. Thanks
To the team
To contributors
To users - all feedback is good
To Netways