Dashing, or rather its maintained fork Smashing, is an awesome monitoring dashboard, but it's a pain to deploy. This talk documents the efforts we went through to fully automate the deployment of both Dashing and the dashboards themselves. It also shows how we test these dashboards using Docker and how we build these pipelines with the Jenkins DSL.
OSMC 2017 | Log Monitoring with Logstash and Icinga by Walter Heck (NETWAYS)
Many of us are using elastic stack with logstash as a way to gather logs in a central place and parse them into understandable information. Throw on Kibana for root cause analysis and Grafana for beautiful dashboards and the picture is almost complete. But there has been one thing missing: monitoring logs for issues and taking action on them in icinga. This has recently been made possible by the logstash output for icinga (https://github.com/Icinga/logstash-output-icinga). This not only allows us to raise alerts, it also allows us to do things like schedule downtimes and add comments to hosts. In this session we’ll explore the possibilities brought on by this new logstash output and show you some examples of what you can do with it.
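Under the hood, the logstash output for Icinga talks to the Icinga 2 REST API, whose process-check-result action is what turns a parsed log line into an alert. As a rough illustration (not code from the plugin itself; `check_result_payload` and its field choices are my own), the request body it submits looks something like this:

```python
# Illustration of the kind of request body submitted to Icinga 2's
# POST /v1/actions/process-check-result endpoint. check_result_payload
# is a hypothetical helper, not part of the logstash-output-icinga plugin.

def check_result_payload(host: str, service: str, exit_status: int, output: str) -> dict:
    """Build a passive check result for a given host/service pair."""
    return {
        "type": "Service",
        "filter": f'host.name=="{host}" && service.name=="{service}"',
        "exit_status": exit_status,  # 0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN
        "plugin_output": output,
    }

payload = check_result_payload("web01", "app-logs", 2, "error rate above threshold")
print(payload["filter"])  # host.name=="web01" && service.name=="app-logs"
```

Scheduling downtimes and adding comments follow the same pattern against other API actions, just with different bodies.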
Embracing Observability in CI/CD with OpenTelemetry (Cyrille Le Clerc)
Discover how observability and OpenTelemetry offer unprecedented solutions for both CI/CD administrators and dev teams to troubleshoot CI platforms and solve many more problems, thanks to a vibrant community and a growing ecosystem. Using real-life CI/CD pipelines built with Jenkins, Maven, and Ansible, we will see how OpenTelemetry helps troubleshoot software delivery pipelines, and how its open, standards-based nature enables a vibrant ecosystem of OpenTelemetry-aware CI/CD tools that observe the entire software supply chain and help DevOps teams solve problems that go way beyond the observability use cases we usually have in mind.
https://community.cncf.io/events/details/cncf-cloud-native-canada-presents-november-2021-eastern-canadian-cncf-meetup-kubernetes-123-release-update-and-cicd-observability/
Nagios is an open source network and infrastructure monitoring tool. It allows you to monitor hosts and services and be alerted when issues arise. The document discusses Nagios in depth, including its history, architecture, plugins, and various mechanisms for remote monitoring of Linux, Windows, and network devices. It provides instructions for installing Nagios on CentOS/Redhat and configuring the basic monitoring of services.
Presented by: Sahdev Zala
Presented at the All Things Open 2021
Raleigh, NC, USA
Raleigh Convention Center
Abstract: When it comes to the importance of writing secure code, it gets a unanimous vote. This is even more important for an open codebase. Remember that there are several areas where code security must be taken into consideration, rather than just thinking about authentication and authorization. This talk focuses on identifying common areas in code that get overlooked and pose a security risk, from general weaknesses to critical threats. You will also learn about various code analysis techniques and tools, see simple examples that help avoid common pitfalls, and learn how to use GitHub to easily publish security advisories and CVEs.
Creando microservicios con Java y Microprofile - Nicaragua JUG (César Hernández)
In this session, attendees saw the theoretical and practical foundations for building microservices with Java, Jakarta EE, and MicroProfile, using TomEE as the application server.
Puppet Camp LA 2015: Server Management with Puppet on AWS for a fast-growing ... (Puppet)
- Puppet is used to manage 43 physical servers and 97 EC2 instances at Thumbtack, with roughly half the EC2 instances used for staging and research.
- A custom AMI and shell script were created to automate server provisioning and distribute configuration in a standardized way across environments.
- A development workflow was established using additional Puppet masters so developers can test changes locally or on staging instances before pushing to the main Puppet master.
It's easy! Contributing to open source - Devnexus 2020 (César Hernández)
The problem developers new to open source face is joining the community, starting to contribute, and using common open source tools. In this session, attendees will learn how to contribute and become a valuable part of any open source community. Attendees will learn soft and hard skills based on two case studies: the Eclipse MicroProfile and Apache TomEE projects. Attendees will learn to assess the culture of open source projects and the expected behavior and attitude toward new contributors; how to start small, take risks, and ask lots of questions; and how to get started with common open source tools like Maven, Git, and JIRA. Students will leave this workshop with both the soft skills and the hard skills required to make meaningful contributions.
Making Security Usable: Product Engineer Perspective (C4Media)
Anastasiia Voitova goes through several stages of the inception and implementation of database encryption and intrusion detection tools. She shows the behind-the-scenes work inside a cryptographic engineering company, how customers are some of the most useful people to learn from, and how getting over the "we tell you what to do" mentality makes security tools better. Filmed at qconnewyork.com.
Anastasiia Voitova is Product Engineer at Cossacklabs. She has plenty of experience in building mobile apps. She developed many applications, frequently taking care of both iOS and server sides of the system.
Slides that were presented during the webrtc Qt Cmake tutorial at IIT-RTC in October 2017 in Chicago. The slides are not yet complete, and will be updated later.
This document discusses the DevOps movement and related concepts. It provides background on how development and operations teams historically worked separately ("Devs vs Ops") and the problems that caused. DevOps aims to break down barriers between teams through practices like automation, continuous integration/delivery, infrastructure as code, and collaboration between teams from the beginning of a project. The document outlines problems DevOps aims to solve and gives examples of tools and approaches for bringing development and operations cultures together.
Virtual Puppet User Group: Puppet Development Kit (PDK) and Puppet Platform 6... (Puppet)
This document discusses using the Puppet Development Kit (PDK) to improve the testing and development of Puppet modules. The PDK provides tools and configuration to make module development easier and shift testing left. It allows generating module skeletons with tests, validating code quality, and testing modules against different Puppet versions. The latest PDK releases add support for testing modules against multiple Puppet versions simultaneously.
Setting up Notifications, Alerts & Webhooks with Flux v2 by Alison Dowdney (Weaveworks)
Watch the recording here: https://youtu.be/cakxixc-yQk
❗️ Notifications & Alerts ⚠️
When operating a cluster, different teams may wish to receive notifications about the status of their GitOps pipelines. For example, the on-call team would receive alerts about reconciliation failures in the cluster, while the dev team may wish to be alerted when a new version of an app was deployed and if the deployment is healthy.
Webhook Receivers
The GitOps toolkit controllers are pull-based by design. To notify the controllers about changes in Git or Helm repositories, you can set up webhooks and trigger a cluster reconciliation every time a source changes. Using webhook receivers, you can build push-based GitOps pipelines that react to external events.
Alison Dowdney, Developer Experience Engineer at Weaveworks and CNCF Ambassador, walks through how to define a provider and an alert, report git commit status, expose the webhook receiver, and define a git repository and receiver.
Resources
Flux2 Documentation: https://fluxcd.io/docs/
Flux Guide: Setup Notifications: https://fluxcd.io/docs/guides/notifications/
Flux Guide: Setup Webhook receivers: https://fluxcd.io/docs/guides/webhook-receivers/
Flux Roadmap: https://fluxcd.io/docs/roadmap/
Alison's Demo Repo: https://github.com/alisondy/flux-demos
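Before a webhook receiver triggers a reconciliation, it checks that the request really came from the configured source by validating its signature against a shared secret. The check itself is plain HMAC over the request body; the sketch below is a standalone Python illustration of the mechanism (Flux's actual implementation is in Go, and the example values here are invented):

```python
import hashlib
import hmac

# Sketch of the signature check a webhook receiver performs before acting on
# a request. GitHub-style webhooks send an X-Hub-Signature-256 header
# containing "sha256=<hmac-of-body>" computed with the shared secret.

def verify_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Return True if the header matches the HMAC-SHA256 of the body."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

secret = b"flux-receiver-token"          # invented example secret
body = b'{"ref": "refs/heads/main"}'     # invented example push payload
good = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
print(verify_signature(secret, body, good))        # True
print(verify_signature(secret, body, "sha256=00")) # False
```

`hmac.compare_digest` is used instead of `==` so the comparison takes constant time and does not leak how many leading characters matched.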
In this presentation we will show how to integrate New Relic monitoring with Terraform infrastructure-as-code templates, setting up alerts, dashboards, and other monitoring artifacts as part of your application deployment pipeline. We will demonstrate an open source example and show how it behaves under load as it fails.
DevOps aims to bridge the gap between development and operations through practices like infrastructure as code, continuous integration, continuous deployment, continuous testing, and continuous delivery. These practices allow infrastructure to be version controlled like code and for automated testing and deployment to catch errors early and provide quick feedback throughout the development process.
Monitoring Cloud Foundry environments can be challenging due to the large number of moving parts. GE Digital implemented Sensu and Graphite to provide automatic, extendable monitoring of their Cloud Foundry platforms. Sensu collects metrics from all nodes and components and sends them to Graphite for storage and visualization in Grafana. This provides visibility into the health and performance of Cloud Foundry deployments to help meet production needs.
This is the slide deck I used for the developer workshop I presented on the first day of OpenStack Day India 2016. It gives an overview of how to become a contributor to OpenStack, with a walk-through of the various steps to get started and tips and tricks for working with the development process.
Win Spinnaker with Winnaker - Open Source North Conf 2017 (Medya Ghazizadeh)
Spinnaker is an open source tool for deploying software releases to multiple cloud providers. Winnaker is a tool built by Target that helps automate common tasks when using Spinnaker like starting pipelines, getting stage details, integrating with chat tools, and troubleshooting errors. It removes company-specific code so others can contribute. Winnaker is distributed as a Docker container and makes it easy to pressure test Spinnaker and cloud environments by running multiple pipelines.
It’s 2021. Why are we -still- rebooting for patches? A look at Live Patching. (All Things Open)
Presented by: Igor Seletskiy
Presented at the All Things Open 2021
Raleigh, NC, USA
Raleigh Convention Center
Abstract: IT Teams know the drill. New security bulletins, new issues, new patches to deploy. Schedule another maintenance operation and prepare for system downtime.
There is a better way to do things. Live patching has been around in the Linux kernel for some time now, but adoption has not been ideal so far, either because of a lack of trust in the technology, a simple lack of awareness, or because sysadmins just enjoy interrupting their workloads or users.
Live patching consists of two aspects. First, there has to be a mechanism for function redirection in the kernel. As in many things, the kernel actually provides three different subsets of tools for this functionality: kprobes, fprobes, and livepatching. Secondly, live patching relies on a set of tools to generate the actual patches to deploy, replacing the old code with the new. This is arguably the most involved part: you need to fit your new code in the proper space, you can’t overwrite other unrelated code, and you need to maintain compatibility with other functions. If you change your parameter list, for example, it’s game over: something will break in the worst possible way.
In this talk we’ll go over issues like the consistency model, patch generation, and deployment mechanisms, and identify situations that are ideal candidates for live patching instead of traditional patching operations.
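The function-redirection half of that story can be illustrated in userspace. The sketch below is a Python analogy written for this summary, nothing like the kernel's actual ftrace-based mechanism, but it shows the one hard constraint the abstract calls out: the replacement must keep the original's parameter list, because every existing call site is redirected unchanged.

```python
# Userspace analogy of live patching's function redirection: call sites go
# through an indirection table, and installing a "patch" redirects them to a
# replacement with the same signature. The kernel does this with ftrace
# trampolines; this dict-based version is purely illustrative.

_patches = {}

def livepatch(original, replacement):
    """Install a redirect from original to replacement."""
    _patches[original] = replacement

def call(fn, *args, **kwargs):
    """Call-site indirection: follow the redirect if one is installed."""
    return _patches.get(fn, fn)(*args, **kwargs)

def parse_header(line):           # the "buggy" original: leaves whitespace
    return line.split(":", 1)[0]

def parse_header_fixed(line):     # same parameter list, so call sites still work
    return line.split(":", 1)[0].strip()

print(call(parse_header, " Host : example.org"))  # ' Host '
livepatch(parse_header, parse_header_fixed)
print(call(parse_header, " Host : example.org"))  # 'Host'
```

Note that the old function object is never modified; callers simply stop reaching it, which is also why a changed parameter list breaks things: the call sites still pass the old arguments.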
At GOTO Amsterdam in 2019 I presented how to create an effective cloud native developer workflow. Two years later and many new developer technologies have come and gone, but I still hear daily from cloud developers about the pain and friction associated with building, debugging, and deploying to the cloud. In this talk I'll share my latest learning on how to bring the fun and productivity back into delivering Kubernetes-based software.
In this talk, you will:
- Learn why the core tenets of continuous delivery -- speed and safety -- must be considered in all parts of the cloud native SDLC
- Explore how cloud native coding benefits from thinking separately about the inner development loop, continuous integration, continuous deployment, observability, and analysis
- Understand how cloud native best practices and tooling fit together. Learn about artifact syncing (e.g. Skaffold), dev environment bridging (e.g. Telepresence), GitOps (e.g. Argo), and observability-focused monitoring (e.g. Prometheus, Jaeger)
- Explore the importance of cultivating an effective cloud platform and associated team of experts
- Walk away with an overview of tools that can help you develop and debug effectively when using Kubernetes
Puppet Camp Sydney 2015: Puppet and AWS is easy right.....? (Puppet)
Puppet and AWS is easy ...... ?
This document discusses how Puppet and AWS were used to automate infrastructure at Healthdirect Australia. It describes how manual deployments, lack of testing and inconsistent environments were solved through implementing Puppet for configuration management and AWS modules. Key steps included establishing solid development practices with Vagrant and testing, integrating Puppet with continuous delivery tools, and developing AWS modules for provisioning infrastructure through Puppet. Current work aims to integrate Docker scheduling and dynamic service discovery.
How do you integrate security within a Continuous Deployment (CD) environment - where every 5 minutes a feature, an enhancement, or a bug fix needs to be released?
Traditional application security tools, which require lengthy periods of configuration, tuning, and application learning, have become irrelevant in these fast-paced environments. Yet falling back only on the secure coding practices of the developer cannot be tolerated.
Secure coding requires a new approach where security tools become part of the development environment – and eliminate any unnecessary overhead. By collaborating with development teams, understanding their needs and requirements, you can pave the way to a secure deployment in minutes.
This document discusses setting up Docker for PHP projects using DDEV. It introduces DDEV as a tool for adding Docker to PHP applications with an easy command line interface and configuration. It then demonstrates adding DDEV to a Laravel project, configuring DDEV, adding basic tests, deploying the application to GitLab for continuous integration and delivery (CI/CD) using Envoy to define tasks for the remote server and manage releases.
The document describes the Django infrastructure setup at UGent for developing and deploying web applications. It discusses setting up Git repositories for source code and packages, using Git flow for version control. It also outlines the server infrastructure with load balancers, web and app servers, and databases. Projects are deployed through a continuous deployment pipeline using Jenkins to build, test and deploy code through testing, QA and production environments. Future plans include adding asynchronous messaging, response caching, improved acceptance testing and monitoring.
The devops approach to monitoring, Open Source and Infrastructure as Code Style (Julien Pivotto)
Monitoring is critical for every decent application that runs in production. Many widely used monitoring tools show their limits in the age of Infrastructure as Code and cloud computing. Let's investigate how monitoring can face the new challenges: scalability, reproducibility, and automation.
DevSecCon London 2019: A Kernel of Truth: Intrusion Detection and Attestation... (DevSecCon)
Matt Carroll
Infrastructure Security Engineer at Yelp
"Attestation is hard" is something you might hear from security researchers tracking nation states and APTs, but it's actually pretty true for most network-connected systems!
Modern deployment methodologies mean that disparate teams create workloads for shared worker-hosts (ranging from Jenkins to Kubernetes and all the other orchestrators and CI tools in-between), meaning that at any given moment your hosts could be running any one of a number of services, connecting to who-knows-what on the internet.
So when your network-based intrusion detection system (IDS) opaquely declares that one of these machines has made an "anomalous" network connection, how do you even determine if it's business as usual? Sure you can log on to the host to try and figure it out, but (in case you hadn't noticed) computers are pretty fast these days, and once the connection is closed it might as well not have happened... Assuming it wasn't actually a reverse shell...
At Yelp we turned to the Linux kernel to tell us whodunit! Utilizing the Linux kernel's eBPF subsystem - an in-kernel VM with syscall hooking capabilities - we're able to aggregate metadata about the calling process tree for any internet-bound TCP connection by filtering IPs and ports in-kernel and enriching with process tree information in userland. The result is "pidtree-bcc": a supplementary IDS. Now whenever there's an alert for a suspicious connection, we just search for it in our SIEM (spoiler alert: it's nearly always an engineer doing something "innovative")! And the cherry on top? It's stupid fast with negligible overhead, creating a much higher signal-to-noise ratio than the kernel's firehose-like audit subsystems.
This talk will look at how you can tune the signal-to-noise ratio of your IDS by making it reflect your business logic and common usage patterns, get more work done by reducing MTTR for false positives, use eBPF and the kernel to do all the hard work for you, accidentally load test your new IDS by not filtering all RFC-1918 addresses, and abuse Docker to get to production ASAP!
As well as looking at some of the technologies that the kernel puts at your disposal, this talk will also tell pidtree-bcc's road from hackathon project to production system and how focus on demonstrating business value early on allowed the organization to give us buy-in to build and deploy a brand new project from scratch.
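The userland half of that enrichment, stripped of all the eBPF plumbing, is essentially a walk up the parent-pid chain. A toy version of that walk might look like the following (my own sketch against a plain dict; the real pidtree-bcc reads live process data, and the pids and names here are invented):

```python
# Sketch of the userland enrichment a pidtree-bcc-style IDS performs: given
# per-process metadata (here a plain dict of pid -> (name, ppid), standing in
# for live /proc or eBPF data), reconstruct the calling process tree for a
# flagged connection.

def process_tree(pid, table):
    """Walk pid -> ppid links and return the ancestry, child first.

    Assumes the table is acyclic, as a real parent-pid chain is.
    """
    chain = []
    while pid in table:
        name, ppid = table[pid]
        chain.append((pid, name))
        pid = ppid
    return chain

procs = {4211: ("curl", 4200), 4200: ("bash", 1), 1: ("init", 0)}
print(process_tree(4211, procs))
# [(4211, 'curl'), (4200, 'bash'), (1, 'init')]
```

Attaching that ancestry to the connection event is what makes the SIEM search actionable: "curl spawned from an interactive bash" reads very differently from the same destination contacted by an unattended service.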
Mike Guthrie - Revamping Your 10 Year Old Nagios Installation (Nagios)
Mike Merideth from VictorOps talks about the challenges of sharing responsibility for monitoring in the DevOps world. Learn several strategies for keeping your configuration correct, consistent, and up-to-date when several people are working on it.
Continuous Delivery of your Infrastructure (Kris Buytaert)
This document discusses continuous delivery of infrastructure through automation. It promotes automating builds, testing, deployment, and infrastructure configuration to allow for frequent, reliable releases. Continuous delivery aims to allow features to be released in hours rather than years through practices like infrastructure as code and treating configuration like code. Automating builds, testing, and deploying infrastructure in a pipeline allows for consistent, low-risk releases.
Slides that were presented during the webrtc Qt Cmake tutorial at IIT-RTC in October 2017 in Chicago. The slides are not yet complete, and will be updated later.
This document discusses the DevOps movement and related concepts. It provides background on how development and operations teams historically worked separately ("Devs vs Ops") and the problems that caused. DevOps aims to break down barriers between teams through practices like automation, continuous integration/delivery, infrastructure as code, and collaboration between teams from the beginning of a project. The document outlines problems DevOps aims to solve and gives examples of tools and approaches for bringing development and operations cultures together.
Virtual Puppet User Group: Puppet Development Kit (PDK) and Puppet Platform 6...Puppet
This document discusses using the Puppet Development Kit (PDK) to improve the testing and development of Puppet modules. The PDK provides tools and configuration to make module development easier and shift testing left. It allows generating module skeletons with tests, validating code quality, and testing modules against different Puppet versions. The latest PDK releases add support for testing modules against multiple Puppet versions simultaneously.
Setting up Notifications, Alerts & Webhooks with Flux v2 by Alison DowdneyWeaveworks
Watch the recording here: https://youtu.be/cakxixc-yQk
❗️ Notifications & Alerts ⚠️
When operating a cluster, different teams may wish to receive notifications about the status of their GitOps pipelines. For example, the on-call team would receive alerts about reconciliation failures in the cluster, while the dev team may wish to be alerted when a new version of an app was deployed and if the deployment is healthy.
Webhook Receivers
The GitOps toolkit controllers are by design pull-based. In order to notify the controllers about changes in Git or Helm repositories, you can setup webhooks and trigger a cluster reconciliation every time a source changes. Using webhook receivers, you can build push-based GitOps pipelines that react to external events.
Alison Dowdney, Developer Experience Engineer at Weaveworks and CNCF Ambassador, walks through how to define a provider, an alert, git commit status, exposing the webhook receiver and defining a git repository and receiver.
Resources
Flux2 Documentation: https://fluxcd.io/docs/
Flux Guide: Setup Notifications: https://fluxcd.io/docs/guides/notifications/
Flux Guide: Setup Webhook receivers: https://fluxcd.io/docs/guides/webhook-receivers/
Flux Roadmap: https://fluxcd.io/docs/roadmap/
Alison's Demo Repo: https://github.com/alisondy/flux-demos
In this presentation we will show how to integrate New Relic monitoring with Terraform infrastructure as code templates, setting up alerts, dashboards, and other monitoring artifacts as part of your application deployment pipeline. We will demonstrate an open source example and show how it behaves under a load as it fails.
DevOps aims to bridge the gap between development and operations through practices like infrastructure as code, continuous integration, continuous deployment, continuous testing, and continuous delivery. These practices allow infrastructure to be version controlled like code and for automated testing and deployment to catch errors early and provide quick feedback throughout the development process.
Monitoring Cloud Foundry environments can be challenging due to the large number of moving parts. GE Digital implemented Sensu and Graphite to provide automatic, extendable monitoring of their Cloud Foundry platforms. Sensu collects metrics from all nodes and components and sends them to Graphite for storage and visualization in Grafana. This provides visibility into the health and performance of Cloud Foundry deployments to help meet production needs.
This is the slide deck I used for the developer workshop I presented the first day of OpenStack Day India 2016. It gives and overview of how to be a contributor to OpenStack with a walk through of the various steps to get started and tips and tricks for working with the development process.
Win Spinnaker with Winnaker - Open Source North Conf 2017Medya Ghazizadeh
Spinnaker is an open source tool for deploying software releases to multiple cloud providers. Winnaker is a tool built by Target that helps automate common tasks when using Spinnaker like starting pipelines, getting stage details, integrating with chat tools, and troubleshooting errors. It removes company-specific code so others can contribute. Winnaker is distributed as a Docker container and makes it easy to pressure test Spinnaker and cloud environments by running multiple pipelines.
It’s 2021. Why are we -still- rebooting for patches? A look at Live Patching.All Things Open
Presented by: Igor Seletskiy
Presented at the All Things Open 2021
Raleigh, NC, USA
Raleigh Convention Center
Abstract: IT Teams know the drill. New security bulletins, new issues, new patches to deploy. Schedule another maintenance operation and prepare for system downtime.
There is a better way to do things. Live patching has been around in the Linux Kernel for some time now, but adoption has not been ideal so far - either because of a lack of trust in the technology or just lack of awareness - or sysadmins just enjoy interrupting their workloads or users.
Live patching consists of two aspects. First, there has to be a mechanism for function redirection in the kernel. As in many things, the kernel actually provides three different subset of tools that provide this functionality - kprobes, fprobes and Livepatching. Secondly, Live Patching relies on a set of tools to generate the actual patches to deploy, replacing the old code with new one. This is arguably the most involved part: you need to fit your new code in the proper space, you can’t overwrite other unrelated code and you need to maintain compatibility with other functions. If you change your parameter list, for example, its game over - something will break in the worst possible way.
In this talk we’ll go over issues like Consistency model, patch generation, deployment mechanisms and identify situations that are ideal candidates for live patching instead of traditional patching operations.
At GOTO Amsterdam in 2019 I presented how to create an effective cloud native developer workflow. Two years later and many new developer technologies have come and gone, but I still hear daily from cloud developers about the pain and friction associated with building, debugging, and deploying to the cloud. In this talk I'll share my latest learning on how to bring the fun and productivity back into delivering Kubernetes-based software.
In this talk, you will:
- Learn why the core tenets of continuous delivery -- speed and safety -- must be considered in all parts of the cloud native SDLC
- Explore how cloud native coding benefits from thinking separately about the inner development loop, continuous integration, continuous deployment, observability, and analysis
- Understand how cloud native best practices and tooling fit together. Learn about artifact syncing (e.g. Skaffold), dev environment bridging (e.g. Telepresence), GitOps (e.g. Argo), and observability-focused monitoring (e.g. Prometheus, Jaeger)
- Explore the importance of cultivating an effective cloud platform and associated team of experts
- Walk away with an overview of tools that can help you develop and debug effectively when using Kubernetes
Puppet Camp Sydney 2015: Puppet and AWS is easy right.....? Puppet
This document discusses how Puppet and AWS were used to automate infrastructure at Healthdirect Australia. It describes how manual deployments, lack of testing and inconsistent environments were solved through implementing Puppet for configuration management and AWS modules. Key steps included establishing solid development practices with Vagrant and testing, integrating Puppet with continuous delivery tools, and developing AWS modules for provisioning infrastructure through Puppet. Current work aims to integrate Docker scheduling and dynamic service discovery.
How do you integrate security within a Continuous Deployment (CD) environment - where every 5 minutes a feature, an enhancement, or a bug fix needs to be released?
Traditional application security tools, which require lengthy periods of configuration, tuning and application learning, have become irrelevant in these fast-paced environments. Yet falling back only on the secure coding practices of the developer cannot be tolerated.
Secure coding requires a new approach where security tools become part of the development environment – and eliminate any unnecessary overhead. By collaborating with development teams, understanding their needs and requirements, you can pave the way to a secure deployment in minutes.
This document discusses setting up Docker for PHP projects using DDEV. It introduces DDEV as a tool for adding Docker to PHP applications with an easy command line interface and configuration. It then demonstrates adding DDEV to a Laravel project, configuring DDEV, adding basic tests, deploying the application to GitLab for continuous integration and delivery (CI/CD) using Envoy to define tasks for the remote server and manage releases.
The document describes the Django infrastructure setup at UGent for developing and deploying web applications. It discusses setting up Git repositories for source code and packages, using Git flow for version control. It also outlines the server infrastructure with load balancers, web and app servers, and databases. Projects are deployed through a continuous deployment pipeline using Jenkins to build, test and deploy code through testing, QA and production environments. Future plans include adding asynchronous messaging, response caching, improved acceptance testing and monitoring.
The devops approach to monitoring, Open Source and Infrastructure as Code Style by Julien Pivotto
Monitoring is critical for every decent application that runs in production. Many of the widely used monitoring tools show their limits in the age of Infrastructure as Code and cloud computing. Let's investigate how monitoring can face the new challenges: scalability, reproducibility and automation.
DevSecCon London 2019: A Kernel of Truth: Intrusion Detection and Attestation... by DevSecCon
Matt Carroll
Infrastructure Security Engineer at Yelp
"Attestation is hard" is something you might hear from security researchers tracking nation states and APTs, but it's actually pretty true for most network-connected systems!
Modern deployment methodologies mean that disparate teams create workloads for shared worker-hosts (ranging from Jenkins to Kubernetes and all the other orchestrators and CI tools in-between), meaning that at any given moment your hosts could be running any one of a number of services, connecting to who-knows-what on the internet.
So when your network-based intrusion detection system (IDS) opaquely declares that one of these machines has made an "anomalous" network connection, how do you even determine if it's business as usual? Sure you can log on to the host to try and figure it out, but (in case you hadn't noticed) computers are pretty fast these days, and once the connection is closed it might as well not have happened... Assuming it wasn't actually a reverse shell...
At Yelp we turned to the Linux kernel to tell us whodunit! Utilizing the Linux kernel's eBPF subsystem - an in-kernel VM with syscall hooking capabilities - we're able to aggregate metadata about the calling process tree for any internet-bound TCP connection by filtering IPs and ports in-kernel and enriching with process tree information in userland. The result is "pidtree-bcc": a supplementary IDS. Now whenever there's an alert for a suspicious connection, we just search for it in our SIEM (spoiler alert: it's nearly always an engineer doing something "innovative")! And the cherry on top? It's stupid fast with negligible overhead, creating a much higher signal-to-noise ratio than the kernel's firehose-like audit subsystem.
This talk will look at how you can tune the signal-to-noise ratio of your IDS by making it reflect your business logic and common usage patterns, get more work done by reducing MTTR for false positives, use eBPF and the kernel to do all the hard work for you, accidentally load test your new IDS by not filtering all RFC-1918 addresses, and abuse Docker to get to production ASAP!
As well as looking at some of the technologies that the kernel puts at your disposal, this talk will also tell pidtree-bcc's road from hackathon project to production system and how focus on demonstrating business value early on allowed the organization to give us buy-in to build and deploy a brand new project from scratch.
Mike Guthrie - Revamping Your 10 Year Old Nagios Installation by Nagios
Mike Merideth from VictorOps talks about the challenges of sharing responsibility for monitoring in the DevOps world. Learn several strategies for keeping your configuration correct, consistent, and up-to-date when several people are working on it.
Continuous Delivery of your Infrastructure by Kris Buytaert
This document discusses continuous delivery of infrastructure through automation. It promotes automating builds, testing, deployment, and infrastructure configuration to allow for frequent, reliable releases. Continuous delivery aims to allow features to be released in hours rather than years through practices like infrastructure as code and treating configuration like code. Automating builds, testing, and deploying infrastructure in a pipeline allows for consistent, low-risk releases.
Pipeline as code for your infrastructure as Code by Kris Buytaert
This document discusses infrastructure as code (IAC) and continuous delivery pipelines. It introduces Puppet as an open-source configuration management tool for defining infrastructure as code. It emphasizes treating infrastructure configuration like code by versioning it, testing it, and promoting changes through environments like development, test, and production. The document also discusses using Jenkins for continuous integration to test application and infrastructure code changes and building automated pipelines for packaging and deploying changes.
This document discusses the concepts of DevOps, SecOps, and DevSecOps. It describes how the traditional divisions between development, operations, and security can lead to problems, and how adopting a DevOps culture and practices like continuous integration, infrastructure as code, and automation can help break down silos. It emphasizes that DevSecOps is about collaboration, culture change, and bringing security practices into the development lifecycle from the beginning.
The document discusses the evolution of topics within the DevOps movement over time, including culture, automation, and monitoring. It notes how topics have shifted from specific tools like Puppet and Nagios to broader concepts like containers and microservices. The document also addresses challenges faced by operations teams in adopting new technologies, including pressure to use the latest tools, preexisting technical debt, and lack of time. It argues tools alone won't fix cultural issues and advocates focusing on core responsibilities rather than trying to manage every new technology.
This document discusses the challenges that have emerged with the rise of Docker containers in software development. It describes how Docker was initially seen as a solution to issues like unclear dependencies and availability of machines, but that its widespread adoption has introduced new problems around infrastructure ownership and maintenance. Specifically, it notes that developers often build Docker images without oversight from operations teams, resulting in images that cannot be rebuilt or secured properly. The document argues that true benefits of Docker will only be realized when development and operations teams collaborate closely on containerization following principles of automation, measurement, and infrastructure as code.
Automating MySQL operations with Puppet by Kris Buytaert
This document summarizes a presentation about automating MySQL operations with Puppet. It discusses:
- Why automation is important for consistency, security, and disaster recovery. Manual changes can introduce bugs and inconsistencies.
- Puppet is an open source configuration management tool that can be used to automate MySQL configuration, users, backups, replication, and high availability clustering with tools like Corosync/Pacemaker.
- Puppet modules define the desired state and Puppet ensures the actual state matches by making necessary changes. This provides auditability and change tracking through version control of Puppet code.
From Config Management Sucks to #cfgmgmtlove by Kris Buytaert
This document summarizes Kris Buytaert's talk on the evolution of config management from the 1990s to present day. It discusses early approaches like manual installations and system imaging tools. It then covers the rise of infrastructure as code using tools like Puppet, Chef, and Docker. The talk addresses challenges like getting operations teams to adopt new methods and complexities that can arise from dependencies and modules. It promotes treating infrastructure like code with development practices for versioning, testing, and continuous integration/deployment.
Run stuff, Deploy Stuff, Jax London 2017 Edition by Kris Buytaert
This document summarizes a presentation on DevOps by Kris Buytaert. It discusses how development and operations teams used to have different goals, leading to blame between teams. DevOps aims to change culture to emphasize automation, measurement, and sharing between teams. It also discusses challenges like code that is difficult to deploy, configure, run, cluster, secure or monitor in production. The presenter advocates for defining requirements like monitoring and logs to consider work "done", and measuring everything to improve.
Icinga Camp Amsterdam - Infrastructure as Code by Icinga
Kris Buytaert discusses the importance of treating infrastructure as code using automation tools like Puppet, Chef, and Salt. This allows organizations to deploy and manage infrastructure in a reproducible, versioned manner. Manual infrastructure management is prone to errors, difficult to audit, and does not scale. Infrastructure as code improves security, speeds up deployments, and makes monitoring more reliable by ensuring all systems are deployed and configured consistently. While infrastructure as code solves many challenges, it is still software and needs to be treated like code through practices like testing and continuous integration.
On the Importance of Infrastructure as Code by Kris Buytaert
Kris Buytaert discusses the importance of treating infrastructure as code using automation tools like Puppet, Chef, and Salt. This allows organizations to deploy and manage infrastructure in a reproducible, versioned manner. Manual infrastructure management is prone to errors, difficult to audit, and does not scale. Infrastructure as code helps solve problems like security, monitoring, backups and speeds up deployment times by treating infrastructure like application code.
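As an illustration of what "infrastructure as code" means in practice, here is a minimal, hypothetical Puppet profile; the class, package and file names are invented for this sketch and are not from the talk. The point is that this file lives in git, is reviewed and tested, and is promoted through environments like any application code:

```puppet
# Hypothetical profile: desired state for a dashboard host.
class profile_dashing {
  # The packaged application (RPM built in CI, served from a repo).
  package { 'dashing-gems':
    ensure => latest,
  }

  # Config is managed here, not baked into the package.
  file { '/etc/dashing/config.ru':
    ensure  => file,
    content => template('profile_dashing/config.ru.erb'),
    require => Package['dashing-gems'],
  }

  # The service restarts automatically when its config changes.
  service { 'dashing':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/dashing/config.ru'],
  }
}
```

Because Puppet only acts when actual state drifts from this declared state, every change is auditable through the version history of the manifest rather than through shell histories on individual servers.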
The document discusses challenges with deploying a SaaS platform on-premises for customers. It notes that automation and tools developed for internal use may not work well for external customers due to different constraints around access, networking, security policies and variability between customer environments. Deploying on-premises requires implementing customizations for each unique customer setup, reduces ability to easily fix issues, and significantly increases costs compared to hosting in one's own infrastructure. The document recommends choosing a SaaS model over on-premises if possible to avoid these challenges.
Kris Buytaert discusses his transition from developer to operations engineer to consultant. He advocates for starting DevOps transformations with operations skills and involvement to improve success rates and adoption. Buytaert outlines four common transition cases for startups and multinationals, highlighting the importance of cultural and skills alignment between development and operations.
Kris Buytaert advocates for defining infrastructure and pipelines as code using tools like Jenkins Job DSL and Git. This allows infrastructure to be version controlled and centrally managed, with reusable jobs that can be updated in sync. Defining pipelines as code stops operators from manually "clicking" in interfaces and ensures consistency across teams.
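To sketch what "pipelines as code" looks like with the Jenkins Job DSL, here is a hypothetical seed script; the job names, repository URL and build step are invented for illustration:

```groovy
// Hypothetical Job DSL seed script: every job below is generated
// from this versioned file instead of being clicked together in
// the Jenkins UI. Updating this file updates all jobs in sync.
['dashing-gems', 'dashing-dashboards'].each { component ->
  job("build-${component}") {
    scm {
      // Repo URL is a placeholder
      git("https://git.example.com/${component}.git", 'master')
    }
    triggers {
      scm('H/5 * * * *')   // poll for changes every ~5 minutes
    }
    steps {
      shell('./build.sh')  // e.g. package the component as an RPM
    }
  }
}
```

A single seed job runs this script, so adding a new component to the list regenerates a consistent build job for it across every team.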
Closing the gap between Distros(devs) and their Users(ops) by Kris Buytaert
Kris Buytaert discusses the gap between software developers (devs) and operations teams (ops) when using Linux distributions. There is often a lack of communication and different priorities, with devs focused on getting the latest code quickly and ops concerned with stability, security and deployability at scale. Buytaert provides examples of issues, such as packages containing PHP code directly in /etc, and distributions packaging outdated or broken upstream software. He advocates for better collaboration between distributions, upstream projects and power users through activities like meetups, user advisory boards, and tools to more easily build and test packages. The goal is to improve culture, automation, measurement and sharing between all parties.
This document discusses the challenges of adopting DevOps practices and containerization with Docker. It notes that while Docker and containers are useful technologies, they often recreate issues if culture and collaboration between development and operations teams does not change. Several "un" problems are outlined, such as code that is unbuildable, unpackageable, undeployable etc. due to a lack of automation, configuration management, or operations involvement in the development process. The document stresses that tools are not the most important factor - it is about cultural change, collaboration, shared goals and ensuring development outputs can be supported in production environments.
OSDC 2015: Kris Buytaert | From ConfigManagementSucks to ConfigManagementLove by NETWAYS
Kris Buytaert discussed the evolution of infrastructure deployment and configuration management over the past 20 years. Early methods involved manual installations and copying config files (1996) while later approaches included tools like Mondo Rescue for single instances (2001), SystemImager for reproducible infrastructures (2003), and Kickstart/FAI for OS installation (2005). The talk advocates treating infrastructure as code using tools like Puppet, Chef, and CFEngine, with best practices like versioning, testing, and separate environments. It acknowledges early challenges in getting operators to adopt new methods but argues they are now essential for managing modern, distributed systems.
OSMC 2017 | Groovy There is a Docker in my Dashing Pipeline by Kris Buytaert
1. Groovy, there is a docker in my application pipeline
Kris Buytaert
@krisbuytaert
2. Kris Buytaert
● I used to be a Dev,
● Then Became an Op
● Chief Trolling Officer and Open Source Consultant @inuits.eu
● Everything is an effing DNS Problem
● Building Clouds since before the bookstore
● Some books, some papers, some blogs
● Evangelizing devops
● Organiser of #devopsdays, #cfgmgmtcamp, #loadays, ….
● Part of the travelling geek circus
5. Nirvana
An “ecosystem” that supports continuous delivery, from infrastructure, data and configuration management to business.
Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers, and operations, delivery teams can get changes released in a matter of hours, sometimes even minutes, no matter what the size of a project or the complexity of its code base.
Continuous Delivery, Jez Humble
6. This talk:
Journey / Early steps of a team that is used to infrastructure as code
Adopting containers step by step.
9. "Our job as engineers (and ops, dev-ops, QA, support, everyone in the company actually) is to enable the business goals. We strongly feel that in order to do that you must have the ability to deploy code quickly and safely. Even if the business goals are to deploy strongly QA’d code once a month at 3am (it’s not for us, we push all the time), having a reliable and easy deployment should be non-negotiable."
Etsy Blog upon releasing Deployinator
http://codeascraft.etsy.com/2010/05/20/quantum-of-deployment/
10. We need:
An unmodified artifact from build to deploy.
Same artifact on dev, staging, acceptance, production, shadow, dr …
11. Why ops like to package
● Packages give you features
• Consistency, security, dependencies
● Uniquely identify where files come from
• Package or cfg-mgmt
● Source repo not always available
• Firewall / Cloud etc ..
● Weird deployment locations, no easy access
● Little overhead when you automate
● CONFIG does not belong in a package
12. Example app for today: Dashing
13. Dashing is Dead
● No, it has been forked
● https://github.com/dashing-io/dashing
● s/dashing/smashing/g;
14. Dashing {su/ro}cks
The Good
● Lots of existing widgets
● Easy to start
● Simple ruby
● Eventstream for debugging
The Ugly
● Ruby Gem hell
● Widget Deployment from a Gist ?
● No config separation
15. Deploying Dashing
● gem install dashing
● gem install is the new maven downloading the internet
● Reproducible ?
16. A typical deployment
● P all software is packaged
• CentOS mostly
• RPM generated with fpm
• Built in Jenkins, uploaded to pulp
● C config is managed by Puppet
● S service is managed by Puppet
17. Building Ruby/python/node
● We need a chroot
● With the right ruby/python version
● With the right dependencies
● Isolated
● Ruby => rvm
● Ruby 2.1 (dashing is pretty picky about versions)
● What about we try this in a container ?
18. Pipelines ?
● One to build basic dashing
● One to build and deploy the dashboards, scripts and all other dashing related stuff
• No hacking in production,
• Dashboards are production views
• Dev → prod promotions
19. Jenkins
● Starting point:
• Dev jenkins
• 1 master (no running jobs)
• Multiple slaves
● Production: different jenkins stack with similar pipelines
● We need to be able to reproduce a pipeline
20. Building a dashing container step 0
● Empty / standard distro container
• updates
• add fpm
• epel and build dependencies
● Triggering docker from the cli, no plugin in Jenkins used (coz Bugz)
● $customer environment requires http_proxy
22. Building a dashing container step 1
● Read rvm installation docs
● frown
● Frown again
● Containers => Yolo
● fpm the whole tree
23. Building a dashing container step 2
● Take rvm container
● rvm install ruby-2.1
● fpm -s dir -t rpm -n rvm-ruby -v 2.1.8 /usr/local/rvm/rubies/ruby-2.1.8
24. Building a dashing container step 3
● Take ruby-2.1 container
● rvm use 2.1
● gem install bundle
● gem install dashing (fills /usr/local/rvm/gems/ruby-2.1.8 with gems)
● mkdir -p /opt/dashing/ && dashing new dashboard
● cd /opt/dashing/dashboard
● bundle install
25. ● Now we have a “reproducible” container which will show an empty default dashboard upon launching
● We also have an artifact which we can redeploy
● We killed most of those layers afterwards
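One way to kill those intermediate layers is to collapse the step-3 commands into a single RUN, so the final image carries one layer instead of six; the base image name and the CMD are assumptions:

```shell
# Sketch: the step-3 commands as a single-layer Dockerfile.
cat > Dockerfile.dashboard <<'EOF'
FROM dashing/ruby-2.1

# login shell so rvm is on the PATH inside RUN
SHELL ["/bin/bash", "-lc"]

# one RUN = one layer instead of six
RUN rvm use 2.1 && \
    gem install bundle && \
    gem install dashing && \
    mkdir -p /opt/dashing && cd /opt/dashing && \
    dashing new dashboard && \
    cd /opt/dashing/dashboard && bundle install

WORKDIR /opt/dashing/dashboard
EXPOSE 3030
CMD ["dashing", "start"]
EOF
```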
27. A dashboard
● git repo with
• Dashboards (html/erb)
• Jobs
• Mostly with datasources hardcoded in scripts
• Not multitenant
• Widgets
● Pipeline to deploy and test that
29. Testing the dashboards
● Not all deploys were working
● New job, required gems are missing
● Testing
• Build container with most recent dashboard
• Based on the rpms
• docker run -p 0.0.0.0:3030:3030 -d dashing/dashboards
• wget http://localhost:3030/
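The docker run / wget check above can be turned into a pipeline smoke-test script along these lines; the image name is from the slides, the retry count and script name are our choices:

```shell
# Start the dashboard container, poll the port, fail the build if it
# never answers, and always clean the container up.
cat > smoke-test.sh <<'EOF'
#!/bin/sh
set -e
id=$(docker run -p 0.0.0.0:3030:3030 -d dashing/dashboards)
trap 'docker rm -f "$id" >/dev/null' EXIT

# give dashing a few seconds to boot, then fetch the default dashboard
for i in 1 2 3 4 5 6 7 8 9 10; do
    if wget -q -O /dev/null http://localhost:3030/; then
        echo "dashboard is up"
        exit 0
    fi
    sleep 3
done
echo "dashboard never came up" >&2
exit 1
EOF
chmod +x smoke-test.sh
```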
30. Deploying the dashboards
● Deploy 2 rpms on VMs via mcollective
• dashing-gems
• dashing-dashboard
on nodes with profile_dashing
● mco package update dashing-gems -F environment=svc1prd -C profile_dashing
31. We need a local docker images repository
● Distributed Jenkins (master + multiple slaves)
● An image built on node X is not available on node Y
● Tests run on another node
docker push dashing/dashing
docker push dashing/dashboards
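With a local registry, the pushes above gain a registry prefix so any other node can pull the exact image a build produced; the host name `registry.internal:5000` is an assumption:

```shell
# Tag and push both images to the internal registry.
cat > push-images.sh <<'EOF'
#!/bin/sh
set -e
for img in dashing/dashing dashing/dashboards; do
    docker tag "$img" "registry.internal:5000/$img"
    docker push "registry.internal:5000/$img"
done
EOF
chmod +x push-images.sh
```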
32. We need a local docker images repository
● Pulp?
• Read only (August 2016)
• Good for mirrors
● Nexus / Artifactory
● Docker registry (obsolete, used to be only in a container)
● Docker-distribution: packages available
33. Docker Incompatibilities
● Search path for images
• Local first
• Upstream afterwards
• Docker Inc says NO, Red Hat says Yes
● --build-args -e
• Red Hat vs Docker implementations differ
34. Problems solved
● Chrooted package build, no complex mock setups or specfiles
● Internal docker repo allows reuse of build images on other nodes
● Jenkins and docker “integration”
35. Rinse & Repeat
● Similar patterns for
• Python, PHP, etc.
● Tests can now run in containers with the correct version
● Tests can be run with multiple versions of php/python/ruby etc.
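The multiversion pattern can be sketched as a loop that runs the same test suite in one container per interpreter version; the version list and the test command `run-tests.php` are placeholders:

```shell
# Run the suite against several PHP versions using stock Docker Hub
# images; each iteration mounts the workspace into a fresh container.
cat > multiversion-test.sh <<'EOF'
#!/bin/sh
set -e
for version in 5.6 7.0 7.1; do
    echo "=== php $version ==="
    docker run --rm -v "$PWD:/app" -w /app "php:$version" \
        php run-tests.php
done
EOF
chmod +x multiversion-test.sh
```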
36. Can you Automate your Pipeline Creation?
● Pipeline as Code
● Jenkins Job DSL
● Pipeline Plugin
37. Building the Pipeline
● Dev environment for Jenkins
• Fully puppetized
● Jobs
• Jenkins Job DSL Plugin
• https://wiki.jenkins-ci.org/display/JENKINS/Job+DSL+Plugin
41. Job parts
● Logrotator: how long to keep builds
● Scm: git config
● Trigger: when to build
● Label: where to run
● Steps: shell(readFileFromWorkspace('file.sh'))
● Publishers
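The job parts above map onto a Job DSL seed script roughly like this; the job name, repo URL, and file names are assumptions:

```shell
# Write a minimal Job DSL seed script wiring up the listed job parts.
mkdir -p seed
cat > seed/dashing_jobs.groovy <<'EOF'
job('dashing-dashboard-build') {
    logRotator { numToKeep(10) }          // how long to keep builds
    scm { git('ssh://git.internal/dashing/dashboards.git') }
    triggers { scm('H/5 * * * *') }       // when to build (poll git)
    label('docker')                       // where to run
    steps { shell(readFileFromWorkspace('build.sh')) }
    publishers { archiveArtifacts('*.rpm') }
}
EOF
```

A seed job on the Jenkins master processes this script and creates or updates the real job, so pipeline definitions live in git instead of the Jenkins UI.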
43. Pipeline Problems solved
● No more promoted build plugin
• Manual promote in pipeline
• Easy visibility
● No more clicking around to create / edit pipelines
● One job per task, no reuse of jobs with different parameters
● Centrally managed jobs (git)
44. Solved problems by Containers
● Multiversion test of application stacks
• E.g. different puppet / php versions
● Both functional and unit testing in the pipeline
● Non-blocking pipeline branches for future versions
● Provide developers with production-alike containers
● Growing container experience with ops folks