A Comprehensive Introduction to Kubernetes. This slide deck serves as the lecture portion of a full-day Workshop covering the architecture, concepts and components of Kubernetes. For the interactive portion, please see the tutorials here:
https://github.com/mrbobbytables/k8s-intro-tutorials
An in-depth overview of Kubernetes and its various components.
NOTE: This is a fixed version of a previous presentation (a draft was uploaded with some errors)
Kubernetes has become the de facto standard platform for container orchestration. Its extensibility and many integrations have paved the way for a wide variety of data science and research tooling to be built on top of it.
From all-encompassing tools like Kubeflow, which makes it easy for researchers to build end-to-end machine learning pipelines, to specific orchestration of analytics engines such as Spark, Kubernetes has made the deployment and management of these tools easy. This presentation will showcase some of the larger research tools in the ecosystem and go into how Kubernetes has enabled this easy form of application management.
This presentation covers how the application deployment model evolved from bare-metal servers to the Kubernetes world.
In addition to the theoretical material, you will find URLs to free Katacoda workshops with hands-on exercises covering the details of each topic.
Kubernetes Overview - Deploy your app with confidence - Omer Barel
In this presentation I explain the basics of the Kubernetes platform, along with a dive into the core primitives (building blocks) you use, as well as YAML examples for each primitive.
Slides used for Orchestructure May 2018 workshop.
Labs:
https://github.com/mrbobbytables/k8s-intro-tutorials
Event Information:
https://www.meetup.com/orchestructure/events/250189685/
Kubernetes: An Introduction to the Open Source Container Orchestration Platform - Michael O'Sullivan
Originally designed by Google, Kubernetes is now an open-source platform that is used for managing applications deployed as containers across multiple hosts - now hosted under the Cloud Native Computing Foundation. It provides features for automating deployment, scaling, and maintaining these applications. Hosts are organised into clusters, and applications are deployed into these clusters as containers. Kubernetes is compatible with several container engines, notably Docker. The popularity of Kubernetes continues to increase as a result of the feature-rich tooling when compared to use of a container-engine alone, and a number of Cloud-based hosted solutions are now available, such as Google Kubernetes Engine, Amazon Elastic Container Service for Kubernetes, and IBM Cloud Container Service.
This talk will provide an introduction to the Kubernetes platform, and a detailed view of the platform architecture from both the Control Plane and Worker-node perspectives. A walk-through demonstration will also be provided. Furthermore, two additional tools that support Kubernetes will be presented and demonstrated - Helm: a package manager solution which enables easy deployment of pre-built Kubernetes software using Helm Charts, and Istio: a platform in development that aims to simplify the management of micro-services deployed on the Kubernetes platform.
Speaker Bio:
Dr. Michael J. O'Sullivan is a Software Engineer working as part of the Cloud Foundation Services team for IBM Cloud Dedicated, in the IBM Cloud division in Cork. Michael has worked on both Delivery Pipeline/Deployment Automation and Performance Testing teams, which has resulted in daily exposure to customer deployments of IBM Cloud services such as the IBM Cloud Containers Service and the IBM Cloud Logging and Metrics Services. Michael has also worked on deployment of these services to OpenStack and VMware platforms. Michael holds a PhD in Computer Science from University College Cork (2012 - 2015), where, under the supervision of Dr. Dan Grigoras, he engaged in research on Mobile Cloud Computing (MCC) - specifically, studying and implementing solutions for delivering seamless user experiences of MCC applications and services. Prior to this, Michael graduated with a 1st Class Honours Degree in Computer Science from University College Cork in 2012.
K8s in 3h - Kubernetes Fundamentals Training - Piotr Perzyna
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. This training helps you understand key concepts within 3 hours.
Pluggable Infrastructure with CI/CD and Docker - Bob Killen
The Docker cluster ecosystem is still young and highly modular. This presentation covers some of the challenges we faced in deciding what infrastructure to deploy, and a few tips and tricks for making both applications and infrastructure easily adaptable.
Kubernetes advanced scheduling
- Taints and tolerations
- Affinity (node & inter-pod)
Learn how to place Pods on the same or a different node, rack, zone, or region.
Federated Kubernetes: As a Platform for Distributed Scientific Computing - Bob Killen
A high-level overview of Kubernetes Federation and the challenges encountered when building out a platform for multi-institutional research and distributed scientific computing.
Kubernetes @ Squarespace: Kubernetes in the Datacenter - Kevin Lynch
This talk was presented at SRE NYC Meetup on August 16, 2017 at Squarespace HQ.
https://www.youtube.com/watch?v=UJ1QAKprVr4
As the engineering teams at Squarespace grow, we have been building more and more microservices. However, this has added operational strain as we try to shoehorn a growing, complex dynamic environment into our static data center infrastructure. We needed to rethink how we handle deployments, dependency management, resource allocation, monitoring, and alerting. Docker containerization and Kubernetes orchestration help us tackle many of these problems, but the journey has been challenging. In this talk, we'll discuss the challenges of running Kubernetes in a datacenter and how we switched to a more SLA-focused alert structure, rather than per-instance health, using Prometheus and Alertmanager.
Kubernetes is a great tool to run (Docker) containers in a clustered production environment. When deploying often to production we need fully automated blue-green deployments, which make it possible to deploy without any downtime. We also need to handle external HTTP requests and SSL offloading. This requires integration with a load balancer like HAProxy. Another concern is (semi) auto scaling of the Kubernetes cluster itself when running in a cloud environment, e.g. partially scaling down the cluster at night.
In this technical deep dive you will learn how to set up Kubernetes together with other open source components to achieve a production-ready environment that takes code from git commit to production without downtime.
Deploying your first application with Kubernetes - OVHcloud
Find out how to deploy your first application with Kubernetes on the OVH cloud, and direct questions to the team responsible for our upcoming Kubernetes as-a-Service solution.
Three years ago, Meetic chose to rebuild its backend architecture using microservices and an event-driven strategy. As we moved away from our old legacy application, testing features gradually became a pain, especially when those features rely on multiple changes across multiple components. Whatever the number of applications you manage, unit testing is easy, as is functional testing on a microservice. A good Gherkin framework and a set of Docker containers can do the job. The real challenge lies in end-to-end testing, even more so when a feature can involve up to 60 different components.
To solve that issue, Meetic is building a Kubernetes strategy around testing. To do such a thing we need to:
- Be able to generate a Docker container for each pull request on any component of the stack
- Be able to create a full testing environment in the simplest way
- Be able to launch automated tests on this newly created environment
- Have a clean-up process to destroy testing environments after tests. To separate the various testing environments, we chose to use Kubernetes Namespaces, each containing a variant of the Meetic stack. But when it comes to Kubernetes, managing multiple namespaces can be hard. YAML configuration files need to be shared in a way that lets each person or automated job access and modify them without impacting others.
This is why Meetic chose to develop its own tool to manage namespaces through a CLI, or through a REST API on which we can plug a friendly UI.
In this talk we will tell you the story of our CI/CD evolution to satisfy the need to create a Docker container for each new pull request. We will also show you how to make end-to-end testing easier using Blackbeard, the Helm-inspired tool we developed to manage namespaces.
Kubernetes is a container orchestration platform that provides a mechanism to manage the resources of containers in the cluster. That mechanism is known as "Requests and Limits".
Requests and limits play a key role not only in resource management but also in application stability, capacity planning, and scheduling (i.e., which node a pod will run on).
In this session we will cover:
- A quick review of Containers, Docker, and Kubernetes.
- Container resource management in Kubernetes.
- Container resource types in Kubernetes.
- 3 different ways to set requests and limits.
- The difference between capacity and allocatable resources.
- Tips and recap.
Atmosphere 2018: Yury Tsarev - TEST DRIVEN INFRASTRUCTURE FOR HIGHLY PERFORMI... - PROIDEA
In this presentation Yury Tsarev will review practical examples of how to build infrastructure-as-code with a strong test-driven approach. Although the tool selection is opinionated, the audience will be provided with a generic framework to build upon in which the components are fully replaceable. Further, Yury strongly believes that infrastructure code should be treated like any other code. This means applying a test-driven development model, storing it in a source control system and building a regression test suite. He suggests doing this with Test Kitchen, a pluggable and extensible test orchestrator that originated in the Chef community. Using Test Kitchen's disposable modules it is possible to test both mutable (e.g. based on Puppet/Chef/Ansible) as well as immutable infrastructure (e.g. Terraform based). Serverspec/Inspec can verify that the configuration code behaves properly. In addition, shell mocking can be used to bypass external dependencies and create hermetic infra tests. Having such a powerful test infra toolset enables DevOps/SRE teams to practice TDD, create strong CI/CD pipelines and reduce overhead by never having to test manually again.
● Fundamentals
● Key Components
● Best practices
● Spring Boot REST API Deployment
● CI with Ansible
● Ansible for AWS
● Provisioning a Docker Host
● Docker&Ansible
https://github.com/maaydin/ansible-tutorial
ContainerCon - Test Driven Infrastructure - Yury Tsarev
Great external coverage of this presentation can be found at https://www.cedric-meury.ch/2016/10/test-driven-infrastructure-with-puppet-docker-test-kitchen-and-serverspec-yury-tsarev-gooddata/
Distributed Performance testing by funkload - Akhil Singh
Distributed performance testing with FunkLoad and sysbench.
These slides cover load and stress testing of Apache, Nginx, Redis, and MySQL servers using FunkLoad and sysbench. Testing is done on a single-master-node setup on a Kubernetes cluster.
Infrastructure testing with Molecule and TestInfra - Tomislav Plavcic
The presentation defines what infrastructure as code is and which frameworks are used for writing and testing it. More detail is provided on the Molecule framework, which is used for testing Ansible roles, and on the TestInfra framework, which can be used as a verifier with Molecule but also as a standalone test framework.
Shuah Khan, Senior Kernel Developer with the Samsung Open Source Group, discusses the new Kernel Selftest framework she helped develop to aid in automated Linux kernel testing and verification.
Discover more about Ansible! Ansible Tower and Ansible Containers:)
The slides which I used to give a talk about Ansible at DevOps North East. The slides have been modified to make it easier to understand in case you have missed the talk :)
Continuous Delivery with Jenkins declarative pipeline, XPDays 2018-12-08 - Борис Зора
When you start your journey with µServices, you should be confident in your delivery lifecycle. In case of a mistake, you should be able to navigate to the appropriate tag in VCS to reproduce the bug with a test, and go through the pipeline to production within 3 hours with high confidence in quality.
We will discuss a set of tools that could help you achieve this within 3 months on your project. It does not include system-decoupling suggestions. At the same time, if you decide to break down a monolith, it is better to do so with dev & DevOps best practices.
Scala, docker and testing, oh my! - Mario Camou, J On The Beach
Testing is important for any system you write and at eBay it is no different. We have a number of complex Scala and Akka based applications with a large number of external dependencies. One of the challenges of testing this kind of application is replicating the complete system across all your environments: development, different flavors of testing (unit, functional, integration, capacity and acceptance) and production. This is especially true in the case of integration and capacity testing where there are a multitude of ways to manage system complexity. Wouldn’t it be nice to define the testing system architecture in one place that we can reuse in all our tests? It turns out we can do exactly that using Docker. In this talk, we will first look at how to take advantage of Docker for integration testing your Scala application. After that we will explore how this has helped us reduce the duration and complexity of our tests.
Docker–Grid (an on-demand and scalable Dockerized Selenium Grid architecture) - STePINForum
by Yogit Thakral, Senior Test Engineer & Kandeel Chauhan, Testing Lead, Naukri.com at STeP-IN SUMMIT 2018 15th International Conference on Software Testing on August 31, 2018 at Taj, MG Road, Bengaluru
Tackling New Challenges in a Virtual Focused Community - Bob Killen
The pandemic has had many communities scrambling to find ways to capture, grow, and continue to strengthen the bonds of their community members. Virtual events, Zoom burnout, and increased familial responsibilities are impacting not only contributors but software communities as a whole. They do pose an opportunity: they can be made more accessible, there are fewer financial pressures for attendees, and they open up possibilities for others who previously might not have been able to contribute to open source. In this talk, we'll go over some of the successes and failures that we have encountered over the past year, share our experiences, and explore strategies to mitigate "virtual" fatigue. Attendees will learn the following: how to approach virtual events and activities, both as an organizer and an attendee; how to set and manage expectations with themselves and others; and technical do's and don'ts with virtual events.
KubeCon EU 2021
KubeCon EU 2021 Keynote: Shaping Kubernetes Community Culture - Bob Killen
In this talk, members of the Kubernetes Steering Committee and Kubernetes Code of Conduct Committee walk through what it takes to lead a community from a technical, cultural, and community perspective, and how that stewardship improves camaraderie, code quality and longevity. Get a peek under the hood of the two community groups chartered with defining, evolving, and sustaining the values of the project.
Intro to Kubernetes SIG Contributor Experience - Bob Killen
In this 30 minute session, we will explore the projects we have been working on with Contributor Experience and the future work we have on deck. We will provide an update to the following projects and have information on how to get involved.
Interested in improving the Research experience with Kubernetes, or simply running research workloads on it? The CNCF Research User Group’s purpose is to serve as a focal point for the discussion and advancement of Research Computing using “Cloud Native” technologies. Since the group’s inception 6 months ago, key areas have been identified as gaps within the ecosystem. This session would serve as an opportunity to share with a broader audience some of the key challenges the Research-user-group has identified, and showcase project updates on key tools that the research community is developing to address these challenges. For more information visit: https://github.com/cncf/research-user-group
KubeCon EU 2020
A Peek Behind the Curtain: Managing the Kubernetes Contributor Community - Bob Killen
The Kubernetes community is a vibrant beacon in open source. It takes a village to enable a city of contributors doing what they do best. There are a lot of fun stories and lessons to be shared from helping out the community. One lesson is taken straight from the Kubernetes project itself: declarative config management. Most aspects of the community are managed using declarative configs. Adding a new SIG, GitHub org member, and even Slack channel, involves updating and PRing a change into one of the many Kubernetes repos. Adopting this methodology provides the community the means to self-manage itself. Join us as we journey through the many bits of community automation and weigh the merits of automating every aspect of our community.
SCALE 18x 2020
Academic research institutions are at a precipice. They have historically been constrained to supporting classic “job” style workloads. With the growth of new workflow practices such as streaming data, science gateways, and more “dynamic” research using lambda-like functions, they must now support a variety of workloads.
In this talk, Lindsey and Bob will discuss some difficulties faced by academic institutions and how Kubernetes offers an extensible solution to support the future of research. They will present a selection of projects currently benefiting from Kubernetes enabled tools, like Argo, Kubeflow, and kube-batch. These workflows will be demonstrated using specific examples from two large research institutions: Compute Canada, Canada’s national computation research consortium and the University of Michigan, one of the largest public Universities in the United States.
KubeCon EU 2019
Ansible, integration testing, and you.
1. Ansible, Integration Testing, and You. Catching configuration problems before they happen. ...With a little help from the Chef community.
2. Who Am I?
Bob Killen / @mrbobbytables
Senior Research Cloud Administrator
http://arc-ts.umich.edu
3. Why bother testing?
Ansible is an ordered, fail-fast system. Why would further testing be needed?
Common Examples:
● Does it work with SysVinit? How about systemd?
● Are there any issues between minor distribution releases? CentOS 7.0 vs CentOS 7.1, or Ubuntu 14.04.4 vs 14.04.5.
● Supporting multiple optional or conflicting features?
● Complex dependencies?
● Requires working with other live services?
Complexity
5. Types of Testing Applicable to Ansible
● Static Code Analysis
○ Performs Code Analysis without executing any code itself.
○ Examples:
■ Ansible syntax check (ansible-playbook --syntax-check).
■ Ansible-lint - performs checks on playbooks for practices and behavior that could be improved.
● Unit Test
○ Smallest possible ‘unit’ of code is tested in isolation.
○ Within Ansible, this would be classified as a task requisite or assertion. A service ‘should’ be started. A file ‘should’ contain this string. When it is not encountered, fail the task.
○ Example:
■ Ansible Check Mode (ansible-playbook --check).
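To make the "assertion as a unit test" idea above concrete, a minimal Ansible task sketch might look like the following; the variable and service names (web_port, nginx) are illustrative assumptions, not taken from the deck:

```yaml
# Hypothetical role tasks acting as in-playbook 'unit tests'.
- name: Assert that required variables are defined and sane
  assert:
    that:
      - web_port is defined
      - web_port | int > 0
    msg: "web_port must be defined and greater than 0"

- name: Ensure the web service is running (fails the play if it cannot be started)
  service:
    name: nginx
    state: started
```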
6. Types of Testing Applicable to Ansible (cont.)
● Integration Testing
○ Builds upon unit-tests and tests them as a group or module. Within Ansible, this would be a role.
○ Verifies all units executed and produced the desired outcome.
○ Usually tested with a functional version of external dependencies.
● Acceptance Testing
○ Acceptance testing builds upon integration testing in a similar way. The intent is to test end-to-end, say, an entire playbook of multiple roles.
○ If at all possible, simulate end-user usage of the deployed system.
7. What Makes for a Good Testing Framework?
● Fast.
● Easily Repeatable.
● Modular.
● Support testing against multiple distributions or environments.
● Ability to be integrated into a CI/CD pipeline.
8. Test Kitchen!
http://kitchen.ci/
● Originally developed by Opscode in late 2012. 1.0 was released in 2013, and it is still actively being maintained and developed.
● Extremely simple workflow based on a YAML DSL.
● Large plugin ecosystem supporting roughly 50 drivers and over 20 different testing frameworks.
● Easy to integrate with a variety of CI systems using native Ruby tools.
● FAST FAST FAST!
9. Getting Started with Test-Kitchen
1. Install Ruby 2.0 or greater (suggestion: install a Ruby version manager such as rvm, rbenv, or chruby).
2. Install bundler and test-kitchen: gem install bundler test-kitchen
3. If intending to test locally, install your local testing endpoint, such as Vagrant or Docker:
○ Vagrant - https://www.vagrantup.com/
○ Docker - https://www.docker.com/
4. Create a file in the root of your project titled Gemfile.
5. Edit the file and add the needed drivers or plugins you intend to use.
6. Execute the command bundle install. This will install the dependencies and manage them for the project.
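A minimal Gemfile for this workflow might look like the sketch below; the specific driver and provisioner gems (kitchen-docker, kitchen-ansible) are assumptions to be swapped for whichever plugins you actually use:

```ruby
# Hypothetical Gemfile for testing an Ansible role with Test Kitchen.
source 'https://rubygems.org'

gem 'test-kitchen'
gem 'kitchen-docker'   # driver plugin: run test instances as Docker containers
gem 'kitchen-ansible'  # provisioner plugin: converge nodes with an Ansible playbook
gem 'serverspec'       # verification framework used later in this deck
```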
10. Test Kitchen Components
● Driver - Supplies test-kitchen with the information regarding the endpoint where it will execute actions. This includes things such as: Docker, Dsc (windows), Ec2, Google (gce), Openstack, ssh, Vagrant and many others.
● Platforms - The systems or environments intended to test against. Examples: CentOS 7.2, Ubuntu 16.04, and Windows 10.
● Provisioner - The application or framework that will be used to configure the system. Ansible, Bash, Chef, Puppet, Salt, Windows DSC fall into this category.
● Verifier* - Informs test-kitchen of how it will perform a test action. Inspec, serverspec, and ssh are examples of verifiers.
● Suite - Describes the specific tests that should be executed against a platform. If using a verifier, supplies the suite specific verification configuration. Test suites include such frameworks as Bats, Cucumber, Inspec, Rspec and Serverspec.
● Instance - An instance is a combination of a suite and platform, and functions as the testable unit.
* Verifier is currently optional depending on testing method.
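Putting these components together, a .kitchen.yml sketch might look like this; the driver, platforms, playbook path, and spec file name are illustrative assumptions, and the exact provisioner options depend on the plugin in use:

```yaml
# Hypothetical .kitchen.yml: driver + provisioner + platforms + verifier + suite.
---
driver:
  name: docker

provisioner:
  name: ansible_playbook
  playbook: test/integration/default.yml

platforms:
  - name: centos-7.2
  - name: ubuntu-16.04

verifier:
  name: shell
  command: bundle exec rspec -c -f documentation -I serverspec spec/default_spec.rb

suites:
  - name: default
```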
14. Test Kitchen Lifecycle - Create
● Creates the instance(s) as supplied by the kitchen create command.
● The VM is spun up, container started, or ssh endpoint contacted.
15. Test Kitchen Lifecycle - Converge
● Executes a converge against the node(s) as supplied by the kitchen converge command.
● The converge process happens in 3 stages:
○ Provisioning dependencies are installed (install ansible)
○ Local files (ansible role) are copied to the node
○ Execute an action as supplied by the test-kitchen config. For Ansible, this would be executing the supplied playbook.
16. Test Kitchen Lifecycle - Verify
● Executes a verify action against the node(s) as supplied by the kitchen verify command.
● Verify actions are dependent on both the testing framework and verifier.
17. Test Kitchen Lifecycle - Destroy
● Simply destroys the VM, container, etc., as supplied by the kitchen destroy command.
● If no instance is supplied, all instances will be destroyed.
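The four lifecycle stages map directly onto the CLI. A typical manual run over one instance might look like the commands below; the instance name default-centos-72 is derived from the example suite and platform names and is only illustrative:

```shell
kitchen list                        # show all instances (suite x platform)
kitchen create   default-centos-72  # spin up the VM / container
kitchen converge default-centos-72  # install ansible, copy the role, run the playbook
kitchen verify   default-centos-72  # run the verifier (e.g. serverspec)
kitchen destroy  default-centos-72  # tear the instance down
kitchen test     default-centos-72  # or: create -> converge -> verify -> destroy in one shot
```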
18. Serverspec - TDD for Infrastructure
● Based on Ruby's RSpec and Specinfra.
● Configuration Management System agnostic.
● Supports a wide range of Operating Systems including AIX, BSD (OS X included), Linux, Windows, and other offshoots such as SmartOS.
● Easy to understand and quick to write tests.
● 40 Resource types*, and easily extendable.
* List of available Resource types: http://serverspec.org/resource_types.html
http://serverspec.org
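To make "easy to understand and quick to write" concrete, a small Serverspec spec for a role that installs a web server might look like this; the package, service, and port are illustrative, not from the deck:

```ruby
# Hypothetical spec/default_spec.rb for a role that installs nginx.
require 'spec_helper'

describe package('nginx') do
  it { should be_installed }
end

describe service('nginx') do
  it { should be_enabled }
  it { should be_running }
end

describe port(80) do
  it { should be_listening }
end
```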
23. Including Serverspec Tests with Roles
1. Create a folder within the root of the role directory called spec.
2. Create a file in this directory titled spec_helper.rb. The spec helper is a shared file, included by each test, that instructs serverspec how to communicate with the instance. With test-kitchen, these variables will be passed as environment variables prefixed with KITCHEN_.
3. Create your serverspec test file(s) and call them <name>_spec.rb.
4. Within your spec file, include the spec helper with the following:
○ require ‘spec_helper’
5. Update the kitchen config to include the shell verifier, and then give it the command to execute the serverspec test (the slide shows an example spec_helper.rb alongside this command). Example:
bundle exec rspec -c -f d -I serverspec
This tells rspec to load serverspec and execute the test located at spec/test_spec.rb. For a specific list of rspec commands, execute bundle exec rspec --help.
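The slide references a spec_helper.rb screenshot; a minimal sketch of such a helper, assuming the shell verifier exports the usual KITCHEN_* variables, might look like this (exact variable handling may differ from the original):

```ruby
# Hypothetical spec/spec_helper.rb: configure Serverspec's ssh backend from the
# KITCHEN_* environment variables passed in by the test-kitchen shell verifier.
require 'serverspec'
require 'net/ssh'

set :backend, :ssh

host    = ENV['KITCHEN_HOSTNAME']
options = Net::SSH::Config.for(host)

options[:user] = ENV['KITCHEN_USERNAME']       if ENV['KITCHEN_USERNAME']
options[:port] = ENV['KITCHEN_PORT'].to_i      if ENV['KITCHEN_PORT']
options[:keys] = [ENV['KITCHEN_SSH_KEY']]      if ENV['KITCHEN_SSH_KEY']

set :host, host
set :ssh_options, options
```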
24. Integrating Serverspec with Ansible - AnsibleSpec
● What AnsibleSpec is -
○ A Ruby gem (ansible_spec) that acts as an Ansible config parser.
○ Makes Ansible variables available for use in Serverspec tests.
○ Supports host patterns, ranges, and dynamic inventory sources.
○ Works with multiple roles and spec files.
○ Designed to validate deployments.
● What AnsibleSpec isn’t -
○ It cannot render Jinja templates.
○ It does have some open issues.
○ It does not play well with test-kitchen.*
25. Installing and Configuring Ansiblespec
● Add ansible_spec to your Gemfile, and execute bundle install.
● The command ansiblespec-init will create a default spec and Rakefile.
● Set 3 environment variables: PLAYBOOK, INVENTORY, and HASH_BEHAVIOUR.
○ PLAYBOOK - Path to the Ansible playbook you wish to use.
○ INVENTORY - Path to the Ansible Inventory associated with the playbook.
○ HASH_BEHAVIOUR - The merge behaviour for Ansible (options: replace or merge)
● These variables may also be set in an AnsibleSpec dot file (.ansiblespec)
That’s It!
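A .ansiblespec file equivalent to those three environment variables might look like the sketch below; the playbook and inventory paths are illustrative:

```yaml
# Hypothetical .ansiblespec at the project root.
---
- playbook: site.yml
  inventory: hosts
  hash_behaviour: merge
```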
26. Using AnsibleSpec with Serverspec
● Ansible Variables are accessed via the property hash.
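For example, a spec could reference a playbook variable through that property hash roughly like this; nginx_port is an illustrative variable name, not one from the deck:

```ruby
# Hypothetical spec using an Ansible variable exposed by ansible_spec.
require 'spec_helper'

describe port(property['nginx_port']) do
  it { should be_listening }
end
```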
27. Faking AnsibleSpec for use with Test-Kitchen
Ansiblespec’s behaviour can be simulated with a slightly more complicated spec helper.
28. Automating Test-Kitchen with Rake
● Rake is like make, but for Ruby.
● It uses a ‘Rakefile’ to define tasks.
● Tasks are written in standard Ruby syntax, no XML or other dependencies required.
29. Rake and Test Kitchen
● The kitchen config (.kitchen.yml) is loaded and logger defined.
● The tasks are defined and instances filtered by suite name.
● For each instance, the test-kitchen action test will be called.
The rake task can then be called and that specific test suite will be executed on all defined platforms.
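A bare-bones Rakefile following that pattern might look like this sketch; the 'default' suite name and config path are assumptions:

```ruby
# Hypothetical Rakefile: load the kitchen config, filter instances by suite, run `test`.
require 'kitchen'

desc 'Run the default suite on every platform'
task :default_suite do
  Kitchen.logger = Kitchen.default_file_logger
  config = Kitchen::Config.new(
    loader: Kitchen::Loader::YAML.new(project_config: './.kitchen.yml')
  )
  config.instances
        .select { |instance| instance.name.start_with?('default-') }
        .each   { |instance| instance.test(:always) }
end
```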
30. Speeding up Rake Tasks with Concurrency
● Concurrency is achieved by spawning each instance in a separate thread.
● Threads are managed via the concurrency variable being passed at run-time as an environment variable.
● Moving task execution to the task_runner function has the added bonus of cleaning up the code.
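A concurrent variant of the same task might look like the following sketch, where the thread count comes from an environment variable; the task and variable names are illustrative:

```ruby
# Hypothetical concurrent Rakefile task: run each instance's test in its own thread,
# limiting the number of simultaneous threads via the CONCURRENCY env var.
require 'kitchen'

def task_runner(instances, concurrency)
  instances.each_slice(concurrency) do |batch|
    batch.map { |instance| Thread.new { instance.test(:always) } }
         .each(&:join)
  end
end

desc 'Run the default suite with limited concurrency'
task :default_suite_concurrent do
  Kitchen.logger = Kitchen.default_file_logger
  config = Kitchen::Config.new(
    loader: Kitchen::Loader::YAML.new(project_config: './.kitchen.yml')
  )
  instances = config.instances.select { |i| i.name.start_with?('default-') }
  task_runner(instances, (ENV['CONCURRENCY'] || 4).to_i)
end
```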
31. Rake, Test-Kitchen, and AWS
Adding basic AWS support is as easy as requiring ‘aws-sdk’ and loading the .kitchen.cloud.yml.
...However, AWS has its own set of issues that should be addressed before using it in an automated fashion.
32. Rake, Test-Kitchen, and AWS Gotchas
● API request limit triggers failed tasks.
● Tests can occasionally fail due to resource scarcity when using small instance types (t2.micro).
● Async SCP transfer errors can unexpectedly fail the test (error: SCP upload failed (open failed (1))).
● Instance EBS volumes without the flag delete_on_termination: true WILL PERSIST when an instance is deleted.
● When executing concurrent tests and a failure occurs, it’s possible test-kitchen will not be aware of the instance, and it will NOT be possible for test-kitchen to delete it through standard means.
All can be dealt with….
33. Failure due to API Request Limit
● “Request limit exceeded.” Error.
● Decrease concurrency count (suggested value: 8)
● Open PR to wrap commands in a retryable block:
○ https://github.com/test-kitchen/kitchen-ec2/pull/305
34. Failing due to Instance Resource Starvation...
● t2.micro systems start with 30 CPU Credits (1 credit == 1 core @ 100% for 1 minute), and have a baseline performance of 10%.
● Creating over 100 instances in a 24-hour period WILL consume the initial CPU Credits.
● Use CloudWatch to monitor your t2 instance credits with the following metrics:
○ CPUCreditUsage
○ CPUCreditBalance
● Use an appropriate instance size.
● Purchase more instance credits.
T2 instance description: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html
CloudWatch Aggregating Stats: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/GetSingleMetricAllDimensions.html
35. Failures with SCP Concurrency
● Tune the transport setting max_ssh_sessions in the kitchen config file (defaults to 9).
○ ONLY available in test-kitchen 1.9.2+
● Increase the value of MaxSessions in the instance’s sshd_config if customizing your own AMI.
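In the kitchen config that tuning is a one-line change; a sketch (the value 3 is an arbitrary example):

```yaml
# Hypothetical transport section of .kitchen.yml (test-kitchen 1.9.2+).
transport:
  max_ssh_sessions: 3   # default is 9; lower it if SCP uploads fail under concurrency
```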
36. Volumes Persisting Deletes
● Add the delete_on_termination: true flag to the ebs volume config.
● Must be done for the public CentOS AMI.
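With the kitchen-ec2 driver that flag lives in the block device mapping; a sketch, where the AMI ID, device name, and volume size are placeholders:

```yaml
# Hypothetical kitchen-ec2 driver snippet ensuring the root EBS volume is deleted
# when the instance terminates.
driver:
  name: ec2
  image_id: ami-xxxxxxxx          # e.g. the public CentOS AMI
  block_device_mappings:
    - device_name: /dev/sda1
      ebs:
        volume_size: 10
        delete_on_termination: true
```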
37. Instances Persisting After a Failure
There are multiple methods of destroying objects in AWS after a failure.
...However only ONE way has proven to be truly reliable..
38.
39. ...by Destroying Everything in the Security Group
1. Filter instances for any in the security group that are in a running or pending state.
2. Generate a list of volumes attached to those instances.
3. Send a terminate signal to each instance.
4. Poll until all instances are destroyed.
5. Iterate through the volumes gathered earlier and send a signal to destroy them.
6. Poll until all volumes are destroyed.
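A rough sketch of that cleanup, using the aws-sdk gem (v2-era API) and the same AWS_* environment variables the deck later sets for Travis, might look like the following; this is an illustration of the steps above, not the original script, so test it carefully before pointing it at a real account:

```ruby
# Hypothetical cleanup: terminate every running/pending instance in a security group,
# then delete the EBS volumes that were attached to them.
require 'aws-sdk'

ec2 = Aws::EC2::Resource.new(region: ENV['AWS_REGION'])

instances = ec2.instances(filters: [
  { name: 'instance.group-id',   values: [ENV['AWS_SGROUP_ID']] },
  { name: 'instance-state-name', values: %w[running pending] }
]).to_a

# Record attached volume IDs before terminating, since termination detaches them.
volume_ids = instances.flat_map do |instance|
  instance.block_device_mappings.map { |mapping| mapping.ebs.volume_id }
end

instances.each(&:terminate)
instances.each(&:wait_until_terminated)

volume_ids.each do |volume_id|
  begin
    volume = ec2.volume(volume_id)
    volume.delete if volume.state == 'available'
  rescue Aws::EC2::Errors::ServiceError
    # volume already gone (e.g. delete_on_termination was set) -- nothing to do
  end
end
```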
40. Bringing it all Together...CI/CD Integration
● Many different CI/CD systems, and all integrate differently: TravisCI, Jenkins, CircleCI, Drone.io, Concourse, etc.
● Common components among all of them for working with test-kitchen:
○ Set environment variables - AWS variables
○ Execute command - Rake command
○ Signal Success/Fail - End result
41. TravisCI
● Only integrates with GitHub.
● Free for public repos.
● Ansible Galaxy’s default.
● Incredibly easy to use.
● Tests branches and Pull Requests.
● Execution controlled via a .travis.yml file located at the root of the git repo.
● Caveats:
○ No advanced reporting; either pass/fail by exit code.
○ Builds running more than 50 minutes will be killed.
○ Log size limit of 4MB; if the log is going to be larger, a wrapper script (example: https://gist.github.com/roidrage/5238585) is required.
https://travis-ci.org/
42. Travis File Config
● https://docs.travis-ci.com/
● Environment variables that are not considered secret can be placed in the travis file without encryption.
● Most items are done as lists and not key/value pairs.
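Tying the Travis and Rake pieces together, a .travis.yml sketch might look like this; the Ruby version and rake task name are assumptions carried over from the earlier Rakefile sketch:

```yaml
# Hypothetical .travis.yml driving the test-kitchen suite via Rake.
language: ruby
rvm:
  - 2.3
env:
  global:
    - CONCURRENCY=4            # non-secret variables can sit here unencrypted
install:
  - bundle install
script:
  - bundle exec rake default_suite_concurrent
```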
43. Adding Secrets to Travis
1. Install and configure the travis CLI - https://github.com/travis-ci/travis.rb
2. Add secrets to the travis config with: travis encrypt <env var name>=value --add. This should include AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SSH_KEY_ID, AWS_SGROUP_ID, AWS_REGION, and AWS_AVAILABILITY_ZONE.
3. Encrypting files is similar. To encrypt the AWS ssh key, use the following: travis encrypt-file <filename> --add. This will create a new encrypted file ending with the .enc extension.
Travis generates a pair of RSA keys on a per-repo basis that can be used to store secrets in a public repo securely.
BE CERTAIN YOU DO NOT ADD YOUR UNENCRYPTED FILES TO THE GIT REPO!