Kubernetes advanced scheduling
- Taints and tolerations
- Affinity (node & inter-pod)
Learn how to place Pods on the same or different node, rack, zone, or region.
3. Taint and toleration
Taints
● It allows rejecting deployment of pods to certain node by adding taints to
node. (ex : Master node)
● Format : <key>=<value>:<effect>
● Taints effect
○ NoSchedule: pods won’t be scheduled to the node if they don’t tolerate the taint
○ PreferNoSchedule: a soft version of NoSchedule; the scheduler will try to avoid
scheduling the pod to the node, but will still schedule it there if it can’t be scheduled
somewhere else
○ NoExecute: the pod is evicted from the node if it is already running on the node, and is not
scheduled onto the node if it is not yet running on it
■ tolerationSeconds : if the pod is running and a matching taint is added to the node, the pod will
stay bound to the node for the given number of seconds (e.g. 3600) and then be evicted. If the taint is
removed before that time, the pod will not be evicted. (In other words, this bounds how long the
toleration stays in effect; once the period elapses, the Pod is evicted.)
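As a sketch of how the pieces above fit together: a taint can be added with `kubectl taint nodes node1 key1=value1:NoExecute`, and a pod that should stay on that node for up to an hour after the taint appears can declare a matching toleration. The node name, taint key, and value here are illustrative:

```yaml
# Illustrative pod tolerating the taint key1=value1:NoExecute
# for up to 3600 seconds before being evicted.
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo
spec:
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoExecute"
    tolerationSeconds: 3600
  containers:
  - name: main
    image: k8s.gcr.io/pause:2.0
```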
5. Taint and toleration
Toleration
● It is applied to a Pod and allows the pod to be deployed to nodes with
matching taints
● Exceptional case
6. Taint and toleration
Source : https://livebook.manning.com/#!/book/kubernetes-in-action/chapter-16/section-16-3-1
7. Node affinity
● cf. node selector : specifies that a pod may only be deployed on nodes
whose labels match (hard affinity)
● Affinity
○ Provides scheduling affinity to nodes based on labels
○ Compared to node selector
■ It can express “soft/preference”-based affinity (cf. node selector is hard affinity =
must)
■ Label selection is more expressive (not just an AND of exact matches)
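For contrast with affinity, a minimal node selector (hard) example; the `disktype=ssd` label is an assumption for illustration:

```yaml
# nodeSelector: the pod is only schedulable on nodes
# carrying the (illustrative) label disktype=ssd.
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: main
    image: k8s.gcr.io/pause:2.0
```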
8. Node affinity
Node affinity (beta in 1.10)
● Hard affinity :
requiredDuringSchedulingIgnoredDuringExecution
Same as node selector (but with a more expressive node-selection
syntax)
● Soft affinity :
preferredDuringSchedulingIgnoredDuringExecution
Try to deploy the pod to a node that matches the selector; if that is
not possible, then deploy it elsewhere
pod-with-node-affinity.yaml docs/concepts/configuration
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
The pod may only run on nodes with a label whose key is “kubernetes.io/e2e-az-name” and whose value is either e2e-az1 or e2e-az2.
In addition, among nodes that meet that criterion, nodes with a label whose key is
another-node-label-key and whose value is another-node-label-value are preferred.
From : https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
9. Node affinity
Node affinity & Selector-SpreadPriority
● If you create 5 pods with node affinity in a 2-node cluster, you might expect all 5 pods to be created on one node,
but 1 of the 5 pods is created on node 2.
The reason is that besides the node-affinity prioritization function, the Scheduler also uses other prioritization
functions to decide where to schedule a Pod. One of them is the Selector-SpreadPriority function, which makes sure pods
belonging to the same ReplicaSet are spread across different nodes so a node failure won’t bring the whole
service down.
(Reference : Kubernetes in Action/Manning Chapter 16 page 468)
10. Pod affinity
Inter pod affinity (beta in 1.10)
● Node affinity : affinity between a pod and nodes
● Pod affinity : affinity between pods themselves, based on the labels of pods that are already
running (same node or different node)
● Like node affinity, it comes in two forms (hard/soft)
○ (Hard) requiredDuringSchedulingIgnoredDuringExecution
○ (Soft) preferredDuringSchedulingIgnoredDuringExecution
● It requires a topologyKey and a labelSelector
● Use cases
○ Run a DB pod in the same rack as the backend server
○ Run a front-end server in the same zone as the backend server
○ Run clustered instances on different nodes
11. Pod affinity
Inter pod affinity example
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: failure-domain.beta.kubernetes.io/zone
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: kubernetes.io/hostname
  containers:
  - name: with-pod-affinity
    image: k8s.gcr.io/pause:2.0
Pod (with-pod-affinity)
● Hard label selector : “S1” in “security”
● Soft label selector (weight=100) : “S2” in “security”
● topologyKey : kubernetes.io/hostname

● Node 1 (hostname=server1), running Pod-1 with label S1:”security”
→ 2nd priority (meets only the hard requirement)
● Node 2 (hostname=server2), running Pod-2 with labels S1:”security”, S2:”security”
→ top priority (meets both the hard and the soft requirement)
● Node 3 (hostname=server3), running Pod-3 with labels S1:”non-secure”, S2:”security”
→ will not be selected (it doesn’t meet the hard requirement)
12. Pod affinity
Pod Anti-Affinity
Deploys pods to different places. Use podAntiAffinity instead of podAffinity.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 5
  template:
    …
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app: frontend
Ref : Kubernetes in Action book, p. 475
13. Pod affinity
Pod Anti-Affinity
As you can see, only two pods were scheduled: one to node1, the other to
node2. The three remaining pods are all Pending, because the Scheduler isn’t
allowed to schedule them to the same nodes.
Ref : Kubernetes in Action book, p. 475
14. Pod affinity & topologyKey
topologyKey
● The key of a node label
● Node affinity only selects nodes directly. Pod affinity with the hostname key selects the same node
as another pod; topologyKey extends the notion of “same place” from the same node to the same rack, same
zone, or same region.
● How it works
When the scheduler is deciding where to deploy a pod based on the affinity settings, it first finds the
node(s) matching the affinity and reads the topologyKey label value from that node. The pod is then deployed
to one of the nodes that carries the matching topologyKey value.
15. Pod affinity & topologyKey
topologyKey example
Node-1, Node-2, Node-3 : Label = zone:z1
Node-4, Node-5, Node-6 : Label = zone:z2
Pod : Label = app:mongodb (running on Node-2)
Pod (node.js) : label selector app=mongodb, topologyKey : zone

Assume that pod(node.js) has affinity with the label selector
app=mongodb, and the matching pod is running on Node-2.
pod(node.js) has “zone” as its topologyKey, so it reads the
zone value “z1” from Node-2. Based on that value, pod(node.js)
will be scheduled to one of the nodes that have zone=z1
(Node-1, Node-2, Node-3).
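The example above could be expressed as the following manifest. This is a sketch: the pod name, container, and the literal node-label key `zone` are assumptions matching the diagram:

```yaml
# Illustrative sketch of the example above: schedule the node.js pod
# into the same zone as a running pod labeled app=mongodb.
apiVersion: v1
kind: Pod
metadata:
  name: nodejs
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - mongodb
        topologyKey: zone
  containers:
  - name: main
    image: k8s.gcr.io/pause:2.0
```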
NoSchedule : pods that do not tolerate this taint are not scheduled on the node.
PreferNoSchedule : Kubernetes avoids scheduling pods that do not tolerate this taint on the node.
NoExecute : a pod that does not tolerate this taint is evicted from the node if it is already running there, and is not scheduled onto the node otherwise.
Because not just any Pod should be deployed to the master node, taints are set on it.
Tolerations are defined on a Pod so that the pod can be deployed to nodes whose taints match.