This talk provides a 101 introduction to Kubernetes from a user point of view.
Aimed at service providers, it was presented at the GPN Annual Meeting 2019. https://conferences.k-state.edu/gpn/
Author: Oleg Chunikhin, www.eastbanctech.com
Kubernetes is a portable open source system for managing and orchestrating containerized cluster applications. Kubernetes solves a number of DevOps related problems out of the box in a simple and unified way – rolling updates and update rollback, canary deployment and other complicated deployment scenarios, scaling, load balancing, service discovery, logging, monitoring, persistent storage management, and much more. You will learn how in less than 30 minutes a reliable self-healing production-ready Kubernetes cluster may be deployed on AWS and used to host and operate multiple environments and applications.
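Many of the capabilities listed above are driven declaratively. As a minimal sketch (the name, labels, and image below are illustrative, not from the talk), a Deployment that gets scaling, rolling updates, and rollback out of the box might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # illustrative name
spec:
  replicas: 3              # scaling: desired number of Pod copies
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # rolling update with bounded disruption
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25  # illustrative container image
```

Applying it with `kubectl apply -f deployment.yaml` creates the Pods; `kubectl rollout undo deployment/web` rolls back a bad update.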
** Kubernetes Certification Training: https://www.edureka.co/kubernetes-certification **
This Edureka tutorial on "Kubernetes Architecture" will give you an introduction to the popular DevOps tool Kubernetes, and will deep dive into the Kubernetes architecture and how it works. The following topics are covered in this training session:
1. What is Kubernetes
2. Features of Kubernetes
3. Kubernetes Architecture and Its Components
4. Components of Master Node and Worker Node
5. ETCD
6. Network Setup Requirements
DevOps Tutorial Blog Series: https://goo.gl/P0zAfF
Traditional virtualization technologies have been used by cloud infrastructure providers for many years to provide isolated environments for hosting applications. These technologies use full-blown operating system images to create virtual machines (VMs). In this architecture, each VM needs its own guest operating system to run application processes. More recently, with the introduction of the Docker project, Linux container (LXC) virtualization technology became popular and attracted widespread attention. Unlike VMs, containers do not need a dedicated guest operating system to provide OS-level isolation; rather, they can provide the same level of isolation on top of a single operating system instance.
An enterprise application may need to run a server cluster to handle high request volumes. Running an entire server cluster in Docker containers on a single Docker host would introduce a single point of failure. Google started the Kubernetes project to solve this problem. Kubernetes manages Docker containers across a cluster of Docker hosts, providing an API on top of the Docker API with many more features.
This presentation covers how the application deployment model evolved from bare-metal servers to the Kubernetes world.
In addition to the theory, you will find URLs for free Katacoda workshops with hands-on exercises to help you understand the details of each topic.
Kubernetes for Beginners: An Introductory Guide (Bytemark)
An introduction to Kubernetes for beginners. Includes the definition, architecture, benefits and misconceptions of Kubernetes. Written in plain English, ideal for both developers and non-developers who are new to Kubernetes.
Find out more about Kubernetes at Bytemark here: https://www.bytemark.co.uk/managed-kubernetes/
A basic introductory slide set on Kubernetes: What does Kubernetes do, what does Kubernetes not do, which terms are used (Containers, Pods, Services, Replica Sets, Deployments, etc...) and how basic interaction with a Kubernetes cluster is done.
If you’re working with just a few containers, managing them isn't too complicated. But what if you have hundreds or thousands? Think about having to handle multiple upgrades for each container, keeping track of container and node state, available resources, and more. That’s where Kubernetes comes in. Kubernetes is an open source container management platform that helps you run containers at scale. This talk will cover Kubernetes components and show how to run applications on it.
An in-depth overview of Kubernetes and its various components.
NOTE: This is a fixed version of a previous presentation (a draft was uploaded with some errors)
Related Source Code https://github.com/abdennour/meetup-deployment-k8s
Intro
Why Deployment?
What's Deployment?
How Deployment?
Deployment Strategies (in general & in k8s)
Deployment Features
Demo (distributed)
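As a sketch of what the k8s side of the strategy discussion covers (the field values here are illustrative assumptions), the `strategy` stanza of a Deployment selects between the two built-in strategies:

```yaml
spec:
  strategy:
    type: RollingUpdate    # default; Recreate is the alternative
    rollingUpdate:
      maxSurge: 1          # at most one extra Pod above the desired count
      maxUnavailable: 0    # never drop below the desired count mid-rollout
```

Recreate tears all old Pods down before starting new ones; RollingUpdate replaces Pods gradually within the maxSurge/maxUnavailable bounds.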
An intro to Helm's capabilities and how it makes upgrades and rollbacks, packaging and sharing, and managing complex dependencies for K8s applications easier.
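For illustration only (the chart name, dependency, and versions are assumptions, not from the talk), a Helm chart declares such dependencies in its Chart.yaml:

```yaml
apiVersion: v2
name: my-app              # illustrative chart name
version: 0.1.0
dependencies:
  - name: postgresql      # illustrative dependency chart
    version: "12.x.x"
    repository: https://charts.bitnami.com/bitnami
```

`helm upgrade` and `helm rollback` then operate on the whole release, dependencies included.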
In this session, we will discuss the architecture of a Kubernetes cluster. We will go through all the master and worker components of a Kubernetes cluster and cover the basic terminology, such as Pods, Deployments, and Services. We will also cover networking inside Kubernetes. In the end, we will discuss the options available for setting up a Kubernetes cluster.
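As a quick anchor for that terminology (the names and image are illustrative), the smallest deployable unit, a Pod, is declared like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello
  labels:
    app: hello          # labels are how Deployments and Services find Pods
spec:
  containers:
  - name: hello
    image: nginx:1.25   # illustrative container image
    ports:
    - containerPort: 80
```

In practice Pods are rarely created directly; a Deployment manages their lifecycle and a Service exposes them.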
Cloud-Native Application and Kubernetes (Alex Glikson)
Guest lecture on Cloud-Native Applications and Kubernetes, in the Advanced Cloud Computing course (15-719) at Carnegie Mellon University, February 2019.
OSDC 2018 | Three years running containers with Kubernetes in Production by T... (NETWAYS)
The talk gives a state-of-the-art update on experiences with deploying applications on Kubernetes at scale. Whether in the cloud or on premises, Kubernetes has taken over the leading role as a container operating system. The central paradigm of stateless containers connected to storage and services is the core of Kubernetes. However, it can be extended to distributed databases, machine learning, and even Windows VMs in Kubernetes. All of these applications were considered edge cases a few years ago, yet they are going more and more mainstream today.
Method of NUMA-Aware Resource Management for Kubernetes 5G NFV Cluster (byonggon chun)
Introduces the container runtime environment set up with Kubernetes and various CRI runtimes (Docker, containerd, CRI-O), the method of NUMA-aware resource management (CPU Manager, Topology Manager, etc.) for CNFs (Containerized Network Functions) within Kubernetes, and related issues.
Serverless frameworks are changing the way we do computing. In the open source container world, Kubernetes is playing a pivotal role in manifesting this. This presentation will go deep into various features of Kubernetes to create serverless functions.
It also includes a comparative study of the various serverless frameworks available in the open source world, such as Kubeless, Fission, and Funktion, and concludes with an implementation demo and some real-world use cases.
Presented in serverless summit 2017: www.inserverless.com
Kubernetes for FaaS (Function as a Service): serverless evolution, some basic constructs, Kubernetes features, and comparisons, from the Serverless conference 2017 in Bangalore.
An RSVP app designed to be deployed with Docker on a Kubernetes Minikube cluster. The front end uses the Flask framework, with MongoDB as the backend database.
YouTube video: https://youtu.be/KnjnQj-FvfQ
Modern big data and machine learning in the era of cloud, docker and kubernetes (Slim Baltagi)
There is a major shift in web and mobile application architecture from the ‘old-school’ one to a modern ‘micro-services’ architecture based on containers. Kubernetes has been quite successful in managing those containers and running them in distributed computing environments.
Now enabling Big Data and Machine Learning on Kubernetes will allow IT organizations to standardize on the same Kubernetes infrastructure. This will propel adoption and reduce costs.
Kubeflow is an open source framework dedicated to making it easy to use the machine learning tool of your choice and deploy your ML applications at scale on Kubernetes. Kubeflow is becoming an industry standard as well!
Both Kubernetes and Kubeflow will enable IT organizations to focus more effort on applications rather than infrastructure.
Watch this presentation and learn about Kubernetes Networking:
How to build modern, cloud-friendly applications in an agile fashion without having to know subnets and IP addresses.
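A minimal sketch of that idea (the names and namespace are illustrative): a client never hard-codes an IP address; it addresses a Service through cluster DNS, and Kubernetes routes the traffic:

```yaml
# The Service below is reachable from any Pod in the cluster as
#   http://api.prod.svc.cluster.local
# so application code only needs the stable DNS name, never an IP.
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: prod
spec:
  selector:
    app: api          # traffic goes to Pods labeled app=api
  ports:
  - port: 80          # port exposed by the Service
    targetPort: 8080  # port the containers listen on
```

Because the selector decouples callers from individual Pods, Pods can be rescheduled or scaled without any client-side reconfiguration.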
Mattia Gandolfi - Improving utilization and portability with Containers and C... (Codemotion)
Google has pioneered the usage of containers at huge scale. Learn how we designed our systems to handle insane traffic loads, orchestrating complex, globally distributed applications, and how you can leverage this infrastructure and our agile development technologies to embrace the power of DevOps and Cloud on our Google Cloud Platform.
Deploy at scale with CoreOS Kubernetes and Apache Stratos (Chris Haddad)
Platform-as-a-Service (PaaS) streamlines DevOps and allows developers to focus on application development. The PaaS handles provisioning, scaling, high availability, and tenancy.
Integration with the Docker platform, the CoreOS Linux distribution, and the Kubernetes container management system brings more scalability and flexibility to a PaaS. This session will include installing and deploying sample applications using Docker, CoreOS, and Kubernetes, and a walkthrough of how it can be extended to support new application containers.
This event is in collaboration and hosted in and by the Khobar PyData meetup.
Registration will not be here but on the PyData meetup page https://www.meetup.com/PyDataKhobar/events/268654243/
Modern applications: Do you want to start your cloud-native journey, building modern applications that are portable, failure resilient, and behave consistently in a repeatable way? Have you heard of containers? Docker? Kubernetes? Come get introduced to containers and how to manage and run them at scale to deploy modern-day applications; come practice, share knowledge, and have fun with Docker and Kubernetes. Better have your notebook fully charged!
Containers are becoming a fundamental technology skill to master for any job: DevOps Engineer, Software Engineer, Data Engineer or Data Scientist.
This meetup will try to answer the questions of why and how the container revolution came about by providing a short history of container technologies, with a hands-on introduction to Docker and docker-compose. We will show why the portability of containers is so important in running the same application in multiple environments.
The last section of the meetup will consist of a hands-on demonstration of the most popular container orchestration technology today, Kubernetes.
Comparing single-node and multi-node performance of an important fusion HPC c... (Igor Sfiligoi)
Fusion simulations have traditionally required the use of leadership scale High Performance Computing (HPC) resources in order to produce advances in physics. The impressive improvements in compute and memory capacity of many-GPU compute nodes are now allowing for some problems that once required a multi-node setup to be also solvable on a single node. When possible, the increased interconnect bandwidth can result in order of magnitude higher science throughput, especially for communication-heavy applications. In this paper we analyze the performance of the fusion simulation tool CGYRO, an Eulerian gyrokinetic turbulence solver designed and optimized for collisional, electromagnetic, multiscale simulation, which is widely used in the fusion research community. Due to the nature of the problem, the application has to work on a large multi-dimensional computational mesh as a whole, requiring frequent exchange of large amounts of data between the compute processes. In particular, we show that the average-scale nl03 benchmark CGYRO simulation can be run at an acceptable speed on a single Google Cloud instance with 16 A100 GPUs, outperforming 8 NERSC Perlmutter Phase1 nodes, 16 ORNL Summit nodes and 256 NERSC Cori nodes. Moving from a multi-node to a single-node GPU setup we get comparable simulation times using less than half the number of GPUs. Larger benchmark problems, however, still require a multi-node HPC setup due to GPU memory capacity needs, since at the time of writing no vendor offers nodes with a sufficient GPU memory setup. The upcoming external NVSWITCH does however promise to deliver an almost equivalent solution for up to 256 NVIDIA GPUs.
Presented at PEARC22.
Paper DOI: https://doi.org/10.1145/3491418.3535130
The anachronism of whole-GPU accounting (Igor Sfiligoi)
NVIDIA has been making steady progress in increasing the compute performance of its GPUs, resulting in order of magnitude compute throughput improvements over the years. With several models of GPUs coexisting in many deployments, the traditional accounting method of treating all GPUs as being equal is not reflecting compute output anymore. Moreover, for applications that require significant CPU-based compute to complement the GPU-based compute, it is becoming harder and harder to make full use of the newer GPUs, requiring sharing of those GPUs between multiple applications in order to maximize the achievable science output. This further reduces the value of whole-GPU accounting, especially when the sharing is done at the infrastructure level. We thus argue that GPU accounting for throughput-oriented infrastructures should be expressed in GPU core hours, much like it is normally done for the CPUs. While GPU core compute throughput does change between GPU generations, the variability is similar to what we expect to see among CPU cores. To validate our position, we present an extensive set of run time measurements of two IceCube photon propagation workflows on 14 GPU models, using both on-prem and Cloud resources. The measurements also outline the influence of GPU sharing at both HTCondor and Kubernetes infrastructure level.
Presented at PEARC22.
Document DOI: https://doi.org/10.1145/3491418.3535125
Auto-scaling HTCondor pools using Kubernetes compute resources (Igor Sfiligoi)
HTCondor has been very successful in managing globally distributed, pleasantly parallel scientific workloads, especially as part of the Open Science Grid. HTCondor system design makes it ideal for integrating compute resources provisioned from anywhere, but it has very limited native support for autonomously provisioning resources managed by other solutions. This work presents a solution that allows for autonomous, demand-driven provisioning of Kubernetes-managed resources. A high-level overview of the employed architectures is presented, paired with the description of the setups used in both on-prem and Cloud deployments in support of several Open Science Grid communities. The experience suggests that the described solution should be generally suitable for contributing Kubernetes-based resources to existing HTCondor pools.
Presented at PEARC22.
Paper DOI: https://doi.org/10.1145/3491418.3535123
Performance Optimization of CGYRO for Multiscale Turbulence Simulations (Igor Sfiligoi)
Overview of the recent performance optimization of CGYRO, an Eulerian gyrokinetic fusion plasma solver, with emphasis on multiscale turbulence simulations.
Presented at the joint US-Japan Workshop on Exascale Computing Collaboration and the 6th workshop of the US-Japan Joint Institute for Fusion Theory (JIFT) program (Jan 18, 2022).
Comparing GPU effectiveness for Unifrac distance compute (Igor Sfiligoi)
Poster presented at PEARC21.
The poster contains the complete scaling plots for both unweighted and weighted normalized UniFrac compute for sample sizes ranging from 1k to 307k on both GPUs and CPUs.
Managing Cloud networking costs for data-intensive applications by provisioni... (Igor Sfiligoi)
Presented at PEARC21.
Many scientific high-throughput applications can benefit from the elastic nature of Cloud resources, especially when there is a need to reduce time to completion. Cost considerations are usually a major issue in such endeavors, with networking often a major component; for data-intensive applications, egress networking costs can exceed the compute costs. Dedicated network links provide a way to lower the networking costs, but they do add complexity. In this paper we provide a description of a 100 fp32 PFLOPS Cloud burst in support of IceCube production compute, that used Internet2 Cloud Connect service to provision several logically-dedicated network links from the three major Cloud providers, namely Amazon Web Services, Microsoft Azure and Google Cloud Platform, that in aggregate enabled approximately 100 Gbps egress capability to on-prem storage. It provides technical details about the provisioning process, the benefits and limitations of such a setup and an analysis of the costs incurred.
Accelerating Key Bioinformatics Tasks 100-fold by Improving Memory Access (Igor Sfiligoi)
Presented at PEARC21.
Most experimental sciences now rely on computing, and biological sciences are no exception. As datasets get bigger, so do the computing costs, making proper optimization of the codes used by scientists increasingly important. Many of the codes developed in recent years are based on the Python-based NumPy, due to its ease of use and good performance characteristics. The composable nature of NumPy, however, does not generally play well with the multi-tier nature of modern CPUs, making any non-trivial multi-step algorithm limited by the external memory access speeds, which are hundreds of times slower than the CPU's compute capabilities. In order to fully utilize the CPU compute capabilities, one must keep the working memory footprint small enough to fit in the CPU caches, which requires splitting the problem into smaller portions and fusing together as many steps as possible. In this paper, we present changes based on these principles to two important functions in the scikit-bio library, principal coordinates analysis and the Mantel test, that resulted in over 100x speed improvement in these widely used, general-purpose tools.
Using A100 MIG to Scale Astronomy Scientific Output (Igor Sfiligoi)
Presented at GTC21.
The raw computing power of GPUs has been steadily increasing, significantly outpacing the CPU gains. This poses a problem for many GPU-enabled scientific applications that use CPU code paths to feed data to the GPU code, resulting in lower GPU utilization, and thus reduced gains in scientific output. Applications that are high-throughput in nature, such as astronomy-focused IceCube and LIGO, can partially work around the problem by running several instances of the executable on the same GPU. This approach, however, is sub-optimal both in terms of application performance and workflow management complexity. The recently introduced Multi-Instance GPU (MIG) capability, available on the NVIDIA A100 GPU, provides a much cleaner and easier-to-use alternative by allowing the logical slicing of the powerful GPU and assigning different slices to different applications. And at least in the case of IceCube, it can provide over 3x more scientific output on the same hardware.
Using commercial Clouds to process IceCube jobs (Igor Sfiligoi)
Presented at EDUCAUSE CCCG March 2021.
The IceCube Neutrino Observatory is the world's premier facility to detect neutrinos. Built at the South Pole in natural ice, it requires extensive and expensive calibration to properly track the neutrinos. Most of the required compute power comes from on-prem resources through the Open Science Grid, but IceCube can easily harness Cloud compute at any scale, too, as demonstrated by a series of Cloud bursts. This talk provides details of the performed Cloud bursts as well as some insight into the science itself.
Fusion simulations have traditionally required the use of leadership-scale HPC resources in order to produce advances in physics. One such package is CGYRO, a premier tool for multiscale plasma turbulence simulation. CGYRO is a typical HPC application that will not fit into a single node, as it requires several terabytes of memory and O(100) TFLOPS of compute capability for cutting-edge simulations. CGYRO also requires high-throughput and low-latency networking, due to its reliance on global FFT computations. While in the past such compute may have required hundreds, or even thousands, of nodes, recent advances in hardware capabilities allow for just tens of nodes to deliver the necessary compute power. We explored the feasibility of running CGYRO on Cloud resources provided by Microsoft on their Azure platform, using the InfiniBand-connected HPC resources in spot mode. We observed both that CPU-only resources were very efficient, and that running in spot mode was doable, with minimal side effects. The GPU-enabled resources were less cost effective but allowed for higher scaling.
For IceCube, a large amount of photon propagation simulation is needed to properly calibrate the natural ice. The simulation is compute intensive and ideal for GPU compute. This Cloud run was more data intensive than previous ones, producing 130 TB of output data. To keep egress costs in check, we created dedicated network links via the Internet2 Cloud Connect service.
Scheduling a Kubernetes Federation with Admiralty (Igor Sfiligoi)
Presented at OSG All-Hands Meeting 2020 - USCMS-USATLAS Session.
This talk presented the PRP experience with using Admiralty as a Kubernetes federation solution, discussing why we need it, why Admiralty is the best (if not the only) solution for our needs, and how it works.
Accelerating microbiome research with OpenACC (Igor Sfiligoi)
Presented at OpenACC Summit 2020.
UniFrac is a commonly used metric in microbiome research for comparing microbiome profiles to one another. Computing UniFrac on modest sample sizes used to take a workday on a server class CPU-only node, while modern datasets would require a large compute cluster to be feasible. After porting to GPUs using OpenACC, the compute of the same modest sample size now takes only a few minutes on a single NVIDIA V100 GPU, while modern datasets can be processed on a single GPU in hours. The OpenACC programming model made the porting of the code to GPUs extremely simple; the first prototype was completed in just over a day. Getting full performance did however take much longer, since proper memory access is fundamental for this application.
Demonstrating a Pre-Exascale, Cost-Effective Multi-Cloud Environment for Scie... (Igor Sfiligoi)
Presented at PEARC20.
This talk presents the expansion of IceCube's production HTCondor pool using cost-effective GPU instances in preemptible mode gathered from the three major Cloud providers, namely Amazon Web Services, Microsoft Azure, and the Google Cloud Platform. Using this setup, we sustained about 15k GPUs for a whole workday, corresponding to around 170 PFLOP32s, integrating over one EFLOP32 hour worth of science output for a price tag of about $60k. We provide the reasoning behind Cloud instance selection, a description of the setup, an analysis of the provisioned resources, and a short description of the actual science output of the exercise.
Porting and optimizing UniFrac for GPUs (Igor Sfiligoi)
Poster presented at PEARC20.
UniFrac is a commonly used metric in microbiome research for comparing microbiome profiles to one another ("beta diversity"). The recently implemented Striped UniFrac added the capability to split the problem into many independent subproblems and exhibits near linear scaling. In this poster we describe steps undertaken in porting and optimizing Striped UniFrac to GPUs. We reduced the run time of computing UniFrac on the published Earth Microbiome Project dataset from 13 hours on an Intel Xeon E5-2680 v4 CPU to 12 minutes on an NVIDIA Tesla V100 GPU, and to about one hour on a laptop with NVIDIA GTX 1050 (with minor loss in precision). Computing UniFrac on a larger dataset containing 113k samples reduced the run time from over one month on the CPU to less than 2 hours on the V100 and 9 hours on an NVIDIA RTX 2080TI GPU (with minor loss in precision). This was achieved by using OpenACC for generating the GPU offload code and by improving the memory access patterns. A BSD-licensed implementation is available, which produces a C shared library linkable by any programming language.
Demonstrating 100 Gbps in and out of the public Clouds (Igor Sfiligoi)
Poster presented at PEARC20.
There is increased awareness and recognition that public Cloud providers do provide capabilities not found elsewhere, with elasticity being a major driver. The value of elastic scaling is however tightly coupled to the capabilities of the networks that connect all involved resources, both in the public Clouds and at the various research institutions. This poster presents results of measurements involving file transfers inside public Cloud providers, fetching data from on-prem resources into public Cloud instances and fetching data from public Cloud storage into on-prem nodes. The networking of the three major Cloud providers, namely Amazon Web Services, Microsoft Azure and the Google Cloud Platform, has been benchmarked. The on-prem nodes were managed by either the Pacific Research Platform or located at the University of Wisconsin – Madison. The observed sustained throughput was on the order of 100 Gbps in all the tests moving data in and out of the public Clouds, with throughput reaching into the Tbps range for data movements inside the public Cloud providers themselves. All the tests used HTTP as the transfer protocol.
TransAtlantic Networking using Cloud links (Igor Sfiligoi)
Scientific communities have only a limited amount of bandwidth available for transferring data between the US and the EU.
We know Cloud providers have plenty of bandwidth available, but at what cost?
Welocme to ViralQR, your best QR code generator.ViralQR
Welcome to ViralQR, your best QR code generator available on the market!
At ViralQR, we design static and dynamic QR codes. Our mission is to make business operations easier and customer engagement more powerful through the use of QR technology. Be it a small-scale business or a huge enterprise, our easy-to-use platform provides multiple choices that can be tailored according to your company's branding and marketing strategies.
Our Vision
We are here to make the process of creating QR codes easy and smooth, thus enhancing customer interaction and making business more fluid. We very strongly believe in the ability of QR codes to change the world for businesses in their interaction with customers and are set on making that technology accessible and usable far and wide.
Our Achievements
Ever since its inception, we have successfully served many clients by offering QR codes in their marketing, service delivery, and collection of feedback across various industries. Our platform has been recognized for its ease of use and amazing features, which helped a business to make QR codes.
Our Services
At ViralQR, here is a comprehensive suite of services that caters to your very needs:
Static QR Codes: Create free static QR codes. These QR codes are able to store significant information such as URLs, vCards, plain text, emails and SMS, Wi-Fi credentials, and Bitcoin addresses.
Dynamic QR codes: These also have all the advanced features but are subscription-based. They can directly link to PDF files, images, micro-landing pages, social accounts, review forms, business pages, and applications. In addition, they can be branded with CTAs, frames, patterns, colors, and logos to enhance your branding.
Pricing and Packages
Additionally, there is a 14-day free offer to ViralQR, which is an exceptional opportunity for new users to take a feel of this platform. One can easily subscribe from there and experience the full dynamic of using QR codes. The subscription plans are not only meant for business; they are priced very flexibly so that literally every business could afford to benefit from our service.
Why choose us?
ViralQR will provide services for marketing, advertising, catering, retail, and the like. The QR codes can be posted on fliers, packaging, merchandise, and banners, as well as to substitute for cash and cards in a restaurant or coffee shop. With QR codes integrated into your business, improve customer engagement and streamline operations.
Comprehensive Analytics
Subscribers of ViralQR receive detailed analytics and tracking tools in light of having a view of the core values of QR code performance. Our analytics dashboard shows aggregate views and unique views, as well as detailed information about each impression, including time, device, browser, and estimated location by city and country.
So, thank you for choosing ViralQR; we have an offer of nothing but the best in terms of QR code services to meet business diversity!
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
1. An overview of the
Kubernetes architecture
Presented by Igor Sfiligoi, UCSD
Workshop at the Great Plains Network Annual Meeting 2019
GPN Annual Meeting 2019 - Kubernetes Architecture 1
2. Outline
• Kubernetes history
• Basic building blocks
• Provided bells and whistles
• Scheduling
• User interface
3. Kubernetes
Originally created by Google
• Now maintained by the
Cloud Native Computing Foundation
https://kubernetes.io
Open source
• With a very large and active
development community
Can be deployed on-prem
• But also available out-of-the-box on
all major Clouds (GCP, AWS and Azure)
4. Container based
Containers are the basic building block
• Typically Docker based
Standard images for many applications exist
• Creating custom ones is almost trivial
Just remember: containers are stateless
• If state is needed, it must be held outside
5. Container Orchestration
• Once you have many containers running on many nodes, you need something to manage the whole system
• This is usually referred to as Orchestration
Attribution: https://kubernetes.io
6. Packing containers into pods
The smallest concept is actually the Pod
A Pod is a set of containers
• Having a single Container in a Pod is OK
Containers within a Pod are guaranteed to run alongside each other
• And they can share (ephemeral) state
[Diagram: a Pod containing two Containers]
https://kubernetes.io/docs/concepts/workloads/pods/pod/
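As a sketch, a minimal Pod manifest with two containers could look like the following (all names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # illustrative name
spec:
  containers:
  - name: web                # first container in the Pod
    image: nginx:1.17
  - name: helper             # second container; shares the Pod's network and lifetime
    image: busybox:1.30
    command: ["sleep", "3600"]
```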
7. Packing Pods into Deployments
A Pod is ephemeral
• If it terminates for whatever reason, it is gone
A Deployment is persistent
• Initially launches a single Pod (no obvious benefit)
• If a Pod is removed, a new Pod is automatically re-submitted
A Deployment can also manage multiple replicas
• E.g. for load balancing and horizontal scaling
Great for service applications
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
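A minimal Deployment manifest managing three replicas might look like this sketch (names, labels and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 3                # keep three Pod replicas running at all times
  selector:
    matchLabels:
      app: example
  template:                  # the Pod template the Deployment manages
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: web
        image: nginx:1.17
```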
8. Configuration management
Most applications need to be configured
• Kubernetes provides an easy mechanism to inject
information into the Container images at runtime
Three types of information:
• Environment variables
• Whole files
• Secrets
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
https://kubernetes.io/docs/concepts/configuration/secret/
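As an illustrative sketch, a ConfigMap value injected as an environment variable could be wired up like this (all names are made up):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  LOG_LEVEL: "debug"
---
apiVersion: v1
kind: Pod
metadata:
  name: configured-pod
spec:
  containers:
  - name: app
    image: busybox:1.30
    command: ["sh", "-c", "echo $LOG_LEVEL && sleep 3600"]
    env:
    - name: LOG_LEVEL        # injected as an environment variable at runtime
      valueFrom:
        configMapKeyRef:
          name: example-config
          key: LOG_LEVEL
```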
9. Linking to external storage
External storage is essential for persistency
• Most applications will need it!
Kubernetes provides the necessary hooks at Pod launch time
• Local storage
• Distributed storage, e.g. CEPH, NFS, etc.
• Custom filesystems via CSI – e.g. CVMFS
https://kubernetes.io/docs/concepts/storage/volumes/
https://kubernetes-csi.github.io/docs/
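A sketch of a Pod mounting external storage via a PersistentVolumeClaim (the claim name refers to a hypothetical, pre-existing object):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: storage-pod
spec:
  containers:
  - name: app
    image: busybox:1.30
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data             # where the volume appears inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-claim     # assumes a PVC of this name already exists
```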
10. Networking
Each container gets its own private IP address
A Deployment can be registered as a Service
• Gets its own IP address and DNS entry
• Traffic is routed to the Pods in the Deployment based on the selected policy (e.g. round-robin)
A Service can also serve as a NAT
• Routing traffic from the WAN using the Kubernetes public IPs
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
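A minimal Service manifest could look like this sketch (names, labels and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service        # becomes a DNS entry inside the cluster
spec:
  selector:
    app: example               # routes traffic to Pods carrying this label
  ports:
  - port: 80                   # port the Service listens on
    targetPort: 8080           # port the containers listen on
```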
11. Networking
Privileged Pods can get access to the host/public IP
• Unprivileged Pods are better for regular users, to minimize risk
Useful for network servers tied to a specific node
• E.g. due to the use of X.509
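A sketch of a Pod using the host network and pinned to one node (the node name is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostnet-pod
spec:
  hostNetwork: true            # Pod shares the node's network namespace, i.e. its host/public IP
  nodeSelector:
    kubernetes.io/hostname: node1.example.org   # pin to a specific node (hypothetical name)
  containers:
  - name: server
    image: nginx:1.17
```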
12. Pod scheduling
Kubernetes comes with a pretty decent scheduler
It will match Pods to available resources (CPU, memory, GPU, etc.)
• Nodes advertise what is available
• Pods specify what they require, and may also limit themselves to a subset of Nodes
• A Pod will start on a Node only if a match can be made
There is also a notion of Priorities
• If a match for a higher-priority Pod cannot be made, the scheduler will kill one or more lower-priority Pods to make space for it (if at all possible)
https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
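Resource requests and limits are declared per container; an illustrative sketch (the priority class is a hypothetical, pre-existing object):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sized-pod
spec:
  priorityClassName: high-priority   # assumes a PriorityClass of this name exists
  containers:
  - name: app
    image: busybox:1.30
    command: ["sleep", "3600"]
    resources:
      requests:                # what the scheduler matches against node capacity
        cpu: "500m"
        memory: "512Mi"
      limits:                  # hard caps enforced at runtime
        cpu: "1"
        memory: "1Gi"
```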
13. The DaemonSet
Sometimes an application must run on all the nodes
• E.g. a monitoring probe
The DaemonSet automates this
• Like a Deployment, but with fixed all-nodes scheduling
https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
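An illustrative DaemonSet sketch (names and image are made up):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-probe
spec:
  selector:
    matchLabels:
      app: node-probe
  template:                    # one copy of this Pod runs on every node
    metadata:
      labels:
        app: node-probe
    spec:
      containers:
      - name: probe
        image: busybox:1.30
        command: ["sleep", "3600"]
```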
14. Users and Permissions
Kubernetes does not really have a concept of a “User”
Permissions are set as part of the Namespace concept
• Anyone having access to a Namespace can operate on the objects inside that Namespace
• Including creating, monitoring and modifying them
A Namespace conceptually provides a virtual-private Kubernetes cluster
• But with very few additional restrictions within it
• And it is relatively hard to coordinate Pods across separate Namespaces
https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
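In practice, access to a Namespace is commonly granted via RBAC; an illustrative sketch (the namespace and user names are made up, and the user identity is assumed to come from the cluster's authentication setup):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-editors
  namespace: team-a
subjects:
- kind: User                   # hypothetical user from the cluster's auth system
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                   # built-in role: create/modify objects in the namespace
  apiGroup: rbac.authorization.k8s.io
```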
15. Users and Permissions
PRP Nautilus provides user management as a side concept.
https://nautilus.optiputer.net
17. YAML Everywhere
Most interactions with Kubernetes
will involve YAML documents
• Both for creating/configuring
Pods/Deployments/Services
• And for querying their (detailed) status
YAML is actually quite easy to use
• Describes itself as
“a human friendly data serialization language”
• Uses Python-style indentation
to indicate nesting
https://en.wikipedia.org/wiki/YAML
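For example, the nesting of a typical manifest fragment looks like this (all values are illustrative):

```yaml
# indentation indicates nesting, much like in Python
metadata:
  name: example
  labels:
    app: web
spec:
  replicas: 2
```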
20. Installing kubectl
kubectl is just a static binary
• Available for all major platforms
(Linux, macOS, Windows)
• Detailed download instructions at
https://kubernetes.io/docs/tasks/tools/install-kubectl/
It can be used over the WAN
• Just put the config file in
~/.kube/config
• Get yours from PRP’s Nautilus
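A skeleton of what such a ~/.kube/config file looks like (all values are placeholders):

```yaml
# skeleton kubeconfig; every value here is illustrative
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://k8s.example.org:6443     # hypothetical API server endpoint
    certificate-authority-data: <base64 CA cert>
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
current-context: my-context
users:
- name: my-user
  user:
    token: <auth token>
```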
22. Acknowledgments
This work was partially funded by
US National Science Foundation (NSF) awards
CNS-1456638, CNS-1730158,
ACI-1540112, ACI-1541349,
OAC-1826967, OAC-1450871,
OAC-1659169 and OAC-1841530.