The document provides an overview of containers and Kubernetes. It discusses the need for containers due to microservices and infrastructure as code. It then covers technical details of containers like Dockerfiles, images, and registries. It also discusses Kubernetes and its components like kube-apiserver, etcd, and kubelet. Finally, it covers Kubernetes concepts like pods, services, deployments, and how they are configured.
Kubernetes 101 - A Cluster Operating System, by mikaelbarbero
The document discusses Kubernetes, an open-source system for automating deployment, scaling, and management of containerized applications. It provides an overview of Kubernetes architecture, including control plane components that manage the cluster and worker nodes that run application containers, and how developers can deploy and manage applications on Kubernetes using kubectl commands.
This presentation covers how the application deployment model evolved from bare-metal servers to the Kubernetes world.
In addition to the theory, you will find URLs to free Katacoda workshops with hands-on exercises for each topic.
Language Server Protocol - Why the Hype? by mikaelbarbero
The Language Server Protocol, developed by Microsoft for Visual Studio Code, is a language- and IDE-agnostic protocol that cleanly separates language semantics from UI presentation. Language developers can implement the protocol and benefit from immediate support in all IDEs, while IDE developers who implement the protocol get automatic support for all of these languages without having to write any language-specific code. This session will let you learn more about the innards of the LSP. We will also have an overview of the current implementations, both inside and outside Eclipse.
This document provides an overview of the OpenStack Magnum project, which aims to provide Container as a Service (CaaS) functionality. It discusses alternatives like Nova, Heat, and Magnum's advantages. Key features of Magnum include simplified multi-tenant containers, integration with OpenStack services, and out-of-box support for Kubernetes, Docker Swarm, and Mesos. The architecture and operation of Magnum are explained, along with its integration points within OpenStack.
This is the second session of Deep Dive into Kubernetes. It includes information on optimizing Docker image size, persistent volumes, container security, and different aspects of running Kubernetes on GKE and AWS.
Hands-On Introduction to Kubernetes at LISA17, by Ryan Jarvinen
This document provides an agenda and instructions for a hands-on introduction to Kubernetes tutorial. The tutorial will cover Kubernetes basics like pods, services, deployments and replica sets. It includes steps for setting up a local Kubernetes environment using Minikube and demonstrates features like rolling updates, rollbacks and self-healing. Attendees will learn how to develop container-based applications locally with Kubernetes and deploy changes to preview them before promoting to production.
Kubernetes Deep Dive - Huawei, 2015-10, by Vishnu Kannan
Kubernetes is an open-source container orchestration system that automates deployment, scaling, and management of containerized applications. It was originally designed by Google based on years of experience running containers internally. Kubernetes runs containerized applications across multiple machines, dynamically allocating resources and balancing load. It supports both public and private cloud environments as well as bare metal servers. The system aims to simplify container operations while providing portability and scalability.
Introduction to Docker and Kubernetes. Learn how they help you build scalable and portable applications in the cloud. It introduces the basic concepts of Docker and how containers differ from virtualization, then explains the need for orchestration and includes some hands-on experiments with Docker.
This document provides an overview of Kubernetes, a container orchestration system. It begins with background on Docker containers and orchestration tools prior to Kubernetes. It then covers key Kubernetes concepts including pods, labels, replication controllers, and services. Pods are the basic deployable unit in Kubernetes, while replication controllers ensure a specified number of pods are running. Services provide discovery and load balancing for pods. The document demonstrates how Kubernetes can be used to scale, upgrade, and rollback deployments through replication controllers and services.
Continuous Delivery the Hard Way with Kubernetes, by Luke Marsden
This talk shows three increasingly advanced levels of continuous delivery with Kubernetes and GitLab (as an example), arguing for a continuous delivery architecture which has an explicit _Release Manager_ component. We then propose Flux, the open source project which powers the _Deploy_ feature of Weave Cloud, as an implementation of that idea. This approach is the precursor to GitOps.
- Introduction to Kubernetes features
- A look at Kubernetes Networking and Service Discovery
- New features in Kubernetes 1.6
- Kubernetes Installation options
To know more about our Kubernetes expertise, visit our center of excellence at: http://www.opcito.com/kubernetes/
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery called Pods. ReplicaSets ensure that a specified number of pod replicas are running at any given time. Key components include Pods, Services for enabling network access to applications, and Deployments to update Pods and manage releases.
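To make these objects concrete, a minimal Deployment manifest might look like the following sketch. The names, labels, and image are illustrative placeholders, not taken from any of the presentations summarized here:

```yaml
# Example Deployment: keeps 3 replicas of an nginx pod running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # the ReplicaSet created by this Deployment maintains 3 pods
  selector:
    matchLabels:
      app: web
  template:                  # pod template: each pod runs one nginx container
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Applying this manifest with `kubectl apply -f deployment.yaml` creates the Deployment, which in turn creates a ReplicaSet that keeps the three pod replicas running.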
This document provides an overview of Kubernetes, including its architecture, components, concepts, and configuration. It describes that Kubernetes is an open-source container orchestration system designed by Google to manage containerized applications across multiple hosts. The key components include the master nodes which run control plane components like the API server, scheduler, and controller manager, and worker nodes which run the kubelet and containers. It also explains concepts like pods, services, deployments, networking, storage, and role-based access control (RBAC).
Presentation by Alex Mavrogiannis from Docker Inc, during the Docker Athens Meetup, January 4th 2018, on the integration of Docker Swarm and Kubernetes as orchestrators of the Docker platform.
Kubernetes nodes provide the infrastructure for containers to run by implementing container-centric features, networking, volumes, and interfacing with container runtimes. The node is an unsung hero that bridges the Kubernetes control plane with containers. Pods are the atomic scheduling unit in Kubernetes and are implemented at the node level, allowing for complex workloads using multiple cooperating containers with shared resources. Kubelets run the sync loop that manages pods and containers on each node by communicating with the API server and container runtime.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes masters manage worker nodes, and pods which are the basic building blocks, containing one or more containers. It provides self-healing, horizontal pod autoscaling, service discovery, load balancing, configuration management.
This document provides steps to deploy a WordPress application with a MySQL database on Kubernetes. It demonstrates creating secrets for database credentials, persistent volumes for database storage, services for external access, and deploying the WordPress and MySQL containers. Various Kubernetes objects like deployments, services, secrets and persistent volumes are defined in YAML files and applied to set up the WordPress application on Kubernetes.
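The manifests themselves are not reproduced here; as a hedged sketch, the database credentials described above could be stored in a Secret along these lines (the Secret name, key, and value are illustrative):

```yaml
# Example Secret holding the MySQL root password (placeholder value).
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
type: Opaque
stringData:
  password: change-me        # placeholder; never commit real credentials
```

The MySQL container would then reference it as an environment variable in its pod spec:

```yaml
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysql-pass
          key: password
```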
Kubernetes is currently the biggest buzzword in the DevOps world. The Google-designed container orchestrator, built on more than ten years of experience running production applications in containers, has positioned itself as the market leader.
Open source, and available on both the Google Cloud and Azure container platforms or as a custom installation, it is ready to take production loads.
During this talk we will discover how Kubernetes works, its architecture, and which components make up a Kubernetes cluster. We will also learn which objects a developer can use to deploy applications on a Kubernetes cluster. We will see a live demo in which we deploy an application and then introduce changes to it without any downtime.
Kubernetes is an open-source platform for automating deployment, scaling, and management of containerized applications. It groups containerized applications into logical units called pods and uses labels to select pods and services for management at scale. Kubernetes masters manage the state of the cluster through the API server, scheduler and controller manager, while nodes run the pods and services and report back to the master.
1) Kubernetes is an open-source system for managing containerized applications and services across multiple hosts. It was created by Google in 2014 to automate deployment, scaling, and operations of application containers.
2) Kubernetes allows for automatic deployment and scaling of applications. It makes applications portable and lightweight by running them in containers.
3) The document provides an overview of key Kubernetes concepts including pods, replication controllers, and services. Pods are the smallest deployable units that can contain one or more containers which share resources. Replication controllers ensure a specified number of pod replicas are running. Services define a policy to access pods through labels.
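The label-based access policy mentioned above can be sketched as a Service that routes traffic to any pod carrying a matching label, regardless of which controller created the pod. Names and ports here are illustrative assumptions:

```yaml
# Example Service: selects pods by label and load-balances across them.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web          # matches every pod labeled app=web
  ports:
  - port: 80          # port exposed by the Service
    targetPort: 8080  # port the container actually listens on
```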
Docker, Kubernetes, Istio
Understanding Docker and creating containers.
Container orchestration based on Kubernetes.
Blue-green deployment, A/B testing, canary deployment, and traffic rules based on Istio.
Kubernetes is an open-source container cluster manager that was originally developed by Google. It was created as a rewrite, in Go, of Google's internal Borg system. Kubernetes aims to provide declarative deployment and management of containerized applications and services. It facilitates both automatic bin packing and self-healing of applications. Key features include horizontal pod autoscaling, load balancing, rolling updates, and application lifecycle management.
A Kubernetes cluster contains a set of worker machines, known as nodes, that run containerized applications. Every cluster has at least one worker node. Because multiple nodes are grouped in a cluster, if one node fails, your application will still be accessible from the other nodes.
Kubernetes: An Introduction to the Open Source Container Orchestration Platform, by Michael O'Sullivan
Originally designed by Google, Kubernetes is now an open-source platform that is used for managing applications deployed as containers across multiple hosts - now hosted under the Cloud Native Computing Foundation. It provides features for automating deployment, scaling, and maintaining these applications. Hosts are organised into clusters, and applications are deployed into these clusters as containers. Kubernetes is compatible with several container engines, notably Docker. The popularity of Kubernetes continues to increase as a result of the feature-rich tooling when compared to use of a container-engine alone, and a number of Cloud-based hosted solutions are now available, such as Google Kubernetes Engine, Amazon Elastic Container Service for Kubernetes, and IBM Cloud Container Service.
This talk will provide an introduction to the Kubernetes platform, and a detailed view of the platform architecture from both the Control Plane and Worker-node perspectives. A walk-through demonstration will also be provided. Furthermore, two additional tools that support Kubernetes will be presented and demonstrated - Helm: a package manager solution which enables easy deployment of pre-built Kubernetes software using Helm Charts, and Istio: a platform in development that aims to simplify the management of micro-services deployed on the Kubernetes platform.
Speaker Bio:
Dr. Michael J. O'Sullivan is a Software Engineer working as part of the Cloud Foundation Services team for IBM Cloud Dedicated, in the IBM Cloud division in Cork. Michael has worked on both Delivery Pipeline/Deployment Automation and Performance Testing teams, which has given him daily exposure to customer deployments of IBM Cloud services such as the IBM Cloud Containers Service and the IBM Cloud Logging and Metrics Services. Michael has also worked on deployment of these services to OpenStack and VMware platforms. Michael holds a PhD in Computer Science from University College Cork (2012 - 2015), where, under the supervision of Dr. Dan Grigoras, he engaged in research on Mobile Cloud Computing (MCC) - specifically, studying and implementing solutions for delivering seamless user experiences of MCC applications and services. Prior to this, Michael graduated with a 1st Class Honours Degree in Computer Science from University College Cork in 2012.
Continuous Deployment of Polyglot Microservices: A Practical Approach, by Juan Larriba
This document discusses a practical approach to continuous deployment of polyglot microservices. It introduces the author and describes how traditional companies are adopting DevOps practices. The approach focuses on being continuous, using multiple programming languages as needed, immutable infrastructure with containers, reliability through functional testing, automated deployments, and practical architecture. Kubernetes and OpenShift are discussed as platform options. Lessons learned include that Kubernetes alone often fits needs better than OpenShift, and external service discovery can replace ingress controllers when using an external router.
This document provides an overview of Kubernetes concepts including:
- Kubernetes architecture with masters running control plane components like the API server, scheduler, and controller manager, and nodes running pods and node agents.
- Key Kubernetes objects like pods, services, deployments, statefulsets, jobs and cronjobs that define and manage workloads.
- Networking concepts like services for service discovery, and ingress for external access.
- Storage with volumes, persistentvolumes, persistentvolumeclaims and storageclasses.
- Configuration with configmaps and secrets.
- Authentication and authorization using roles, rolebindings and serviceaccounts.
It also discusses Kubernetes installation with minikube, and common networking and deployment
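As a minimal sketch of the configuration concepts listed above (names and values are illustrative), a ConfigMap can hold non-secret settings and be injected into a container as environment variables:

```yaml
# Example ConfigMap with a single configuration key.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info
---
# In a pod spec, a container can pull in every key from the ConfigMap:
#   envFrom:
#   - configMapRef:
#       name: app-config
```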
Docker is an open-source tool that allows developers to easily deploy applications inside isolated containers. Kubernetes is an open-source system for automating deployment and management of containerized applications across clusters of hosts. It coordinates containerized applications across nodes by providing mechanisms for scheduling, service discovery, and load balancing. The key components of Kubernetes include Pods, Services, ReplicationControllers, Scheduler, API Server, etcd and Nodes.
A Comprehensive Introduction to Kubernetes. This slide deck serves as the lecture portion of a full-day Workshop covering the architecture, concepts and components of Kubernetes. For the interactive portion, please see the tutorials here:
https://github.com/mrbobbytables/k8s-intro-tutorials
This document provides an overview of Kubernetes including:
1) Kubernetes is an open-source platform for automating deployment, scaling, and operations of containerized applications. It provides container-centric infrastructure and allows for quickly deploying and scaling applications.
2) The main components of Kubernetes include Pods (groups of containers), Services (abstract access to pods), ReplicationControllers (maintain pod replicas), and a master node running key components like etcd, API server, scheduler, and controller manager.
3) The document demonstrates getting started with Kubernetes by enabling the master on one node and a worker on another node, then deploying and exposing a sample nginx application across the cluster.
Robert Barr presents on Kubernetes for Java developers. He discusses Quarkus, Micronaut and Spring Boot frameworks for building cloud-native Java applications. He provides an overview of Docker and how it can package applications. Barr then explains why Kubernetes is useful for orchestrating containers at scale, describing its architecture and key concepts like pods, deployments and services. He demonstrates running a sample application on Kubernetes and integrating with its Java client.
This document provides an overview of Linux containers, Docker, and Kubernetes. It discusses how Linux containers have limitations that Docker aimed to address by providing a platform for managing containers. However, standalone Docker has issues at scale, which Kubernetes was created to solve by offering clustering and orchestration of Docker containers across multiple hosts. Key Kubernetes concepts are explained such as pods, labels, services, and deployments. The document concludes with a reference to a Kubernetes demo.
Getting Started with Google Kubernetes Engine, by Shreya Pohekar
This document provides an overview of Google Kubernetes Engine. It begins with introductions and defines key concepts like virtualization, containerization, Docker, and Kubernetes. It then explains what Kubernetes is and how it can orchestrate container infrastructure on-premises or in the cloud. Various Kubernetes architecture elements are outlined like pods, replica sets, deployments, and services. Security features are also summarized, including pod security policies, network policies, and using security contexts. The document concludes with a demonstration of Kubernetes Engine.
Recent momentum around the evolution of Containers are gradually increase in last two years.Containers virtualize an OS and applications running in each container believe that they have full access to their very own copy of that OS. This is analogous to what VMs do when they virtualize at a lower level, the hardware. In the case of containers, it’s the OS that does the virtualization and maintains the illusion.
Recent past many software companies have quickly adopted container technologies, including Docker Containers, aware of the threat and advantage of the approach. For example, Linux companies have also jumped into the ground, seeing as this as an opportunity to grow the Linux market. Also Microsoft is going to add features to support containers and VMware have made efforts in integrating support for Docker into virtual machine technology.
Recent momentum around the evolution of Containers are gradually increase in last two years.Containers virtualize an OS and applications running in each container believe that they have full access to their very own copy of that OS. This is analogous to what VMs do when they virtualize at a lower level, the hardware. In the case of containers, it’s the OS that does the virtualization and maintains the illusion.
Recent past many software companies have quickly adopted container technologies, including Docker Containers, aware of the threat and advantage of the approach. For example, Linux companies have also jumped into the ground, seeing as this as an opportunity to grow the Linux market. Also Microsoft is going to add features to support containers and VMware have made efforts in integrating support for Docker into virtual machine technology.
Recent momentum around the evolution of Containers are gradually increase in last two years.Containers virtualize an OS and applications running in each container believe that they have full access to their very own copy of that OS. This is analogous to what VMs do when they virtualize at a lower level, the hardware. In the case of containers, it’s the OS that does the virtualization and maintains the illusion.
This document provides an introduction and overview of Kubernetes. It discusses that Kubernetes is an open-source system for managing containerized applications across multiple hosts. It supports various cloud providers and container platforms. Kubernetes provides self-healing capabilities to automatically place, restart, and replicate applications. The document outlines key Kubernetes concepts like masters, minions, pods, services and labels. It provides examples of simple pod and replication controller configurations. It also gives a high-level overview of the Kubernetes architecture and components.
Building Distributed Systems without Docker, Using Docker Plumbing Projects -...Patrick Chanezon
Docker provides an integrated and opinionated toolset to build, ship and run distributed applications. Over the past year, the Docker codebase has been refactored extensively to extract infrastructure plumbing components that can be used independently, following the UNIX philosophy of small tools doing one thing well: runC, containerd, swarmkit, hyperkit, vpnkit, datakit and the newly introduced InfraKit.
This talk will give an overview of these tools and how you can use them to build your own distributed systems without Docker.
Patrick Chanezon & David Chung, Docker & Phil Estes, IBM
Kubernetes is an open-source container management platform. It has a master-node architecture with control plane components like the API server on the master and node components like kubelet and kube-proxy on nodes. Kubernetes uses pods as the basic building block, which can contain one or more containers. Services provide discovery and load balancing for pods. Deployments manage pods and replicasets and provide declarative updates. Key concepts include volumes for persistent storage, namespaces for tenant isolation, labels for object tagging, and selector matching.
This document provides an overview and comparison of Docker, Kubernetes, OpenShift, Fabric8, and Jube container technologies. It discusses key concepts like containers, images, and Dockerfiles. It explains how Kubernetes provides horizontal scaling of Docker through replication controllers and services. OpenShift builds on Kubernetes to provide a platform as a service with routing, multi-tenancy, and a build/deploy pipeline. Fabric8 and Jube add additional functionality for developers, with tools, libraries, logging, and pure Java Kubernetes implementations respectively.
Develop and deploy Kubernetes applications with Docker - IBM Index 2018Patrick Chanezon
Docker Desktop and Enterprise Edition now both include Kubernetes as an optional orchestration component. This talk will explain how to use Docker Desktop (Mac or Windows) to develop and debug a cloud native application, then how Docker Enterprise Edition helps you deploy it to Kubernetes in production.
Container technology is shaping the future of software development and is causing a structural change in the cloud-computing world. Developers are embracing container technology and enterprises are adopting it at an explosive rate. Containers are portion of "IT" in technology as they're a very powerful tool which streamline your development and ops processes, save company's money & make life for developers much easier.
This document introduces Virtual Kubelet, which extends the Kubernetes API to serverless container platforms. It treats the concept of pods and nodes abstractly, allowing pods to run on platforms like ACI and Fargate. Virtual Kubelet implements a provider interface to manage the pod lifecycle on these platforms. It also allows hybrid use cases like running traditional and serverless pods together. The document demonstrates how Virtual Kubelet can schedule pods to ACI from an AKS cluster and to Nomad from a Kubernetes cluster.
Building Cloud-Native Applications with Kubernetes, Helm and KubelessBitnami
This document discusses building cloud-native applications with Kubernetes, Helm, and Kubeless. It introduces cloud-native concepts like containers and microservices. It then explains how Kubernetes provides container orchestration and Helm provides application packaging. Finally, it discusses how Kubeless enables serverless functionality on Kubernetes.
Similar to Containers and Kubernetes -Notes Leo (20)
OAuth and OpenID Connect are authorization frameworks that enable third party applications (API clients) to obtain limited access to RESTful APIs on behalf of resource owners. OAuth allows API clients to obtain authorization grants, which can be exchanged for access tokens to make requests to the API. OpenID Connect is used by API clients to obtain information about the authentication of the resource owner performed by the authorization server in an ID token.
The document provides an overview of SAML (Security Assertion Markup Language), including its main components and use cases. It discusses SAML assertions, which contain statements to describe authentication, attributes, and authorization information. SAML defines request/response protocols, bindings to transport messages over protocols like HTTP, and profiles that combine assertions, protocols and bindings to provide interoperability for specific use cases. A key use case is web single sign-on, where the SAML web browser SSO profile defines how assertions, messages and bindings are used to enable SSO between an identity provider and service provider.
Kafka is an open-source distributed event streaming platform used for building real-time data pipelines and streaming apps. It allows applications to publish and subscribe to streams of records, and processes large amounts of continuous data easily and reliably. Producers write data to topics which are divided into partitions. Consumers can join a consumer group to read from topics and process the data in parallel. Records are stored on disk for a configurable period to allow consumption from past records.
NoSQL databases take different approaches to storing and querying data compared to relational databases. Key-value databases store data as unstructured blobs associated with keys, documents databases store hierarchical data as documents, columnar databases store data by column rather than by row for improved analytics performance, and graph databases natively represent relationships between nodes. Aggregate-oriented NoSQL databases group and store related data together for faster access compared to retrieving scattered relational data.
ZooKeeper is a distributed coordination service that allows distributed applications to synchronize data and configuration. It provides a simple API for applications to read, write, and watch a shared hierarchical data structure called a znode tree that is replicated across servers. ZooKeeper addresses the need for distributed applications like Hadoop and Kafka to coordinate tasks and share configuration through a common data store that remains available even if individual servers fail.
- Leo's notes summarize Oracle Database components including metadata, control files, user data, database, Oracle instance, background processes, online redo logs, archive logs, and data files.
- The notes also cover Oracle Database configuration including Oracle homes, Oracle base, data file locations, redo log groups, and archive log destinations.
- Key processes like the log writer process and database writer process are described as well as their roles in writing redo logs and data to disk.
Application Continuity with Oracle DB 12c Léopold Gault
Application Continuity is a feature of Oracle database 12c, when used through the JDBC replay driver (by java applications). You can benefit from this features when using a RAC or Data Guard.Those are my personal notes on the subject. Views expressed here are my own, and do not necessarily reflect the views of Oracle.
Trusted Execution Environment for Decentralized Process MiningLucaBarbaro3
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
Letter and Document Automation for Bonterra Impact Management (fka Social Sol...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on automated letter generation for Bonterra Impact Management using Google Workspace or Microsoft 365.
Interested in deploying letter generation automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
A Comprehensive Guide to DeFi Development Services in 2024Intelisync
DeFi represents a paradigm shift in the financial industry. Instead of relying on traditional, centralized institutions like banks, DeFi leverages blockchain technology to create a decentralized network of financial services. This means that financial transactions can occur directly between parties, without intermediaries, using smart contracts on platforms like Ethereum.
In 2024, we are witnessing an explosion of new DeFi projects and protocols, each pushing the boundaries of what’s possible in finance.
In summary, DeFi in 2024 is not just a trend; it’s a revolution that democratizes finance, enhances security and transparency, and fosters continuous innovation. As we proceed through this presentation, we'll explore the various components and services of DeFi in detail, shedding light on how they are transforming the financial landscape.
At Intelisync, we specialize in providing comprehensive DeFi development services tailored to meet the unique needs of our clients. From smart contract development to dApp creation and security audits, we ensure that your DeFi project is built with innovation, security, and scalability in mind. Trust Intelisync to guide you through the intricate landscape of decentralized finance and unlock the full potential of blockchain technology.
Ready to take your DeFi project to the next level? Partner with Intelisync for expert DeFi development services today!
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
3. Program Agenda
1. Containers:
1. The need for containers
2. Technical overview of containers
2. Kubernetes:
1. The need for Kubernetes
2. Technical overview of Kubernetes
5. The need for containers
1. The need for micro-services
2. The need for infrastructure as code
Subject covered orally
11. Containers: what they are
A container is an image of a set of applications and configuration data.
Such an image is:
• Immutable
• Portable
• Can be saved in a “photo album”: an image repository.
12. Virtual Machines vs. Containers
Virtual Machines
● Each virtual machine (VM) includes the app, the necessary binaries and libraries, and an entire guest operating system.
Containers
● Containers include the app & all of its dependencies, but share the OS kernel with other containers.
● Run as an isolated process in the userspace of the host OS.
13. Let’s have a look at Wikipedia’s listing
Different levels of virtualization (Wikipedia, version of 14th Sept 2017).
14. Different types of containers
• Linux Containers (LXC)
• OpenVZ
• Warden Containers (used by Pivotal CloudFoundry)
• rkt (developed by CoreOS)
• Docker
• Implementations of the Open Containers Initiative (OCI)
• …
OS-level virtualization solutions
16. Building container images
My mongoDB Dockerfile:
FROM ubuntu_base_image
RUN apt-get update
RUN apt-get install -y mongodb
EXPOSE 27017
ENTRYPOINT ["/usr/bin/mongod"]
The Docker daemon, invoked with `docker build`, pulls ubuntu_base_image (from a private or public registry), runs the Dockerfile, and produces a container image. That image can in turn be pushed to a repo in my Docker registry, becoming Leo’s container image.
18. About building images on top of other images
Files that are removed by subsequent layers in the image are actually still present; they’re just inaccessible.
E.g. although “BigFile” is no longer accessible in the image ‘Layer C’, it is still present in Layer A, which Layer C is built on. With the right tools, BigFile can still be accessed by anyone having access to the image Layer C.
In terms of network traffic, this also means that whenever you push or pull Layer C, BigFile is still transmitted through the network.
In terms of building images, this also means that if server.js is changed, layer B and layer C will have to be rebuilt (so you have to order your layers from the least likely to change to the most likely).
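The rebuild rule can be illustrated with a Dockerfile sketch (a hypothetical Node.js image; the file and image names are assumptions): ordering instructions from least to most likely to change lets Docker reuse cached layers.

```dockerfile
# Hypothetical Node.js image, layered from least to most likely to change.
FROM node:18-slim            # base layer: rarely changes
WORKDIR /app
COPY package.json .          # dependency manifest: changes occasionally
RUN npm install              # cached as long as package.json is unchanged
COPY server.js .             # application code: changes most often
CMD ["node", "server.js"]
```

Editing server.js invalidates only the final COPY layer; putting `COPY server.js` before `RUN npm install` would force the dependency install to re-run on every code change.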
19. Program Agenda
1. Containers:
1. The need for containers
2. Technical overview of containers
2. Kubernetes:
1. The need for Kubernetes
2. Technical overview of Kubernetes
20. The need for Kubernetes
1. The need for declarative infrastructure as code
2. The need for cluster management of container-engines
Subject covered orally
26. Components of a cluster
Master node (control plane) components:
• kube-apiserver: the front-end for the Kubernetes control plane.
• etcd: distributed key-value store. Provides a dynamic configuration registry.
• kube-scheduler: watches newly created pods that have no node assigned yet, and selects a node for them to run on.
• kube-controller-manager: component on the master that runs controllers. These controllers include:
  • Node Controller: detects when nodes go down, and responds.
  • Replication Controller: maintains the correct number of pods for every replication controller object in the system.
  • Endpoints Controller: populates the Endpoints objects (i.e. joins services and pods).
  • Service Account & Token Controllers: create default accounts and API access tokens for new namespaces.
• cloud-controller-manager: runs controllers that interact with the underlying cloud provider. Those controllers are specific to the cloud provider:
  • Node Controller: when a node stops responding, checks with the cloud provider to determine if this node has been deleted.
  • Route Controller: sets up routes in the underlying cloud infrastructure.
  • Service Controller: creates, updates and deletes cloud provider load balancers.
  • Volume Controller: creates, attaches, and mounts volumes, and interacts with the cloud provider to orchestrate volumes.
Worker node components (controlled by the master):
• Kubelet: makes sure that containers are running in a pod. The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
• kube-proxy: enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding.
• Container runtime (Docker, rkt, runc, etc.): runs the containers.
34. Pods
IP2
Shared storage
Node 1
IP1
Shared storage
(volume)
Leo:
You normally put in a pod just one container, or a
handful of containers that are tightly coupled (e.g. a
Tomcat container + a Git synchronizer, with both apps
interacting through a local filesystem).
You achieve horizontal scaling by replicating pods; not
by replicating containers within a pod.
Created from an image
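The Tomcat + Git-synchronizer pattern mentioned above could be sketched as a single pod spec with two tightly coupled containers sharing an emptyDir volume. Image names, mount paths, and the pod name below are illustrative assumptions, not part of the original deck:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sync          # hypothetical name
spec:
  volumes:
  - name: site-content
    emptyDir: {}               # scratch volume shared by both containers
  containers:
  - name: tomcat
    image: tomcat:9            # serves the synchronized content
    volumeMounts:
    - name: site-content
      mountPath: /usr/local/tomcat/webapps/site
  - name: git-sync
    image: registry.k8s.io/git-sync/git-sync:v4.2.1   # periodically pulls a Git repo
    volumeMounts:
    - name: site-content
      mountPath: /tmp/git      # default git-sync working directory
```

Both containers see the same files through the shared volume, which is exactly the "interacting through a local filesystem" coupling the slide describes.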
37. Communication between containers within a same
pod
Node 1
IP1
Shared storage
(volume)
From: localhost:8080
To: localhost:3306
Kubernetes has an “IP-per-pod model”: containers within a
same pod share the same IP address, and communicate with
each other using distinct ports, on localhost.
(Co-locating a database with its client in one pod is an
anti-pattern; it is shown here only as an example.)
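The slide's localhost example can be sketched as a pod spec: two containers in one pod, the app reaching MySQL on localhost:3306 thanks to the shared network namespace. The app image and pod name are hypothetical, and, as the slide itself notes, this co-location is only for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-db            # illustrative name
spec:
  containers:
  - name: app
    image: my-app:latest       # hypothetical image; listens on port 8080
    ports:
    - containerPort: 8080
  - name: mysql
    image: mysql:8             # reachable from "app" at localhost:3306
    ports:
    - containerPort: 3306
```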
38. Pods and network
Private overlay network within the Kubernetes cluster
Node 1
Node 2
Real network
IP3IP1
IP2
39. The need for services
Private overlay network within the Kubernetes cluster
Node 1
Node 2
Real network
IP3IP1
IP2
Weblogic cluster
Managed server1
Managed server2
App which is a client of the
Weblogic cluster
40. Services and network
Private overlay network within the Kubernetes cluster
Node 1
Node 2
Real network
IP3
ServiceA
IP4
IP1
IP2
Acts like a LB
between Pods
41. Service
A level of abstraction providing an external and durable access to a set of pods.
A service:
• encompasses several Pods,
• has its own (private) IP (thus allowing consuming services to use the Service’s IP,
instead of the Pod’s, which may change frequently),
• load balances the IP packets it receives to its Pods.
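The description above maps onto a minimal Service spec: the selector ties the Service to every pod carrying the matching label, and the Service's own (stable) IP and port front those pods. The names and ports below are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
  type: ClusterIP              # default: only reachable inside the cluster
  selector:
    app: my-app                # load balances across all pods with this label
  ports:
  - port: 80                   # the Service's stable port
    targetPort: 8080           # the port the backing pods actually listen on
```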
42. Services and network
Private overlay network within the Kubernetes cluster
Node 1
Node 2
Real network
IP3
ServiceA
IP4
IP1
IP2
Can optionally be made
reachable from the real
network
Acts like a LB
between Pods
43. Services and network
Private overlay network within the Kubernetes cluster
Real network
IP3
ServiceA
IP4
IP1
IP2
Can optionally be made
reachable from the real
network
Port of your choosing
E.g. with the service-type “NodePort”:
each hosting node will act as a NAT
server specifically for this IP; i.e. it will
associate one of its ports with IP4
Acts like a LB
between Pods
Port of your choosing
44. Service
A level of abstraction providing an external and durable access to a set of pods.
A service:
• encompasses several Pods,
• has its own (private) IP (thus allowing consuming services to use the Service’s IP,
instead of the Pod’s, which may change frequently),
• load balances the IP packets it receives to its Pods.
• Provides 3 types of access:
• ClusterIP: the service is only visible from inside the cluster
45. Services and network
Private overlay network within the Kubernetes cluster
Node 1
Node 2
Real network
IP3
ServiceA
IP4
IP1
IP2
Acts like a LB
between Pods
46. Service
A level of abstraction providing an external and durable access to a set of pods.
A service:
• encompasses several Pods,
• has its own (private) IP (thus allowing consuming services to use the Service’s IP,
instead of the Pod’s, which may change frequently),
• load balances the IP packets it receives to its Pods.
• Provides 3 types of access:
• ClusterIP: the service is only visible from inside the cluster
• NodePort: each node in the cluster maps an external port to the service’s private IP
47. Services and network
Private overlay network within the Kubernetes cluster
Real network
IP3
ServiceA
IP4
IP1
IP2
Can optionally be made
reachable from the real
network
Port of your choosing
E.g. with the service-type “NodePort”:
each hosting node will act as a NAT
server specifically for this IP; i.e. it will
associate one of its ports with IP4
Acts like a LB
between Pods
Port of your choosing
48. Service
A level of abstraction providing an external and durable access to a set of pods.
A service:
• encompasses several Pods,
• has its own (private) IP (thus allowing consuming services to use the Service’s IP,
instead of the Pod’s, which may change frequently),
• load balances the IP packets it receives to its Pods.
• Provides 3 types of access:
• ClusterIP: the service is only visible from inside the cluster
• NodePort: each node in the cluster maps an external port to the service’s private IP
• LoadBalancer: a load balancer from the cloud provider forwards external traffic to the
nodes hosting the service (like NodePort, but with an external LB additionally balancing
the traffic across the nodes' service port).
49. Services and network
Private overlay network within the Kubernetes cluster
Real (private) network
IP3
ServiceA
IP4
IP1
IP2
Port of your choosing
Acts like a LB
between Pods
Port of your choosing
load balancer
(cloud service)
51. Deployment features
Additional: enforce ReplicaSets, by
• deploying the pods,
• monitoring them,
• stopping/restarting them,
• redeploying them on another node if
needed.
• Perform rolling updates
• Undo an update if requested
Deployments
Deployments are a declarative way to ensure that the number of Pods running matches the state the user declared they want.
Deployments keep our Pods up and running, even when the nodes they run on fail.
If Pods are declaratively updated (e.g. container image changed) or scaled, the Deployment will handle that.
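The rolling-update and undo features above map onto standard kubectl commands. A sketch, assuming a deployment named `nginx-deployment` (the image tag in the `set image` step is illustrative):

```shell
# Create or refresh the deployment declaratively
kubectl apply -f deployment.yaml

# Trigger a rolling update by changing the container image
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1

# Watch the rollout until all replicas are updated
kubectl rollout status deployment/nginx-deployment

# Undo the update if something goes wrong
kubectl rollout undo deployment/nginx-deployment

# Scale the number of pod replicas
kubectl scale deployment/nginx-deployment --replicas=4
```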
52. Deployment spec vs Pod spec
Example of deployment Example of pod
The same as a pod spec
Specific to deployment spec
53. Deployment spec vs Pod spec
Example of deployment Example of pod
The same as a pod spec
Specific to deployment spec
54. Deployment spec vs Pod spec
Example of deployment Example of pod
The same as a pod spec
Specific to deployment spec
Services identify their pods,
and thus their deployments,
thanks to labels
56. E.g. of deployment spec
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 2 # tells deployment to run 2 pods matching the template
template: # create pods using pod definition in this template
metadata:
# unlike pod-nginx.yaml, the name is not included in the metadata, as a unique name is
# generated from the deployment name
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
deployment.yaml
Kubernetes applies IP addresses at the Pod scope - containers within a Pod share their network namespaces - including their IP address. This means that containers within a Pod can all reach each other’s ports on localhost. This does imply that containers within a Pod must coordinate port usage, but this is no different than processes in a VM. This is called the “IP-per-pod” model.
https://kubernetes.io/docs/concepts/cluster-administration/networking/