This is a presentation from the OpenStack Austin Summit on managing containers in an OpenStack-native way, where containers are treated as first-class citizens.
1. The document discusses integrating Magnum and Senlin for autoscaling containers across multiple levels. Magnum provisions and manages container clusters while Senlin provides clustering and autoscaling capabilities.
2. A design is proposed where Senlin policies and triggers drive autoscaling at both the container and cluster level. Scaling would be coordinated between applications and clusters through control flows in both directions.
3. Integration with existing autoscaling in container orchestration engines like Kubernetes is also considered, where Senlin could leverage autoscalers but also handle scaling requirements not met by the native solutions.
Senlin clustering service deep dive at the Austin summit. This slide deck introduces the background and overall architecture of the Senlin project. It also highlights features released in 1.0.0 and features planned for 2.0.0.
Exploring Magnum and Senlin integration for autoscaling containers (Ton Ngo)
1. The document discusses integrating Magnum and Senlin for autoscaling containers across multiple levels. Magnum provisions and manages container clusters while Senlin provides clustering and autoscaling capabilities.
2. A key goal is to coordinate scaling at both the application and cluster level based on policies. This includes scaling containers within clusters managed by COEs like Kubernetes, as well as adding or removing nodes to clusters.
3. The demo shows autoscaling a Kubernetes cluster managed by Magnum using Senlin profiles, policies, and triggers to scale pods and nodes based on metrics from Ceilometer. This provides an integrated OpenStack solution for multi-level autoscaling of containers.
This document discusses autoscaling in Kubernetes. It describes horizontal and vertical autoscaling, and how Kubernetes can autoscale nodes and pods. For nodes, it proposes using Google Compute Engine's managed instance groups and cloud autoscaler to automatically scale the number of nodes based on resource utilization. For pods, it discusses using an autoscaler controller to scale the replica counts of replication controllers based on metrics from cAdvisor or Google Cloud Monitoring. Issues addressed include rebalancing pods and handling autoscaling during rolling updates.
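The pod-level autoscaling described above reduces to a simple control loop: scale the replica count in proportion to how far an observed metric is from its target. As a rough sketch of that rule (not the actual Kubernetes implementation, whose algorithm has evolved well beyond these early proposals; the function name and bounds here are illustrative):

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """Sketch of a horizontal autoscaling rule: grow or shrink the
    replica count by the ratio of observed metric to target metric,
    clamped to configured bounds."""
    if current_metric <= 0:
        return current_replicas  # no signal, hold steady
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# CPU at 90% against a 50% target roughly doubles four replicas.
print(desired_replicas(4, current_metric=90, target_metric=50))  # -> 8
```

The ceiling and the min/max clamp mirror the general shape of replica-count autoscalers: round up so load is never under-provisioned, and bound the result so a noisy metric cannot scale a controller to zero or to infinity.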
Kubernetes is an open source container orchestration system that automates the deployment, maintenance, and scaling of containerized applications. It groups related containers into logical units called pods and handles scheduling pods onto nodes in a compute cluster while ensuring their desired state is maintained. Kubernetes uses concepts like labels and pods to organize containers that make up an application for easy management and discovery.
2016 08-30 Kubernetes talk for Waterloo DevOps (craigbox)
This document discusses Kubernetes and container orchestration on Google Cloud Platform. It provides an overview of Kubernetes and how it allows users to manage applications and deploy containers across clusters. Key points include that Kubernetes was created at Google and is now open source, it provides tools for scheduling, load balancing and ensuring availability of containerized applications, and that adoption is growing rapidly across startups and enterprises due to benefits like portability and ease of updating clusters.
Why do containers suddenly matter so much when they have been around since 1998? Take a look at the potential of OpenStack's Magnum, Murano and Nova-Docker in the context of leveraging the incredible interest in Linux containers brought about by Docker.
Check out www.stackengine.com to learn more about our excellent container management solution.
This document discusses Kubernetes and container technologies. It provides an overview of Kubernetes architecture, components like pods and services, and tools for managing Kubernetes clusters. It also discusses running Kubernetes on bare metal and Oracle's Kubernetes installer for easily deploying Kubernetes on Oracle Cloud Infrastructure.
The document discusses challenges of deploying Kubernetes on-premise, including how load balancers are provisioned without cloud providers, using Nginx and HAProxy for load balancing on bare metal. It also covers how persistent volumes are provisioned with CSI drivers like Ember CSI to interface with storage backends, and tools for deploying and managing on-premise Kubernetes clusters like RKE.
The Operator Pattern - Managing Stateful Services in Kubernetes (QAware GmbH)
Cloud Native Night, January 2018, Mainz: Talk by Jakob Karalus (@krallistic, IT Consultant at codecentric)
Join our Meetup: https://www.meetup.com/de-DE/Cloud-Native-Night
Abstract: While it's easy to deploy stateless applications with Kubernetes, it's harder for stateful software. Since applications often require custom functionality that Kubernetes can't provide, developers want to add more specialized patterns like automatic backups, failover or rebalancing to their Kubernetes deployments. In this talk, we will look at the Operator Pattern and other possibilities to extend the functionality of Kubernetes and how to use them to operate stateful applications.
[Spark Summit 2017 NA] Apache Spark on Kubernetes (Timothy Chen)
This document summarizes a presentation about running Apache Spark on Kubernetes. It discusses how Spark jobs can be scheduled and run on Kubernetes, including scheduling the driver and executor pods. Key points of the design include the Kubernetes scheduler backend for Spark and components like the file staging server. The roadmap outlines upcoming support for features like Spark Streaming and improvements to dynamic allocation.
How to integrate Kubernetes in OpenStack: You need to know these projects (inwin stack)
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications, while OpenStack is a free and open-source software platform for cloud computing, networking, and storage. The document discusses different ways to integrate Kubernetes and OpenStack, including using Zun to provide an OpenStack API for launching and managing containers, Magnum to offer container orchestration engines for deploying and managing containers, Kolla and Kolla Kubernetes to deploy OpenStack on Kubernetes, Kuryr Kubernetes to bridge networking models between containers and OpenStack, and Stackube which uses Kubernetes as the compute fabric controller instead of Nova.
Kubernetes is making good on the promise of changing the datacenter from a group of computers into "a computer" itself. This presentation outlines the new features in the Kubernetes 1.1 and 1.2 releases.
This document discusses Google Kubernetes Engine (GKE). It introduces containers and Kubernetes, then summarizes GKE as a container platform that fully manages master nodes. GKE provides automated operations like cluster autoscaling and node auto-repair. It allows creating multiple node pools with different configurations. GKE also enables high availability clusters across zones and monitoring with Stackdriver. Demos show using GKE to run game servers and implementing continuous integration and delivery pipelines.
Kafka on Kubernetes: Keeping It Simple (Nikki Thean, Etsy), Kafka Summit SF 2019 (confluent)
Cloud migration: it's practically a rite of passage for anyone who's built infrastructure on bare metal. When we migrated our 5-year-old Kafka deployment from the datacenter to GCP, we were faced with the task of making our highly mutable server infrastructure more cloud-friendly. This led to a surprising decision: we chose to run our Kafka cluster on Kubernetes. I'll share war stories from our Kafka migration journey, explain why we chose Kubernetes over arguably simpler options like GCP VMs, and present the lessons we learned while making our way toward a stable and self-healing Kubernetes deployment. I'll also go through some improvements in the more recent Kafka releases that make upgrades crucial for any Kafka deployment on immutable and ephemeral infrastructure. You'll learn what happens when you try to run one complex distributed system on top of another, and come away with some handy tricks for automating cloud cluster management, plus some migration pitfalls to avoid. And if you're not sure whether running Kafka on Kubernetes is right for you, our experiences should provide some extra data points that you can use as you make that decision.
1. The document discusses using OpenStack for a 4G core network, including performance issues and solutions when virtualizing the EPC network functions using OpenStack.
2. Key performance issues identified include high CPU usage, competing for CPU resources, latency, throughput, and packet loss. Solutions proposed are CPU pinning, NUMA awareness, hugepages, DPDK, SR-IOV, and offloading processing to smart NICs.
3. Going forward, the next steps discussed are using OVS-DPDK for offloading, SDN, containers, and cloud architectures for 5G.
StatefulSets in Kubernetes - implementation & use cases (Krishna-Kumar)
This document summarizes a presentation on StatefulSets in Kubernetes. It discusses why StatefulSets are useful for running stateful applications in containers, the differences between stateful and stateless applications, how volumes are used in StatefulSets, examples of running single-instance and multi-instance stateful applications like Zookeeper, and the current status and future roadmap of StatefulSets in Kubernetes.
Being a cloud native developer requires learning a new vocabulary and new skills: circuit breakers, canaries, service meshes, Linux containers, dark launches, tracers, pods and sidecars. In this session, we will introduce you to cloud native architecture by demonstrating numerous principles and techniques for building and deploying Java microservices via Spring Boot, Wildfly Swarm and Vert.x, while leveraging Istio on Kubernetes with OpenShift.
OpenStack on Kubernetes (BOS Summit / May 2017 update) (rhirschfeld)
This document discusses using Kubernetes as an underlay platform for OpenStack. Some key points:
1. Kubernetes is becoming more widely used and understood by operators compared to OpenStack. Using Kubernetes as an underlay could improve simplicity, stability, and upgrade processes for OpenStack.
2. There are still many technical challenges to address, such as networking, storage, tooling to manage OpenStack on Kubernetes, and ensuring containers meet Kubernetes' immutable infrastructure requirements.
3. Using Kubernetes as an underlay risks further confusing the messaging around OpenStack by implying Kubernetes is more stable or a replacement target. Clear communication will be important to avoid undermining OpenStack.
Kubernetes Day 2017 - Build, Ship and Run Your APP, Production !! (smalltown)
This document summarizes a talk about building, shipping, and running applications in production using containers on AWS. It discusses migrating an existing service from an on-premise data center to AWS, refactoring the application into microservices and containerizing it using Docker. It then covers setting up a Kubernetes cluster on CoreOS to orchestrate the containers across AWS, addressing challenges like application state, updates and monitoring. Terraform is presented as a way to define infrastructure as code and provision AWS resources. Logging, metrics collection and monitoring the Kubernetes cluster are also discussed.
Implement Advanced Scheduling Techniques in Kubernetes (Kublr)
Is advanced scheduling in Kubernetes achievable? Yes, however, how do you properly accommodate every real-life scenario that a Kubernetes user might encounter? How do you leverage advanced scheduling techniques to shape and describe each scenario in easy-to-use rules and configurations?
Oleg Chunikhin addressed those questions and demonstrated techniques for implementing advanced scheduling. For example, using spot instances and cost-effective resources on AWS, coupled with the ability to deliver a minimum set of functionalities that cover the majority of needs – without configuration complexity. You’ll get a run-down of the pitfalls and things to keep in mind for this route.
This document discusses methods for providing high availability services in Kubernetes including NodePort, cloud provider load balancers, Ingress, and Keepalived VIP. NodePort exposes services on each node's IP at a static port. Cloud provider load balancers rely on the cloud platform to provide an external IP for services. Ingress is for HTTP load balancing but does not fully support external networking. Keepalived VIP uses a virtual IP address, IP to service mapping, and daemonset to provide high availability services on bare metal clusters without a cloud provider.
The document discusses Kubernetes cluster autoscaler, including how it works, deployment steps, configuration, and limitations. It describes setting up the cluster autoscaler manager, preparing extra nodes, protecting nodes from scale down, and installing the autoscaler using Helm. Some key points are that it can automatically scale a cluster from 0 to 1000 nodes handling 30 pods each, but has restrictions like not supporting regional instance groups and taking 10 minutes to scale nodes down.
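The scale-up side of the cluster autoscaler described above is essentially a bin-packing check: if a pending pod fits on no existing node, a new node is provisioned. A toy sketch of that decision, assuming a single CPU dimension and first-fit placement (the real autoscaler simulates full scheduling predicates against node-group templates):

```python
def nodes_needed(pending_pods, node_free_cpu, node_capacity):
    """Toy scale-up check in the spirit of the cluster autoscaler:
    first-fit each pending pod's CPU request onto the free capacity of
    existing nodes, and count how many extra nodes of `node_capacity`
    CPU would be required for the pods that do not fit."""
    free = sorted(node_free_cpu, reverse=True)
    extra_nodes = []  # remaining free CPU on each hypothetical new node
    for cpu in sorted(pending_pods, reverse=True):
        placed = False
        for pool in (free, extra_nodes):
            for i, f in enumerate(pool):
                if f >= cpu:
                    pool[i] -= cpu
                    placed = True
                    break
            if placed:
                break
        if not placed:
            extra_nodes.append(node_capacity - cpu)
    return len(extra_nodes)

# Two 1.5-CPU pods are pending but each node has only 1 CPU free:
# a single new 4-CPU node can host both.
print(nodes_needed([1.5, 1.5], node_free_cpu=[1.0, 1.0], node_capacity=4.0))  # -> 1
```

Scale-down is the harder half in practice, which is why the summary above mentions protecting nodes from scale-down and the roughly 10-minute delay before underutilized nodes are removed.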
Kubernetes pods / container scheduling 201 - pod and node affinity and anti-affinity, node selectors, taints and tolerations, persistent volumes constraints, scheduler configuration and custom scheduler development and more.
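Of the scheduling mechanisms listed above, taints and tolerations have the most compact core rule: a pod may land on a node only if it tolerates every `NoSchedule` taint on that node. A simplified sketch of that matching logic (equality matching on key and value only; the real rules also cover the `Exists` operator and the `PreferNoSchedule`/`NoExecute` effects):

```python
def tolerates(node_taints, pod_tolerations):
    """Simplified taints-and-tolerations check: every NoSchedule taint
    on the node must be matched by some toleration on the pod."""
    for taint in node_taints:
        if taint["effect"] != "NoSchedule":
            continue  # this sketch only enforces hard NoSchedule taints
        matched = any(
            t["key"] == taint["key"] and t.get("value") == taint.get("value")
            for t in pod_tolerations
        )
        if not matched:
            return False
    return True

gpu_taint = [{"key": "gpu", "value": "true", "effect": "NoSchedule"}]
print(tolerates(gpu_taint, []))                                 # -> False
print(tolerates(gpu_taint, [{"key": "gpu", "value": "true"}]))  # -> True
```

Affinity and anti-affinity invert the direction of this check: taints let nodes repel pods, while affinity rules let pods express attraction (or repulsion) toward nodes and other pods.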
This presentation about Kubernetes, targeted at Java developers, was given for the first time (in French) at the Montreal Java User Group on May 2nd, 2018.
Kubernetes for Java developers - Tutorial at Oracle Code One 2018 (Anthony Dahanne)
You’re a Java developer? Already familiar with Docker? Want to know more about Kubernetes and its ecosystem for developers? During this session, you’ll get familiar with core Kubernetes concepts (pods, deployments, services, volumes, and so on) before seeing the most-popular and most-productive Kubernetes tools in action, with a special focus on Java development. By the end of the session, you’ll have a better understanding of how you can leverage Kubernetes to speed up your Java deployments on-premises or to any cloud.
Get your Java application ready for Kubernetes! (Anthony Dahanne)
In this demo-loaded talk we'll explore the best practices for creating a Docker image for a Java app (it's 2019, and newcomers such as Jib and CNCF buildpacks are interesting alternatives to Docker builds!) and how to integrate best with the Kubernetes ecosystem: after explaining the main Kubernetes objects and notions, we'll discuss Helm charts and productivity tools such as Skaffold, Draft and Telepresence.
This talk tells the history of containers, Docker, and Kubernetes, and shows their key elements. After viewing this document, you will know the main features of containers, Docker, and Kubernetes, along with very basic information about how these technologies work together.
DevOpsSec: Docker image patching & lifecycle management (kanedafromparis)
In this presentation, we will first recall what distinguishes Docker from a VM (PIDs, cgroups, etc.), discuss the layer system and the difference between images and instances, and then briefly introduce Kubernetes.
Next, we will present a "standard" CI/CD promotion process for a release (development, pre-production, production) using Docker tags.
Then, we will cover the different components that make up a Docker application (base image, tooling, libraries, code).
With that introduction done, we will walk through an application's life cycle across its development and BAU (business-as-usual) phases, to highlight that security flaws found during development are quickly fixed by new releases, but not necessarily in BAU, where releases are rarer. We will discuss various solutions (JFrog Xray, Clair, ...) for automated CVE tracking and automated updates. Finally, we will share a brief retrospective on the difficulties encountered and the organizational measures put in place.
Although illustrated with technical implementations, this presentation is mainly organizational.
Kubernetes is a container cluster manager that aims to provide a platform for automating deployment, scaling, and operations of application containers across clusters of machines. It uses pods as the basic building block, which are groups of application containers that share storage and networking resources. Kubernetes includes control planes for replication, scheduling, and services to expose applications. It supports deployment of multi-tier applications through replication controllers, services, labels, and pod templates.
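The labels model mentioned above has a precise core: a selector matches a pod when every key/value pair in the selector also appears in the pod's labels, and that is how services and replication controllers find the pods they manage. A minimal sketch of equality-based selection (set-based selectors and the surrounding API are omitted):

```python
def selector_matches(selector, pod_labels):
    """Equality-based label selection: every selector key/value pair
    must be present verbatim in the pod's labels."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "db-1",  "labels": {"app": "db"}},
]
service_selector = {"app": "web"}
backends = [p["name"] for p in pods if selector_matches(service_selector, p["labels"])]
print(backends)  # -> ['web-1']
```

Because membership is computed from labels rather than stored as a fixed list, relabeling a pod is enough to move it in or out of a service's backend set, which is what makes the "easy management and discovery" claim above work.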
Federated Kubernetes: As a Platform for Distributed Scientific Computing (Bob Killen)
A high level overview of Kubernetes Federation and the challenges encountered when building out a Platform for multi-institutional Research and Distributed Scientific Computing.
This document provides an overview of the OpenStack Magnum project, which aims to provide Container as a Service (CaaS) functionality. It discusses alternatives like Nova, Heat, and Magnum's advantages. Key features of Magnum include simplified multi-tenant containers, integration with OpenStack services, and out-of-box support for Kubernetes, Docker Swarm, and Mesos. The architecture and operation of Magnum are explained, along with its integration points within OpenStack.
1. Docker EE will include an unmodified Kubernetes distribution to provide orchestration capabilities alongside Docker Swarm.
2. When running mixed workloads across orchestrators, resource contention is a risk and it is recommended to separate workloads by orchestrator on each node for now.
3. Docker EE aims to address the shortcomings of running mixed workloads to better support this in the future.
This document provides an overview of Kubernetes including:
1) Kubernetes is an open-source platform for automating deployment, scaling, and operations of containerized applications. It provides container-centric infrastructure and allows for quickly deploying and scaling applications.
2) The main components of Kubernetes include Pods (groups of containers), Services (abstract access to pods), ReplicationControllers (maintain pod replicas), and a master node running key components like etcd, API server, scheduler, and controller manager.
3) The document demonstrates getting started with Kubernetes by enabling the master on one node and a worker on another node, then deploying and exposing a sample nginx application across the cluster.
Kubernetes is designed to be an extensible system. But what is the vision for Kubernetes Extensibility? Do you know the difference between webhooks and cloud providers, or between CRI, CSI, and CNI? In this talk we will explore what extension points exist, how they have evolved, and how to use them to make the system do new and interesting things. We’ll give our vision for how they will probably evolve in the future, and talk about the sorts of things we expect the broader Kubernetes ecosystem to build with them.
Robert Barr presents on Kubernetes for Java developers. He discusses Quarkus, Micronaut and Spring Boot frameworks for building cloud-native Java applications. He provides an overview of Docker and how it can package applications. Barr then explains why Kubernetes is useful for orchestrating containers at scale, describing its architecture and key concepts like pods, deployments and services. He demonstrates running a sample application on Kubernetes and integrating with its Java client.
Kubernetes is an open-source container management platform. It has a master-node architecture with control plane components like the API server on the master and node components like kubelet and kube-proxy on nodes. Kubernetes uses pods as the basic building block, which can contain one or more containers. Services provide discovery and load balancing for pods. Deployments manage pods and replicasets and provide declarative updates. Key concepts include volumes for persistent storage, namespaces for tenant isolation, labels for object tagging, and selector matching.
DevNetCreate - ACI and Kubernetes IntegrationHank Preston
This document provides an overview of Kubernetes and how it can be integrated with Cisco Application Centric Infrastructure (ACI) through the ACI Networking plugin for Kubernetes. It discusses Kubernetes concepts like pods, deployments, services and namespaces. It then explains how the ACI plugin maps these Kubernetes objects to ACI objects like endpoint groups, contracts and virtual device contexts to provide network isolation and policies. The rest of the document outlines a hands-on lab where users can set up their own Kubernetes cluster integrated with ACI and deploy applications with different levels of network isolation.
Recent momentum around the evolution of Containers are gradually increase in last two years.Containers virtualize an OS and applications running in each container believe that they have full access to their very own copy of that OS. This is analogous to what VMs do when they virtualize at a lower level, the hardware. In the case of containers, it’s the OS that does the virtualization and maintains the illusion.
Recent past many software companies have quickly adopted container technologies, including Docker Containers, aware of the threat and advantage of the approach. For example, Linux companies have also jumped into the ground, seeing as this as an opportunity to grow the Linux market. Also Microsoft is going to add features to support containers and VMware have made efforts in integrating support for Docker into virtual machine technology.
Recent momentum around the evolution of Containers are gradually increase in last two years.Containers virtualize an OS and applications running in each container believe that they have full access to their very own copy of that OS. This is analogous to what VMs do when they virtualize at a lower level, the hardware. In the case of containers, it’s the OS that does the virtualization and maintains the illusion.
Recent past many software companies have quickly adopted container technologies, including Docker Containers, aware of the threat and advantage of the approach. For example, Linux companies have also jumped into the ground, seeing as this as an opportunity to grow the Linux market. Also Microsoft is going to add features to support containers and VMware have made efforts in integrating support for Docker into virtual machine technology.
ContainerD is a daemon that controls the runC runtime to execute and manage containers according to the OCI specification. It has a gRPC API and a low-level CLI (ctr) for debugging. ContainerD is designed to be embedded in larger systems rather than directly used by end-users. It focuses on container execution, images, storage, and networking.
Recent momentum around the evolution of Containers are gradually increase in last two years.Containers virtualize an OS and applications running in each container believe that they have full access to their very own copy of that OS. This is analogous to what VMs do when they virtualize at a lower level, the hardware. In the case of containers, it’s the OS that does the virtualization and maintains the illusion.
This document provides an overview of Kubernetes concepts including:
- Kubernetes architecture with masters running control plane components like the API server, scheduler, and controller manager, and nodes running pods and node agents.
- Key Kubernetes objects like pods, services, deployments, statefulsets, jobs and cronjobs that define and manage workloads.
- Networking concepts like services for service discovery, and ingress for external access.
- Storage with volumes, persistentvolumes, persistentvolumeclaims and storageclasses.
- Configuration with configmaps and secrets.
- Authentication and authorization using roles, rolebindings and serviceaccounts.
It also discusses Kubernetes installation with minikube, and common networking and deployment
2. OTSUKA, Motohiro/Yuanying
NEC Solution Innovators
OpenStack Magnum Core Reviewer
Haiwei Xu
NEC Solution Innovators
OpenStack Senlin Core Reviewer
Qiming Teng
IBM, Research Scientist
OpenStack Senlin PTL, OpenStack Heat Core Reviewer
3. Agenda
• Why containers if you already have OpenStack
• What are the use cases?
• The many roads leading to Roma
• Container as first-class citizens on OpenStack
• Deployment and management
• Technology Gaps
• Experience Sharing and Outlook
• What we can do today
• Things to expect in Newton cycle
5. Photographer: Captain Albert E. Theberge, NOAA Corps (ret.)
from http://www.photolib.noaa.gov/coastline/line3174.htm
6. X-ray: NASA/CXC/RIKEN/D.Takei et al; Optical: NASA/STScI; Radio: NRAO/VLA
from http://www.nasa.gov/sites/default/files/thumbnails/image/gkper.jpg
7. Advantages of container technology
[Diagram: side-by-side stacks. Virtual machine: Server, Host OS, Hypervisor, then per-VM Guest OS, libs/bins, Application. Container: Server, Host OS, then per-container libs/bins and Application, with no hypervisor and no guest OS.]
8. Advantages of container technology
[Diagram: a container image (application plus libs/bins) is built from a Dockerfile and pushed to a Docker Registry for version management; the development host (Server A) and the production host (Server B) pull the same image, so the application moves between hosts unchanged.]
10. Major Use Cases
• For application/service users
• IF self-serviced THEN deploy/launch; simple configuration ENDIF
• go ...
• For application developers
• Develop, Commit, Test
• Build, Deploy, Push
• Pull, Patch, Push
• For cloud deployers/operators
• Build Infrastructure
• Install, Configure, Upgrade
• Monitor, Fix, Bill, ...
11. All Roads Lead to Roma
• How many roads do we have?
• nova lxc
• nova docker
• heat docker
• heat deployment
• magnum bay
• docker swarm
• kubernetes
• mesos
• marathon, ...
• openstack ansible
• kolla
• kolla-mesos
• .....
12. Nova: Docker / LXC
[Diagram: the Nova driver layer spans three backends. Virtualization: libvirt, VMware, Xen, each managing VMs. Bare metal: Ironic. Container: the nova-docker virt driver and the LXC (libvirt) driver, marked "???".]
18. Balancing across the abstraction layer
• Container as another compute API?
• maybe PM, VM, lwVM
• so many backends
• An abstraction over all existing container management software?
• it is possible, but many questions to be answered, e.g. why?
• do you really need to switch between these tools frequently?
• are you willing to develop a client that interacts with all of them?
• So ... container clustering
• better integration with OpenStack
• ease of use
21. Senlin Features
• Profiles: A specification for the objects to be managed
• Policies: Rules to be checked/enforced before/after actions are performed
[Diagram: Senlin manages clusters/nodes of VMs (Nova), containers (Docker), stacks (Heat), and bare metal (Ironic). Profiles are plugins, and the policies (placement, deletion, scaling, health, load-balance, affinity, and others) are plugins as well.]
22. Senlin Server Architecture
[Diagram: senlin-api (WSGI middleware, API v1) talks to the engine over an RPC client. The engine contains the lock, scheduler, actions, cluster/node objects, service registry, parser, and receiver, backed by the DB API. Receivers (webhook, message queue) are extension points for external monitoring services. Policies (placement, deletion, scaling, health, load-balance, affinity) are extension points facilitating smarter cluster management. Profiles (os.nova.server, os.heat.stack, others) are extension points to talk to different endpoints for object CRUD operations. Drivers (openstack via openstacksdk with identity, compute, orchestration, and network; dummy; others) are extension points for interfacing with different services or clouds.]
23. Senlin Server Architecture (for containers)
[Diagram: the same server architecture with container-oriented plugins swapped in. Drivers: docker-py, lxc, dummy. Profiles: container.docker, container.lxc, and others. The engine, policies, receivers, and extension points are unchanged.]
29. Container node and container cluster
[Diagram: a Senlin node is created from a profile type (Nova server, Heat stack, or container), each backed by its own template (template for Nova, template for Heat, template for container); a container cluster groups container nodes that run on Nova servers / Heat stacks.]
30. How to create a container cluster?
[Diagram: a container profile is applied on top of cluster1, a cluster of VM servers; containers are created on those VMs (several per VM) and grouped into cluster2, a separate container cluster.]
31. The scalability of vm cluster and container cluster
[Diagram: cluster1 (VMs hosting containers) and cluster2 (containers) each have their own placement, deletion, and scaling policies attached; when the user scales out, a VM is added to cluster1 and containers are added to cluster2.]
The section I will talk about is "Why containers, if you already have OpenStack?"
A container is a type of virtualization technology, and we can use it as a computing resource.
But just a computing resource?
OpenStack already has Nova, which is an abstraction layer for computing resources.
Basically Nova handles virtual machines, providing an abstraction layer for managing them.
So if a container is just another kind of virtual machine, why containers, if you already have OpenStack?
This diagram shows the difference between the virtual machine and container models.
The left side is the traditional virtual machine model.
The right side is the container model.
A virtual machine requires a hypervisor that emulates and translates the hardware,
and each VM has its own OS.
Containers provide isolation for processes sharing compute resources.
They are similar to virtual machines but share the host kernel and avoid hardware emulation.
So you can use host resources more effectively than with virtual machines.
So in this case, you can use a container like a virtual machine,
which means that Nova can manage containers.
In addition, Docker provides simple tools and an ecosystem for containers, which has made container technology very popular.
You can create a container image easily using a Dockerfile.
You can share container images using a Docker registry.
A container image carries all the additional dependencies the application needs beyond what the host provides.
You can move an application from host to host easily.
Furthermore, container scalability and elasticity are much better than those of virtual machines,
and thanks to management tools like Kubernetes and Docker Swarm, managing containers across different hosts has become much easier.
So OpenStack needs this technology to make cloud management easier.
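The Dockerfile-based image workflow just described can be sketched in a few lines. This is an illustrative sketch only: the base image, package list, command, and the helper function are made-up examples, not part of any Docker tooling.

```python
# Sketch of the "build once, run anywhere" workflow described above.
# The base image, packages, and command below are illustrative only.

def render_dockerfile(base, packages, app_cmd):
    """Render a minimal Dockerfile: base OS image, extra libs/bins, app."""
    lines = [f"FROM {base}"]
    if packages:
        lines.append("RUN apt-get update && apt-get install -y " + " ".join(packages))
    lines.append("COPY . /app")       # the application and its files
    lines.append("WORKDIR /app")
    lines.append(f'CMD ["{app_cmd}"]')
    return "\n".join(lines)

dockerfile = render_dockerfile("ubuntu:16.04", ["python3"], "./run.sh")
print(dockerfile)
```

The resulting image, once pushed to a registry, is the unit that moves unchanged from the development host to production.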
This slide shows the major use cases of container technology.
The first is application users, who only want the application to be started quickly;
they don't care how the application is started.
Application developers care about application lifecycles, version management, and portability.
And cloud operators care about how to manage infrastructure effectively, how to upgrade the system, and so on.
Let's see what container technology exists in OpenStack.
We have nova-lxc, nova-docker, Magnum, and many other projects supporting container technology.
For example, Nova has an LXC driver and a Docker driver that provide the same interface as for virtual machines.
A user can start a container like a virtual machine.
This model doesn't expose all the advantages of container technology,
but it can meet the needs of application users who just want to deploy an application quickly.
Next, Heat.
Heat has two ways to manage containers.
One is the Docker::Container resource; the other is the SoftwareConfig or StructuredConfig resource.
This can also meet application users' needs, but it is limited in managing the containers after they are created.
And the next is Magnum.
Magnum is a container-orchestration-engine-as-a-service, which deploys and manages COEs.
Once Magnum has deployed the COE, the user gets all the advantages of container technology through the COE-specific tools such as kubectl or the Docker CLI.
This can meet the developer and operator use cases,
but you must manage the containers outside of an OpenStack-native way.
The next one is Kolla.
Kolla is OpenStack-as-a-Service:
it uses container technology to make managing OpenStack itself easier.
This is one of the operator use cases.
Kolla merely uses containers; it is not a way to manage containers themselves.
So in order to manage containers well in OpenStack, we need to find a new solution.
But we have some problems to solve.
The community has discussed these issues a lot.
Create a unified API that can support VMs, bare metal, and containers?
But the use cases of VMs and containers are different, so we can't provide a unified API.
The next issue is how to create a unified abstraction API for container orchestration engines.
This has the same problem: the differences between Kubernetes and the other COEs.
So we can't get an agreement on it.
As introduced previously, Senlin is a project that provides a clustering service; currently it only supports VM clusters. To support container clustering, Senlin will do it in a way similar to VM clustering.
So first, we need a new profile type: a container profile. In the profile we define the properties used to create containers; all necessary properties can be defined there. The format is similar to the Nova server profile format.
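To make the idea concrete, here is a sketch of what such a container profile spec might contain, expressed as a Python dict. The property names (image, name, command, host_cluster) are assumptions for illustration, not the released Senlin schema.

```python
import json

# Hypothetical spec for a Senlin container profile. The type name and the
# property names below are illustrative assumptions, not the final schema.
container_profile = {
    "type": "container.docker",
    "version": "1.0",
    "properties": {
        "image": "nginx:latest",     # container image to run
        "name": "web",               # container name prefix
        "command": None,             # use the image's default command
        "host_cluster": "cluster1",  # VM cluster that hosts the containers
    },
}

print(json.dumps(container_profile, indent=2))
```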
With the profile, we can create container nodes and container clusters. To create containers we need host VMs, and the host VMs are also managed by Senlin, so Senlin can manage both the VM layer and the container layer.
This is the workflow of creating a container cluster. We can create multiple containers on one VM, depending on the VM's resources.
Physically the containers run on VMs, but logically the container cluster (cluster2) and the VM cluster (cluster1) are separate clusters, and Senlin can manage them separately.
That means end users who just want containers may see only the container cluster.
Let's see how to manage the resources.
Scalability control is Senlin's strength. When we have a VM cluster and a container cluster, sometimes the resources are not enough and the clusters need to scale out.
Senlin provides policies that tell a cluster how to scale out or scale in. As we see, the policies are attached to the cluster. When resources run short, we get an alarm from Ceilometer and the policy is triggered. It first tells the VM cluster to create a VM; after that, the scaling policy attached to the container cluster is triggered, and a new container is created.
This is the scale-out model; of course, when resources are idle, some VMs/containers will be deleted. This is how Senlin controls cluster scalability.
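The two-level flow just described can be modeled in a few lines. The class and method names are illustrative, not Senlin's actual API; the point is only the ordering: the alarm scales the VM cluster first, then the container cluster.

```python
# Toy model of the two-level scale-out flow described above: a Ceilometer
# alarm triggers the VM cluster's scaling policy first, then the container
# cluster's. Names are illustrative, not Senlin's API.

class Cluster:
    def __init__(self, name, size, adjustment=1):
        self.name = name
        self.size = size
        self.adjustment = adjustment  # scaling policy: nodes added per event

    def scale_out(self):
        self.size += self.adjustment

vm_cluster = Cluster("cluster1", size=2)                       # host VMs
container_cluster = Cluster("cluster2", size=4, adjustment=2)  # containers

def on_alarm():
    # Resources run short: add a host VM first, then containers on it.
    vm_cluster.scale_out()
    container_cluster.scale_out()

on_alarm()
print(vm_cluster.size, container_cluster.size)  # 3 6
```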
About the design for container clusters, we still have some issues to think about, and every option has its own advantages. When starting a container, we need to decide which cluster and which node to start it on, so do we need a scheduler for this job? In Senlin we have a placement policy, in which we can define where to start nodes. It is a kind of scheduler, but a very simple one; it is not smart enough yet, and we still need to improve it to meet our needs. Anyway, it is one solution to this issue.
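A minimal version of the kind of scheduling a placement policy could do is to pick the host VM currently running the fewest containers. This sketch mirrors the idea only; it is not Senlin's placement-policy implementation.

```python
# Naive "least-loaded" placement, as a placement policy might do it.
# The host names and counts below are made-up examples.

def place_container(hosts):
    """hosts: dict mapping host VM name -> current container count.
    Returns the least-loaded host (ties broken by name for determinism)."""
    return min(hosts, key=lambda h: (hosts[h], h))

hosts = {"vm-1": 3, "vm-2": 1, "vm-3": 2}
target = place_container(hosts)
hosts[target] += 1  # the new container lands on the chosen host
print(target)  # vm-2
```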
We also hope we can use Kuryr to create container networks automatically, just like we create VM networks.
That is all about container cluster support in Senlin. We have had some discussions and reached agreement on some issues, but we still want to hear more voices from the community. We need your ideas, your suggestions, and also new hands, so please join us if you are interested in this work. You can find us on the IRC #senlin channel, and you can also join our weekly meeting; any ideas are appreciated.