Kubernetes and OpenStack at Scale at OpenStack Summit Boston 2017
Imagine being able to stand up thousands of tenants with thousands of apps, running thousands of Docker-formatted container images and routes, all on a self-healing cluster and elastic infrastructure. Now, take that one step further: all of those images can be updated through a single upload to the registry, with zero downtime. In this session, you will see just that.
In this presentation, we will walk through a recent benchmarking deployment of Kubernetes and OpenStack on the Cloud Native Computing Foundation's (CNCF's) 1,000-node cluster, using OpenStack and Red Hat's OpenShift Container Platform, the enterprise-ready Kubernetes for developers.
You'll also learn what's been happening in subsequent rounds of testing in Red Hat's own SCALE lab and on the CNCF cluster, and how we are working with the relevant open source communities, including OpenStack, Kubernetes, and Ansible, to continue to raise the bar for horizontal scaling of these platforms through community-powered innovation.
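For readers who want to see what a zero-downtime image update can look like in code, here is a minimal, hedged sketch using the official Kubernetes Python client. The deployment name, namespace, and image tag are illustrative assumptions, not values from the benchmark environment, and the mechanism shown (patching a Deployment image) is the generic Kubernetes rolling-update path rather than the OpenShift registry trigger described in the session.

```python
# Minimal sketch: trigger a rolling update by patching a Deployment's image.
# Assumes a Deployment named "webapp" in namespace "demo" already exists;
# both names and the image tag are illustrative only.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"name": "webapp", "image": "registry.example.com/demo/webapp:v2"}
                ]
            }
        }
    }
}

# With the default RollingUpdate strategy, pods are replaced gradually, so the
# service keeps serving traffic while the new image rolls out.
apps.patch_namespaced_deployment(name="webapp", namespace="demo", body=patch)
```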
This document discusses deploying WSO2 middleware on Kubernetes. It provides an overview of Kubernetes architecture and components, and how various Kubernetes features like pods, replication controllers, services, and overlay networking are used. It also describes WSO2 Docker images, Carbon reference architectures for Kubernetes, and the deployment workflow. Monitoring of Kubernetes cluster health using tools like cAdvisor, Heapster, Grafana and InfluxDB is also covered briefly.
This document provides an introduction to Kubernetes and Container Network Interface (CNI). It begins with an introduction to the presenter and their background. It then discusses the differences between VMs and containers before explaining why Kubernetes is needed for container orchestration. The rest of the document details the architecture of Kubernetes, including the master node, worker nodes, pods, labels, replica sets, deployments, services, and how to build a Kubernetes cluster. It concludes with a brief introduction to CNI and a call for questions.
KubeVirt is an add-on for Kubernetes that allows for virtual machines to be scheduled alongside containers. It provides a dedicated API for managing virtual machines as pods. The presentation discusses how KubeVirt could provide a migration path for workloads from VMs to containers and converge infrastructure by allowing OpenStack and other platforms to use KubeVirt and Kubernetes for scheduling. It also covers demoing KubeVirt and potential approaches for integrating it with OpenStack, such as through a Nova virt driver or compatible API.
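To make the "virtual machines managed through the Kubernetes API" idea concrete, the sketch below creates a KubeVirt VirtualMachine custom resource with the Kubernetes Python client's generic custom-objects API. This is a rough illustration only: it assumes KubeVirt is installed in the cluster, and the VM name, disk image, and memory request are placeholder values, not anything from the presentation.

```python
# Sketch: define a KubeVirt VirtualMachine as a Kubernetes custom resource.
# Assumes KubeVirt is installed; names, image, and sizes are illustrative.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "1Gi"}},
                },
                "volumes": [
                    {"name": "rootdisk",
                     "containerDisk": {"image": "quay.io/kubevirt/cirros-container-disk-demo"}}
                ],
            }
        },
    },
}

# The VM is created like any other Kubernetes object and scheduled alongside containers.
custom.create_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="default",
    plural="virtualmachines", body=vm)
```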
Containers for the Enterprise: Delivering OpenShift on OpenStack for Performa... by Stephen Gordon
Imagine being able to stand up thousands of tenants with thousands of apps, running thousands of Docker-formatted container images and routes, all on a self-healing cluster. Now, take that one step further with all of those images being updatable through a single upload to the registry, and with zero downtime. In this session, Steve Gordon of the Red Hat OpenStack Platform team will show you just that. Steve will walk through a recent benchmarking deployment using the Cloud Native Computing Foundation’s (CNCF) new 1,000 node cluster with OpenStack and Red Hat’s OpenShift Container Platform, the enterprise-ready Kubernetes for developers.
How to Integrate Kubernetes in OpenStack, by Meng-Ze Lee
The document discusses various open source projects for integrating Kubernetes and containers into OpenStack including:
- Kolla provides production-ready containers and deployment tools for operating OpenStack clouds using Kubernetes in a scalable and reliable way.
- Magnum allows deploying and managing container orchestration engines like Docker Swarm, Mesos and Kubernetes on OpenStack.
- Zun is an OpenStack service for managing containers on OpenStack using projects like Docker and Kuryr.
- Kuryr-Kubernetes provides networking between Kubernetes and OpenStack Neutron.
In this deck from the Docker Workshop at ISC 2015, Andreas Schmidt from Cassini Consulting describes Docker in a Nutshell
"As the newest flavor of Linux Containers, Docker gained a lot of momentum in the last 12 months. With a very convenient and open API-driven architecture Docker is able to help decrease the complexity of operations and increase the productivity of computation. During the last two years Andreas, Christian, and Wolfgang gained a lot of experience with Docker and were thrilled by its possible impact early on. Andreas started working with Docker in mid-2013 and is interested in developing tools for solving Enterprise IT requirements on networking and security. In 2014 he held talks and workshops about these topics. Christian started using Docker in 2013 to virtualize a complete HPC cluster stack and since then held multiple talks about how Docker might impact HPC. Wolfgang and his partner Burak Yenier introduced Docker as a corner-stone of the UberCloud Marketplace to drastically improve and simplify access to HPC cloud resources. UberCloud just announced their new containers for computational fluid dynamics software like Fluent, STAR-CCM+ and OpenFOAM."
Watch the video presentation: http://wp.me/p3RLHQ-enP
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Presentation delivered at LinuxCon China 2017.
Open vSwitch (OVS) is a multilayer open source virtual switch. OVS is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces. OVN is a new network virtualization project that brings virtual networking to the Open vSwitch user community. OVN includes logical switches and routers, security groups, and L2/L3/L4 ACLs, implemented on top of a tunnel-based overlay network.
In this presentation, we will provide an overview of the current state of the projects and their future plans, such as:
- The current state of the Linux, DPDK, and Hyper-V ports
- A status update on a portable BPF-based datapath
- The latest stateful and OpenFlow features available in OVS
- Performance and debugging enhancements to OVN
- OVN features under development such as ACL logging and encrypted tunnels
In this video from the Docker Workshop at ISC 2015, Christian Kniep from QNIB Solutions shows how he uses Docker in his efforts to provide an HPC software stack in a box, encapsulating each layer of the HPC stack within a Linux container.
Watch the video presentation: http://wp.me/p3RLHQ-eos
Learn more: http://qnib.org/about/
Join us to learn how to deploy your first containerized application on the most popular orchestration engine. You will understand the basic concepts of Kubernetes along with the terminology and the deployment architecture. We will show you everything from building a Docker image to going live with your application. Each attendee gets $300 credit to start using Google Container Engine!
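If you would rather drive that same first deployment from code instead of the console, a hedged sketch with the Kubernetes Python client follows. The image and object names are placeholders for whatever you built and pushed earlier in the tutorial, not the workshop's actual example.

```python
# Sketch: a first Deployment created programmatically instead of via kubectl or the GKE UI.
# The image name and labels are placeholders for your own registry image.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

container = client.V1Container(
    name="hello",
    image="gcr.io/my-project/hello:1.0",   # assumed image built earlier in the tutorial
    ports=[client.V1ContainerPort(container_port=8080)],
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```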
KubeCon EU 2016: "rktnetes": what's new with container runtimes and Kubernetes, by KubeAcademy
rkt is a modern container runtime, built for security, efficiency, and composability. Kubernetes is a modern cluster orchestration system. Kubernetes doesn't directly execute application containers but instead delegates to a container runtime, which is integrated at the kubelet (node) level. When Kubernetes first launched, the only supported container runtime was Docker - but in recent months, we've been hard at work integrating rkt as an alternative container runtime, aka "rktnetes". The goal of "rktnetes" is to have first-class integration between rkt and the kubelet, and to allow Kubernetes users to take advantage of some of rkt's unique features.
This talk will describe how rkt works, some of the features that make it unique as a container runtime, some of the process of integrating an alternative container runtime with Kubernetes, and the latest state of "rktnetes", including an introduction to rkt and its special/unique features.
Sched Link: http://sched.co/6BY7
Presentation by Ross Kukulinski at the Philadelphia Docker Meetup on September 27, 2016.
This talk will introduce Kubernetes, the industry standard system for automatic deployment, scaling, and management of containerized applications. We'll walk through key concepts and you will learn how to deploy a multi-tier application to Kubernetes in 10 minutes.
Networking-odl and ODL Neutron Northbound are the key components for integrating OpenStack Neutron and OpenDaylight. They are actively developed open source projects. The document encourages giving the integration a try, providing feedback, and contributing to help further the integration of OpenStack and OpenDaylight networking.
Demystifying the Nuts & Bolts of Kubernetes Architecture, by Ajeet Singh Raina
The document summarizes the architecture of Kubernetes. It uses an analogy of cargo ships and control ships to explain the different components. The master node components like the scheduler, ETCD cluster, and controller manager manage and monitor the worker nodes. The worker node components like Kubelet and kube-proxy run on each node and ensure pods and containers are running properly and can communicate. Pods are the basic building blocks that can contain one or more containers.
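To connect the cargo-ship analogy to something concrete, the small hedged sketch below asks the API server (the "control ship") for the nodes it manages and the pods running on them, using the Kubernetes Python client; the cluster it points at is whatever your local kubeconfig selects.

```python
# Sketch: inspect what the control plane knows about nodes and pods.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Worker nodes registered with the master.
for node in core.list_node().items:
    print("node:", node.metadata.name)

# Pods: each was placed on its node by the scheduler and is kept running by that node's kubelet.
for pod in core.list_pod_for_all_namespaces().items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name} -> {pod.spec.node_name}")
```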
A Primer on Kubernetes and Google Container Engine, by RightScale
Docker and other container technologies offer the promise of improved productivity and portability. Kubernetes is one of the leading cluster management systems for Docker and powers the Google Container Engine managed service.
-A review of key Linux container concepts
-The role of Kubernetes in deploying Docker-based applications
-Primer on Google Container Service
-How RightScale works with containers and clusters
This document discusses testing Kubernetes and OpenShift at scale. It describes installing large clusters of 1000+ nodes, using scalability test tools like the Kubernetes performance test repo and OpenShift SVT repo to load clusters and generate traffic. Sample results show loading clusters with thousands of pods and projects, and peaks in master node resource usage when loading and deleting hundreds of pods simultaneously.
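A toy version of that kind of cluster-loader run, creating many namespaces each holding a batch of pause pods, might look like the hedged Python sketch below; the counts and image are arbitrary and far smaller than the tests described, and the real tooling (the Kubernetes performance test repo and OpenShift SVT repo) does much more.

```python
# Toy load generator: many namespaces, each with a batch of pause pods.
# Counts and the pause image are illustrative; real SVT/cluster-loader runs are far larger.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

PROJECTS, PODS_PER_PROJECT = 10, 20

for p in range(PROJECTS):
    ns = f"svt-load-{p}"
    core.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name=ns)))
    for i in range(PODS_PER_PROJECT):
        pod = client.V1Pod(
            metadata=client.V1ObjectMeta(name=f"pause-{i}"),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="pause", image="registry.k8s.io/pause:3.9")
            ]),
        )
        core.create_namespaced_pod(namespace=ns, body=pod)
```

Deleting the namespaces afterwards removes all of the pods at once, which is the kind of bulk delete that produces the master resource-usage peaks mentioned above.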
The relationship between Docker, Kubernetes and CRI, by HungWei Chiu
Docker, Kubernetes, and CRI standards allow different container solutions to work together. Docker contributed to the OCI specifications for container images and runtimes. Kubernetes uses the Container Runtime Interface (CRI) to support multiple container runtimes like Docker, Containerd, and CRI-O. This allows Kubernetes to work with different container solutions while maintaining compatibility through open standards.
Kubernetes uses containers managed by container engines like Docker. It separates containers from the host machine using namespaces and cgroups for isolation. Docker containers share the host kernel and use aufs for the union filesystem. Virtual machines (VMs) run a full guest operating system with virtualization provided by hypervisors like KVM/QEMU. Containers are more lightweight than VMs because they share the host kernel, have smaller base images, launch faster, and use fewer resources.
Making the move to microservices, containers and orchestration? In this webinar we’ll show how to deploy and configure pods to ensure high availability and how pods connect to let the outside world reach your app.
In this webinar you'll learn:
* Kubernetes core concepts
* Masters, nodes and how to register nodes to clusters.
* What to monitor and visualize, and using what tools.
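As a companion to the webinar topics above, here is a hedged sketch of the "let the outside world reach your app" step: a NodePort Service placed in front of pods labelled app=hello, created with the Kubernetes Python client. The label selector and ports are assumptions chosen to match the earlier Deployment sketch.

```python
# Sketch: expose pods labelled app=hello outside the cluster with a NodePort Service.
# Labels and ports are assumptions matching the earlier Deployment sketch.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="hello"),
    spec=client.V1ServiceSpec(
        type="NodePort",                 # allocates a port on every node that forwards to the pods
        selector={"app": "hello"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
core.create_namespaced_service(namespace="default", body=service)
```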
The Kolla project provided an update on their Rocky release of containers and deployment tools for OpenStack clouds. Key points included eight new Docker images, Ceph BlueStore support, the ability to define resource limits per container, and more than 125 code contributors from a globally distributed community during the Rocky cycle. They encourage joining the Kolla community on IRC, in meetings, or by submitting bugs, reviews, and patches.
Cloud is a style of computing where scalable and elastic IT-related capabilities are provided as a service using Internet technologies. WSO2 delivers one of the best Public Cloud, Managed Cloud and Private Cloud offerings with the world-renowned WSO2 middleware platform. The WSO2 middleware stack is built from the ground up with an open architecture supporting cloud-native features such as multi-tenancy, cluster discovery, artifact distribution, dynamic load balancing, autoscaling and monitoring, so that it can run on any PaaS. WSO2 is now innovating on delivering a lightweight, ultra-fast Gateway and a Microservices Framework for providing unprecedented agility and scalability in the cloud with Docker and Kubernetes.
In this session, Imesh will walk you through the WSO2 Cloud strategy for delivering heterogeneous PaaS offerings and managed and public cloud platforms for building on-premise, public and hybrid cloud solutions.
Magnum is an OpenStack service that simplifies the deployment and management of container orchestration systems, such as Kubernetes and Docker Swarm, as first-class objects on OpenStack. It allows users to easily deploy and manage multiple container clusters on OpenStack that are isolated by tenant and project. Magnum uses Heat orchestration templates to deploy container clusters and integrates with other OpenStack services like Nova, Neutron, Keystone, and Cinder.
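For a feel of how Magnum exposes clusters as first-class OpenStack objects, a hedged sketch using openstacksdk's container-infrastructure (Magnum) proxy follows. The cloud name, cluster template reference, and node counts are assumptions, and the exact proxy methods can vary between SDK releases, so treat this as a sketch rather than a definitive recipe.

```python
# Sketch: ask Magnum for a Kubernetes cluster via openstacksdk.
# Cloud name, template reference, and sizes are assumptions; verify the
# container-infrastructure proxy methods against your SDK release.
import openstack

conn = openstack.connect(cloud="mycloud")   # reads clouds.yaml

coe = conn.container_infrastructure_management
cluster = coe.create_cluster(
    name="demo-k8s",
    cluster_template_id="k8s-template",     # an existing ClusterTemplate name or UUID (assumed)
    master_count=1,
    node_count=3,
)
# Magnum drives Heat to build the cluster on Nova/Neutron/Cinder resources.
print("requested cluster:", cluster.id)
```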
This document provides an overview of OpenStack Nova's architecture and code structure. It begins with an introduction to Nova's mission to provide scalable on-demand access to compute resources. It then covers Nova's core components, data flows, and code organization. Key aspects summarized include Nova's use of Cells to partition compute nodes, its facade design pattern for APIs, and its reliance on common OpenStack libraries like Oslo and Stevedore for configuration, logging, and extensibility.
GUTS is a workload migration engine that automatically migrates existing workloads and virtual machines from previous generation virtualization platforms to OpenStack. It supports migrating VMs, volumes, networks, users, and other resources between OpenStack environments or from platforms like VMware to OpenStack. GUTS has API, scheduler, and migration services to orchestrate the migrations. It can convert disk formats and manage hypervisor-specific tools during the migration process. Future plans include supporting more hypervisors and resource types.
Kubespray and Ansible can be used to automate the installation of Kubernetes in a production-ready environment. Kubespray provides tools to configure highly available Kubernetes clusters across multiple Linux distributions. Ansible is an IT automation tool that can deploy software and configure systems. The document then provides a six-step guide for installing Kubernetes on Ubuntu using kubeadm: installing Docker, kubeadm, kubelet and kubectl, disabling swap, configuring system parameters, initializing the cluster with kubeadm, and joining nodes. It also briefly explains Kubernetes architecture, including the master node, worker nodes, addons, CNI, CRI, CSI, and key concepts like pods, deployments, and networking.
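The six kubeadm steps summarized above could be scripted roughly as in the hedged Python sketch below. The package names and pod CIDR are the commonly documented defaults rather than values taken from this deck, the script must run as root on an Ubuntu node, and a real setup would pin versions and install a CNI plugin afterwards.

```python
# Rough automation of the kubeadm bootstrap steps (run as root on Ubuntu).
# Package names and pod CIDR are commonly documented defaults, used here as assumptions.
import subprocess

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

run("apt-get update")
run("apt-get install -y docker.io kubeadm kubelet kubectl")   # steps 1-2: runtime + tools
run("swapoff -a")                                              # step 3: kubelet requires swap off
run("sysctl -w net.bridge.bridge-nf-call-iptables=1")          # step 4: bridged traffic visible to iptables
run("kubeadm init --pod-network-cidr=10.244.0.0/16")           # step 5: initialize the control plane
# Step 6: on each worker node, run the `kubeadm join <endpoint> --token ...`
# command printed by `kubeadm init` above.
```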
How to integrate Kubernetes in OpenStack: You need to know these projects, by inwin stack
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications, while OpenStack is a free and open-source software platform for cloud computing, networking, and storage. The document discusses different ways to integrate Kubernetes and OpenStack, including using Zun to provide an OpenStack API for launching and managing containers, Magnum to offer container orchestration engines for deploying and managing containers, Kolla and Kolla Kubernetes to deploy OpenStack on Kubernetes, Kuryr Kubernetes to bridge networking models between containers and OpenStack, and Stackube which uses Kubernetes as the compute fabric controller instead of Nova.
This document provides an overview of Kubernetes 101. It begins with asking why Kubernetes is needed and provides a brief history of the project. It describes containers and container orchestration tools. It then covers the main components of Kubernetes architecture including pods, replica sets, deployments, services, and ingress. It provides examples of common Kubernetes manifest files and discusses basic Kubernetes primitives. It concludes with discussing DevOps practices after adopting Kubernetes and potential next steps to learn more advanced Kubernetes topics.
OpenStackTage Cologne - OpenStack at 99.999% availability with Ceph, by Danny Al-Gaaf
High availability is a very important and frequently discussed topic for clouds at the infrastructure level. There are several concepts for providing an HA-ready OpenStack, and software-defined storage like Ceph is also highly available with no single point of failure.
But what about HA if you bring OpenStack and Ceph together? What are the dependencies between them and how do they influence the availability of your cloud instances from the tenant or application point of view?
How does the design of your classic highly available data center, e.g. with two fire compartments, power backup, and redundant power and network lines, impact your cluster setup? There are many different scenarios of potential failures. What does this mean for building and managing failure zones, especially with technologies like Ceph, which need to be able to build a quorum to keep running?
Deep Dive: OpenStack Summit (Red Hat Summit 2014), by Stephen Gordon
This deck begins with a high-level overview of where OpenStack Compute (Nova) fits into the overall OpenStack architecture, as demonstrated in Red Hat Enterprise Linux OpenStack Platform, before illustrating how OpenStack Compute interacts with other OpenStack components.
The session will also provide a grounding in some common Compute terminology and a deep-dive look into key areas of OpenStack Compute, including the:
Compute APIs.
Compute Scheduler.
Compute Conductor.
Compute Service.
Compute Instance lifecycle.
Intertwined with the architectural information are details on horizontally scaling and dividing compute resources as well as customization of the Compute scheduler. You’ll also learn valuable insights into key OpenStack Compute features present in OpenStack Icehouse.
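To ground the Compute terminology, here is a hedged openstacksdk sketch of the first step of the instance lifecycle: booting a server through the Compute API, which then flows through the conductor and scheduler to a compute node. The cloud, image, flavor, and network names are assumptions for illustration.

```python
# Sketch: boot a Nova instance through the Compute API with openstacksdk.
# Cloud, image, flavor, and network names are assumptions for illustration.
import openstack

conn = openstack.connect(cloud="mycloud")

image = conn.compute.find_image("cirros")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
# The request is validated by nova-api, persisted via nova-conductor, placed by
# nova-scheduler, and finally built by nova-compute on the chosen host.
server = conn.compute.wait_for_server(server)
print(server.status)   # expect ACTIVE
```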
Deep dive into highly available OpenStack architecture, OpenStack Summit Va..., by Arthur Berezin
This document summarizes a presentation on highly available OpenStack architecture. It discusses using Pacemaker and HAProxy to make the enabling services highly available. Shared databases like MariaDB Galera and message queues like RabbitMQ are made highly available. Individual OpenStack services like Keystone, Glance, Cinder, Nova, Neutron, and Horizon are made highly available through active-active clustering, load balancing, and fencing. The presentation covers topologies for controller, compute, network, and storage nodes. It provides examples of making individual services highly available and discusses ongoing work and future plans to improve high availability in OpenStack.
Do you think that Nova, Cinder, Heat, Ceilometer, and Neutron are all references to global warming and looming apocalypse? For all those who come to the OpenStack community and wonder what all the fuss is about, this quick introduction will answer your many questions. It includes a short history of the largest Open Source project in history and will touch on the basic OpenStack components, so you will be prepared the next time someone mentions Keystone, Nova and Swift in the same sentence.
This session was presented by Beth Cohen at the OpenStack meetup on Feb 19th, 2014 in Boston. Beth works for Verizon developing cool Cloud based products that she can't talk about without a strict NDA. She is a technical leader with over 25 years of experience architecting leading-edge system infrastructures and managing complex projects in the telecom, manufacturing, financial services, government, and technology industries. She has been involved in building some of the world's largest OpenStack architectures and has way too much fun at OpenStack Summits!
The document summarizes new features in OpenStack Liberty. Key updates include improved API micro-versioning in Compute, pluggable IP address management and role-based access control in Networking, and splitting Ceilometer into multiple sub-projects for metrics, alarms and events. Emerging projects like Manila, Magnum and Zaqar also see enhancements around shared file systems, container orchestration and messaging.
Do you think of cheetahs, not RabbitMQ, when you hear the word Swift? Think a Nova is just a giant exploding star, not a cloud compute engine? This deck (presented at the OpenStack Boston meetup) provides an introduction that will answer your many questions. It covers the basic components, including Nova, Swift, Cinder, Keystone, Horizon and Glance.
This document provides an overview of OpenStack, an open source cloud computing platform. It discusses the history and origins of OpenStack at NASA and Rackspace, describes some of the core components including Nova (compute), Swift (object storage), Glance (image service), Cinder (block storage), Quantum/Neutron (networking), Keystone (identity), and Dashboard (web UI). It also outlines some key features of these components such as distributed architecture, API access, security groups, floating IPs, and pluggable networking backends. Finally, it encourages contributions to the OpenStack community through coding, documentation, translation, and other assistance.
This document provides a conceptual overview of the OpenStack architecture in 3 sentences or less:
OpenStack is an open source cloud operating system that consists of a set of interrelated services that are written in Python and provide APIs to interact with components like compute, networking, storage and identity. The core components include compute (Nova), object storage (Swift), block storage (Cinder), image service (Glance), identity (Keystone), networking (Quantum), and dashboard (Horizon), which provides a web-based user interface. Each component communicates with others via APIs to provide infrastructure as a service capabilities.
Cloud Native Night April 2016, Munich: Talk by Josef Adersberger (@adersberger, CTO at QAware).
Join our Meetup: www.meetup.com/cloud-native-muc
Abstract: This talk is about the Cloud Native Stack, cluster orchestration with Kubernetes and the QAware Cloud Native Landscape.
A few quick points for those who may be attending an OpenStack Summit for the first time. We are excited to see you in Barcelona, Spain October 25-28, 2016.
OpenStack is an open source cloud computing platform that consists of a series of related projects that control large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface. It is developed as an open source project by an international community of developers and corporate sponsors and supports both private and public cloud deployments. Major components include compute (Nova), object storage (Swift), image service (Glance), networking (Quantum), and an identity service (Keystone).
2 Day Bootcamp for OpenStack--Cloud Training by Mirantis (Preview)
Mirantis, the Global Engineering Services leader for OpenStack™, presents a 2-day Bootcamp for OpenStack.
www.mirantis.com/training
This two-day intensive course provides hands-on technical training for OpenStack aimed at system administrators and IT professionals looking to get started on an OpenStack Cloud deployment. Each of the two days will consist of lecture, demos and group exercises. Topics include:
• OpenStack Overview & Architecture: Project goals and use cases, basic operating and deployment principles
• Cloud Usage Patterns: OpenStack codebase overview; creating networks, tenants, roles, troubleshooting; Nexenta Volume Driver
• In Production: Deploying OpenStack for real-world use, and practice of OpenStack operation on multiple nodes
• Swift Object Storage: use cases, architecture, capabilities, configuration, security and deployment
• Advanced Topics: Software Defined Networking, deployment and issues workshop, VMWare/OpenStack comparison
PRE-REQUISITES: Comfortable with Linux CLI, understanding of virtualization & hypervisors, Some experience with Linux networking
All course materials will be provided by Mirantis, including access to shared compute resources for labs. A light breakfast and lunch will be available to all course participants.
Mirantis instructors are active code committers to the OpenStack project, with proven experience building OpenStack clouds in the real world. In parallel to delivering expert training, they also consult for some of the notable global companies using OpenStack – including Cisco, NASA, Dell and Internap.
OpenStack is an open source cloud project and community with broad commercial and developer support. OpenStack is currently developing two interrelated technologies: OpenStack Compute and OpenStack Object Storage. OpenStack Compute is the internal fabric of the cloud creating and managing large groups of virtual private servers and OpenStack Object Storage is software for creating redundant, scalable object storage using clusters of commodity servers to store terabytes or even petabytes of data. In this tutorial, Bret Piatt will explain how to deploy OpenStack Compute and Object Storage, including an overview of the architecture and technology requirements.
Presentation given at Open Source Summit Japan 2016 about the state of cloud native technology (Cloud Native Computing Foundation) and the standardization of container technology (Open Container Initiative).
Presentation of OpenStack survey to Internet Research Lab at National Taiwan University, Taiwan. OpenStack framework and architecture overview. (ppt slide for download.) Materials collected from various resources, not originally produced by the author.
Briefly explained Nova, Swift, Glance, Keystone, and Quantum.
This document provides an introduction and overview of Kubernetes presented by Milos Zubal at a technology meetup. It begins with background on Milos and an outline of the topics to be covered, including the big picture of Kubernetes, its history and main features, containers, Kubernetes architecture, main components like pods and services, and deployment options. It then goes into more detail explaining each major Kubernetes concept like replicas, services, volumes, deployments and other primitives. The presentation aims to cover all of this in 30-35 minutes and concludes with questions and additional resources.
This document discusses scaling up logging and metrics in OpenShift Container Platform (OCP). It provides an overview of the logging stack including Elasticsearch, Fluentd, and Kibana. It also summarizes the metrics stack including Cassandra, Heapster, and Hawkular. The document outlines testing done to evaluate limits and scaling of these components on large OCP clusters with thousands of nodes and pods. It provides recommendations for configuring and deploying the infrastructure to support high throughput logging and metrics collection.
Sanger OpenStack presentation, March 2017, by Dave Holland
A description of the Sanger Institute's journey with OpenStack to date, covering RHOSP, Ceph, S3, user applications, and future plans. Given at the Sanger Institute's OpenStack Day.
Brick multiplexing allows multiple storage bricks in GlusterFS to be managed by a single process, reducing resource usage. Performance testing showed no degradation with brick multiplexing enabled, and it allows faster scaling to support more persistent volumes. Memory usage is lower with brick multiplexing, allowing more volumes to be supported on the same hardware. Brick multiplexing improves scalability and is recommended to be left enabled.
This presentation talks about how to use GlusterFS in OpenShift to provide storage for application pods. If you need more details, please refer to http://humblec.com/persistent-volume-and-persistent-volume-claim-in-openshift-and-kubernetes-using-glusterfs-volume-plugin/
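A hedged sketch of the consuming side, a PersistentVolumeClaim that requests storage from a GlusterFS-backed StorageClass, is shown below using the Kubernetes Python client. The StorageClass name and requested size are assumptions; the actual provisioner setup is covered in the presentation and the linked post.

```python
# Sketch: claim GlusterFS-backed storage for an application pod.
# The StorageClass name "glusterfs-storage" and the size are assumptions.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],          # Gluster volumes can be shared by many pods
        storage_class_name="glusterfs-storage",
        resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```

A pod then mounts the claim by name in its volumes section, exactly as it would any other persistent volume claim.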
This document summarizes feedback from operator sessions at the Mitaka/Tokyo Design Summit and OpenStack London Meetup on November 18, 2015. It discusses upgrades being difficult due to unknown compatibility and integration issues. It also provides an overview of the OpenStack-Ansible project including its history, deliverables for the Liberty release, and plans for the Mitaka release.
Deploying containers and managing them on multiple Docker hosts, Docker Meetu..., by dotCloud
Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.
This document summarizes what's new in Ceph. Key updates include improved management and usability features like simplified configuration, hands-off operation, and device health tracking. It also covers new orchestrator capabilities for Kubernetes and container platforms, continued performance optimizations, and multi-cloud capabilities like object storage federation across data centers and clouds.
Spark on Kubernetes - Advanced Spark and Tensorflow Meetup - Jan 19 2017 - An..., by Chris Fregly
https://www.meetup.com/Advanced-Spark-and-TensorFlow-Meetup/events/227622666/
Title: Spark on Kubernetes
Abstract: Engineers across several organizations are working on support for Kubernetes as a cluster scheduler backend within Spark. While designing this, we have encountered several challenges in translating Spark to use idiomatic Kubernetes constructs natively. This talk is about our high level design decisions and the current state of our work.
Speaker:
Anirudh Ramanathan is a software engineer on the Kubernetes team at Google. His focus is on running stateful and batch workloads. Previously, he worked on GGC (Google Global Cache) and prior to that, on the infrastructure team at NVIDIA.
Kubernetes @ Squarespace (SRE Portland Meetup October 2017), by Kevin Lynch
In this presentation I talk about our motivation to converting our microservices to run on Kubernetes. I discuss many of the technical challenges we encountered along the way, including networking issues, Java issues, monitoring and alerting, and managing all of our resources!
Dockerizing the Hard Services: Neutron and Nova, by clayton_oneill
Talk about the benefits and pitfalls involved in successfully running complex services like Neutron and Nova inside of Docker containers.
Topics include:
* What magic incantations are needed to run these services at all?
* How to prevent HA router failover on service restarts.
* How to prevent network namespaces from breaking everything.
* Bonus: How network namespace fixes also helped fix Cinder NFS backend
This document provides an overview of OpenShift Container Platform. It describes OpenShift's architecture including containers, pods, services, routes and the master control plane. It also covers key OpenShift features like self-service administration, automation, security, logging, monitoring, networking and integration with external services.
What's New with Ceph - Ceph Day Silicon Valley, by Ceph Community
This document discusses what's new in Ceph, including priorities around community, management/usability, performance of core Ceph components like RADOS, RBD, RGW and CephFS, and container platforms. Specific updates mentioned include centralized configuration in Mimic, Project Crimson reimplementing the OSD data path, Msgr2 network protocol, automated management features, telemetry/insights, performance optimizations, and the continued development of the Ceph dashboard.
Thomas Goirand gave an overview of OpenStack including Nova, Swift, Glance, and Keystone. Nova is responsible for managing virtual machine instances, Swift provides scalable object storage, Glance stores VM images, and Keystone handles unified authentication. The presentation discussed Nova packaging, dependencies, high availability setup, and using it with euca2ools. Swift's general principles of object storage and replication across multiple servers were also summarized.
Ceph Pacific is a major release of the Ceph distributed storage system scheduled for March 2021. It focuses on five key themes: usability, performance, ecosystem integration, multi-site capabilities, and quality. New features in Pacific include automated upgrades, improved dashboard functionality, snapshot-based CephFS mirroring, per-bucket replication in RGW, and expanded telemetry collection. Looking ahead, the Quincy release will focus on continued improvements in these areas such as resource-aware scheduling in cephadm and multi-site monitoring capabilities.
QNIBTerminal Plus InfiniBand - Containerized MPI Workloads, by inside-BigData.com
In this deck, Christian Kniep presents: QNIBTerminal Plus InfiniBand - Containerized MPI Workloads.
Watch the video presentation: http://wp.me/p3RLHQ-dvM
The document discusses containerizing MPI workloads using Docker and QNIBTerminal. It provides an overview of Docker, describes the QNIBTerminal testbed which runs an HPCG benchmark on multiple Linux distributions within Docker containers, and presents results showing a low performance overhead for containerized workloads compared to bare metal. Future work is discussed around optimizing containers for HPC and benchmarking real-world applications.
Toronto RHUG: Container-native virtualization, by Stephen Gordon
November 2018 presentation covering Container-native virtualization, enabling OpenShift/Kubernetes as a common platform for application containers and virtual machines.
KubeVirt (Kubernetes and Cloud Native Toronto), by Stephen Gordon
KubeVirt enables running virtual machines alongside containers on Kubernetes clusters. It allows virtual machines to be scheduled and managed just like containers. KubeVirt focuses on enabling existing virtualized workloads to run on Kubernetes and integrates features like storage, networking, metrics, and monitoring. Example use cases include starting with a virtual machine, building new services on VMs and containers together, and decomposing existing virtualized workloads.
OpenStackTO: Friendly coexistence of Virtual Machines and Containers on Kuber..., by Stephen Gordon
KubeVirt is intended to provide a convergence point for the data center of the future using Kubernetes as an infrastructure fabric for both application container and virtual machine workloads. Using a unified management approach simplifies deployments, allows for better resource utilization, and supports different workloads in a more optimal way. This session will outline how the Kubevirt project seeks to achieve this while using the extensible nature of Kubernetes in a way that provides a developer workflow that is as consistent as possible with the same patterns used for working with application containers.
Deploying Containers at Scale on OpenStack, by Stephen Gordon
Imagine being able to stand up thousands of tenants with thousands of apps, running thousands of Docker-formatted container images and routes, all on a self-healing cluster. Now, take that one step further with all of those images being updatable through a single upload to the registry, and with zero downtime. In this session, Steve Gordon of the Red Hat OpenStack Platform team will show you just that. Steve will walk through a recent benchmarking deployment using the Cloud Native Computing Foundation’s (CNCF) new 1,000 node cluster with OpenStack and Red Hat’s OpenShift Container Platform, the enterprise-ready Kubernetes for developers.
A Container Stack for Openstack - OpenStack Silicon Valley, by Stephen Gordon
OpenStack is an Infrastructure as a Service offering that provides a powerful abstraction layer for interacting with your datacenter infrastructure, supported by a wide array of pluggable drivers for existing physical and virtual infrastructure investments. In this session, you’ll learn how OpenStack is evolving to integrate with the Linux, Docker, Kubernetes stack to provide the ideal infrastructure platform for modern containerized applications. You’ll learn how you can modernize application delivery using the Linux, Docker, Kubernetes stack provided by Red Hat while seamlessly using the authentication, network, and storage infrastructure services provided by an underlying OpenStack cloud.
Dude, This Isn't Where I Parked My Instance? by Stephen Gordon
This document discusses moving OpenStack instances between compute nodes. It begins by asking what is being moved (the guest configuration, storage, and state) and then discusses reasons for moving instances like node maintenance or capacity management. It describes different mechanisms for moving instances, including evacuate, migrate, live migration, and helpers for moving all instances on a node. It then covers new enhancements in OpenStack Liberty and Mitaka related to long running live migrations, including scaling the downtime and adding configuration options to control concurrent operations and timeouts.
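As an illustration of one of those mechanisms, here is a hedged openstacksdk sketch that asks Nova to live-migrate an instance off its current host. The cloud and server names are assumptions, and the call requires admin credentials; evacuate and cold migration use different API calls not shown here.

```python
# Sketch: live-migrate an instance off its current compute node.
# Requires admin credentials; cloud and server names are assumptions.
import openstack

conn = openstack.connect(cloud="mycloud")

server = conn.compute.find_server("demo-instance")

# host=None lets the scheduler pick a destination; block_migration="auto" lets
# Nova choose between shared-storage and block migration.
conn.compute.live_migrate_server(server, host=None, block_migration="auto")
```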
Compute 101 - OpenStack Summit Vancouver 2015, by Stephen Gordon
OpenStack Compute (Nova), has been a core component of OpenStack since the original Austin release in 2010. In the intervening years development has proceeded at a rapid pace adding support for new virtualization technologies and exposing additional features. Learn how Compute fits into the OpenStack architecture, and how it interacts with other OpenStack components and the hypervisors it manages.
This document summarizes OpenStack Compute features related to the Libvirt/KVM driver, including updates in Kilo and predictions for Liberty. Key Kilo features discussed include CPU pinning for performance, huge page support, and I/O-based NUMA scheduling. Predictions for Liberty include improved hardware policy configuration, post-plug networking scripts, further SR-IOV support, and hot resize capability. The document provides examples of how these features can be configured and their impact on guest virtual machine configuration and performance.
- OpenStack is an open-source cloud computing platform that provides infrastructure as a service capabilities. It allows workloads to scale out across thousands of virtual machines.
- The document discusses challenges faced by service providers in adopting OpenStack, evolving workload types, and key features of the OpenStack Juno release including improved support for bare metal provisioning, NUMA awareness, and networking functionality.
- The OpenStack community summit in Atlanta saw growing attendance and increased involvement from large enterprise users in areas like network functions virtualization.
Divide and conquer: resource segregation in the OpenStack cloud, by Stephen Gordon
This document discusses resource segregation techniques in OpenStack clouds. It describes how infrastructure resources like compute hosts can be logically grouped using regions, cells, host aggregates, and availability zones. It also discusses workload segregation using server groups that define affinity and anti-affinity rules for instance placement. The goals of segregation include isolating workloads, ensuring high availability, and enabling horizontal scaling of the infrastructure.
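A hedged sketch of the workload-segregation piece, creating an anti-affinity server group and booting an instance into it so that members land on different hosts, is shown below with openstacksdk. The cloud, group, image, flavor, and network names are assumptions chosen for illustration.

```python
# Sketch: anti-affinity server group so group members land on different compute hosts.
# Cloud, group, image, flavor, and network names are assumptions.
import openstack

conn = openstack.connect(cloud="mycloud")

group = conn.compute.create_server_group(
    name="ha-web", policies=["anti-affinity"])

# The cloud-layer create_server resolves names and attaches the instance to the group,
# so the scheduler places it away from other members of "ha-web".
server = conn.create_server(
    name="web-1",
    image="cirros",
    flavor="m1.small",
    network="private",
    group=group.id,
    wait=True,
)
print(server.status)
```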
A brief introduction to Publican for members of the OpenStack documentation community. Originally presented at the OpenStack documentation bootcamp on the 10th of September 2013
Deltacloud is an open source project that provides a standardized API for managing and deploying applications across multiple public and private clouds. It aims to abstract differences between clouds and provide tools for multi-cloud management. Deltacloud includes drivers for major cloud providers, a REST API, and the Aeolus Conductor for centralized management of clouds and resources using the Deltacloud API.
1. KUBERNETES AND OPENSTACK AT SCALE
Will it blend?
Stephen Gordon (@xsgordon)
Principal Product Manager, Red Hat
May 8th, 2017
2. KUBERNETES AND OPENSTACK AT SCALE #OPENSTACKSUMMIT #REDHAT
ONCE UPON A TIME...
Part 1
● 1000 OpenShift Container Platform 3.3 / Kubernetes 1.3 nodes on OpenStack infrastructure
● Presented methodology and results in Barcelona:
○ https://www.cncf.io/blog/2016/08/23/deploying-1000-nodes-of-openshift-on-the-cncf-cluster-part-1/
● Goals were:
○ Push limits
○ Identify best practices
○ Document best practices
○ Fix issues
3. KUBERNETES AND OPENSTACK AT SCALE #OPENSTACKSUMMIT #REDHAT
FOR OUR NEXT TRICK!
Part 2
● Goals:
○ 2048 OpenShift Container Platform 3.5 / Kubernetes 1.5 nodes on OpenStack infrastructure
○ Network ingress tier saturation test
○ Overlay2 graph driver w/ SELinux test
○ Persistent volume scalability and performance test of Container Native Storage (glusterfs)
4. KUBERNETES AND OPENSTACK AT SCALE #OPENSTACKSUMMIT #REDHAT
KUBERNETES SCALABILITY SIG
Scalability SIG SLAs:
● API responsiveness
○ 99% of calls return in < 1 s
● Pod startup time
○ 99% of pods start within 5s*
The SIG also defines a number of other primary and derived metrics.
* With pre-pulled images
5. KUBERNETES AND OPENSTACK AT SCALE #OPENSTACKSUMMIT #REDHAT
A CONTAINER STACK FOR OPENSTACK
OPENSTACK + KUBERNETES
A wild solution appears...
Consumption of resources: easily access new environments to quickly build new apps and move on.
Exposition of resources: provide the necessary environments to developers in minutes, not weeks or months.
6. KUBERNETES AND OPENSTACK AT SCALE #OPENSTACKSUMMIT #REDHAT
A CONTAINER STACK FOR OPENSTACK
A wild solution appears...
OPENSTACK + OPENSHIFT
Consumption of resources: an integrated platform to run, orchestrate, monitor, and scale containers, built around Kubernetes and Docker.
Exposition of resources: provide the necessary environments to developers in minutes, not weeks or months.
10. KUBERNETES AND OPENSTACK AT SCALE #OPENSTACKSUMMIT #REDHAT
HOW TO TEST?
System Verification Test suite (SVT)
● Red Hat OpenShift Performance and Scalability team’s upstream test suites:
○ Application Performance
○ Application Scalability
○ OpenShift Performance
○ OpenShift Scalability (incl. cluster-loader)
○ Networking Performance
○ Reliability/Longevity
● Also includes some additional tools e.g. image provisioner
● https://github.com/openshift/svt
11. KUBERNETES AND OPENSTACK AT SCALE #OPENSTACKSUMMIT #REDHAT
ARCHITECTURE
Baremetal Cluster (100 nodes)
OpenShift-on-OpenStack Cluster (2048 nodes)
12. KUBERNETES AND OPENSTACK AT SCALE #OPENSTACKSUMMIT #REDHAT
ARCHITECTURE (cont.)
● Software:
○ Red Hat OpenStack Platform 10, based on “Newton”
○ OpenShift Container Platform 3.5 (built around K8S 1.5)
○ Red Hat Enterprise Linux 7.3 (mostly…)
● Deployment:
○ Deployed OpenStack + Ceph using TripleO
○ Deployed OpenShift Container Platform using openshift-ansible.
● Applying previous learnings
○ Storage architecture
○ Image formatting
○ Pre-baked images (see image_provisioner tool)
14. KUBERNETES AND OPENSTACK AT SCALE #OPENSTACKSUMMIT #REDHAT
NETWORK INGRESS/ROUTING TIER
Testing HAProxy Performance
● The load generator itself runs in a pod.
● Added SNI and TLS variants to the test suite.
● Configuration by passing in ConfigMaps.
● Focused on HTTP with keepalive and TLS terminated at the edge.
projects:
  - num: 1
    basename: centos-stress
    ifexists: delete
    tuning: default
    templates:
      - num: 1
        file: ./content/quickstarts/stress/stress-pod.json
        parameters:
          - RUN: "wrk"                  # which app to execute inside WLG pod
          - RUN_TIME: "120"             # benchmark run-time in seconds
          - PLACEMENT: "test"           # placement of the WLG pods based on node label
          - WRK_DELAY: "100"            # maximum delay between client requests in ms
          - WRK_TARGETS: "^cakephp-"    # extended RE (egrep) to filter target routes
          - WRK_CONNS_PER_THREAD: "1"   # how many connections per worker thread/route
          - WRK_KEEPALIVE: "y"          # use HTTP keepalive [yn]
          - WRK_TLS_SESSION_REUSE: "y"  # use TLS session reuse [yn]
          - URL_PATH: "/"               # target path for HTTP(S) requests
15. KUBERNETES AND OPENSTACK AT SCALE #OPENSTACKSUMMIT #REDHAT
NETWORK INGRESS/ROUTING TIER
Testing HAProxy Performance (cont.)
● 1p-mix-cpu*: nbproc=1, run on any CPU
● 1p-mix-cpu0: nbproc=1, run on core 0
● 1p-mix-cpu1: nbproc=1, run on core 1
● 1p-mix-cpu2: nbproc=1, run on core 2
● 1p-mix-cpu3: nbproc=1, run on core 3
● 1p-mix-mc10x: nbproc=1, run on any core, sched_migration_cost=5000000
● 2p-mix-cpu*: nbproc=2, run on any core
● 4p-mix-cpu02: nbproc=4, run on core 2
17. KUBERNETES AND OPENSTACK AT SCALE #OPENSTACKSUMMIT #REDHAT
NETWORK PERFORMANCE
Testing OpenShift-sdn (OVS+VXLAN) Performance
● OpenShift includes and uses OpenShift-sdn (OpenvSwitch + VXLAN) by
default:
○ Provides full multi-tenancy
○ Is fully pluggable (as is ingress/routing tier)
○ Supports all four footprints (physical/virtual/private/public)
● Web-based workloads are mostly transactional
● Focused microbenchmark on a ping-pong test of varying payload sizes
18. KUBERNETES AND OPENSTACK AT SCALE #OPENSTACKSUMMIT #REDHAT
NETWORK PERFORMANCE
Testing OpenShift-sdn (OVS+VXLAN) Performance (cont.)
● Tested a mix of payload sizes and stream counts.
● tcp_rr-XXB-Yi
○ XX = # of bytes
○ Y = # of instances (streams)
● Slimmed-down version of RFC 2544
20. KUBERNETES AND OPENSTACK AT SCALE #OPENSTACKSUMMIT #REDHAT
OVERLAY2 w/ SELINUX
Next on storage wars...
● Until recently RHEL used Device Mapper for docker’s storage graph driver
○ Overlay support added in RHEL 7.2
○ Overlay2 support added in RHEL 7.3
○ Overlay2 support w/ SELinux added upstream and expected in RHEL 7.4
■ https://lkml.org/lkml/2016/7/5/409
○ Device Mapper remains default in RHEL for now, Overlay2 default in Fedora 26
■ https://fedoraproject.org/wiki/Changes/DockerOverlay2
● Let’s try it out!
21. KUBERNETES AND OPENSTACK AT SCALE #OPENSTACKSUMMIT #REDHAT
OVERLAY2 w/ SELINUX
Results
● Single base image for all pods
● 240 pods on the node (rate-limited creation)
● Reasonable memory savings
23. KUBERNETES AND OPENSTACK AT SCALE #OPENSTACKSUMMIT #REDHAT
CONTAINER NATIVE STORAGE
Approach
● OpenShift Container Platform supports a wide variety of volume providers via the standard Kubernetes volume interface
● Red Hat Container Native Storage is a Gluster-based persistent volume provider deployed on OpenShift
● Used the NVMe disks as “bricks” for Gluster, exposed 1G persistent volumes
● Container Native Storage nodes marked unschedulable for other OpenShift pods
● Ran throughput numbers for create/delete operations, as well as API parallelism
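As a hedged illustration of what the dynamic provisioning side of this looks like (the heketi endpoint, object names, and the Kubernetes 1.5-era API version and annotation are assumptions for the sketch, not the exact objects used in the test):

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-cns                                  # hypothetical name
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage.example.com:8080"    # heketi REST endpoint (placeholder)
  restauthenabled: "false"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cns-claim-1                                    # hypothetical name
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterfs-cns
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                                     # matches the 1G volumes described above

Each claim of this shape goes from submitted to Bound in roughly constant time, which is the behaviour shown in the results on the next slide.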
24. KUBERNETES AND OPENSTACK AT SCALE #OPENSTACKSUMMIT #REDHAT
CONTAINER NATIVE STORAGE
Results
● CNS allocated volumes in constant time
● Consistent with results for other persistent volume providers
26. KUBERNETES AND OPENSTACK AT SCALE #OPENSTACKSUMMIT #REDHAT
NEXT STEPS
To infinity, and beyond!
● Filed 40+ bugs across a variety of projects and components
● Scaling and Performance Guide, new with OpenShift Container Platform 3.5
● Getting Involved
○ “Kubernetes Ops on OpenStack” forum session
■ Wednesday, May 10, 1:50pm-2:30pm
■ Hynes Convention Center MR102
○ K8S SIG Scalability
○ K8S SIG OpenStack
27. KUBERNETES AND OPENSTACK AT SCALE #OPENSTACKSUMMIT #REDHAT
REFERENCES
● Part 1: https://www.cncf.io/blog/2016/08/23/deploying-1000-nodes-of-openshift-on-the-cncf-cluster-part-1/
● Part 2: https://www.cncf.io/blog/2017/03/28/deploying-2048-openshift-nodes-cncf-cluster-part-2/
● Overlay2 and Device Mapper: https://developers.redhat.com/blog/2016/10/25/docker-project-can-you-have-overlay2-speed-and-density-with-devicemapper-yep/
● Red Hat Performance and Scale Trello: https://trello.com/b/M1bpo55E/scalability
Goals were:
Push the system to its limit, including ensuring we can reproduce work done in the community with Kubernetes upstream, incl. SIG Scalability (will come to this in a minute)
Identify config changes and best practices to increase capacity and performance
Document and file issues upstream and send patches where applicable
Saturation test for OpenShift’s HAProxy-based network ingress tier
Overlay2 graph driver and SELinux support from kernel v4.9
Persistent volume scalability and performance using Red Hat's Container-Native Storage (CNS) product (Gluster-based)
Saturation test for OpenShift’s integrated container registry and CI/CD pipeline
Primary metrics include:
Max cores per cluster
Max pods per core
Management overhead per node
Management overhead per cluster
Derived metrics include:
Max cores per node
Max pods per machine
Max machines per cluster
Max pods per cluster
End-to-end pod startup time
Scheduler throughput
Max cluster saturation time
Images are pre-pulled because pulling introduces a high degree of variability (network throughput, image size, etc.) between images that is unrelated to Kubernetes performance.
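A minimal sketch of a measurement pod under that assumption (the pod name is hypothetical and the pause image is just one commonly pre-pulled example); imagePullPolicy: IfNotPresent keeps image pulls out of the startup-time measurement:

apiVersion: v1
kind: Pod
metadata:
  name: startup-probe-0                                  # hypothetical name
spec:
  containers:
    - name: pause
      image: gcr.io/google_containers/pause-amd64:3.0    # small image, pre-pulled on every node beforehand
      imagePullPolicy: IfNotPresent                      # use the local copy; never re-pull during the test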
Why IaaS and PaaS
Exposition versus Consumption
Current state (VMs) versus future state (BM)
Culture/people challenges (developer versus operations, who is driving)
Isolation concerns
Scaling concerns
OpenStack
Open source cloud computing platform for building massively scalable clouds.
Kubernetes
Open source system for automating deployment, scaling and management of containerized applications. Provides framework for building distributed platforms.
Kubernetes container management/orchestration
Red Hat is the biggest contributor outside of Google
How did Red Hat end up on the Kubernetes horse?
We bet on a simple idea: that an open source community is the best place to build the future of application orchestration, and that only an open source community could successfully integrate the diverse range of capabilities necessary to succeed.
OpenShift
An integrated infrastructure platform to run, orchestrate, monitor and scale containers. Built around Kubernetes and Docker.
OpenShift application platform
Acquired Makara in Nov 2010
OpenShift Origin launched in Apr 2012
Docker open sourced in Mar 2013
First Kubernetes commit on GitHub in Jun 2014
OpenShift v3, re-architected around Docker and Kubernetes, launched in Jun 2015, building on the operational experience gained by the OpenShift Online team with v2.
LDK! (Linux, Docker, Kubernetes)
The sandwich:
Your applications
OpenShift masters, nodes, registry
Infrastructure services (LBaaS, Neutron, Nova, Cinder, etc.)
Architectural tenets:
Technical independence: Ensure that containers are defined such that they remain independent of the underlying infrastructure. Containers must continue to be portable across host environments.
Contextual awareness: Allow containers to easily take advantage of OpenStack shared services beyond compute (i.e. networking and storage). To do this, Red Hat Atomic Enterprise (and other Red Hat container offerings) must be context aware.
Avoid redundancy: Limit redundancies where possible to minimize performance and other resource hits. This includes limiting the number of layers between the container and the hardware.
Simplified management: Simplify management by delivering a holistic, integrated view across platforms.
Currently contextual awareness comes via the cloud provider implementation (all or nothing)
Expect to see increased experimentation with using services piecemeal/a la carte (e.g. Cinder)
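For illustration, the "all or nothing" cloud provider hookup in OpenShift 3.x terms looks roughly like the fragment below; the file path is a typical default, and the referenced openstack.conf (which carries the Keystone auth URL, tenant, and credentials) is assumed rather than taken from the deck:

# node-config.yaml fragment enabling the OpenStack cloud provider for the kubelet
kubeletArguments:
  cloud-provider:
    - "openstack"
  cloud-config:
    - "/etc/origin/cloudprovider/openstack.conf"   # Keystone auth URL, tenant, credentials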
Storage:
Container hosts consume OpenStack storage
Tenant isolation
Application storage managed by Kubernetes
Stateful applications
Containerized distributed storage services
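A hedged sketch of the "application storage managed by Kubernetes" point above: a StorageClass that dynamically provisions Cinder volumes for claims (the name and availability zone are placeholders):

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: cinder-standard            # hypothetical name
provisioner: kubernetes.io/cinder
parameters:
  availability: nova               # Cinder availability zone (placeholder)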
Networking:
Use OpenShift-SDN to have full application isolation but get double encapsulation when using Neutron with GRE or VXLAN tunnels.
Tenant isolation via OpenStack SDN using Kuryr eventually
Use Flannel with host-gw backend to avoid double encapsulation.
Load Balancing provided by LBaaS V1 by default. Other options:
External load balancer (recommended for production)
Dedicated load balancer node - create a dedicated node for HAProxy. Good for demo/test but no HA.
None - if using single master node.
Authenticate OpenShift users using LDAP.
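A hedged sketch of what that looks like in an OpenShift 3.x master-config.yaml; the server URL, CA file, provider name, and attribute mapping below are placeholders for illustration, not the configuration used in the test:

oauthConfig:
  identityProviders:
    - name: corporate_ldap                     # hypothetical provider name
      challenge: true
      login: true
      mappingMethod: claim
      provider:
        apiVersion: v1
        kind: LDAPPasswordIdentityProvider
        attributes:
          id: ["dn"]
          email: ["mail"]
          name: ["cn"]
          preferredUsername: ["uid"]
        bindDN: ""
        bindPassword: ""
        insecure: false
        ca: ldap-ca.crt                        # CA bundle for the LDAPS connection (placeholder)
        url: "ldaps://ldap.example.com/ou=users,dc=example,dc=com?uid"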
Re-validate Kubernetes SIG Scalability findings on equivalent OpenShift Container Platform release.
The CNCF cluster is made up of 1000 nodes deployed at Switch, Las Vegas by Intel for the use of the CNCF community.
We were using ~300 of them.
The NVMe storage will come in handy later.
Not product supported
application_performance: JMeter-based performance testing of applications hosted on OpenShift.
applications_scalability: Performance and scalability testing of the OpenShift web UI.
conformance: Wrappers to run a subset of e2e/conformance tests in an SVT environment (work in progress)
image_provisioner: Ansible playbooks for building AMI and qcow2 images with OpenShift rpms and Docker images baked in.
networking: Performance tests for the OpenShift SDN and kube-proxy.
openshift_performance: Performance tests for container build parallelism, projects and persistent storage (EBS, Ceph, Gluster and NFS)
openshift_scalability: Home of the infamous "cluster-loader", details in openshift_scalability/README.md
reliability: Run tests over long periods of time (weeks), cycle object quantity up and down.
Why both?
For the foreseeable future we envisage there will be bare metal, virtualized, and containerized workloads.
The current state is that most people we see are running containers in VMs.
Cultural/people issues:
The easiest way to get going without rocking the organization-wide IT boat in some cases.
Concerns about potential for breakout (contrast to QEMU and use of similar constructs there).
Scale issues: # of pods per node (currently 250 and rising), workload dependent (see the node-config sketch below).
Availability: ability to live migrate VMs; it is not impossible to live migrate a container, but that is also not really the way things should work long term.
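The node-config sketch referenced above: in OpenShift 3.x the per-node pod ceiling is a kubelet argument, roughly as below (the values shown reflect the era's defaults and are illustrative, not a recommendation):

# node-config.yaml fragment controlling pod density per node
kubeletArguments:
  max-pods:
    - "250"            # hard cap on pods per node
  pods-per-core:
    - "10"             # additional cap that scales with CPU count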
The Overcloud usually consists of nodes in predefined roles such as Controller nodes, Compute nodes, and different storage node types. Each of these default roles contains a set of services defined in the core Heat template collection on the director node. However, the architecture of the core Heat templates provides a method to:
Create custom roles
Add and remove services from each role
Storage Layout
Each storage node includes 2 SSDs and 10 SAS disks.
Passed NVMe to VMs for Container Native Storage (Gluster)
Ceph performs significantly better when deployed with write-journals on SSDs.
Created two write-journals on the SSDs and allocated 5 of the spinning disks to each SSD.
In all, we had 90 Ceph OSDs, equating to 158 TB of available disk space.
Image Upload
Converted the image to RAW for upload to Glance.
Used the snapshot/boot-from-volume flow, so Ceph creates copy-on-write clones of the RAW image rather than full copies.
Consumed ~700 MB per VM.
The VM pool in Ceph was ~1.5 TB for 2,048 VMs this time around, versus 22 TB last time for 1,000 VMs.
Reduced I/O and time to boot VMs: < 15 minutes for all 2,048 VMs.
Ceph’s role in this environment is to provide boot-from-volume service for our VMs (via Cinder).
Routing tier consists of nodes running HAProxy for ingress into the cluster.
Identified that there are, on average, a large number of low-throughput cluster ingress connections from clients (i.e. web browsers) to HAProxy, versus a small number of high-throughput connections.
Already some changes in this space based on previous iterations:
The default connection limit of 2,000 leaves plenty of headroom on commonly available CPU cores for additional connections.
Thus, we bumped the default connection limit to 20,000 in OpenShift 3.5 out of the box.
If you have other needs to customize the configuration for HAProxy, our networking folks have made it significantly easier — as of OpenShift 3.4, the router pod now uses a configmap, making tweaks to the config that much simpler.
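As a rough sketch of that ConfigMap-based customization (the object name is hypothetical, and the data would be a full copy of the router's haproxy-config.template with your tweaks applied, e.g. a higher maxconn):

apiVersion: v1
kind: ConfigMap
metadata:
  name: customrouter                 # hypothetical name
  namespace: default
data:
  haproxy-config.template: |
    # customized copy of the stock router template goes here

The router deployment is then pointed at the mounted copy of the template; consult the OpenShift router documentation for the exact mount path and environment variable, as those details are assumptions here rather than part of the talk.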
Load generator configured via passing in ConfigMaps
Queries Kubernetes API for list of routes.
Builds list of test targets dynamically
Zoomed in on a particularly representative workload mix
Combination of HTTP with keepalive and TLS terminated at the edge.
Chose this because it represents how most OpenShift production deployments are used - serving large numbers of web applications for internal and external use, with a range of security postures.
Graph shows throughput test with a Y-axis of Requests Per Second, higher is better.
nbproc refers to number of HAProxy processes spawned.
sched_migration_cost is a kernel tunable the scheduler weighs when deciding if and how to load balance processes amongst the available cores.
What we learned:
CPU affinity matters. But why are certain cores nearly 2x faster? This is because HAProxy is now hitting the CPU cache more often due to NUMA/PCI locality with the network adapter.
Increasing nbproc helps throughput. nbproc=2 is ~2x faster than nbproc=1, BUT we get no more boost from going to 4 cores, and in fact nbproc=4 is slower than nbproc=2. This is because there were 4 cores in this guest, and 4 busy HAProxy threads left no room for the OS to do its thing (like process interrupts).
Can improve performance over 20% from baseline with no changes other than sched_migration_cost.
By increasing it by a factor of 10, we keep HAProxy on the CPU longer, and increase our likelihood of CPU cache hits by doing so.
This is a common technique amongst the low-latency networking crowd, and is in fact recommended tuning in our Low Latency Performance Tuning Guide for RHEL7.
Provides full multi-tenancy
Encapsulation comes with tradeoffs in CPU cycles to wrap/unwrap packets
Can be mitigated via VXLAN offloading with commonly available NICs, including those in the CNCF cluster.
Pluggable, so like OpenStack you can use other SDN solutions where integration has been done
Also expect to use Kuryr in future
Allows it to be used on any public/private footprint incl. OpenStack
RFC2544 - Benchmarking Methodology for Network Interconnect Devices
Discusses and defines a number of tests that may be used to describe the performance characteristics of a network interconnecting device.
Also describes specific formats for reporting the results of the tests.
As you would expect, adding more streams for the same payload provides a notable increase.
The difference between baremetal/baremetal+pod and vm/vm+pod only becomes pronounced at the largest payload size.
Bonus tuning: large clusters with over 1,000 routes or nodes require increasing the default kernel ARP cache size.
We’ve increased it by a factor of 8x, and are including that tuning out of the box in OpenShift 3.5.
Reasons:
Maturity
Supportability
Security
POSIX compliance
Overlay/Overlay2
Density improvements gained by page cache sharing are very important for certain environments where there is significant overlap in base image content.
Overlay2 w/ SELinux in Linux kernel 4.9
Rate-limited pod creation using a “tuningset” with cluster-loader.
Each of the 6 bumps is a batch of 40 pods.
Before it moves to the next batch, cluster-loader makes sure the previous batch is in the Running state.
In this way we avoid crushing the API server with requests, and can examine the system’s profiles at each plateau.
The savings in terms of memory are reasonable (again, this is a “perfect world” scenario and your mileage may vary).
The reduction in disk operations is due to subsequent container starts leveraging the kernel’s page cache rather than having to repeatedly fetch base image content from storage.
Overall found overlay2 to be very stable, and it becomes even more interesting with the addition of SELinux support.
Deployed in pods, scheduled like any application
Used Kubernetes dynamic provisioning to expose volumes to applications.
Marked unschedulable to control variability.
Roughly 6 seconds from submit to the PVC going into “Bound” state.
This number does not vary when CNS is deployed on bare metal or virtualized.
Not pictured here are our tests verifying that several other persistent volume providers respond in a very similar timeframe.