Kubernetes is proposed as a way to better manage an OpenStack cloud due to OpenStack's hundreds of microservices, bare-metal servers, large Python codebase, and complex updates. Kubernetes is built to manage thousands of microservices at scale, provides containerization to simplify dependencies, and offers self-healing and high availability. However, OpenStack is not just an application and has its own network stack and storage systems that need integration. The document provides tips for running OpenStack services on Kubernetes, including using Helm charts, official Docker images, separate databases and storage, and configuring network and compute plugins.
You need Cloud to manage Cloud: Kubernetes as best way to manage OpenStack cloud (ENG, HighLoad++ Armenia 2022)
1. You Need Cloud to Manage Cloud: Kubernetes as the Best Way to Manage an OpenStack Cloud
Vadim Ponomarev
2. What Is OpenStack?
o open-source cloud computing platform
o created by Rackspace and NASA in 2010
o written in Python
o modular, microservices-based architecture
o used for public and private clouds
3. Why do we need OpenStack?
o an open-source, self-hosted solution for private and public clouds
o a VMware alternative with a zero price tag
o strong network isolation
6. What's the problem?
o hundreds of microservices
o hundreds of bare-metal servers
o a huge Python codebase
o a full update at least twice a year (the upstream release cadence)
8. Why Kubernetes?
✓ built to manage thousands of microservices
✓ can scale to hundreds of nodes
✓ containerization solves dependency problems
✓ self-healing, high availability, healthchecks
✓ and many other benefits ...
9. But ...
o OpenStack is not just an application
o VMs will be running on k8s workers
o OpenStack has its own network stack
o a complicated service start order
o storage based on Ceph
12. General Tips
o do not reinvent the wheel
o use openstack-helm
o use official Docker images when possible
o run all the OpenStack services in one namespace
o RTFM (if you can find it)
13. Database
o Percona XtraDB Cluster with the k8s operator
o a separate database cluster for Neutron (the networking service)
o use fast SSDs if the cluster > 50 compute nodes
o monitoring
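Splitting out the Neutron database amounts to pointing only Neutron's `[database]` section at the dedicated cluster, while the other services keep using the shared one. A minimal sketch, assuming a hypothetical in-cluster service name for the Percona cluster (credentials are placeholders):

```ini
# neutron.conf -- only Neutron points at the dedicated cluster.
# Hostname, user, and password below are illustrative placeholders.
[database]
connection = mysql+pymysql://neutron:SECRET@neutron-db.openstack.svc:3306/neutron
```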
14. Storage system
o Ceph is the most popular choice
o one Ceph cluster for k8s and for OpenStack (different pools)
o a separate physical network for storage
o dedicated storage hosts if you have the budget:
+ to reduce load
+ to reduce the chances of losing data
+ to have faster reboots of compute nodes
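Pool separation on a shared Ceph cluster is selected per consumer; for Nova-managed VM disks it is a `nova.conf` setting. A hedged fragment with a hypothetical pool name (`images_type`, `images_rbd_pool`, and `rbd_user` are standard `[libvirt]` options):

```ini
# nova.conf -- VM ephemeral disks go to an OpenStack-only pool;
# Kubernetes PVs would use their own pools on the same cluster.
# Pool and user names below are illustrative.
[libvirt]
images_type = rbd
images_rbd_pool = openstack-vms
rbd_user = nova
```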
15. How the OpenStack network works
o SDN: Open vSwitch / OVN
o L2: VXLAN / Geneve / VLAN
o L3: virtual routers / OVN
o dnsmasq for DHCP / DNS
o the service is called “Neutron”
17. Network challenges
2. External networks are VLAN-based only
[Diagram: management, API, data (VXLAN), and external (VLAN) networks spanning nodes 1-3 behind a TOR switch; traffic to an unknown target triggers an unknown-unicast flood.]
18. Network Tips: OVS
1. The Open vSwitch daemon:
o host network
o capabilities
o run as root
o mount the /run directory from the host system
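The four requirements above map directly onto a pod spec. A minimal sketch of what an Open vSwitch DaemonSet pod needs (the field names are standard Kubernetes; the container image and the capability list are assumptions — openstack-helm ships a complete chart):

```yaml
# Fragment of a DaemonSet pod template for ovs-vswitchd (illustrative only)
spec:
  hostNetwork: true                 # OVS must see the node's real interfaces
  containers:
    - name: openvswitch-vswitchd
      image: example.org/openvswitch:latest   # placeholder image
      securityContext:
        runAsUser: 0                # run as root
        capabilities:
          add: ["NET_ADMIN"]        # needed to manage datapaths and ports
      volumeMounts:
        - name: run
          mountPath: /run           # share the ovsdb socket with the host
  volumes:
    - name: run
      hostPath:
        path: /run
```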
21. Network Tips: solutions
o use the segments extension and per-rack VLANs
o use the BGP dynamic routing plugin
o use DVR routers when possible
o use an EVPN-VXLAN network in the data center
22. Compute
o Nova configures KVM on the host system
o VMs can have direct access to network/GPU cards
o privileged libvirt container
o mounts from the host system:
/lib/modules
/var/lib/nova
/var/lib/libvirt
/run
/sys/fs/cgroup
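The mount list above translates into hostPath volumes on a privileged pod. A hedged fragment (the paths come from the slide, with the cgroup filesystem at its usual `/sys/fs/cgroup` location; everything else is a placeholder):

```yaml
# Fragment of a privileged libvirt pod spec (illustrative only)
spec:
  containers:
    - name: libvirt
      securityContext:
        privileged: true            # needed to drive KVM and host cgroups
      volumeMounts:
        - { name: modules, mountPath: /lib/modules, readOnly: true }
        - { name: nova,    mountPath: /var/lib/nova }
        - { name: libvirt, mountPath: /var/lib/libvirt }
        - { name: run,     mountPath: /run }
        - { name: cgroup,  mountPath: /sys/fs/cgroup }
  volumes:
    - { name: modules, hostPath: { path: /lib/modules } }
    - { name: nova,    hostPath: { path: /var/lib/nova } }
    - { name: libvirt, hostPath: { path: /var/lib/libvirt } }
    - { name: run,     hostPath: { path: /run } }
    - { name: cgroup,  hostPath: { path: /sys/fs/cgroup } }
```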
24. Is OpenStack ready?
o Bad or non-existent healthchecks
o No graceful restart
o Multiline logs (no JSON support!)
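The multiline-log problem can be worked around at the logging layer: OpenStack services use standard Python logging, so a JSON formatter turns each record, traceback included, into a single line that log collectors can parse. A minimal sketch (the logger name is illustrative; depending on the release, oslo.log may offer its own JSON formatter):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record, tracebacks included, as one JSON line."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "time": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        if record.exc_info:
            # json.dumps escapes the newlines, so the multiline traceback
            # becomes a single physical line in the output stream.
            payload["exc_info"] = self.formatException(record.exc_info)
        return json.dumps(payload)

# Wire it up for one service's logger (the name is illustrative).
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("neutron")
log.addHandler(handler)
log.setLevel(logging.INFO)
```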
25. Is OpenStack ready?
o Bad monitoring abilities
o Complex dependencies between components
o Difficult to customize images with components
26. If everything is so bad, why K8s?
o it still gives better control over hundreds of services
o it gives more stability with updates
o self-healing, HA, isolation, etc.
o it's easier to control at a large scale
o K8s is more popular than OpenStack
27. Leave your feedback!
You can rate the talk and give feedback on what you've liked or what could be improved
https://www.linkedin.com/in/v-pon/
@velizarx
https://github.com/velp
Editor's Notes
Hi. My name is Vadim. For the last 5 years I have been working with OpenStack-based clouds as a developer, DevOps engineer, and architect. Today I want to talk about the problems you can face with them.
I want to start with a quick overview of what OpenStack is. OpenStack is the most popular open-source solution for creating your own cloud. The first version of OpenStack was released by Rackspace and NASA more than 10 years ago. OpenStack is written mainly in Python and has a modular architecture. Nowadays it is used for private and public clouds around the world and has a huge community.
OpenStack is an open-source solution for private or public clouds, especially when you need a self-hosted cloud. An alternative is VMware, but OpenStack is free. Also, unlike other solutions, OpenStack has strong network isolation, so you can use it as a platform for customers.
OpenStack is a large system that contains tens of separate services, each of which contains hundreds of microservices running across bare-metal servers. The slide shows the architecture with the basic services from the official OpenStack website, and it's greatly simplified.
Under the hood there are many microservices, databases, message queues, schedulers, workers, etc. All these services interact with each other over different protocols. The slide shows a highly simplified diagram of what is happening.
Let's summarise what an OpenStack-based cloud is:
it's hundreds of microservices
deployed on hundreds of bare-metal servers
all of these services are written in Python
and we have to update everything at least twice a year (the official OpenStack release cadence)
And if you have an OpenStack-based cloud on bare metal, your DevOps team looks like this every release.
So, why Kubernetes? Kubernetes was created to manage thousands of microservices deployed on hundreds or even thousands of bare-metal hosts. Packaging services into containers resolves the dependency problems that arise when Python microservices run on the same host. You also get the rest of the benefits of Kubernetes: self-healing, high availability, health checks, and so on.
But you can't just deploy OpenStack into a Kubernetes cluster, because it is not a simple web application. OpenStack is a cloud platform that runs customer VMs on the hosts and provides storage, network, and other infrastructure for them. That means libvirt is used for virtualization, Ceph is used as the storage system, and the network system configures interfaces, bridges, and filters. Some of these systems need root access to the host system.
Moreover, the OpenStack network stack is really complicated, and usually you should have at least 5 isolated networks: for management (number 1 on the slide), for public traffic (number 2), an internal cluster network for cross-service communication (number 3), for private traffic between VMs (number 4), and the last one for storage. In addition, different types of nodes should run different sets of services and perform certain tasks.
And how does Kubernetes help here?
I wanna start with general tips. First of all, please do not reinvent the wheel. The OpenStack community has already built Helm charts and solved lots of problems in them; you can find detailed documentation by following the QR code. The community has also taken care of Docker images and created lightweight images for all services. If you are not going to change the code of the services, use them without hesitation; the QR code for that repository is also on the slide. It's also easier to use one Kubernetes namespace for all OpenStack components, except maybe Ceph. And of course, you have to read the documentation. OpenStack is a well-documented ecosystem, and you will find answers to most of your questions there. The documentation has only one problem: it is generated from each project's repository, and the search engine is really bad, so finding things is kinda challenging.
About the database for your cluster: based on my experience, I recommend MySQL for your OpenStack installation, because it is the most popular database in the community and most of the bugs and problems have already been fixed. I also recommend running a separate database cluster at least for Neutron. Neutron is the service that configures networks in OpenStack, and it can generate many heavy queries. In addition, the community sometimes makes mistakes: I have already seen a situation where a new Neutron release had bugs in its database queries and the database got stuck in deadlocks. It is also a good decision to use fast disks for the database in clusters of more than 50 compute nodes, because OpenStack's components can generate many queries. Monitoring the databases is really important; it will help you find problems when the next release comes.
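As a sketch of what a separate Neutron database looks like in practice, here is a hypothetical ConfigMap fragment pointing Neutron at its own MySQL cluster. The resource name, namespace, host name, and credentials are placeholders for illustration, not values from a real installation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: neutron-conf          # hypothetical name
  namespace: openstack
data:
  neutron.conf: |
    [database]
    # A dedicated MySQL cluster, separate from the one the other services use
    connection = mysql+pymysql://neutron:CHANGE_ME@neutron-mysql.openstack.svc:3306/neutron
    max_retries = -1
```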
About the storage system: Ceph is the most popular distributed storage solution in the OpenStack world, and the community has great experience with it. You can use one Ceph cluster for Kubernetes and OpenStack at the same time, but you have to create separate pools. You need a separate physical network for storage, because VM creation and VM migration between hosts generate huge network traffic; better to use at least 10 Gbit network interfaces, especially for storage. Dedicated storage nodes are also a great idea if you have the budget: they reduce the load, reduce the chances of losing data, and spare you problems when rebooting compute nodes.
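To illustrate the separate-pools idea, here is a hedged sketch of a Cinder RBD backend pointed at a pool dedicated to OpenStack volumes. The pool name and Ceph user are assumptions for the example:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cinder-conf           # hypothetical name
  namespace: openstack
data:
  cinder.conf: |
    [DEFAULT]
    enabled_backends = rbd

    [rbd]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_ceph_conf = /etc/ceph/ceph.conf
    # Dedicated pool for OpenStack volumes, kept apart from the Kubernetes pools
    rbd_pool = openstack-volumes
    rbd_user = cinder
```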
The network layer is really complicated in OpenStack, so I want to give a quick overview for better understanding. The default OpenStack network stack is based on several technologies. Layer 2 is based on OpenvSwitch, with VXLAN or Geneve tunnels between nodes and VLANs for external networks. Layer 3 is virtual routers, which can be configured as Linux network namespaces or as OVN routers. OVN is a complete SDN solution created by the OpenvSwitch team; it includes virtual routers and some additional features. You may know that a Kubernetes CNI is based on this project, and OpenStack can also use it for its network system. Other network services like DHCP and DNS are configured as Dnsmasq daemons. If you want to go deeper, scan the QR code for more information. If I draw analogies with AWS, I would say that an OpenStack private network is an AWS VPC and an OpenStack virtual router is an AWS gateway.
And with the network system we have several challenges. On the slide you can see a simple diagram of how OVS works. The first point: OpenvSwitch works with the Linux kernel on the host system to configure traffic flows. That means the OpenvSwitch daemons run inside containers yet must be able to load kernel modules on the host system, and they must be able to configure bridges and interfaces on the host. In addition, Kubernetes networks must not conflict with OpenStack networks on the host, so you have to split network ranges between the different layers.
The second challenge: OpenStack supports only VLAN-based external networks. This means that all networks configured for public traffic work on raw Layer 2. That is a big problem in a large cluster, because you have to configure this VLAN everywhere on the nodes, and any abnormal traffic will spread throughout this VLAN, breaking the stable operation of the entire cluster. On the right side of the slide you can see a simple scheme of how unknown unicast spreads in a VLAN network: Top-of-Rack switches redistribute unknown unicast traffic, so any DDoS or flood traffic is more dangerous in such networks. And the problem gets worse as your cloud grows.
So how do you correctly run the network subsystem of OpenStack in Kubernetes? To run the OpenvSwitch daemon correctly, you have to grant three capabilities: NET_ADMIN (to allow network configuration on the host system), SYS_MODULE (because OpenvSwitch loads its own kernel modules), and SYS_NICE (for better performance). In addition, you have to connect the container to the host network and mount the /run directory from the host system. You can find a short video explaining how this works by scanning the QR code.
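The requirements above can be sketched as a minimal DaemonSet spec. The image name and labels are placeholders; only the capabilities, the host network setting, and the /run mount reflect what was just described:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: openvswitch
  namespace: openstack
spec:
  selector:
    matchLabels:
      app: openvswitch
  template:
    metadata:
      labels:
        app: openvswitch
    spec:
      hostNetwork: true              # share the host network namespace
      containers:
      - name: ovs-vswitchd
        image: openvswitch:latest    # placeholder image
        securityContext:
          capabilities:
            add: ["NET_ADMIN", "SYS_MODULE", "SYS_NICE"]
        volumeMounts:
        - name: run
          mountPath: /run            # host /run for OVS sockets and pid files
      volumes:
      - name: run
        hostPath:
          path: /run
```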
About the VLAN-based external networks problem: a good approach is to keep each layer 2 segment from going further than the Top-of-Rack switch. In this diagram we have VLANs only between a node and the leaf switches, and the traffic does not go further than the Top-of-Rack switch. But between the left and right pairs of leaves, the switch fabric has to configure tunnels or routing to provide connectivity from one node to another. This, of course, requires special configuration of the network devices.
A completely ideal situation is when you have no layer 2 segments in the data center network at all. Each node then works like a router: it handles all layer 2 traffic from the VMs and then routes it. But this is not always possible, as it requires a more complicated data center network configuration. Still, there are several options for reducing the size of the L2 segment.
Here are the options provided by the OpenStack community itself. The first is the segmentation extension, which allows you to set and control different VLANs for different parts of your network. This means you can use segmentation as in the first diagram, where we had VLANs only between nodes and leaves.
Neutron also provides the BGP dynamic routing plugin, which adds one more agent to each node but allows you to announce subnets directly from the nodes. An analogy from the Kubernetes world is the BGP mode provided nowadays by popular CNIs like Cilium or Kube-router. This can be a silver bullet for reducing the layer 2 segments. The last two options most likely require changes to the data center network. The first is the distributed virtual router, or DVR. This is a routing configuration that allows you to build a hyper-converged cloud, but it requires configuring VLANs and BGP everywhere on the Top-of-Rack switches. The other option, and the best solution in most cases, is using an EVPN VXLAN fabric in your data center network. But it will be more expensive and harder to maintain in the future.
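For reference, enabling the BGP dynamic routing plugin is roughly a one-line change in the Neutron configuration. The plugin path below is taken from the neutron-dynamic-routing documentation as I remember it, so double-check it against your release; the ConfigMap name is a placeholder:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: neutron-bgp-conf      # hypothetical name
  namespace: openstack
data:
  neutron.conf: |
    [DEFAULT]
    # Add the BGP plugin next to the regular router plugin
    service_plugins = router,neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin
```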
Let's move on to the compute system. Nova is the OpenStack service that configures virtual machines on the hosts. Typically it works with Libvirt, which requires extended access to the host system. OpenStack also supports different types of VM: a simple VM, for example, or a GPU VM with direct, unrestricted access to one GPU card, or a VM with direct access to a network card to run network functions. Libvirt containers have to run as privileged containers and need access to lots of directories on the host system. On the slide you can see a minimal list of the directories that have to be mounted into the container.
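A minimal sketch of what such a privileged Libvirt pod spec can look like. The exact directory list varies between installations, so treat these mounts as an assumption rather than the list from the slide, and the image name as a placeholder:

```yaml
# Fragment of a pod template for a libvirtd container (hypothetical names).
spec:
  hostNetwork: true
  hostPID: true                        # libvirt needs to see host processes
  containers:
  - name: libvirt
    image: libvirtd:latest             # placeholder image
    securityContext:
      privileged: true
    volumeMounts:
    - { name: dev,         mountPath: /dev }
    - { name: run,         mountPath: /run }
    - { name: cgroup,      mountPath: /sys/fs/cgroup }
    - { name: lib-libvirt, mountPath: /var/lib/libvirt }
    - { name: lib-nova,    mountPath: /var/lib/nova }
  volumes:
  - { name: dev,         hostPath: { path: /dev } }
  - { name: run,         hostPath: { path: /run } }
  - { name: cgroup,      hostPath: { path: /sys/fs/cgroup } }
  - { name: lib-libvirt, hostPath: { path: /var/lib/libvirt } }
  - { name: lib-nova,    hostPath: { path: /var/lib/nova } }
```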
In addition, for virtual machine migrations Nova requires a shared directory accessible from all nodes. The good news is that it is used only for small temporary files, so even NFS can serve this purpose; usually there are no problems with data loss or conflicts.
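One way to provide that shared directory is an NFS volume mounted at Nova's instances path. The server name and export path here are placeholders:

```yaml
# Fragment of the nova-compute pod spec (hypothetical server and export path).
    volumeMounts:
    - name: instances
      mountPath: /var/lib/nova/instances   # shared dir used during migration
  volumes:
  - name: instances
    nfs:
      server: nfs.example.internal         # placeholder NFS server
      path: /exports/nova-instances
```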
Even after all this, keep in mind that OpenStack is not ready for Kubernetes out of the box. OpenStack components were developed to run on bare-metal servers. As a result, there are no real health checks that can confirm a service is running correctly. Not all services support graceful restart, which means that after a pod restart you can sometimes get an inconsistent system, after which the system starts a full synchronization just to make sure everything is OK. Also, most components can generate multiline logs, which makes debugging much more difficult: the screenshot below shows a single error that spans more than 10 lines in the log.
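Since there are no real health checks, a common workaround is a basic HTTP probe on the API port. It only verifies that the endpoint answers, not that the service works correctly end to end; the port below is the default nova-api port, and the rest of the fragment is an assumed sketch:

```yaml
# Fragment of a nova-api container spec.
livenessProbe:
  httpGet:
    path: /
    port: 8774          # default nova-api port
  initialDelaySeconds: 30
  periodSeconds: 30
readinessProbe:
  httpGet:
    path: /
    port: 8774
  periodSeconds: 15
```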
It can be difficult if you are dealing with OpenStack for the first time. After installation you have to monitor this complicated system, but OpenStack does not ship its own exporters or other monitoring tools. The community provides a basic exporter that collects general information about the cloud, like the number of running VMs, IP addresses, and so on; you have to build your own monitoring around the components if you want to understand the real state of your cloud. Many components depend on other OpenStack components, which means that fully automatic deployment of OpenStack from scratch is impossible: you will have to manually fix some deadlocks and re-run some services, and you have to control the order in which you deploy new versions of the components. And if you want to add something like monitoring agents to a container, you have to create a huge and difficult build pipeline, because usually you can re-use only the basic Docker image.
So why do we need Kubernetes in this case? Running OpenStack in Kubernetes is much better than running it on bare-metal servers, even if we have to fight these problems. Kubernetes gives you more stability and control over a system that contains hundreds of components; deployments become more stable and predictable, and as the infrastructure grows, the advantages multiply. And of course, it's easier to find a DevOps engineer who has worked with Kubernetes than an admin who sets up and supports OpenStack on bare metal. If you know both technologies, please let me know after the Q&A session; I can offer you an interesting job.
I hope that my talk will be useful to you and save you time. My contacts are on the slide; feel free to write to me with any questions, and I will try to help. There is also a QR code where you can rate my talk. And we have some time for questions.