1) The document describes a multi-tenant isolation approach implemented on a single shared OpenShift cluster to provide isolated environments for multiple tenants.
2) The key constraints of the multi-tenant approach are centralized management of the OpenShift cluster, no direct communication between tenants, integration with the partners' infrastructure, no interference between tenants, and delegation of rights to the tenants.
3) The implementation combines techniques such as tagging projects and nodes per tenant, blocking network access between tenants, and delegating access rights to give each tenant an isolated environment in the shared OpenShift cluster.
3. • Focus = synergy in ICT services
• Services provided by different institutions / service owners
• In close collaboration with the private sector
• G-Cloud = the Belgian government cloud
www.gcloud.belgium.be
4. G-Cloud projects
[Diagram: map of G-Cloud projects across layers – business applications, platform, standard components & applications, soft infrastructure, and hard infrastructure (housing, LAN/WAN, network, storage) – each with a status (preparation, realization, service, on hold). Named projects include BabelFed, ITSM, Service desk, Web Content Management, BeConnected, Unified Communications & Collaboration, Internet Access Protection, Backup, Archiving, IAM/ShaD, Business Intelligence & Big Data Analytics, Sharepoint, Virtual Machine, Hypervisor, Bare Metal, and the vendor-stack platforms GreenShift (open source), YellowShift (Microsoft), BlueShift (IBM) and RedShift (Oracle).]
7. About Smals
• In-house ICT services for government
– Governed by Belgian public institutions
– Members only
– Services provided at cost
• Focus on social security & health
• Activities:
– Software development
– Infrastructure management
– Staffing
• Approximately 1790 employees – looking for 50 more (jobs@smals.be)
10. Proof of concept OSE 3.0
• Coming from single-tenant OSE 2
• Set up an OSE 3 proof of concept
– Single shared node pool
– Not multitenant
[Roadmap graphic, repeated throughout the deck: PoC OpenShift 3.0 → multitenant cluster → OSE 3.1: OVS multitenant SDN → self service → too big to succeed]
11. Multitenant cluster
• Multiple partners:
– an organization or government institution
• Multiple tenants per partner
12. Define: tenant
• A tenant has
– Multiple teams
– Different access rights per team
– Multiple applications
13. Multitenant cluster constraints
• Centralized management of the OpenShift cluster
• No direct communication between tenants
• Integrate with partners' infrastructure
• No interference between tenants
• Delegate rights to the tenant
17. Integrate with partners' infra
• Pods can access resources in a partner's network
– Databases
– Webservices
– …
18. Integrate with partners' infra
• Nodes placed in a subnet of the partner network
• Nodes in a single network with the master
20. No direct communication
• Pods from different tenants should not be able to access each other
– Pods can by default access services in other projects (with the OVS subnet SDN)
– Access to pods via routes and routers (router IP)
• Pods should not be able to access resources from a different tenant
– Databases
– Image repository
– Webservices
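To make the default behaviour concrete: under the subnet SDN, any pod can reach any service in any other project through its cluster DNS name. A minimal sketch of the reachability the design has to block; the pod, project and service names are hypothetical, and the pod's image is assumed to contain curl:

    # From a pod of tenant A, call a service of tenant B.
    # With the default (subnet) OVS SDN this call succeeds.
    oc exec my-pod -n tenant-a-app -- \
        curl -s http://my-service.tenant-b-app.svc.cluster.local:8080/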
23. No interference
• A tenant should not see changes another tenant made
• A tenant should not see the effects of changes another tenant made
24. No interference
• Projects are invisible to users that do not have access to them
• Nodes are global for the master
– Solution: tag nodes per tenant; all of a tenant's projects have a nodeSelector defined
• Unique names for projects
– Workaround via a naming convention: prefix per tenant
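A sketch of this tagging scheme with the OpenShift 3.x oc client; the label key tenant and the node/tenant/project names are hypothetical, not taken from the deck:

    # Tag a node as belonging to a tenant (dedicated node pool)
    oc label node node1.example.com tenant=tenant-a

    # Create a project following the naming convention (tenant prefix)
    # with a default nodeSelector, so its pods can only be scheduled
    # on nodes carrying the matching label
    oc adm new-project tenant-a-myapp --node-selector='tenant=tenant-a'

    # Same effect on an existing project, via the annotation
    oc annotate namespace tenant-a-myapp \
        'openshift.io/node-selector=tenant=tenant-a' --overwrite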
26. Delegate rights to tenant
• Organize access rights per tenant
– Different teams with different accesses
– A tenant admin with access to everything
• Manage who can access which routes
• Manage which pods can access which resources
27. Organize access rights per tenant
• OpenShift "Project":
– A group of resources
– Access rights to those resources
– No nesting of projects (unlike OpenStack & CloudForms)
28. Organize access rights per tenant
• Organize resources, and the access rights to them, in projects
• Tag projects as belonging to a tenant
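One possible way to record that tag, with the same hypothetical names; a label keeps the tenant queryable through the API:

    # Tag the project with its tenant...
    oc label namespace tenant-a-myapp tenant=tenant-a

    # ...so all projects of a tenant can be listed in one call
    oc get projects -l tenant=tenant-a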
29. Organize access rights per tenant
• We want to define a tenant admin
• OpenShift roles are either project-based or cluster-based
• The tenant admin contacts the cluster admin
– A temporary solution (does not scale)
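The delegation itself is one role binding per project; a sketch of what the cluster admin would run on each request (user name and label key hypothetical), which shows why it does not scale:

    # Grant project-scoped admin to the tenant admin; there is no
    # built-in role spanning all projects of a tenant, so this has
    # to be repeated for every project
    for p in $(oc get projects -l tenant=tenant-a \
               -o jsonpath='{.items[*].metadata.name}'); do
        oc policy add-role-to-user admin alice -n "$p"
    done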
30. Manage access
• Traffic to the router(s) has to pass through the partner network
• The partner controls access from pods to resources in the partner network
– Has to open access to all nodes, because a pod can change nodes
31. OVS multitenant SDN
• Use the new "OVS multitenant SDN" feature?
– Would partially solve "no direct communication"
– We can only limit access to the router based on IP address; we would still be limiting access per node instead of per pod
• Large impact if implemented
• Decided to wait for other solutions
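For reference, a sketch of the administration surface of the ovs-multitenant plugin as exposed by the 3.x CLI (project names hypothetical); it isolates pod-to-pod traffic per project VNID, but does not address the per-node router access limitation noted above:

    # Merge projects of one tenant into a single network namespace
    oc adm pod-network join-projects --to=tenant-a-app tenant-a-db

    # Undo the merge and re-isolate
    oc adm pod-network isolate-projects tenant-a-app

    # Make a project (e.g. one hosting shared routers) reachable by all
    oc adm pod-network make-projects-global default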
32. Self-service
• Self-service for tasks that cannot be delegated or that require systems outside OpenShift
– Via CloudForms, using the OpenShift API
33. Self-service
• Automatically set up tags and nodeSelectors for our tenant setup during project creation
• The tenant admin is by default project admin of all projects within the tenant
• Other services outside of OpenShift
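A minimal sketch of what this provisioning flow could look like if scripted straight against oc (the deck drives it through CloudForms and the OpenShift API); all names and the tenant label key are hypothetical:

    #!/bin/sh
    # create_tenant_project.sh <tenant> <app>
    tenant="$1"; app="$2"
    project="${tenant}-${app}"   # naming convention: tenant prefix

    # Project with the tenant's default nodeSelector
    oc adm new-project "$project" --node-selector="tenant=${tenant}"

    # Tag the project so tenant-wide tooling can find it
    oc label namespace "$project" tenant="$tenant"

    # Tenant admin is by default project admin of all tenant projects
    oc policy add-role-to-user admin "${tenant}-admin" -n "$project"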
34. Too big to succeed
• Each node keeps track of all the services in the cluster
– Growing overhead on every node per service in the cluster (due to iptables)
– Noticeable for us around 500 services
– May need to think about splitting clusters
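The effect can be gauged on any node: kube-proxy programs iptables NAT rules for every service in the cluster, whether or not the node runs a pod for it, and re-syncs them on every change. A diagnostic sketch, assuming the KUBE-SVC chain names kube-proxy used in this era:

    # Services cluster-wide
    oc get services --all-namespaces --no-headers | wc -l

    # On a node: per-service NAT chains maintained by kube-proxy;
    # this count grows with the number of services in the cluster
    iptables-save -t nat | grep -c 'KUBE-SVC'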
36. Summary
• Project:
– Tagged with its tenant
– Has a defined nodeSelector
– Has to follow the naming convention
• Node:
– Tagged with its tenant: dedicated node pool
– In the tenant network
– In a dedicated subnet for the tenant in the service network
37. Current state
• Running version: OpenShift 3.3
• 250 nodes / 500 projects / 2000 pods
• Large mission-critical e-gov applications – in production
38. Evaluation of the design
• Good
– Pods are blocked from other tenants' resources
– Pods of one tenant cannot access pods of another tenant
– Integration with existing customer resources
– The standardized framework facilitates scheduling, capacity planning and reporting
– A single cluster to manage
• Bad
– Dedicated node pools
• Need a buffer per node pool
• Use more nodes than a single shared node pool would
– Standardized framework: a tenant cannot deviate
– Single large cluster: unforeseen overhead (e.g. iptables)
39. Lessons learned
• OpenShift is still adding new features
– Regularly review the design
• Uncommon setup
– First to find limitations and issues
– Have to create new workarounds
40. Future plans
• Automatically upgrade the OpenShift cluster
• Set up multiple clusters
– Overhead of a large cluster (iptables)
– Smaller clusters to upgrade
– More flexible for partners
• External SDN
• Experiment with new functionality (egress router)