Implementing multi-tenant isolation in a single OpenShift cluster
Red Hat Forum, Breda (NL), 10/10/2017
www.gcloud.belgium.be
Introducing G-Cloud
G-Cloud = Belgian government cloud
• Focus = synergy in ICT services
• Services provided by different institutions / service owners
• In close collaboration with the private sector
G-Cloud projects
(Service landscape diagram; labels recovered from the figure, grouping approximate:)
• Layers: Business applications · Standard components & applications · Platform · Soft infrastructure · Hard infrastructure
• Hard infrastructure: Housing · LAN/WAN · Network · Storage
• Soft infrastructure & standard components: BabelFed · ITSM · Service desk · Web Content Management · BeConnected · Unified Communications & Collaboration · Internet Access Protection · Backup · Archiving · IAM / ShaD
• Platform: GreenShift (Open Source) · YellowShift (Microsoft) · BlueShift (IBM) · RedShift (Oracle) · Virtual Machine · Hypervisor · Bare Metal
• Business applications: Business Intelligence & Big Data Analytics · SharePoint
• Status legend: Preparation · Realization · Service · On hold
G-Cloud entry points
Service owner:
Shared ICT services in social security & e-health since… 1939
About Smals
• In-house ICT services for government
– Governed by Belgian public institutions
– Members only
– Services provided at cost
• Focus on social security & health
• Activities:
– Software development
– Infrastructure management
– Staffing
• Approximately 1,790 employees
– Looking for 50 more (jobs@smals.be)
Over 200 member institutions
Federal – Regional – Local
Timeline
Proof of concept OSE 3.0
• Coming from single-tenant OSE 2
• Set up an OSE 3 proof of concept
– Single shared node pool
– Not multitenant
Timeline: PoC OpenShift 3.0 → Multitenant cluster → OSE 3.1 (OVS multitenant SDN) → Self-service → Too big to succeed
Multitenant cluster
• Multiple partners:
– An organization or government institution
• Multiple tenants per partner
Define: tenant
• A tenant has
– Multiple teams
– Different access rights per team
– Multiple applications
Multitenant cluster constraints
• Centralized management of the OpenShift cluster
• No direct communication between tenants
• Integrate with partners’ infrastructure
• No interference between tenants
• Delegate rights to tenant
Centralized management
• Shared master(s)
• Shared services
Integrate with partners’ infra
• Pods can access resources in a partner’s network
– Databases
– Webservices
– …
Integrate with partners’ infra
• Nodes in a subnet of the partner network
• Nodes in a single network with the master
No direct communication
• Pods from different tenants should not be able to access each other
– By default, pods can access services in other projects (with the ovs-subnet SDN)
– Access to pods via routes and routers (router IP)
• Pods should not be able to access resources of a different tenant
– Databases
– Image repository
– Webservices
No direct communication
• Blocked everything at the network level
No interference
• A tenant should not see changes another tenant made
• A tenant should not see the effects of changes another tenant made
No interference
• Projects are invisible to users that do not have access to them
• Nodes are global for the master
– Solution: tag nodes per tenant; all of a tenant’s projects have a nodeselector defined
• Unique names for projects
– Workaround via a naming convention: a prefix per tenant
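The tagging and naming rules above can be sketched in a few lines. This is illustrative only, not the actual G-Cloud tooling; the tenant and project names are invented, while `openshift.io/node-selector` is the standard OpenShift annotation for a project-wide node selector.

```python
# Sketch: enforce the per-tenant naming convention and derive the
# nodeSelector annotation that pins a project's pods to the tenant's
# dedicated node pool. Tenant/project names are hypothetical examples.

def validate_project_name(project: str, tenant: str) -> bool:
    """A project must carry its tenant's prefix, e.g. 'tenanta-billing'."""
    return project.startswith(tenant + "-")

def node_selector_for(tenant: str) -> dict:
    """Annotation the cluster admin sets on the project so the scheduler
    only considers nodes labelled for this tenant."""
    return {"openshift.io/node-selector": f"tenant={tenant}"}

print(validate_project_name("tenanta-billing", "tenanta"))  # True
print(validate_project_name("billing", "tenanta"))          # False
print(node_selector_for("tenanta"))
```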
Delegate rights to tenant
• Organize access rights per tenant
– Different teams with different access rights
– A tenant admin with access to all
• Manage who can access which routes
• Manage which pods can access which resources
Organize access rights per tenant
• OpenShift “Project”:
– A group of resources
– Access rights to those resources
– No nesting of projects (unlike OpenStack & CloudForms)
Organize access rights per tenant
• Organize resources and the access rights to them in projects
• Tag projects as belonging to a tenant
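On the project’s underlying namespace, such tagging could look like the fragment below. This is a sketch: the `tenant` label key and the names are invented examples, while `openshift.io/node-selector` is the standard OpenShift annotation.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenanta-billing            # naming convention: tenant prefix
  labels:
    tenant: tenanta                # tag: project belongs to this tenant
  annotations:
    openshift.io/node-selector: "tenant=tenanta"  # pods land only on the tenant's node pool
```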
Organize access rights per tenant
• We want to define a tenant admin
• OpenShift roles are project-based or cluster-based
• For now, the tenant admin contacts the cluster admin
– Temporary solution (does not scale)
Manage access
• Traffic to the router(s) has to pass through the partner network
• The partner controls access from pods to resources in the partner network
– It needs to open access to all nodes, because a pod can change nodes
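Because the scheduler may place a pod on any node in the tenant’s pool, the partner firewall cannot whitelist per pod. A minimal sketch of the resulting allow-list logic (node names and IPs are invented):

```python
# Illustrative only: a pod's traffic leaves from whichever node hosts it,
# so the partner must allow every node IP in the tenant's pool rather
# than the (ephemeral) pod IPs.

tenant_nodes = {
    "node-a1": "10.1.0.11",
    "node-a2": "10.1.0.12",
    "node-a3": "10.1.0.13",
}

def firewall_allow_list(nodes: dict) -> list:
    """Node IPs the partner opens towards its databases/webservices."""
    return sorted(nodes.values())

print(firewall_allow_list(tenant_nodes))
```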
OVS multitenant SDN
• Use the new “ovs-multitenant SDN” feature?
– Would partially solve “no direct communication”
– We can only limit access to the router based on IP address; we still have to limit access per node instead of per pod
• Large impact if implemented
• Decided to wait for other solutions
Self-service
• Self-service for tasks that cannot be delegated or that require systems outside OpenShift
– Via CloudForms, using the OpenShift API
Self-service
• Automatically set up tags and nodeselectors for our tenant setup during project creation
• The tenant admin is by default project admin of all projects within the tenant
• Other services outside of OpenShift
Too big to succeed
• Each node keeps track of all the services in the cluster
– Growing overhead on every node per service in the cluster (due to iptables)
– Noticeable for us at around 500 services
– May need to think about splitting clusters
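The growth above can be sketched as a back-of-the-envelope calculation: kube-proxy keeps iptables rules for every service on every node, regardless of which tenant owns the service. The rules-per-service factor below is an assumption for illustration, not a measured OpenShift figure.

```python
# Sketch: per-node iptables overhead grows linearly with the
# cluster-wide service count, on every node at once.

def iptables_rules_per_node(services: int, rules_per_service: int = 4) -> int:
    """Rough rule count one node maintains; the factor of 4 rules per
    service is an illustrative assumption."""
    return services * rules_per_service

for services in (100, 500, 1000):
    print(services, "services ->", iptables_rules_per_node(services), "rules per node")
```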
Wrapping up
Summary
• Project:
– Tagged with its tenant
– Nodeselector defined
– Has to follow the naming convention
• Node:
– Tagged with its tenant: dedicated node pool
– In the tenant network
– In a dedicated subnet for the tenant in the service network
Current state
• Running version: OpenShift 3.3
• 250 nodes / 500 projects / 2000 pods
• Large mission-critical e-gov applications, in production
Evaluation of the design
• Good
– Pods are blocked from other tenants' resources
– Pods of one tenant cannot access pods of another tenant
– Integration with existing customer resources
– The standardized framework facilitates scheduling, capacity planning and reporting
– Single cluster to manage
• Bad
– Dedicated node pools
• Need a buffer per node pool
• Use more nodes compared to a single shared node pool
– Standardized framework: a tenant cannot deviate
– Single large cluster: unforeseen overhead (e.g. iptables)
Lessons learned
• OpenShift is still adding new features
– Regularly review the design
• Uncommon setup
– First to find limitations and issues
– Have to create new workarounds
Future plans
• Automatically upgrade the OpenShift cluster
• Set up multiple clusters
– Avoid the overhead of a large cluster (iptables)
– Smaller clusters to upgrade
– More flexibility for partners
• External SDN
• Experiment with new functionality (egress router)
Questions? Comments? Applicants? (jobs@smals.be)
https://www.gcloud.belgium.be/
greenshift@gcloud.belgium.be
https://www.smals.be
https://www.slideshare.net/Smals_ICT/
https://www.smalsresearch.be/

Editor's Notes
• #5 (G-Cloud entry points) – UCC: voice available / mail being rolled out (December); ShaD: federation available / other versions under development