#ContainerWorld
@ContainerWrld
https://tmt.knect365.com/container-world/
Hybrid Networking: Managing Containers and Virtual Machines
Prem Sankar Gopannan
System Manager/Principal Architect
Ericsson
Agenda
- Kubernetes networking basics
- K8s and OpenStack interop challenges
- OpenStack Kuryr
- OpenDaylight COE
- Service mesh
- Q&A
Kubernetes Architecture
- Master components
  - API server – frontend for the K8s control plane
  - Scheduler
  - Controller manager
    - Node controller
    - Replication controller
    - Endpoints controller
    - Service account and token controllers
  - etcd – stores all cluster data
- Worker node
  - Kubelet – primary node agent; watches the pods that have been assigned to its node
  - Kube-proxy – enables the service abstraction by maintaining network rules on the host
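The service abstraction that kube-proxy maintains can be pictured with a small sketch (all names are hypothetical; real kube-proxy programs iptables or IPVS rules on each host rather than running application-level code like this): a stable cluster IP fronts a rotating set of pod endpoints.

```python
import itertools

# Hypothetical sketch: a service's cluster IP fronting a set of pod
# endpoints, with round-robin selection similar in spirit to what
# kube-proxy's iptables/IPVS rules achieve on each node.
class Service:
    def __init__(self, cluster_ip, endpoints):
        self.cluster_ip = cluster_ip
        self._cycle = itertools.cycle(endpoints)

    def pick_endpoint(self):
        # Each call returns the next pod backend for this service.
        return next(self._cycle)

svc = Service("10.96.0.10", ["10.244.1.5:8080", "10.244.2.7:8080"])
print(svc.pick_endpoint())  # 10.244.1.5:8080
print(svc.pick_endpoint())  # 10.244.2.7:8080
```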
Container Networking Basics
- Containers use Linux partitioning capabilities
  - Cgroups – account for and isolate resource usage
  - Namespaces – partition kernel resources so each process group has its own set of resources
- Network namespaces
  - Each has its own network stack with interfaces, routing tables, sockets, and iptables rules
  - Containers use pseudo interfaces (veth pairs) connected to real interfaces
- Kubernetes uses a CNI plug-in to:
  - create a pseudo interface
  - attach it to the network
  - set its IP address and map it into the pod's namespace
Kubernetes Components
- Abstraction levels
  - Pod – encompasses the containers that are related / form one microservice
  - Replication Controller – defines the pod count that backs a Service
  - Service – defines a logical set of pods
  - Ingress – connects services to the external world
[Diagram: Service – Deployment 1, with dedicated PodA and PodB replicas, versus Service – Deployment 2, with PodA/PodB co-located pairs]
Kubernetes Networking – Overview
- Kubernetes uses a CNI network plugin to set up container networking
  (if the container runtime is Docker, CNM is not used)
- The plugin is responsible for creating the network interface for the container
- The plugin calls IPAM to set up the IP address
- The plugin needs to implement an API for network creation and deletion
- CNI has two calls:
  - add pod to network
  - delete pod from network

[Diagram: Kubernetes → CNI → network plugin and IPAM]
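The two CNI calls and the IPAM step can be sketched as follows. This is a toy illustration, not the CNI API itself: a real plugin is invoked by the runtime with `CNI_COMMAND=ADD`/`DEL` and a network config on stdin, and the "attach" step actually creates a veth pair; here both are stood in for by an in-process allocator and a dictionary.

```python
import ipaddress
import json

# Hypothetical in-process IPAM: hands out host addresses from a pod subnet.
class ToyIpam:
    def __init__(self, subnet):
        self._hosts = ipaddress.ip_network(subnet).hosts()

    def allocate(self):
        return str(next(self._hosts))

def cni_add(pod, ipam, state):
    ip = ipam.allocate()   # "the plugin calls IPAM to set up the IP"
    state[pod] = ip        # stands in for veth creation + namespace attach
    return {"cniVersion": "0.4.0", "ips": [{"address": ip}]}

def cni_del(pod, state):
    state.pop(pod, None)   # release the pod's network attachment

state, ipam = {}, ToyIpam("10.244.1.0/24")
print(json.dumps(cni_add("pod-a", ipam, state)))
```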
Hybrid Deployment Scenario

[Diagram: orchestration systems over shared datacenter hardware — OpenStack (via its APIs and a Neutron plugin or Gluon/Proton) managing VM applications on KVM hosts, and Kubernetes (via its APIs and a CNI plugin) managing container applications on container runtimes, with an SDN controller and OVS providing the common networking layer across the Linux hosts]
OpenStack Kuryr
- Kuryr is an OpenStack project aimed at providing network and storage support for hybrid environments
- A bridge between container networking and OpenStack Neutron
- Two implementations:
  - Kuryr CNI for Kubernetes
  - Kuryr libnetwork for Docker
Kuryr-Kubernetes Architecture
Kuryr Components
- Kuryr Controller
  - Watches Kube API resources with a service account
  - Maintains a secure connection with the Neutron API server
- Kuryr CNI
  - Communicates with the Kube API
  - Performs local binding of the Neutron port
  - Watches pod resources for the controller-driven VIF
Kuryr Kubernetes Modes
- Baremetal / side by side
  - VMs and pods are attached to the same Neutron networks
- Nested
  - Pods run within VMs
  - Uses trunk ports to provide a Neutron port to each container
  - Uses VLAN segmentation so pod traffic goes to the vswitch
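The nested-mode segmentation described above can be sketched as a VLAN allocator (a hypothetical illustration; in real Kuryr the VLAN IDs correspond to Neutron trunk subports, and the names here are invented): each pod inside the VM gets a distinct VLAN tag on the VM's trunk port so the vswitch can tell pod flows apart.

```python
# Hypothetical sketch: per-pod VLAN segmentation on a VM's trunk port.
class TrunkPort:
    def __init__(self, first_vlan=100, last_vlan=4094):
        self._free = list(range(first_vlan, last_vlan + 1))
        self.subports = {}          # pod name -> VLAN ID

    def add_subport(self, pod):
        # Tag this pod's traffic with the next free VLAN ID; the vswitch
        # uses the tag to map the flow to the pod's Neutron port.
        vlan = self._free.pop(0)
        self.subports[pod] = vlan
        return vlan

    def remove_subport(self, pod):
        # Return the pod's VLAN ID to the free pool.
        self._free.append(self.subports.pop(pod))

trunk = TrunkPort()
print(trunk.add_subport("pod-a"))   # 100
print(trunk.add_subport("pod-b"))   # 101
```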
Neutron – K8s Construct Mapping

Kubernetes            Neutron
Namespace             Network
Cluster subnet        Subnet pool
Service cluster IP    Subnet
External subnet       Floating IP, external network, router
Pod                   Port
Service               Load balancer
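For reference, the same construct mapping expressed as a lookup table (a sketch only; real Kuryr derives these translations inside its controller handlers rather than from a static dictionary):

```python
# K8s construct -> Neutron construct(s), per the mapping table above.
K8S_TO_NEUTRON = {
    "Namespace": "Network",
    "Cluster subnet": "Subnet pool",
    "Service cluster IP": "Subnet",
    "External subnet": ["Floating IP", "External network", "Router"],
    "Pod": "Port",
    "Service": "Load balancer",
}
print(K8S_TO_NEUTRON["Pod"])  # Port
```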
OpenStack Kuryr
- Controller
  - Watches K8s API endpoints to make sure that the corresponding model is maintained in Neutron
  - Updates K8s resource endpoint annotations to keep the Neutron details required by the CNI driver
- Watcher
  - Used by both the Controller and the CNI driver
  - Connects to the K8s API
  - Observes registered endpoints and invokes callback handlers
- CNI driver
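The watcher pattern above can be sketched as follows (a minimal illustration with invented names; the real watcher consumes a K8s API watch stream rather than locally constructed events): components register callbacks per resource kind, and the watcher dispatches each observed event to the matching handlers.

```python
# Hypothetical sketch of the Kuryr watcher: register callback handlers
# per K8s resource kind, then dispatch observed events to them.
class Watcher:
    def __init__(self):
        self._handlers = {}         # resource kind -> list of callbacks

    def register(self, kind, handler):
        self._handlers.setdefault(kind, []).append(handler)

    def dispatch(self, event):
        # In real Kuryr this event would arrive from a K8s API watch stream.
        for handler in self._handlers.get(event["kind"], []):
            handler(event)

seen = []
watcher = Watcher()
watcher.register("Pod", lambda ev: seen.append(ev["name"]))
watcher.dispatch({"kind": "Pod", "name": "pod-a"})
print(seen)  # ['pod-a']
```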
Kuryr-K8S integration
ODL COE Architecture

[Diagram: OpenDaylight as the common SDN controller over two paths — a hybrid path where Kubernetes/Docker drives Kuryr and Neutron/Gluon to network VMs and containers over VLANs on Open vSwitch hosts, and a native path where container management (docker, kube-proxy) uses iptables/NAT/firewall rules directly on Open vSwitch hosts]
OpenDaylight + Kubernetes
Service Mesh
- Responsible for handling service-to-service communication
- Apps are relieved from worrying about traffic management, discovery, service identity and security, and policy management
- Provides reliable delivery of requests across a complex topology of services
- Implemented as a network proxy deployed alongside the application code
- Sits above TCP/IP and assumes L3/L4 connectivity to be present
- Some examples:
  - Dynamic routing rules – should a request be routed to production or test (A/B testing)?
  - Tracking the health of a service instance and ejecting it if it consistently fails
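One of the routing rules above, the A/B split, can be sketched deterministically (a hypothetical illustration, not how any particular mesh implements it): hash a stable request attribute such as the user ID into a bucket, and send a fixed percentage of buckets to the test deployment so each user consistently sees the same version.

```python
import zlib

# Hypothetical sketch of a dynamic routing rule: deterministically send
# test_percent% of traffic to the test deployment, keyed on user ID so
# the same user always lands on the same side of the split.
def route(user_id: str, test_percent: int) -> str:
    bucket = zlib.crc32(user_id.encode()) % 100
    return "test" if bucket < test_percent else "production"

print(route("user-42", 0))    # production: 0% of traffic goes to test
print(route("user-42", 100))  # test: all traffic goes to test
```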
Istio Components
- Traffic management
  - Pilot
    - Service discovery
    - Load balancing pools
    - Routing tables
  - Request routing
  - Discovery and load balancing
  - Handling failures
  - Fault injection
  - Rules configuration
- Network and auth
- Policies and control
  - Mixer
Istio Architecture

[Diagram: Istio architecture — control plane and data plane]
Q&A

Editor's Notes

  • #16: With microservices gaining traction, we see different types of deployments; this diagram captures the possible deployment models. Hybrid environment: a brownfield deployment in which containers run inside a VM, or VMs and containers exist side by side. Baremetal (native): mostly a greenfield deployment in which VNF applications based on a microservice architecture are deployed as containers and orchestrated by Kubernetes/Docker; this is represented on the left side of the diagram. COE addresses both deployment models: the hybrid deployment is handled via OpenStack Kuryr (explained in a moment), and baremetal is handled by developing an ODL CNI for Kubernetes.
  • #17: This diagram provides the architecture view of COE baremetal. The components are the watcher, the ODL K8s components, and the CNI plugin. The watcher listens for K8s events and programs the needed flows in the ODL controller, and the CNI configures OVS.