Here is the slide deck presented at our March 16, 2016 Kubernetes meetup by Aniket Daptari, Sr. Product Manager of Cloud Networking, Juniper Networks. It covers OpenContrail with Kubernetes. Sponsored by StackPointCloud and Concur.
Simplifying and Securing your OpenShift Network with Project Calico – Andrew Randall
OpenShift Commons Webinar presented on March 2 2017
OpenShift networking works great out of the box, right? So why would you consider anything else? This briefing examines an alternative approach that has benefits for many scenarios – from tightly securing a few high-value AWS instances to scaling a large private cloud deployment. Come learn how Calico differs from traditional solutions like OpenShift SDN, see how Calico has been integrated with Kubernetes and OpenShift to provide a smooth deployment experience, and hear lessons learned across hundreds of enterprise users.
Container Networking: the Gotchas (Mesos London Meetup 11 May 2016) – Andrew Randall
Presentation for the London Mesos Users Meetup, 11 May 2016.
An overview of the current state of the art in container networking, with lessons learned over the last 12 months or so deploying Project Calico in the real world.
Kuryr-Kubernetes: The perfect match for networking cloud native workloads - I... – Cloud Native Day Tel Aviv
The Kuryr project offers an interesting approach to networking cloud native workloads by enabling container orchestration engines to consume network services from OpenStack Neutron. With pod-in-VM support, Kuryr-Kubernetes enables a whole slew of new hybrid workloads, like bare metal or in-VM pods accessing services that run on VMs, multiple COEs (e.g. Docker Swarm to Kubernetes), and more. Unified networking simplifies deployment and configuration, and provides a single pane of glass for management and troubleshooting.
Let's dive into Kuryr-Kubernetes and learn how different open source technologies can complement each other to enable a number of complex deployment scenarios.
Simple, Scalable and Secure Networking for Data Centers with Project Calico – Emma Gordon
Traditional overlay networks using VXLAN are more complicated to set up and diagnose than is necessary for the majority of data centers. Calico offers an alternative Layer 3 solution – aside from simplicity, it also offers benefits in terms of improved scale and security.
These are the Calico slides from the SDN Switzerland meetup on 13/11/2015.
Openstack Summit: Networking and policies across Containers and VMs – Sanjeev Rampal
Container networking & policies across mixed cloud environments (containers, VMs, bare metal). Talk & demo at Openstack Summit 2017 Boston.
Video recording of talk: https://www.openstack.org/videos/boston-2017/cisco-networking-policies-across-containers-and-vms
Can the Open vSwitch (OVS) bottleneck be resolved? - Erez Cohen - OpenStack D... – Cloud Native Day Tel Aviv
OpenStack practitioners who have deployed cloud at scale would frown when they hear the mention of Open Virtual Switch (OVS), which has been a bottleneck for cloud network performance and scalability. As emerging technologies such as NFV keep pushing for higher data forwarding performance across the network infrastructure, it becomes critical to improve OVS performance without compromising flexibility, network programmability, and cost.
We will present a novel way to offload the entire OVS dataplane onto the embedded switch (eSwitch) implemented in the server NIC. This approach maximizes the effective bandwidth that applications can use to communicate with each other or fetch data from storage, and enhances the efficiency of the cloud. Accelerated Switching And Packet Processing (ASAP2) Direct works seamlessly within the framework of SDN, and allows controllers to configure and update flows on OVS the same way as before, so network programmability remains intact.
Sergei Gotchev, Juniper Networks
Juniper Day, Praha, 13.5.2015
Calico provides secure network connectivity for containers and virtual machine workloads.
Calico creates and manages a flat layer 3 network, assigning each workload a fully routable IP address. Workloads can communicate without IP encapsulation or network address translation for bare metal performance, easier troubleshooting, and better interoperability. In environments that require an overlay, Calico uses IP-in-IP tunneling or can work with other overlay networking such as flannel.
Calico also provides dynamic enforcement of network security rules. Using Calico’s simple policy language, you can achieve fine-grained control over communications between containers, virtual machine workloads, and bare metal host endpoints.
Proven in production at scale, Calico features integrations with Kubernetes, OpenShift, Docker, Mesos, DC/OS, and OpenStack.
Secure Multi Tenant Cloud with OpenContrail – Priti Desai
Building a secure multi-tenant cloud necessitates proper tenant isolation and access control. Key network and security functions must scale independently based on the dynamic resource requirements of each tenant. Additionally, on-demand and self-service provisioning are required to achieve operational efficiencies. Robust, dynamic, and elastic software abstractions are imperative to support applications built to run in such complex environments.
This slide deck covers:
• Architectural design choices
• Implementation blueprints
• Operational best practices
used to build the OpenStack cloud at Symantec.
How we built Packet's bare metal cloud platform – Packet
Overview of Packet's approach to bare metal server and network automation for our public cloud. Presented at the Downtech NY Tech meetup on May 19th, 2016.
DockerCon EU 2018 Workshop: Container Networking for Swarm and Kubernetes in ... – Guillaume Morini
Docker Enterprise is changing the application landscape but you still need container A to talk to B in a reliable and portable way. In this workshop you will learn key Docker Enterprise networking concepts, container networking best practices, get your hands dirty by going over use-cases and examples across both Swarm and Kubernetes. Join us to learn more.
Presentation given at the 2017 LinuxCon China
Container technology is booming, and it brings obvious advantages for the cloud: simpler and faster deployment, portability, and lightweight cost. But the networking challenges are significant: users need to restructure their networks and support container deployment within their current cloud framework, alongside both containers and VMs.
In this presentation, we will introduce a new container networking solution that provides one management framework working with different network components through an open, friendly modelling mechanism. iCAN can simplify network deployment and management with most orchestration systems and a variety of data plane components, and its extensible architecture can define and validate Service Level Agreements (SLAs) for cloud native applications – an important factor for enterprises delivering successful and stable services via containers.
Gaetano Borgione's presentation from the 2017 Open Networking Summit.
Networking is vital for cloud-native apps where distributed computing and development models require speed, simplicity, and scale for massive number of ephemeral containers. Two of the most prevalent container networking models are CNI and CNM for developers using Docker, Mesos, or Kubernetes. This session will present an overview of distributed development, how CNI and CNM models work, and how container frameworks use these models for networking. Gaetano will also discuss the additional functions users need to consider in the control plane and data plane to achieve operational scale and efficiency.
"One network to rule them all" - OpenStack Summit Austin 2016Phil Estes
Presentation at IBM Client Day by Kyle Mestery and Phil Estes, OpenStack Summit 2016 - Austin, Texas on April 26, 2016. "Open, Scalable and Integrated Networking for Containers and VMs" covering Project Kuryr, Docker's libnetwork, and Neutron & OVS and OVN network stacks
The Collabnix Slack channel accommodates around 1,300+ members and conducted its first online webinar. One of the Dockerlabs contributors, Balasundaram Natarajan, talked about demystifying Docker & Kubernetes networking.
OSS Japan 2019 service mesh bridging Kubernetes and legacy – Steve Wong
how to join legacy VMs and bare metal machines to a Kubernetes service mesh so that VMs can consume Kubernetes services AND publish services used by Kubernetes hosted applications
Overview of OpenStack nova-networking evolution towards Neutron. Architecture overview of OVS plugin, ML2, and MidoNet Overlay product. Overview and example of Heat templates, along with automation of physical switches using Cumulus
How to build a Kubernetes networking solution from scratch – All Things Open
Presented by: Antonin Bas & Jianjun Shen, VMware
Presented at All Things Open 2020
Abstract: For the non-initiated, Kubernetes (K8s) networking can be a bit like dark magic. Many clusters have requirements beyond what the default network plugin, kubenet, can provide and require the use of a third-party Container Network Interface (CNI) plugin. But what exactly is the role of these plugins, how do they differ from each other and how does the choice of one affect your cluster?
In this talk, Antonin and Jianjun will describe how a group of developers was able to build a CNI plugin - an open source project called Antrea - from scratch and bring it to production in a matter of months. This velocity was achieved by leveraging existing open-source technologies extensively: Open vSwitch, a well-established programmable virtual switch for the data plane, and the K8s libraries for the control plane. Antonin and Jianjun will explain the responsibilities of a CNI plugin in the context of K8s and will walk the audience through the steps required to create one. They will show how Antrea integrates with the rest of the cloud-native ecosystem (e.g. dashboards such as Octant and Prometheus) to provide insight into the network and ensure that K8s networking is not just dark magic anymore.
The presentation will provide a brief overview of Tungsten Fabric, and the new features in the recent 5.0 release. A demo of Tungsten Fabric will follow, with an overview of core functionality, and newly released features.
Speaker: Nick Davey, Cloud - SDN Product Manager
OpenStack and OpenContrail for FreeBSD platform by Michał Dubiel – eurobsdcon
Abstract
OpenStack and the OpenContrail network virtualization solution form a complete suite able to successfully handle orchestration of resources and services of contemporary cloud installations. These projects, however, have until now been available only for Linux-hosted platforms. This talk is about work underway to bring them into the FreeBSD world.
It explains in greater detail the architecture of an OpenStack system and shows how support for the FreeBSD bhyve hypervisor was brought up using the libvirt library. Details of the OpenContrail network virtualization solution are also provided, with special emphasis on the lower-level system entities, like the vRouter kernel module, which required most of the work while developing the FreeBSD version.
Speaker bio
Michal Dubiel, M.Sc. Eng., born 17th of September 1983 in Kraków, Poland. He graduated in 2009 from the faculty of Electrical Engineering, Automatics, Computer Science and Electronics of AGH University of Science and Technology in Kraków. Throughout his career he has worked for ACK Cyfronet AGH on hardware-accelerated data mining systems and later for Motorola Electronics on DSP software for LTE base stations. Currently he is working for Semihalf on various software projects ranging from low-level kernel development to Software Defined Networking systems. He is mainly interested in computer science, especially operating systems, programming languages, networks, and digital signal processing.
OpenStack Tokyo 2015: Connecting the Dots with Neutron – Phil Estes
Mohammad Banikazemi and Phil Estes from IBM discuss unifying the virtualized networking layers between containers and VMs using Neutron and Docker's libnetwork pluggable API, filling the gap with recently announced Project Kuryr
Daniel Firestone and Gabriel Silva's presentation from the 2017 Open Networking Summit.
SDN is at the foundation of all large scale networks in the public cloud, such as Microsoft Azure - at past ONSes, Microsoft has detailed how all of Azure's virtual networks, load balancing, and security operate on SDN. But how do we make a software network scale to an era of 40, 50, and 100 gigabit networks on servers, providing great performance to end customers with ever increasing VM and container scale and density?
In this presentation, Daniel Firestone and Gabriel Silva will detail Azure Accelerated Networking, using Azure's FPGA-based SmartNICs. They will show how using FPGAs, we can achieve the programmability of a software network with the performance of a hardware one. They will detail how this and other host SDN advances have led to huge performance increases for Linux VMs in particular, and Linux-based NFV appliances, giving Azure industry-leading network performance.
OpenContrail tech doc in Japanese
1.Routing architecture and implementation
2.Service chaining architecture and implementation
3.Neutron router with OpenContrail
4.HA walk
4. Beyond cloud-native… Do you care about:
• High-performance forwarding
• Proven cloud-grade, carrier-grade scale
• Feature rich for Kubernetes and LB, beyond CNI
• Feature rich in general for net + sec
• Multi-tenancy
• Open source / community
• Open standards-based federation
• Multiple orchestrator support
• Solid vendor backing and optional services
• Collapsing stacked SDNs: e.g. K8s on OpenStack
• Ease of use
SDN ECOSYSTEM in CNCF
6. Typical Kubernetes setup
●Kubernetes Cluster
[Diagram: master components (APIServer, Controller, Scheduler, etcd) and a worker running kubelet, kube-proxy, an OVS/bridge Docker network, and pods]
A Kubernetes system consists of a Kube master and workers. The master has the API server, the container scheduler, and the database; a worker has the kubelet, kube-proxy, and pods as containers.
The kubelet on a worker node has the Container Network Interface (CNI), a plugin mechanism for network functions. Users can select the network plugin for their particular use case.
7. Typical Kubernetes Network
●Typical K8S network behavior
Typically K8S has three types of network:
1) The pod-network, which connects pods. All pods connect to this common network, which is used only internally.
2) The service-network, which is used by the cluster IPs of Services. Inter-service communication uses this network.
3) The external-network, which is used by LoadBalancer Services. Users outside of K8S use this network to reach pods.
[Diagram: pods behind a ClusterIP Service on the internal networks, and a LoadBalancer Service exposing a pod to the Internet/LAN]
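The service-network described above is what a ClusterIP Service allocates from. As a minimal sketch (the name, label, and port here are illustrative assumptions, not from the deck), a Service for a DB tier might look like:

```yaml
# Hypothetical ClusterIP Service: allocates a stable virtual IP on the
# service-network and load-balances to pods matching the selector.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  type: ClusterIP        # the default Service type
  selector:
    name: db             # selects pods labeled name=db on the pod-network
  ports:
  - port: 3306           # port exposed on the cluster IP
    targetPort: 3306     # port on the selected pods
```

kube-proxy programs each node so that connections to this cluster IP are forwarded to one of the matching pods.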
8. Typical Kubernetes Network
●Typical K8S network behavior
When pods and Services are created, their IP addresses are assigned automatically.
1) An external user connects to 192.168.0.1, the Web LoadBalancer.
2) The Web LoadBalancer performs destination NAT to a selected nginx pod.
3) The nginx pod connects to 172.16.0.11, the DB ClusterIP.
4) The DB ClusterIP performs destination NAT to a selected mysql pod.
That is, the pod-network is isolated from the external network: users cannot reach a pod directly from outside. This LoadBalancer and ClusterIP behavior is implemented by kube-proxy.
[Diagram: Internet → Web LoadBalancer (192.168.0.1) → nginx pods; nginx → DB ClusterIP (172.16.0.11) → mysql pods; networks 192.168.0.0/24 and service-network 172.16.0.0/24, with pod addresses .21–.24 and .11–.12]
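The DNAT steps above correspond to a LoadBalancer Service. A hedged sketch (the Service name and pod label are assumptions) of the Web front end:

```yaml
# Hypothetical LoadBalancer Service: exposes the nginx pods on an
# external IP; kube-proxy destination-NATs that IP to a selected pod.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: nginx           # assumed label on the nginx pods
  ports:
  - port: 80
    targetPort: 80
```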
9. K8S CNI Typical behavior
●Typical K8S network policy (namespace)
A Namespace defines a group of pods, like an OpenStack project, and provides isolation between namespaces: pods inside a namespace can communicate with each other, while pods in different namespaces cannot (when a network policy enforces the separation). Even when pods are created in different namespaces, their IP addresses are assigned from the common pod-network pool.
[Diagram: mysql, nginx, and apache pods grouped into namespace: groupA and namespace: groupB on a shared pod-network]
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  namespace: groupA
  labels:
    name: db
spec:
  containers:
  - name: mysql-gA
    image: mysql
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  namespace: groupB
  labels:
    name: db
spec:
  containers:
  - name: mysql-gB
    image: mysql
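Namespaces alone do not block traffic in vanilla Kubernetes; the isolation described above is typically enforced with a per-namespace NetworkPolicy. A minimal sketch (the policy name is an assumption):

```yaml
# Allow traffic only from pods in the same namespace; deny the rest.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: groupA
spec:
  podSelector: {}        # applies to every pod in groupA
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}    # empty selector = all pods in this namespace
```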
10. K8S CNI Typical behavior
●Typical K8S network policy (Label)
A Label defines each pod's particular role, and labels are also used to define access control: pods with the same service labels can reach each other, while pods with different labels cannot.
Labels, rather than IP addresses, are used to filter traffic because a pod's IP address may change when the pod moves, which makes IP-address-based filtering useless.
For example, a pod whose labels are "wordpress" and "db" accepts connections from pods having the "wordpress" and "webapi" labels. The connection is denied if only one of the labels matches.
[Diagram: pods labeled service: wordpress (role: webapi nginx, role: db mysql) and service: redmine (role: webapi apache, role: db mysql) on a shared pod-network]
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      service: wordpress
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          service: wordpress
          role: webapi
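For reference, a client pod that this policy admits would carry both matching labels. A sketch (the pod name and image are assumptions):

```yaml
# Hypothetical client pod: carries service=wordpress and role=webapi,
# so the policy above allows its connections to the db pods.
apiVersion: v1
kind: Pod
metadata:
  name: wordpress-webapi
  labels:
    service: wordpress
    role: webapi
spec:
  containers:
  - name: webapi
    image: nginx
```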
11. K8S CNI Typical behavior
●Typical K8S network policy (Ingress)
[Diagram: nginx pods on the pod-network behind a Web LoadBalancer on the external-network (192.168.0.0/24); service-network 172.16.0.0/24; client networks 192.168.10.0/24 and 192.168.20.0/24]
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      service: wordpress
      role: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.20.0/24
"Ingress" addresses traffic coming into a pod. Users can define a particular CIDR (and port) in the ingress – from – ipBlock section.
12. K8S CNI Typical behavior
●Typical K8S network policy (Egress)
[Diagram: nginx pods on the pod-network behind a Web LoadBalancer on the external-network (192.168.0.0/24); service-network 172.16.0.0/24; client networks 192.168.10.0/24 and 192.168.20.0/24]
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      service: wordpress
      role: nginx
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 192.168.20.0/24
"Egress" addresses traffic going out from a pod. Users can define a particular CIDR (and port) in the egress – to – ipBlock section.
13. Typical Kubernetes External Connection
●Kubernetes setup
[Diagram: Kubernetes master (APIServer, Controller, Scheduler, etcd) and a worker (kubelet, kube-proxy, OVS/bridge, pods) attached to a physical network with a BMS; the worker SNATs pod traffic toward the Internet]
If a pod needs to connect to another network, such as the Internet or a LAN, the pod's address is translated to the IP address of its worker node by SNAT; the pod-network cannot reach external networks without SNAT.
That means an external system cannot filter by the exact IP address of the pod, only by the worker node's, because the pod's IP address sometimes changes and the pod can move to another worker node.
14. Considering use case
●Dedicated pod-network
[Diagram: Tenant A pods (mysql, nginx) on 192.168.10.0/24 and Tenant B pods on 192.168.20.0/24, each in its own pod-network]
One Kubernetes setup has only one pod-network. If a dedicated pod network is required, a separate Kubernetes setup must be deployed; Kubernetes can be deployed on virtual machines on OpenStack or elsewhere. But even when multiple Kubernetes setups are deployed on separate VMs, the different pod networks cannot communicate without NAT, as described so far.
15. Considering use case
●Openstack Virtual-machine for K8S setup
[Diagram: OpenStack services (NovaAPI, Glance, Keystone, Neutron) hosting two Kubernetes setups in VMs over OVS bridges: Tenant A pods (mysql, nginx) on 192.168.10.0/24 and Tenant B pods on 192.168.20.0/24]
16. Typical Enterprise use case
●Consider K8S limitation from Typical Enterprise Network Design
[Diagram: Web tier (192.168.10.0/24), API tier (192.168.20.0/24), and DB tier (192.168.30.0/24); a Service Network (172.16.0.0/24) with Syslog and Monitor; developer networks Develop:A and Develop:B]
A typical enterprise network uses dedicated networks for isolation, separated by section or division. For that isolation, firewalls control ingress/egress traffic between them.
For instance, the Service Network is only allowed to reach the Web servers on TCP 80. Develop:A is allowed to connect to Web on TCP 22 and 80. Develop:B is allowed to connect to API and DB. Develop:A and Develop:B cannot connect to each other. Web is allowed to connect to API on TCP 8080, and to Syslog, monitoring, and so on.
If Web and API are containerized on K8S, the existing network design might not work well.
Challenges:
• Dedicated POD network
• FW integration
• Existing Network connection
• Direct POD connection
18. Tungsten Fabric Overview
[Diagram: Tungsten Fabric controller (config, control, analytics, server management) with centralized policy definition and distributed policy enforcement; vRouters on host OSs (Windows, Linux) and BMS over an unchanged physical IP fabric (TORs); XMPP to vRouters and BGP/EVPN to a gateway reaching the Internet/WAN or legacy environments; compute and network/storage orchestration via the orchestrator; logical view of virtual networks Blue and Red joined through a FW]
19. Typical Kubernetes setup with Tungsten Fabric
●Kubernetes Cluster on BMS
TungstenFabric is a CNI plugin which provides additional network services to pods; it resolves the pod-network limitations described so far.
TungstenFabric has two modes: BMS mode and Nested mode. BMS mode installs the TungstenFabric vRouter on the bare-metal servers.
[Diagram: master (APIServer, KubeManager, Controller, Analytics, Analytics-DB) and two worker nodes, each running kubelet, CNI, the vRouter Agent, a TF vRouter, and pods]
20. Typical Kubernetes setup with Tungsten Fabric
●Kubernetes Cluster on Openstack
Nested mode does not install a TF vRouter on the VM, nor is an SDN controller installed in the Kubernetes cluster; only the KubeManager and CNI are installed. The kube-manager calls the TF Controller, which works as the OpenStack Neutron plugin, and the CNI calls the TF Agent on the compute node. It is a very distinctive solution that avoids running multiple SDN controllers on VMs.
Also, the worker node does not need a TF vRouter; it uses VLANs to isolate the pod network.
[Diagram: OpenStack services (NovaAPI, Glance, Keystone, Neutron) with the TF Controller, Analytics, and Analytics-DB; a compute node running the TF vRouter and Agent; a Kubernetes VM with APIServer, kube-manager, kubelet, and CNI, its pods attached to a bridge over VLANs]
21. Typical Kubernetes setup with Tungsten Fabric
●Kubernetes Cluster with Openstack
TungstenFabric can work with both OpenStack and K8S at the same time. It can extend the same virtual network across VMs and pods, and the same security policy, such as a security group or a label-based firewall rule, can be attached to both a VM and a pod.
[Diagram: shared TF Controller, Analytics, and Analytics-DB alongside OpenStack services (NovaAPI, Glance, Keystone, Neutron); one node running pods (kubelet, CNI, Agent, TF vRouter) and another running a VM on a TF vRouter, joined to the same virtual network]
22. What challenges can TF resolve?
●Dedicated POD network
[Diagram: Tenant A pods (mysql, nginx) on a dedicated virtual network 192.168.10.0/24]
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  annotations: {
    "opencontrail.org/network": '{"domain": "default-domain", "project": "user1", "name": "pod-vn1"}'
  }
  labels:
    name: db
spec:
  containers:
  - name: mysql-gA
    image: mysql
TungstenFabric provides a dedicated network to a pod using "annotations". Users can attach their own virtual network to a pod without a separate K8S setup.
23. What challenges can TF resolve?
●Inter-POD network
TungstenFabric allows multiple virtual networks to be connected to pods. Users can define 5-tuple-based filters between virtual networks, like security groups in OpenStack. With TungstenFabric, it is easy to define which traffic is allowed to connect.
[Diagram: mysql and nginx pods on virtual networks 192.168.10.0/24 and 192.168.20.0/24]
24. What challenges can TF resolve?
apiVersion: v1
kind: Pod
metadata:
  name: cirros-vn1-1
  annotations: {
    "opencontrail.org/network": '{"domain": "default-domain", "project": "juniper-test", "name": "pod-service-1"}'
  }
  labels:
    application: service-app1
    label: web
spec:
  containers:
  - name: cirros-vn1-1
    image: docker.io/cirros
    imagePullPolicy: IfNotPresent
●Enforce Label Based Filter
A traffic filter like the one above is defined in YAML configured by the pod owner; thus the IT supervisor cannot control that traffic using a firewall.
TungstenFabric enforces traffic with firewall rules defined by the IT supervisor: it applies a global policy to the pod network. If a service owner violates that policy, the traffic cannot reach other pods.
25. What challenges can TF resolve?
●Direct Connect from External Network
TungstenFabric provides a floating IP on a pod interface, the same feature as in OpenStack. TungstenFabric performs destination NAT in its vRouter module to the exact pod IP address.
The picture shows a "Router" doing the NAT, but actually the TF vRouter on the worker node does it; there is no physical router or dedicated NAT server.
This is very useful for connecting to a pod directly from an external network, for example for debugging. It is not a standard K8S feature, so it requires the TungstenFabric API.
[Diagram: an external client on the public-network reaches floating IP 203.0.113.1, which the vRouter destination-NATs to pod IP 10.0.10.1 on the pod-network, alongside the usual Web LoadBalancer path]
26. What challenges can TF resolve?
●External Network Connection
As described, TungstenFabric connects with physical routers and HVTEPs, so it can associate an external network with a pod network while keeping network isolation.
Each TungstenFabric virtual network has a VNI and a route target; its routes are advertised to the router/HVTEP by L3VPN/EVPN, and tunnels are created between the router/HVTEP and the TF vRouters.
[Diagram: master (APIServer, KubeManager, Controller, Analytics, Analytics-DB) and two worker nodes (kubelet, CNI, Agent, TF vRouter, pods) peering with server VMs over BGP L3VPN/EVPN]
27. What challenges can TF resolve?
●VNF integration
TungstenFabric can associate a pod network with VNFs/PNFs. VNFs/PNFs can steer pod traffic through service chaining, and the service chain can steer traffic to multiple VNFs via the TF vRouter instead of through actual VNF configuration. Thus it is possible to add, delete, and scale VNFs out or in without configuration changes on the VNF/PNF.
[Diagram: pods on a worker node (kubelet, CNI, Agent, TF vRouter) steered through a VNF service chain toward the Internet]
28. What challenges can TF resolve?
●Consider Typical Enterprise Network Design with TungstenFabric
[Diagram: the enterprise design from slide 16 – Web (192.168.10.0/24), API (192.168.20.0/24), and DB (192.168.30.0/24) tiers, the Service Network (172.16.0.0/24) with Syslog and Monitor, and developer networks Develop:A and Develop:B]
TungstenFabric resolves many challenges of the default K8S features; thus the network design in the picture becomes possible.
Tungsten Fabric Resolves:
• Dedicated POD network
• FW integration
• Existing Network connection
• Direct POD connection