The document provides an overview of Kubernetes networking concepts and common networking solutions like Flannel and Calico. It describes how Kubernetes networking works at a basic level using namespaces and CNI plugins. It then dives deeper into how Flannel and Calico implement overlay networking between pods using techniques like VXLAN tunnels, IP-in-IP encapsulation, and BGP routing. The document walks through examples of pod-to-pod communication and how packets flow with ARP resolution and routing for both Flannel and Calico networks.
This slide deck was created to make Open vSwitch easier to understand, so I tried to keep it practical: if you follow this scenario, you will pick up some working knowledge of OVS. Throughout the document I mainly use just two commands, "ip" and "ovs-vsctl", to show what these commands can do.
In this slide deck we discuss the concepts behind iptables/ebtables and then show how they work in a simple Docker environment. To trace the packet flow of container-to-container communication, we use the LOG target in iptables/ebtables.
This is a follow-up to our Docker networking tutorial. This slide deck describes the options for deploying Docker containers in a multi-host cluster environment. We introduce the LorisPack toolkit for connecting and isolating pods of containers deployed across multiple hosts.
Docker Networking with New Ipvlan and Macvlan Drivers, by Brent Salisbury
Docker Networking presentation at ONS2016.
Docker Macvlan and Ipvlan Networking Drivers Experimental Readme:
github.com/docker/docker/blob/master/experimental/vlan-networks.md
Kernel requirements: Ipvlan mode needs v4.2+; Macvlan mode needs v3.19+.
If testing with VirtualBox, use NAT-mode interfaces unless you have multiple MAC addresses working in your setup, and use the 172.x.x.x subnet and gateway used by the VBox NAT network. VMware Fusion works out of the box.
Here is a screenshot of a VirtualBox NAT interface:
https://www.dropbox.com/s/w1rf61n18y7q4f1/Screenshot%202016-03-20%2001.55.13.png?dl=0
Linux offers an extensive selection of programmable and configurable networking components, from traditional bridges and encryption to container-optimized layer 2/3 devices, link aggregation, tunneling, and several classification and filtering languages, all the way up to full SDN components. This talk will provide an overview of many Linux networking components, covering the Linux bridge, IPVLAN, MACVLAN, MACVTAP, Bonding/Team, OVS, classification & queueing, tunnel types, hidden routing tricks, IPSec, VTI, VRF and many others.
An introduction to the basic concepts of Open vSwitch. In these slides we talk about how the Linux kernel and networking stack work together to forward and process network packets, and compare that Linux networking functionality with Open vSwitch and OpenFlow.
At the end of the deck, we talk about the challenges of integrating Open vSwitch with Kubernetes: which networking functions we need to address and what benefits we can get from Open vSwitch.
Introduction to Docker networking options. We give an in-depth description of the different options with single-host examples. See our other presentations for multi-host, IPv6, and CoreOS Flannel descriptions.
This presentation provides an introductory overview of Linux networking options, including network namespaces, VLAN interfaces, MACVLAN interfaces, and virtual Ethernet (veth) interfaces.
Simplifying OpenStack and Kubernetes Networking with Romana, by Juergen Brendel
Romana, the open source project by Pani Networks, brings stunning simplicity to the usually so complex networking in OpenStack and Kubernetes. Using only native L3 routing and no overlays, along with automated distributed application of network policies and security rules, it provides operators with easy to understand and manage networking, while allowing network hardware to operate at its best and with full efficiency.
These slides were used during the OpenStack meetup in Auckland in May 2016, hosted by Catalyst IT.
OpenStack: Virtual Routers On Compute Nodes, by clayton_oneill
Learn the production pros and cons of operating Neutron legacy and HA routers on compute nodes in your production cloud. Not ready for DVR or third-party network overhauls? Virtual router network “hot spots” got you down? Large virtual router failure domains keeping you up late at night? Neutron reference architectures not providing a scalable routing solution? If you answered yes to any of these questions then this talk is for you.
The Docker network overlay driver relies on several technologies: network namespaces, VXLAN, Netlink and a distributed key-value store. This talk will present each of these mechanisms one by one along with their userland tools and show hands-on how they interact together when setting up an overlay to connect containers.
The talk will continue with a demo showing how to build your own simple overlay using these technologies.
Taking Security Groups to Ludicrous Speed with OVS (OpenStack Summit 2015), by Thomas Graf
Open vSwitch (OVS) has long been a critical component of Neutron's reference implementation, offering reliable and flexible virtual switching for cloud environments.
Being an early adopter of the OVS technology, Neutron's reference implementation made some compromises to stay within the early, stable feature set OVS exposed. In particular, Security Groups (SG) have so far been implemented by leveraging hybrid Linux bridging and iptables, which comes at a significant performance overhead. However, thanks to recent developments and ongoing improvements within the OVS community, we are now able to implement feature-complete security groups directly within OVS.
In this talk we will summarize the existing Security Groups implementation in Neutron and compare its performance with the Open vSwitch-only approach. We hope this analysis will form the foundation of future improvements to the Neutron Open vSwitch reference design.
When it comes to networking inside Kubernetes, selecting the right networking solution may be one of the most important decisions you will face. This is especially true if you are trying to run a Kubernetes cluster in production.
Therefore it's beneficial to have a good understanding of the different CNI options out there and, most importantly, how these networking options differ from each other.
This presentation goes over packet-level details of how the network plumbing happens with different CNI plugins, including Flannel, Calico & Cilium.
Kubernetes has a fairly complex network architecture, and it is this networking model that underpins much of what makes Kubernetes effective as a container platform.
1. Docker containers networks
2. Containers communication in a Pod
3. Pods communication cross different nodes
4. Pod to Service communication
Tutorial on using CoreOS Flannel for Docker networking, by LorisPack Project
Flannel is an overlay-based networking technique for networking Docker containers on CoreOS platforms. This tutorial explains the theory, setup instructions and limitations of the mechanism.
"One network to rule them all" - OpenStack Summit Austin 2016, by Phil Estes
Presentation at IBM Client Day by Kyle Mestery and Phil Estes, OpenStack Summit 2016 - Austin, Texas on April 26, 2016. "Open, Scalable and Integrated Networking for Containers and VMs", covering Project Kuryr, Docker's libnetwork, and the Neutron, OVS and OVN network stacks.
How to build a Kubernetes networking solution from scratch, by All Things Open
Presented by: Antonin Bas & Jianjun Shen, VMware
Presented at All Things Open 2020
Abstract: For the non-initiated, Kubernetes (K8s) networking can be a bit like dark magic. Many clusters have requirements beyond what the default network plugin, kubenet, can provide and require the use of a third-party Container Network Interface (CNI) plugin. But what exactly is the role of these plugins, how do they differ from each other and how does the choice of one affect your cluster?
In this talk, Antonin and Jianjun will describe how a group of developers was able to build a CNI plugin - an open source project called Antrea - from scratch and bring it to production in a matter of months. This velocity was achieved by leveraging existing open-source technologies extensively: Open vSwitch, a well-established programmable virtual switch for the data plane, and the K8s libraries for the control plane. Antonin and Jianjun will explain the responsibilities of a CNI plugin in the context of K8s and will walk the audience through the steps required to create one. They will show how Antrea integrates with the rest of the cloud-native ecosystem (e.g. dashboards such as Octant and Prometheus) to provide insight into the network and ensure that K8s networking is not just dark magic anymore.
Containers require a new approach to networking. How are your containers communicating with each other? This talk will go through the different network topologies of Kubernetes: how Kubernetes addresses networking compared to traditional physical networking concepts, what your options are for networking in Kubernetes, and what the CNI (Container Network Interface) is and how it affects Kubernetes networking.
A version of the slides including the GIF animations:
https://docs.google.com/presentation/d/1BTwGPUG6KGwc3xoW1_vU7CmloHXW-ardytNWomPdSy4/edit?usp=sharing
2. About Big Switch
• We are in the business of “Abstracting Networks” (as one Big Switch) using open networking hardware
About Me
• Spent 4 years in Engineering building our products; now in Technical Product Management
Kubernetes: Abstracted Compute Domain (Legacy vs. Next-Gen)
http://www.youtube.com/watch?v=HlAXp0-M6SY&t=0m43s
Why Not Abstract the Network?
4. Namespaces
K8S Networking: Basics
• The Linux kernel has 6 types of namespaces: pid, net, mnt, uts, ipc, user
• Network namespaces provide a brand-new network stack for all the processes within the namespace
• That includes network interfaces, routing tables and iptables rules
Diagram: on Node1, namespace-1 (eth0) and namespace-2 (eth0) attach via veth0/veth1 to a bridge in the root namespace (eth0)
5. Pods
K8S Networking: Basics
• The lowest common denominator in K8S. A pod is comprised of one or more containers along with a “pause” container
• The pause container acts as the “parent” container for the other containers inside the pod; one of its primary responsibilities is to bring up the network namespace
• Great for redundancy: termination of the other containers does not result in termination of the network namespace
6. Accessing Pod Namespaces
K8S Networking: Basics
• Multiple ways to access pod namespaces:
• ‘kubectl exec -it’
• ‘docker exec -it’
• nsenter (“namespace enter”; lets you run commands that are installed on the host but not in the container)
Both containers belong to the same pod => same network namespace => same ‘ip a’ output
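The namespace sharing above can be checked from the host: every process in a pod (including the pause container) points at the same network-namespace inode under /proc. A minimal sketch, assuming a Linux host; real container PIDs are hypothetical here, so the demo compares a process with itself:

```python
import os

# Each process exposes its network namespace as a symlink like
# /proc/<pid>/ns/net -> net:[4026531992]; two processes share a
# network namespace exactly when these links resolve identically.
def same_netns(pid_a, pid_b):
    ns_a = os.readlink(f"/proc/{pid_a}/ns/net")
    ns_b = os.readlink(f"/proc/{pid_b}/ns/net")
    return ns_a == ns_b

# Demo: compare this process with itself ("self" needs no root);
# with a real pod you would pass the container PIDs instead.
print(same_netns("self", os.getpid()))  # True
```

With real pods, comparing the pause container's PID against an app container's PID this way returns True, which is what the slide's identical ‘ip a’ output demonstrates.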
7. Container Networking Interface: CNI
K8S Networking: Basics
• The interface between the container runtime and the network implementation
• A network plugin implements the CNI spec: it takes a container from the runtime and configures (attaches/detaches) it on the network
• A CNI plugin is an executable (in /opt/cni/bin)
• When invoked, it reads in a JSON config and environment variables to get all the parameters required to configure the container on the network
Diagram: the container runtime invokes the CNI plugin over the Container Networking Interface, and the plugin configures the networking
Credit: https://www.slideshare.net/weaveworks/introduction-to-the-container-network-interface-cni
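The JSON-plus-environment-variables contract can be sketched as follows. This is not a real plugin: the container ID, netns path and pod IP are made-up values, and the function only fabricates the result a real plugin would return after wiring up the interface:

```python
import json

# Sketch of the CNI contract: the runtime execs the plugin with
# parameters in environment variables (CNI_COMMAND, CNI_NETNS, ...)
# and the network config as JSON on stdin; the plugin answers with
# a result JSON describing interfaces and assigned IPs.
def cni_add(env, config_json):
    config = json.loads(config_json)
    assert env["CNI_COMMAND"] == "ADD"
    # A real plugin would now enter env["CNI_NETNS"] and create
    # env["CNI_IFNAME"] there; here we just fabricate the result.
    return {
        "cniVersion": config["cniVersion"],
        "interfaces": [{"name": env["CNI_IFNAME"]}],
        "ips": [{"address": "10.244.1.2/24"}],  # hypothetical pod IP
    }

env = {
    "CNI_COMMAND": "ADD",
    "CNI_CONTAINERID": "abc123",                # hypothetical values
    "CNI_NETNS": "/var/run/netns/abc123",
    "CNI_IFNAME": "eth0",
}
config = json.dumps({"cniVersion": "0.4.0", "name": "mynet", "type": "bridge"})
result = cni_add(env, config)
print(json.dumps(result))
```

The same shape of exchange happens for DEL and CHECK commands; the plugin being a plain executable is what makes it easy to swap Flannel, Calico or Cilium underneath the same runtime.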
8. Topology Info
• The node IPs belong to 2 different subnets: 25.25.25.0/24 & 35.35.35.0/24
• The gateway is configured as .254 in each network segment
• The bond0 interface is created by bonding two 10G interfaces
Diagram: Node-121 (bond0, 25.25.25.121) and Node-122 (bond0, 35.35.35.122), each with a root namespace
9. Intro
Flannel
• To make networking easier, Kubernetes does away with port-mapping and assigns a unique IP address to each pod
• If a host cannot get an entire subnet to itself, things get pretty complicated
• Flannel aims to solve this problem by creating an overlay mesh network that provisions a subnet to each server
Stack: K8S -> Flannel CNI plugin -> FlannelD (VXLAN/UDP..) -> Network
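The per-server subnet provisioning can be sketched in a few lines (the cluster CIDR and node names below are illustrative; flannel actually records these leases in its datastore):

```python
import ipaddress

# Sketch of flannel-style subnet leasing: carve one cluster-wide pod
# CIDR into a /24 lease per node.
cluster_cidr = ipaddress.ip_network("10.244.0.0/16")
nodes = ["node-121", "node-122", "node-123"]
leases = dict(zip(nodes, cluster_cidr.subnets(new_prefix=24)))

for node, subnet in leases.items():
    # Each node hands out pod IPs from its own lease, so cross-node
    # routing needs only one route per node, not one per pod.
    print(node, subnet)
```

This is why a pod's IP alone is enough to tell which node it lives on, which the overlay uses when picking the tunnel endpoint.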
10. Default Config
Flannel
• “flannel.1” is the VXLAN interface
• “cni0” is the bridge
Diagram: Node-121 (25.25.25.121) and Node-122 (35.35.35.122), each with cni0 and flannel.1 on top of bond0 in the root namespace
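As a rough sketch of what flannel.1 does to each cross-node frame, here is the 8-byte VXLAN header and the resulting per-packet overhead (assuming IPv4, no VLAN tag, and flannel's default VNI of 1):

```python
import struct

# The original pod-to-pod Ethernet frame is wrapped in outer UDP/IP
# headers plus an 8-byte VXLAN header carrying the VNI.
def vxlan_header(vni):
    flags = 0x08  # "I" flag set: the VNI field is valid
    # 1 flag byte, 3 reserved bytes, then VNI in the top 24 bits
    # of the final 32-bit word (low byte reserved).
    return struct.pack("!B3xI", flags, vni << 8)

hdr = vxlan_header(1)
print(len(hdr))  # 8

# Total overlay overhead on the wire:
overhead = 14 + 20 + 8 + 8  # outer Ethernet + IPv4 + UDP + VXLAN
print(overhead)  # 50 bytes, which is why flannel.1 commonly has MTU 1450
```

That 50-byte tax is the cost of not needing the underlay to know anything about pod subnets.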
16. Felix
Calico
• The primary Calico agent; runs on each machine that hosts endpoints
• Responsible for programming routes and ACLs, and anything else required on the host
Bird
• BGP client, responsible for route distribution
• When Felix inserts routes into the Linux kernel FIB, Bird picks them up and distributes them to the other nodes in the deployment
17. Architecture
Calico
• Felix's primary responsibility is to program the host's iptables and routes to provide connectivity to the pods on that host
• Bird is a BGP agent for Linux, used to exchange routing information between the hosts; the routes programmed by Felix are picked up by Bird and distributed among the cluster hosts
Diagram: on each of Node1 and Node2, pods attach via veth interfaces; Felix programs the iptables and route table, and Bird peers with the other nodes
*etcd/confd components are not shown for clarity
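The per-host routing state that Felix programs and Bird advertises can be sketched as a longest-prefix-match table (the /26 blocks and next hops below mirror the addresses used in these slides, but are illustrative):

```python
import ipaddress

# One local pod block plus one block learned over BGP from the peer.
routes = {
    ipaddress.ip_network("192.168.83.64/26"): "local (cali-* veth)",
    ipaddress.ip_network("192.168.243.0/26"): "via 35.35.35.122",
}

def lookup(dst):
    # Longest-prefix match, as the kernel FIB does.
    dst = ipaddress.ip_address(dst)
    best = max((n for n in routes if dst in n),
               key=lambda n: n.prefixlen, default=None)
    return routes.get(best)

print(lookup("192.168.243.4"))   # forwarded to the other node
print(lookup("192.168.83.69"))   # delivered to a local veth
```

Because these are plain kernel routes rather than tunnels, pod traffic in Calico's BGP mode is just ordinary L3 forwarding.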
20. Default Configuration
Calico
• Default: node mesh with an IP-IP tunnel
• The route table has entries to all the other tunl0 interfaces through the other node IPs
Diagram: Node-121 (25.25.25.121) and Node-122 (35.35.35.122), each with bond0, tunl0 (192.168.243.0 on Node-122), iptables and a route table; an “ipip” tunnel connects them, with a route to the other node's tunnel
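What tunl0 does to each packet can be sketched by building the outer header it prepends. IP-in-IP simply wraps the pod packet in a second IPv4 header whose protocol number is 4; the node addresses below mirror the slides, and the checksum is left zero in this sketch since the kernel computes it:

```python
import struct

def outer_ipv4(src, dst, payload_len):
    # Minimal outer IPv4 header for IP-in-IP encapsulation.
    ver_ihl, tos = 0x45, 0          # IPv4, 20-byte header
    total_len = 20 + payload_len
    ident, frag, ttl, proto = 0, 0, 64, 4  # proto 4 = IP-in-IP
    checksum = 0                    # kernel fills this in for real
    return struct.pack("!BBHHHBBH4s4s", ver_ihl, tos, total_len,
                       ident, frag, ttl, proto, checksum,
                       bytes(map(int, src.split("."))),
                       bytes(map(int, dst.split("."))))

hdr = outer_ipv4("25.25.25.121", "35.35.35.122", 84)
print(len(hdr))  # 20: IPIP costs only one extra IPv4 header
print(hdr[9])    # 4: the IP-in-IP protocol number
```

Compared with VXLAN's 50-byte overhead, IPIP adds only 20 bytes, but it carries no L2 information and needs the underlay to pass protocol-4 packets.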
32. BGP Mode
Calico
• Need the Calico nodes to peer with the network fabric
Diagram: Node-121 (25.25.25.121) and Node-122 (35.35.35.122), each with bond0, iptables, a route table and Bird; tunl0 is crossed out; pod1 (192.168.83.69, via cali-x) and pod2 (192.168.243.4, via cali-y)
33. BGP Mode
Calico
• Create a global BGP configuration
• Create the network as a BGP peer (**assuming an abstracted cloud network; config will vary depending on the vendor)
Diagram: same two-node topology as before, tunl0 unused; captions: “Create a BGP configuration with AS 63400”, “Create a ‘global’ BGP peer”, “Both nodes are peering with the global BGP peer”
34. BGP Mode
Calico
• Configure the Calico nodes as BGP neighbors on the network
• As a result, the network learns the 192.168.83.64/26 & 192.168.243.0/26 routes
Diagram: same topology; captions: “Network is peering with both the Calico nodes”, “Network is learning pod network routes via BGP”
36. Architecture
Cilium
• The Cilium agent, Cilium CLI client and CNI plugin run on every node
• The Cilium agent compiles BPF programs and has the kernel run them at key points in the network stack, giving visibility and control over all network traffic in and out of all containers
• Cilium interacts with the Linux kernel to install BPF programs which then perform networking tasks and implement security rules
Diagram: on Node1, the Cilium agent attaches a BPF program to each pod's interface
*etcd/monitor components are not shown for clarity
37. Overlay Network Mode
Cilium: Networking
• All nodes form a mesh of tunnels using a UDP-based encapsulation protocol: VXLAN (default) or Geneve
• Simple: the only requirement is that cluster nodes can reach each other over IP/UDP
• Auto-configured: when Kubernetes is run with the “--allocate-node-cidrs” option, Cilium can form an overlay network automatically, without any configuration by the user
Direct/Native Routing Mode
• In direct routing mode, Cilium hands all packets that are not addressed to another local endpoint to the routing subsystem of the Linux kernel
• Packets are routed as if a local process had emitted them
• Admins can use a routing daemon such as Zebra, Bird or BGPD; the routing protocol announces the node's allocation prefix via the node's IP to all other nodes
39. Default Configuration
Cilium
• The “cilium_vxlan” interface is in metadata mode: it can send & receive on multiple addresses, and no IP is assigned to the interface
• Cilium's addressing model allows deriving the node address from each container address
• This is also used to derive the VTEP address, so you don't need to run a control-plane protocol to distribute these addresses; all you need are routes to make the node addresses routable
Diagram: Node-121 (25.25.25.121) and Node-122 (35.35.35.122), each with bond0, cilium_vxlan, cilium_net, cilium_health and cilium_host (192.168.1.1 on Node-121, 192.168.2.1 on Node-122) in the root namespace
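The derivation idea can be sketched directly: with one per-node pod prefix, a pod IP alone identifies its node, so the tunnel endpoint can be computed rather than distributed. The prefix-to-node mapping below is illustrative, matching the addresses in these slides:

```python
import ipaddress

# One /24 pod prefix per node, keyed to that node's underlay IP.
node_for_prefix = {
    ipaddress.ip_network("192.168.1.0/24"): "25.25.25.121",
    ipaddress.ip_network("192.168.2.0/24"): "35.35.35.122",
}

def vtep_for_pod(pod_ip):
    # Mask the pod address down to its node prefix, then look up the
    # node (VTEP) address; no control-plane exchange required.
    pod_ip = ipaddress.ip_address(pod_ip)
    prefix = ipaddress.ip_network(f"{pod_ip}/24", strict=False)
    return node_for_prefix[prefix]

print(vtep_for_pod("192.168.2.37"))  # 35.35.35.122
```

All that remains for the operator is making the node addresses themselves routable, which is exactly the slide's point.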
45. Services
K8S Networking: Basics (yes, back to the basics…)
• Pods are mortal, so we need a higher-level abstraction: Services
• A “Service” in Kubernetes is a conceptual construct, not a process/daemon; outside networks don't learn Service IP addresses
• Implemented through kube-proxy with iptables rules
Diagram: a Service IP in front of Pod1, Pod2 and Pod3, realized by kube-proxy
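How those kube-proxy iptables rules spread traffic can be sketched with a small simulation. In iptables mode, the rule chain tries endpoint i with probability 1/(n-i) (the statistic match), which works out to a uniform 1/n choice overall; the endpoint IPs below are made up:

```python
import random

endpoints = ["10.244.1.2", "10.244.2.3", "10.244.2.4"]

def pick_endpoint(rng):
    # Mimic the chain of statistic-mode rules kube-proxy generates:
    # rule i matches with probability 1/(n-i), last rule always matches.
    n = len(endpoints)
    for i, ep in enumerate(endpoints):
        if rng.random() < 1.0 / (n - i):
            return ep
    return endpoints[-1]

rng = random.Random(0)  # seeded for reproducibility
counts = {ep: 0 for ep in endpoints}
for _ in range(30000):
    counts[pick_endpoint(rng)] += 1
print(counts)  # roughly 10000 per endpoint
```

The matching rule then DNATs the packet's Service IP to the chosen pod IP, which is why the Service address never needs to exist on any interface.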
46. Exposing Services
K8S Networking: Basics
• If Services are an abstracted concept without any meaning outside of the K8S cluster, how do we access them?
• NodePort / LoadBalancer / Ingress etc.
NodePort: the Service is accessed via ‘NodeIP:port’
Credit: https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
47. Exposing Services
K8S Networking: Basics
• LoadBalancer: spins up a load balancer and binds Service IPs to the load balancer VIP
• Very common in public cloud environments
• For bare-metal workloads there is ‘MetalLB’ (an up-and-coming project: a load-balancer implementation for bare-metal K8S clusters using standard routing protocols)
LoadBalancer: the Service is accessed via the load balancer
Credit: https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
48. Exposing Services
K8S Networking: Basics
• Ingress: a K8S concept that lets you decide how to let traffic into the cluster
• Sits in front of multiple services and acts as a “router”
• Implemented through an ingress controller (NGINX/HAProxy)
Credit: https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
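The “router” role an ingress controller plays can be sketched as host plus path-prefix matching onto backing Services. The hostnames and service names below are made up, in the spirit of an Ingress object's rules:

```python
# Ordered (host, path-prefix, service) rules, like an ingress
# controller's generated proxy configuration.
rules = [
    ("shop.example.com", "/cart", "cart-svc"),
    ("shop.example.com", "/",     "web-svc"),
    ("api.example.com",  "/",     "api-svc"),
]

def route(host, path):
    # First matching rule wins; more specific prefixes are listed first.
    for rule_host, prefix, service in rules:
        if host == rule_host and path.startswith(prefix):
            return service
    return None  # no rule: the controller would return 404

print(route("shop.example.com", "/cart/42"))  # cart-svc
print(route("api.example.com", "/v1/users"))  # api-svc
```

The chosen Service is then resolved to pod endpoints exactly as on the previous slides, so Ingress is routing layered on top of Services, not a replacement for them.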
49. Exposing Services
K8S Networking: Basics
• Ingress: the network (an “abstracted network”) can really help you out here
• Ingress controllers are deployed on some of the “public” nodes in your cluster
• E.g. Big Cloud Fabric (by Big Switch) can expose a virtual IP in front of the ingress controllers and perform load balancing, health checks and analytics
Diagram: a Virtual-IP in front of nodes Kube-1862 and Kube-1863
50. Thanks!
• Repo for all the command outputs/PDF slides: https://github.com/jayakody/ons-2019
• Credits:
• Inspired by “Life of a Packet”, Michael Rubin, Google (https://www.youtube.com/watch?v=0Omvgd7Hg1I)
• Sarath Kumar, Prashanth Padubidry: Big Switch Engineering
• Thomas Graf, Dan Wendlandt: Isovalent (Cilium Project)
• Project Calico Slack channel; special shout-out to Casey Davenport, Tigera
• And so many other folks who took time to share knowledge around this emerging space through different mediums (blogs/YouTube videos etc.)