A version of this deck including GIFs: https://docs.google.com/presentation/d/1BTwGPUG6KGwc3xoW1_vU7CmloHXW-ardytNWomPdSy4/edit?usp=sharing
[Original source: https://octetz.com/docs/2020/2020-10-01-calico-routing-modes/]
Calico Routing Modes
How does Calico route container traffic? Many say “It uses BGP to route unencapsulated traffic, providing near-native network performance.” They aren’t completely wrong. It is possible to run Calico in this mode, but it is not the default. It’s also a common misconception that BGP is how Calico routes traffic; it is part of the picture, but Calico may also leverage IP-in-IP or VXLAN to perform routing. In this post, I’ll attempt to explain the routing options of Calico and how BGP complements each.
Example Architecture
For this demonstration, I have set up the following architecture in AWS. The Terraform is here. The Calico deployment is here.
For simplicity, there is only 1 master node. Worker nodes are spread across availability zones in 2 different subnets: there are 2 worker nodes in subnet 1 and 1 worker node in subnet 2. Calico is the container networking plugin across all nodes. Throughout this post, I'll refer to these nodes as follows.
master: Kube-Master-Node, subnet 1
worker-1: Kube-Worker-Node 1, subnet 1
worker-2: Kube-Worker-Node 2, subnet 1
worker-3: Kube-Worker-Node 3, subnet 2
These are consistent with the node names in my Kubernetes cluster.
NAME STATUS ROLES AGE VERSION
master Ready master 6m55s v1.17.0
worker-1 Ready <none> 113s v1.17.0
worker-2 Ready <none> 77s v1.17.0
worker-3 Ready <none> 51s v1.17.0
Pods are deployed with manifests for pod-1, pod-2, and pod-3.
NAME READY STATUS RESTARTS AGE NODE
pod-1 1/1 Running 0 4m52s worker-1
pod-2 1/1 Running 0 3m36s worker-2
pod-3 1/1 Running 0 3m23s worker-3
Route Sharing
By default, Calico uses BGP to distribute routes amongst hosts. A calico-node pod runs on every host, and each calico-node peers with the others.
The calico-node container hosts 2 processes.
1. BIRD: Advertises routes via BGP.
2. Felix: Programs host route tables.
BIRD can be configured for advanced BGP architectures, such as centralized route sharing via route reflectors (instead of a full node-to-node mesh) and peering with BGP-capable routers. Using calicoctl, you can view the nodes sharing routes.
$ sudo calicoctl node status
Calico process is running.
IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+-------------------+-------+----------+-------------+
| 10.30.0.206 | node-to-node mesh | up | 18:42:27 | Established |
| 10.30.0.56 | node-to-node mesh | up | 18:42:27 | Established |
| 10.30.1.66 | node-to-node mesh | up | 18:42:27 | Established |
+--------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
Each host IP represents a node this host is peering with. This was run on master, and the IPs map as:
10.30.0.206: worker-1
10.30.0.56: worker-2
10.30.1.66: worker-3
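The node-to-node mesh shown above scales quadratically, since every calico-node peers with every other. A small illustrative Python sketch (not part of the original post) shows why large clusters switch to the route-reflector architecture mentioned above:

```python
def full_mesh_sessions(nodes: int) -> int:
    """BGP sessions in a full node-to-node mesh: every pair peers once."""
    return nodes * (nodes - 1) // 2

def route_reflector_sessions(nodes: int, reflectors: int = 1) -> int:
    """With route reflectors, each remaining node peers only with the reflectors."""
    return (nodes - reflectors) * reflectors

# The 4-node cluster in this post needs only 6 mesh sessions...
print(full_mesh_sessions(4))          # -> 6
# ...but a 100-node cluster would need 4950, which is why larger
# deployments move to centralized route sharing via reflectors.
print(full_mesh_sessions(100))        # -> 4950
print(route_reflector_sessions(100))  # -> 99
```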
Once routes are shared, Felix programs each host's route table as follows.
# run on master
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.30.0.1 0.0.0.0 UG 100 0 0 ens5
10.30.0.0 0.0.0.0 255.255.255.0 U 0 0 0 ens5
10.30.0.1 0.0.0.0 255.255.255.255 UH 100 0 0 ens5
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.97.192 10.30.1.66 255.255.255.192 UG 0 0 0 tunl0
192.168.133.192 10.30.0.56 255.255.255.192 UG 0 0 0 tunl0
192.168.219.64 0.0.0.0 255.255.255.192 U 0 0 0 *
192.168.219.65 0.0.0.0 255.255.255.255 UH 0 0 0 cali50e69859f2f
192.168.219.66 0.0.0.0 255.255.255.255 UH 0 0 0 calif52892c3dce
192.168.226.64 10.30.0.206 255.255.255.192 UG 0 0 0 tunl0
These routes are programmed for IP-in-IP traffic. Each host's pod CIDR (Destination + Genmask) goes through the tunl0 interface. Pods with endpoints on this host each have a cali* interface, which is used for network policy enforcement.
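To make the table above concrete, here is a sketch (Python standard library only, illustrative rather than the kernel's actual implementation) of the longest-prefix match the kernel performs against those rows when a packet leaves the master:

```python
import ipaddress

# (destination network, gateway, interface) rows from master's route table above
routes = [
    ("0.0.0.0/0",          "10.30.0.1",   "ens5"),   # default route
    ("10.30.0.0/24",       None,          "ens5"),   # directly connected subnet
    ("192.168.97.192/26",  "10.30.1.66",  "tunl0"),  # worker-3 pod CIDR
    ("192.168.133.192/26", "10.30.0.56",  "tunl0"),  # worker-2 pod CIDR
    ("192.168.226.64/26",  "10.30.0.206", "tunl0"),  # worker-1 pod CIDR
]

def lookup(dst: str):
    """Longest-prefix match: the most specific network containing dst wins."""
    dst_ip = ipaddress.ip_address(dst)
    matches = [(net, gw, dev) for net, gw, dev in routes
               if dst_ip in ipaddress.ip_network(net)]
    return max(matches, key=lambda r: ipaddress.ip_network(r[0]).prefixlen)

# pod-2's IP (192.168.133.194) falls in worker-2's pod CIDR, so the
# packet leaves via tunl0 toward worker-2's host IP.
print(lookup("192.168.133.194"))  # -> ('192.168.133.192/26', '10.30.0.56', 'tunl0')
```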
Routing
Calico supports 3 routing modes.
IP-in-IP: default; encapsulated
Direct: unencapsulated
VXLAN: encapsulated; no BGP
IP-in-IP and VXLAN encapsulate packets. Encapsulated packets “feel” native to the network they run atop. For Kubernetes, this enables running a ‘virtual’ pod network independent of the host network.
IP-in-IP
IP-in-IP is a simple form of encapsulation achieved by putting an IP packet inside another. A transmitted packet contains an outer header with host source and destination IPs and an inner header with pod source and destination IPs.
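The two stacked headers can be sketched in a few lines of standard-library Python. This is an illustration, not real Calico code; the pod-1 address (192.168.226.66) is a hypothetical IP inside worker-1's pod CIDR, while the host IPs are the ones from the cluster above:

```python
import socket
import struct

def ipv4_header(src: str, dst: str, proto: int, payload_len: int) -> bytes:
    """Build a minimal 20-byte IPv4 header (no options; checksum left 0)."""
    ver_ihl = (4 << 4) | 5  # IPv4, IHL 5 -> 20-byte header
    total_len = 20 + payload_len
    return struct.pack("!BBHHHBBH4s4s",
                       ver_ihl, 0, total_len, 0, 0,
                       64, proto, 0,
                       socket.inet_aton(src), socket.inet_aton(dst))

# Inner header: pod-1 -> pod-2 (protocol 6 = TCP; payload omitted here).
inner = ipv4_header("192.168.226.66", "192.168.133.194", 6, 0)
# Outer header: worker-1 -> worker-2, carrying the inner packet.
# Protocol number 4 means "IP-in-IP" (RFC 2003).
outer = ipv4_header("10.30.0.206", "10.30.0.56", 4, len(inner))

packet = outer + inner
assert packet[9] == 4    # byte 9 is the protocol field: outer says IP-in-IP
assert len(packet) == 40 # 20 bytes of encapsulation overhead
```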
In IP-in-IP mode, worker-1's route table is as follows.
# run on worker-1
sudo route
* The podCIDR of each peer node (worker-2, worker-3, and the master) is routed out the tunl0 interface, with the gateway set to the peer node's IP.
Below is a packet sent from pod-1 to pod-2, captured on the node interface.
# sent from inside pod-1
curl 192.168.133.194
IP-in-IP also features a selective mode, used when only routing between subnets requires encapsulation. I’ll explore this in the next section.
I believe IP-in-IP is Calico’s default because it often just works; for example, on networks that reject packets whose destination is not a host's IP, or where routers between subnets rely on the destination IP belonging to a host.
* Translator's note: this last part reads like the author's personal opinion; I'm not sure it is accurate.
Direct
Direct is a made-up word I’m using for non-encapsulated routing. Direct sends packets as if they came directly from the pod. Since there is no encapsulation and de-capsulation overhead, direct is highly performant.
To route directly, the Calico IPPool must not have IP-in-IP enabled.
To modify the pool, download the default ippool.
calicoctl get ippool default-ipv4-ippool -o yaml > ippool.yaml
Disable IP-in-IP by setting ipipMode to Never.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  # remove creationTimestamp, resourceVersion,
  # and uid if present
  name: default-ipv4-ippool
spec:
  blockSize: 26
  cidr: 192.168.0.0/16
  ipipMode: Never
  natOutgoing: true
  nodeSelector: all()
  vxlanMode: Never
Apply the change.
calicoctl apply -f ippool.yaml
On worker-1, the route table is updated.
route -n
2 important changes are:
1. The tunl0 interface is removed and all routes point to ens5.
2. worker-3's route points to the network gateway (10.30.0.1) rather than the host, because worker-3 is on a different subnet.
With direct routing, requests from pod-1 to pod-2 fail.
# sent from pod-1
$ curl -v4 192.168.133.194 --max-time 10
* Trying 192.168.133.194:80...
* TCP_NODELAY set
* Connection timed out after 10001 milliseconds
* Closing connection 0
curl: (28) Connection timed out after 10001 milliseconds
Packets are blocked because src/dst checks are enabled. To fix this, disable these checks on every host in AWS.
* Translator's note: the AWS src/dst check discards packets whose source or destination IP is not the instance's own IP. In direct mode there is no IP-in-IP, so the src/dst IPs are simply pod IPs; the check must therefore be disabled so packets are not dropped.
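To see why the check drops direct-mode traffic, here is an illustrative Python sketch of the rule EC2 applies (the real switch is the instance's source/destination check attribute, toggled in the console or via the AWS CLI); the pod addresses are hypothetical IPs drawn from the pod CIDRs above:

```python
def src_dst_check_drops(pkt_src: str, pkt_dst: str, instance_ip: str) -> bool:
    """EC2's source/destination check: an instance may only send or receive
    traffic addressed to or from its own IP. True means the packet is dropped."""
    return instance_ip not in (pkt_src, pkt_dst)

# IP-in-IP mode: the outer header carries host IPs, so worker-2 (10.30.0.56)
# is the packet's destination and the check passes.
assert not src_dst_check_drops("10.30.0.206", "10.30.0.56", "10.30.0.56")

# Direct mode: the packet carries only pod IPs (pod-1 -> pod-2), so neither
# address belongs to worker-2 and the packet is dropped.
assert src_dst_check_drops("192.168.226.66", "192.168.133.194", "10.30.0.56")
```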
Traffic is now routable between pod-1 and pod-2. The wireshark output is as follows.
curl -v4 192.168.133.194
However, communication between pod-1 and pod-3 now fails.
# sent from pod-1
$ curl 192.168.97.193 --max-time 10
curl: (28) Connection timed out after 10000 milliseconds
Do you remember the updated route table? On worker-1, traffic sent to worker-3 routes to the network gateway rather than to worker-3, because worker-3 lives on a different subnet. When the packet reaches the network gateway, the gateway has no route for the pod-3 IP, so it cannot forward the packet.
Calico supports a CrossSubnet setting for IP-in-IP routing. This setting tells Calico to only use IP-in-IP when crossing a subnet boundary. This gives you high-performance direct routing inside a subnet and still enables you to route across subnets, at the cost of some encapsulation.
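The CrossSubnet decision can be sketched as follows (Python standard library only; the /24 subnets are taken from the example architecture, and this is an illustration of the rule rather than Calico's implementation):

```python
import ipaddress

# The two host subnets from the example architecture.
SUBNETS = [ipaddress.ip_network("10.30.0.0/24"),
           ipaddress.ip_network("10.30.1.0/24")]

def subnet_of(host_ip: str):
    ip = ipaddress.ip_address(host_ip)
    return next(net for net in SUBNETS if ip in net)

def encapsulation(src_host: str, dst_host: str) -> str:
    """ipipMode: CrossSubnet -- direct inside a subnet, IP-in-IP across."""
    return "direct" if subnet_of(src_host) == subnet_of(dst_host) else "ipip"

# worker-1 -> worker-2: same subnet, so high-performance direct routing.
print(encapsulation("10.30.0.206", "10.30.0.56"))  # -> direct
# worker-1 -> worker-3: crosses the subnet boundary, so tunl0 / IP-in-IP.
print(encapsulation("10.30.0.206", "10.30.1.66"))  # -> ipip
```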
To enable this, update the IPPool as follows.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  blockSize: 26
  cidr: 192.168.0.0/16
  ipipMode: CrossSubnet
  natOutgoing: true
  nodeSelector: all()
  vxlanMode: Never
calicoctl apply -f ippool.yaml
Now routing between all pods works! Examining worker-1's route table:
The tunl0 interface is reintroduced, but only for routing to worker-3.
VXLAN
VXLAN routing is supported in Calico 3.7+. Historically, to route traffic using VXLAN and use Calico policy enforcement, you’d need to deploy Flannel and Calico together; this combination was referred to as Canal. Whether you use VXLAN or IP-in-IP is determined by your network architecture. VXLAN is a feature-rich way to create a virtualized layer 2 network. It carries larger headers and likely requires more processing power. VXLAN is great for networks that do not support IP-in-IP, such as Azure, or that don’t support BGP, which is disabled in VXLAN mode.
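The "larger headers" point can be made concrete with a little header arithmetic (typical sizes for untagged Ethernet; a sketch, not a benchmark):

```python
# Typical per-packet encapsulation overhead, in bytes.
IPV4_HEADER = 20
UDP_HEADER = 8
VXLAN_HEADER = 8
ETHERNET_HEADER = 14

# IP-in-IP wraps the inner IP packet in just one more IPv4 header.
IPIP_OVERHEAD = IPV4_HEADER                                                  # 20

# VXLAN wraps a whole inner Ethernet frame in outer IP + UDP + VXLAN headers.
VXLAN_OVERHEAD = IPV4_HEADER + UDP_HEADER + VXLAN_HEADER + ETHERNET_HEADER   # 50

print(IPIP_OVERHEAD, VXLAN_OVERHEAD)  # -> 20 50
# On a 1500-byte MTU link, the usable inner MTU shrinks accordingly.
print(1500 - IPIP_OVERHEAD, 1500 - VXLAN_OVERHEAD)  # -> 1480 1450
```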
Setting up Calico to use VXLAN fundamentally changes how routing occurs, so rather than altering the IPPool, I'll be redeploying on a new cluster.
To enable VXLAN, as of Calico 3.11, you need to make the following 3 changes to the Calico manifest.
1. Set the backend to vxlan.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Typha is disabled.
  typha_service_name: "none"
  # value changed from bird to vxlan
  calico_backend: "vxlan"
2. Change the CALICO_IPV4POOL_IPIP environment variable to CALICO_IPV4POOL_VXLAN.
# Enable VXLAN
- name: CALICO_IPV4POOL_VXLAN
  value: "Always"
3. Disable the BGP-related liveness and readiness checks.
livenessProbe:
  exec:
    command:
      - /bin/calico-node
      - -felix-live
      # disable bird liveness test
      # - -bird-live
  periodSeconds: 10
  initialDelaySeconds: 10
  failureThreshold: 6
readinessProbe:
  exec:
    command:
      - /bin/calico-node
      - -felix-ready
      # disable bird readiness test
      # - -bird-ready
  periodSeconds: 10
Then apply the modified configuration.
kubectl apply -f calico.yaml
With VXLAN enabled, you can now see changes to the route tables.
Inspecting the packets shows the VXLAN-style encapsulation and how it differs from IP-in-IP.
Summary
Now that we've explored routing in Calico using IP-in-IP, Direct, and VXLAN, I hope you’re feeling more knowledgeable about Calico’s routing options. Additionally, I hope these options demonstrate that Calico is a fantastic container networking plugin, extremely capable in most network environments.