Kubernetes Walkthrough
2021. 4. 28 / Sangwon Lee
Table of Contents
1. Preparation
- Introduction to Kubernetes
2. Writing Configuration Files
- Writing manifests by hand
- Using a templating tool - helm
- Using a templating tool - kustomize
3. Usage Patterns
- Splitting clusters / namespaces
- Placement / scheduling (Resource / Node Selector / Taint)
- Lifecycle (Hook / Readiness Probe / Grace Period)
- Volumes (ReadWriteOnce / ReadWriteMany / Configmap / Secret)
- CronJobs
- Singleton Pods (Headless Service)
- Sidecars
4. Web Services
- HTTPS(TLS) setup
- Ingress Practices
- Migrating an existing live service
- Putting a service (frontend) into maintenance mode
5. Deployment Pipeline
- Using ArgoCD (requires self-hosting)
1. Preparation
Introduction to Kubernetes (1/4) - History
https://www.slideshare.net/egg9/kubernetes-introduction
Introduction to Kubernetes (2/4) - Why?
● Provisioning and deployment
● Configuration and scheduling
● Resource allocation
● Container availability
● Scaling or removing containers based on balancing workloads across your infrastructure
● Load balancing and traffic routing
● Monitoring container health
● Configuring applications based on the container in which they will run
● Keeping interactions between containers secure
→ Container Orchestration
Introduction to Kubernetes (3/4) - Core Mechanism
https://theithollow.com/2019/09/16/kubernetes-desired-state-and-control-loops/
Desired-State and Control-loops
1. Observe
What is the desired state of our objects?
2. Check Differences
What is the current state of our objects, and how does it differ from our desired state?
3. Take Action
Make the current state look like the desired state.
4. Repeat
Repeat this over and over again.
Introduction to Kubernetes (4/4) - Architecture
https://medium.com/@mohan08p/simplified-kubernetes-architecture-3febe12480eb
Cluster allocation and access setup (1/3) - kube config
➜ cat ~/.kube/config
apiVersion: v1
kind: Config
clusters:
- name: my-dev-cluster
cluster:
insecure-skip-tls-verify: true
server: https://my-dev-cluster-master-1.example.com:6443
users:
- name: my-dev-creds
user:
token: eyJhbGciOiJSUzI1NiIsImtpZCI6…
contexts:
- context:
cluster: my-dev-cluster
user: my-dev-creds
namespace: ingress-nginx
name: my-dev-ctx
current-context: my-dev-ctx
preferences: {}
...
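For reference (not on the original slide), the contexts and namespaces in this kubeconfig can be inspected and switched with kubectl alone:
➜ kubectl config get-contexts
➜ kubectl config use-context my-dev-ctx
➜ kubectl config set-context --current --namespace=hello-app-beta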
# MacOS
➜ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
➜ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4",
GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-14T14:49:35Z",
GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.11",
GitCommit:"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede", GitTreeState:"clean", BuildDate:"2020-03-12T21:00:06Z",
GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}
Cluster allocation and access setup (2/3) - Installing kubectl
https://kubernetes.io/docs/tasks/tools/
Cluster allocation and access setup (3/3) - Installing kubectx (recommended)
https://github.com/ahmetb/kubectx
➜ kubectx
Switched to context "my-dev-ctx".
➜ kubens
hello-app-beta
> hello-app-production
Context "my-dev-ctx" modified.
Active namespace is "hello-app-production".
➜ kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-app-6dd9cd8bcc-gk9m6 3/3 Running 0 20h
hello-app-6dd9cd8bcc-nkdbb 3/3 Running 0 20h
2. Writing Configuration Files
Writing by hand (1/3) - Anatomy of an object
https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/
Writing by hand (2/3) - Multiple objects in one file
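A minimal sketch of the idea (object names are illustrative): objects in one file are separated with --- and applied together:
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
...
➜ kubectl apply -f ./all-in-one.yaml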
Writing by hand (3/3) - Organizing by phase
my-app1
├── app-dev.yaml
├── app-sandbox.yaml
├── app-cbt.yaml
└── app-production.yaml
my-app2
...
├── sandbox
│   ├── deployment.yaml
│   ├── service.yaml
│   └── secret.yaml
└── production
    ├── deployment.yaml
    ├── service.yaml
    └── secret.yaml
➜ $ cd my-app1
➜ my-app1$ kubectl apply -f ./app-sandbox.yaml
➜ $ cd my-app2/sandbox
➜ my-app2/sandbox$ kubectl apply -f .
or
➜ my-app2$ kubectl apply -f ./sandbox
Helm (1/4) - How it works
Helm (1/4) - But we use it simply as a templating tool
https://gowebexamples.com/templates/
Helm (2/4) - Creating a template
➜ k8s-test$ helm create my-app
Creating my-app
➜ k8s-test$ ls -al
total 0
drwxr-xr-x 3 sangwonl staff 96 Apr 1 17:35 .
drwxr-xr-x 103 sangwonl staff 3296 Apr 1 17:35 ..
drwxr-xr-x 7 sangwonl staff 224 Apr 1 17:35 my-app
➜ k8s-test$ tree my-app
my-app
├── Chart.yaml
├── charts
├── templates
│ ├── NOTES.txt
│ ├── _helpers.tpl
│ ├── deployment.yaml
│ ├── ingress.yaml
│ ├── service.yaml
│ ├── serviceaccount.yaml
│ └── tests
│ └── test-connection.yaml
└── values.yaml
3 directories, 9 files
➜ k8s-test$ cat my-app/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "my-app.fullname" . }}
labels:
{{ include "my-app.labels" . | indent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "my-app.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "my-app.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
...
..
➜ k8s-test$ cat my-app/values.yaml
replicaCount: 1
image:
repository: nginx
tag: stable
pullPolicy: IfNotPresent
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
service:
type: ClusterIP
port: 80
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
...
..
Helm (3/4) - Rendering the template
➜ my-app$ helm template . -f values.yaml
---
# Source: my-app/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: release-name-my-app
labels:
app.kubernetes.io/name: my-app
helm.sh/chart: my-app-0.1.0
app.kubernetes.io/instance: release-name
app.kubernetes.io/version: "1.0"
app.kubernetes.io/managed-by: Tiller
---
# Source: my-app/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: release-name-my-app
labels:
app.kubernetes.io/name: my-app
helm.sh/chart: my-app-0.1.0
app.kubernetes.io/instance: release-name
app.kubernetes.io/version: "1.0"
...
..
Helm (3/4) - Rendering the template (validation)
➜ my-app$ helm template . -f values.yaml | kubectl apply -f - \
    --dry-run=client \
    --validate=true
serviceaccount/release-name-my-app created (dry run)
service/release-name-my-app created (dry run)
pod/release-name-my-app-test-connection created (dry run)
deployment.apps/release-name-my-app created (dry run)
Helm (3/4) - Rendering the template (apply)
➜ my-app$ helm template . -f values.yaml | kubectl apply -f -
serviceaccount/release-name-my-app created
service/release-name-my-app created
pod/release-name-my-app-test-connection created
deployment.apps/release-name-my-app created
➜ my-app$ kubectl get pod
NAME READY STATUS RESTARTS AGE
my-app-5dfb54954b-b7rvt 1/1 Running 0 10s
my-app-5dfb54954b-n6752 1/1 Running 0 13s
Helm (4/4) - Organizing by phase
➜ k8s-test$ tree my-app
my-app
├── Chart.yaml
├── charts
├── templates
│ ├── NOTES.txt
│ ├── _helpers.tpl
│ ├── deployment.yaml
│ ├── ingress.yaml
│ ├── service.yaml
│ └── secrets.yaml
├── overrides
│ ├── dev.yaml
│ ├── sandbox.yaml
│ ├── cbt.yaml
│ └── production.yaml
└── values.yaml
3 directories, 12 files
➜ my-app$ helm template . -f values.yaml -f overrides/sandbox.yaml
---
# Source: my-app/templates/deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: my-app
namespace: my-app-sandbox
spec:
selector:
matchLabels:
app: my-app
replicas: 2
template:
...
..
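As before, the per-phase render can be piped straight into kubectl; a sketch:
➜ my-app$ helm template . -f values.yaml -f overrides/sandbox.yaml | kubectl apply -f -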
Kustomize (1/4) - Creation
➜ my-app$ ls
deployment.yaml ingress.yaml secret.yaml service.yaml
➜ my-app$ kustomize create --autodetect
➜ my-app$ ls
deployment.yaml ingress.yaml kustomization.yaml secret.yaml service.yaml
➜ my-app$ cat kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- ingress.yaml
- secret.yaml
- service.yaml
Kustomize (2/4) - Patching
➜ my-app$ ls
deployment.yaml ingress.yaml kustomization.yaml
patch-resources.yaml secret.yaml service.yaml
➜ my-app$ cat kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- ingress.yaml
- secret.yaml
- service.yaml
images:
- name: sangwonl/my-app
newTag: 1.58.1
patchesStrategicMerge:
- patch-resources.yaml
➜ my-app$ cat patch-resources.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
replicas: 2
template:
spec:
containers:
- name: my-app
resources:
requests:
cpu: 2000m
memory: 2.5Gi
limits:
cpu: 2000m
memory: 2.5Gi
...
..
➜ my-app$ cat patch-ingress.yaml
- op: replace
path: /spec/rules/0/host
value: my-app-sandbox.example.com
- op: replace
path: /spec/tls/0/hosts/0
value: my-app-sandbox.example.com
Kustomize (2/4) - Patching (JSON6902)
➜ my-app$ cat kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- ingress.yaml
- secret.yaml
- service.yaml
images:
- name: sangwonl/my-app
newTag: 1.58.1
patchesStrategicMerge:
- patch-resources.yaml
patchesJson6902:
- path: patch-ingress.yaml
target:
group: extensions
kind: Ingress
name: my-app-ingress
version: v1beta1
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
name: my-app-ingress
spec:
rules:
- host: plz-patch-me.example.com
http:
paths:
- backend:
serviceName: my-app-service
servicePort: http-port
path: /
tls:
- hosts:
- plz-patch-me.example.com
secretName: tls-secret-example-com
Kustomize (3/4) - Rendering the template
➜ my-app$ kustomize build
apiVersion: v1
kind: Service
metadata:
labels:
app: my-app
name: my-app-service
spec:
ports:
- name: http-port
port: 80
protocol: TCP
targetPort: 80
selector:
app: my-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: my-app
spec:
replicas: 2
...
..
Kustomize (3/4) - Rendering the template (validation)
➜ my-app$ kustomize build | kubectl apply -f - \
    --dry-run=client \
    --validate=true
service/my-app-service created (dry run)
deployment.apps/my-app created (dry run)
...
Kustomize (3/4) - Rendering the template (apply)
➜ my-app$ kustomize build | kubectl apply -f -
service/my-app-service created
deployment.apps/my-app created
...
➜ my-app$ kubectl get pod
NAME READY STATUS RESTARTS AGE
my-app-5dfb54954b-b7rvt 1/1 Running 0 10s
my-app-5dfb54954b-n6752 1/1 Running 0 13s
Kustomize (4/4) - Organizing by phase
➜ k8s-test$ tree my-app
my-app
├── base
│ ├── deployment.yaml
│ ├── ingress.yaml
│ ├── kustomization.yaml
│ ├── service.yaml
│ └── secrets.yaml
└── overlays
...
├── sandbox
│ ├── kustomization.yaml
│ ├── patch-resources.yaml
│ └── patch-ingress.yaml
├── cbt
│ ├── kustomization.yaml
│ ├── patch-resources.yaml
│ └── patch-ingress.yaml
└── production
├── kustomization.yaml
├── patch-environments.yaml
├── patch-resources.yaml
└── patch-ingress.yaml
6 directories, 14 files
# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
app: my-app
resources:
- secrets.yaml
- deployment.yaml
- service.yaml
- ingress.yaml
images:
- name: sangwonl/my-app
newTag: latest
# overlays/<phase>/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patchesStrategicMerge:
- patch-resources.yaml
- patch-ingress.yaml
images:
- name: sangwonl/my-app
newTag: 1.51.3
➜ my-app$ cd overlays/sandbox
➜ my-app$ kustomize build
or
➜ my-app$ kustomize build overlays/sandbox
3. Usage Patterns
Cluster separation and naming conventions
- Split into multiple clusters by the nature of the service:
my-data-02 → apps for data pipelines and data aggregation/analytics
my-dev-04 → development cluster
my-prod-02 → production cluster
my-prod-03 → new production cluster
my-devops-01 → DevOps apps such as Jenkins and ArgoCD
my-pm-01 → cluster of PM (physical machine) workers, for apps that must run on physical machines
- Cluster naming convention:
<org>-<purpose>-<index> (the index increments on each cluster migration)
Namespace usage and naming conventions
- Why namespaces?
* They draw boundaries between k8s objects
* Objects in different namespaces may share the same name
* App configuration is usually almost identical across phases
→ namespaces can represent phases
- Namespace naming convention:
<app-name>-<phase>
* phase → dev, sandbox, cbt, production
* dev cluster → dev / sandbox
prod cluster → cbt / production
<cluster>-<common>
* for apps shared cluster-wide (the default namespace also works)
ex) my-app-sandbox, my-app-production, jenkins-common
Cluster (my-dev-04)
├── namespace my-app-dev: my-app-ingress / my-app-service / my-app-deploy
└── namespace my-app-sandbox: my-app-ingress / my-app-service / my-app-deploy
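Namespaces are created like any other object; a minimal sketch following the naming convention above:
➜ kubectl create namespace my-app-sandbox
or declaratively:
apiVersion: v1
kind: Namespace
metadata:
  name: my-app-sandbox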
Placement / Scheduling (1/3) - Pod Resource
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
...
spec:
containers:
image: sangwonl/my-app:1.58.1
name: my-app
...
resources:
requests:
cpu: 2500m
memory: 2Gi
limits:
cpu: 4
memory: 4Gi
1 CPU = 1000m (millicores)
The Pod can be scheduled onto any node that has at least 2Gi of memory and 2.5 CPU cores to spare.
If the Pod's CPU usage goes over 4 cores, it starts being throttled.
If its memory usage goes over 4Gi, the container may be killed (OOM).
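To see whether a node has that much headroom, kubectl can report each node's allocatable capacity and the resources already requested (a sketch; the node name and figures are illustrative):
➜ kubectl describe node <node-1>
...
Allocatable:
  cpu:     8
  memory:  16Gi
...
Allocated resources:
  Resource  Requests
  cpu       5500m (68%)
  memory    10Gi (62%)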
Placement / Scheduling (2/3) - Node Selector
[Diagram] A Pod with nodeSelector gpu: enabled can be scheduled onto:
Node 1 (label: gpu=enabled) → O
Node 2 (label: gpu=enabled) → O
Node 3 (label: gpu=disabled) → X
Node 4 (no label) → X
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
...
spec:
containers:
image: sangwonl/my-app:1.58.1
name: my-app
...
nodeSelector:
gpu: enabled
➜ my-app$ kubectl label node <node-1> gpu=enabled
➜ my-app$ kubectl label node <node-2> gpu=enabled
Placement / Scheduling (3/3) - Taint / Toleration
[Diagram] Node 1 and Node 2 are tainted spot=true:NoSchedule; Node 3 and Node 4 are untainted.
A Pod with a matching toleration → can be scheduled on Nodes 1-4 (O O O O)
A Pod without tolerations → only on Nodes 3 and 4 (X X O O)
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
...
spec:
containers:
image: sangwonl/my-app:1.58.1
name: my-app
...
tolerations:
- key: spot
operator: Equal
value: "true"
effect: NoSchedule
➜ my-app$ kubectl taint node <node-1> spot=true:NoSchedule
➜ my-app$ kubectl taint node <node-2> spot=true:NoSchedule
Lifecycle (1/4) - Pod states
Lifecycle (2/4) - Hook (postStart)
[Diagram] Startup sequence of a Pod: after the Pod is created, its init containers run one at a time (Init 1/3 → Init 2/3 → Init 3/3). Only then are the app containers (A/B/C) created; each runs its Entrypoint, with the postStart hook fired alongside it. If a postStart hook takes too long or hangs, the container can't reach the RUNNING state.
https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-details
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
...
spec:
containers:
image: sangwonl/my-app:1.58.1
name: my-app
...
initContainers:
- name: init-data-home
image: busybox
command: ['sh', '-c', 'rm -rf /data/*']
- name: init-lookup-db
image: busybox
command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done;']
...
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
...
spec:
containers:
image: sangwonl/my-app:1.58.1
name: my-app
...
lifecycle:
postStart:
exec: # or tcpSocket, httpGet
command: ["rm", "-rf", "/data/db/lost+found"]
...
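Besides exec, the hook handlers can be httpGet or tcpSocket. A minimal sketch (the /warmup endpoint is an assumption, not from the slides):
lifecycle:
  postStart:
    httpGet:
      path: /warmup   # hypothetical warm-up endpoint on the app
      port: 9090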
Lifecycle (2/4) - Hook (preStop)
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
...
spec:
containers:
image: sangwonl/my-app:1.58.1
name: my-app
...
lifecycle:
preStop:
exec: # or tcpSocket, httpGet
command: ["./stop-consuming.sh"]
...
Lifecycle (3/4) - Readiness Probe (HTTP)
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
...
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app
image: sangwonl/my-app:latest
ports:
- containerPort: 9090
readinessProbe:
httpGet:
port: 9090
path: /healthcheck
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
GET http://localhost:9090/healthcheck
initialDelaySeconds: 60
Wait 60 seconds after the container starts before sending the first probe
periodSeconds: 10
How often to probe; default 10 seconds
timeoutSeconds: 1
Timeout for the probe response; a probe taking longer than 1 second counts as a failure; default 1 second
successThreshold: 1
Minimum consecutive successes for readiness to be considered passing; default 1
failureThreshold: 3
Minimum consecutive failures for readiness to be considered failed; default 3
Lifecycle (3/4) - Readiness Probe (gRPC)
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
...
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app
image: sangwonl/my-app:latest
ports:
- containerPort: 9090
readinessProbe:
exec:
command: # exec (run-a-command) style probe
- grpc_health_probe # binary pre-installed in the Docker image
- -addr=:9090 # arguments to the binary
initialDelaySeconds: 60
$ grpc_health_probe -addr=:9090

➜ my-app$ cat Dockerfile
...
# Install grpc_health_probe
RUN wget -qO/bin/grpc_health_probe https://github.com/grpc-ecosystem/grpc-health-probe/releases/download/v0.3.2/grpc_health_probe-linux-amd64 && \
    chmod +x /bin/grpc_health_probe
...
Lifecycle (4/4) - Termination Grace Period
[Diagram]
- terminationGracePeriodSeconds: 120 → on SIGTERM the Pod goes Running → Terminating; if the process is still alive when the grace period (120s) expires, it receives SIGKILL → Shutdown. The app (process) must finish shutting down somewhere inside that window.
- terminationGracePeriodSeconds: 0 → SIGTERM & SIGKILL arrive together: Running → Terminating & Shutdown immediately.
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
...
spec:
terminationGracePeriodSeconds: 120
containers:
image: sangwonl/my-app:1.58.1
name: my-app
...
Volumes (1/4) - Basics
[Diagram] Two containers in a Pod each mount their own volume (Volume A, Volume B); neither can read the other's volume (X).
Volumes (1/4) - Pod-scoped ephemeral volume (emptyDir)
[Diagram] An emptyDir volume is shared within the Pod: both containers mount it (O) in addition to their own volumes, and it is removed when the Pod is terminated.
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
...
spec:
containers:
image: sangwonl/my-app:1.58.1
name: my-app
...
volumeMounts:
- mountPath: /var/log/filebeat
name: filebeat-log-volume
volumes:
- name: filebeat-log-volume
emptyDir: {}
...
Volumes (2/4) - Persistent Volume (ReadWriteOnce)
[Diagram] On the worker (host), the Pod's shared volume is backed by a per-Pod host volume, which the Kubernetes Cinder driver mounts from a Persistent Volume (OpenStack Cinder). With ReadWriteOnce, a second Pod can't mount the same volume (X).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-app-pvc
namespace: my-app-sandbox
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
...
spec:
containers:
image: sangwonl/my-app:1.58.1
name: my-app
...
volumeMounts:
- name: my-app-volume
mountPath: "/var/lib/my-persistent-data"
volumes:
- name: my-app-volume
persistentVolumeClaim:
claimName: my-app-pvc
...
Volumes (3/4) - Persistent Volume (ReadWriteMany)
[Diagram] Pod A on Worker A (Host A) and Pod B on Worker B (Host B) each mount a per-Pod host volume, and both host volumes are mounted from the same NFS volume (/nfsdata, served by an NFS daemon on a dedicated host) via the Kubernetes NFS driver. Multiple Pods on different nodes can thus share the data.
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-app-pv
spec:
volumeMode: Filesystem
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: nfs
nfs:
path: /k8s_dev_nfs_pvc_test/nfs
server: nfs.internal.example.com
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-app-pvc
spec:
volumeMode: Filesystem
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
storageClassName: nfs
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
...
spec:
containers:
image: sangwonl/my-app:1.58.1
name: my-app
...
volumeMounts:
- name: my-app-volume
mountPath: "/var/lib/my-shared-data"
volumes:
- name: my-app-volume
persistentVolumeClaim:
claimName: my-app-pvc
...
Volumes (4/4) - ConfigMap / Secret as Volume
[Diagram] A container mounts one volume built from a ConfigMap (e.g., /deploy/backup.sh) and another built from a Secret (e.g., /deploy/.ssh/id_rsa).
apiVersion: v1
kind: ConfigMap
metadata:
name: my-app-scripts
namespace: my-app-sandbox
data:
backup.sh: |
  #! /usr/bin/env bash
  mongodump \
    --host my-app-rs/my-app-mongo.example.com \
    -d my-app -o backups/
restore.sh: |
  #! /usr/bin/env bash
  mongorestore \
  ..
apiVersion: v1
kind: Secret
metadata:
name: my-app-sshkey
namespace: my-app-sandbox
data:
id_rsa: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVk...
# base64 encoded string
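The same Secret can be created from the key file itself, letting kubectl do the base64 encoding; a sketch (the file path is illustrative):
➜ kubectl create secret generic my-app-sshkey -n my-app-sandbox --from-file=id_rsa=./.ssh/id_rsa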
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
containers:
...
volumeMounts:
- name: my-app-scripts-volume
mountPath: /deploy/backup.sh
subPath: backup.sh
- name: my-app-sshkey-volume
mountPath: "/deploy/.ssh"
readOnly: true
volumes:
- name: my-app-scripts-volume
configMap:
name: my-app-scripts
defaultMode: 0777
- name: my-app-sshkey-volume
secret:
secretName: my-app-sshkey
...
CronJob (scheduled job)
apiVersion: v1
kind: ConfigMap
metadata:
name: my-app-scripts
namespace: my-app-sandbox
data:
backup.sh: |
  #! /usr/bin/env bash
  mongodump \
    --host my-app-rs/my-app-mongo.example.com \
    -d my-app -o backups/
restore.sh: |
  #! /usr/bin/env bash
  mongorestore \
  ..
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: my-app-cron
spec:
concurrencyPolicy: Forbid
startingDeadlineSeconds: 600
schedule: "30 2 * * *" # every day at 02:30
jobTemplate:
spec:
backoffLimit: 0
template:
spec:
restartPolicy: Never
containers:
- name: my-app-backup
image: mongo:3.6
command:
- /deploy/backup.sh
volumeMounts:
- name: my-app-scripts-volume
mountPath: /deploy/backup.sh
subPath: backup.sh
volumes:
- name: my-app-scripts-volume
configMap:
name: my-app-scripts
defaultMode: 0777
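For testing, the job can be kicked off on demand instead of waiting for the schedule; a sketch (the job name is arbitrary):
➜ kubectl create job my-app-backup-manual --from=cronjob/my-app-cron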
Singleton Pod
[Diagram] Pods A/B/C (app: my-app) sit behind Service my-app-svc (type: ClusterIP, selector app: my-app) and an Ingress (backend: my-app-svc). A singleton Pod X (app: my-store, name: my-store-pod) runs alongside them, but connect("my-store-pod:3306") from the app Pods fails (X): a Pod's name is not a resolvable address.
Singleton Pod - Headless Service
[Diagram] Put a headless Service my-store-svc (clusterIP: None, selector app: my-store) in front of Pod X; now connect("my-store-svc:3306") succeeds (O): DNS for a headless Service resolves straight to the Pod's IP.
apiVersion: v1
kind: Service
metadata:
name: my-store-svc
spec:
clusterIP: None
selector:
app: my-store
ports:
- protocol: TCP
port: 3306
targetPort: 3306
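To confirm what the headless Service resolves to, a throwaway Pod can run a lookup (a sketch; the dnsutils image is an assumption):
➜ kubectl run -it --rm dns-test --image=tutum/dnsutils -- nslookup my-store-svc
...
Name: my-store-svc.<namespace>.svc.cluster.local
Address: <Pod X's IP>  # no ClusterIP; DNS answers with the Pod IP directly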
Sidecar
[Diagram] In one Pod, the Nginx main container writes /var/log/nginx/access.log into a volume shared within the Pod (mounted read/write); a Filebeat sidecar container mounts the same volume read-only, tails the log, and ships it to Kafka. The volume is removed when the Pod is terminated.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: my-app
spec:
...
template:
spec:
containers:
- name: my-app
image: "nginx:latest"
ports:
- containerPort: 80
volumeMounts:
- name: my-app-logs
mountPath: /var/log/nginx
- name: filebeat
image: docker.elastic.co/beats/filebeat:6.2.4
volumeMounts:
- name: my-app-logs
mountPath: /var/log/nginx
readOnly: true
command:
- filebeat
- -c
- "/etc/filebeat.yml"
volumes:
- name: my-app-logs
emptyDir: {}
...
4. Web Services
HTTPS(TLS) setup (1/3) - HTTP
apiVersion: v1
kind: Service
metadata:
labels:
app: my-app-svc
name: my-app-svc
spec:
ports:
- port: 8000
protocol: TCP
targetPort: 8000
selector:
app: my-app
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: my-app
name: my-app
spec:
template:
spec:
containers:
- image: my-app:latest
name: my-app
ports:
- containerPort: 8000
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
labels:
app: my-app-ingress
name: my-app-ingress
spec:
rules:
- host: my-app.example.com
http:
paths:
- path: /
backend:
serviceName: my-app-svc
servicePort: 8000
HTTPS(TLS) setup (1/3) - Adding TLS configuration
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
labels:
app: my-app-ingress
name: my-app-ingress
spec:
rules:
- host: my-app.example.com
http:
paths:
- path: /
backend:
serviceName: my-app-svc
servicePort: 8000
tls:
- secretName: tls-secret
hosts:
- my-app.example.com
apiVersion: v1
kind: Secret
metadata:
name: tls-secret
type: kubernetes.io/tls
data:
tls.crt: LS0tLS1CRUdJTiBDRVJ...
tls.key: LS0tLS1CRUdJTiBSU0E...
➜ $ cat STAR.example.com_crt.pem | base64
LS0tLS1CRUdJTiBDRVJ...
➜ $ cat STAR.example.com_key.pem | base64
LS0tLS1CRUdJTiBSU0E...
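Alternatively, kubectl can build the TLS Secret from the PEM files directly; a sketch:
➜ $ kubectl create secret tls tls-secret \
    --cert=STAR.example.com_crt.pem \
    --key=STAR.example.com_key.pem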
HTTPS(TLS) setup (1/3) - Rollout
[Diagram] A VIP for my-app.example.com fronts the Ingress nodes on ports 80/443. Each ingress-nginx-controller Pod renders /etc/nginx/nginx.conf with a server block like:
server {
server_name my-app.example.com;
listen 80;
listen 443 ssl http2;
...
and forwards to the my-app Pods (port 8000) on Worker Nodes 1-3.
HTTPS(TLS) setup (2/3) - Supporting both HTTP and HTTPS
With TLS configured, the Ingress serves HTTPS only by default (HTTP requests get a 308 redirect)
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-side-https-enforcement-through-redirect
To support both HTTP and HTTPS, add an annotation to the Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
labels:
app: my-app-ingress
name: my-app-ingress
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
...
To keep the Ingress free of TLS configuration and always have SSL offloaded at an external LB in front, use:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
HTTPS(TLS) setup (3/3) - gRPC support
gRPC runs over HTTP/2, and HTTP/2 MUST use TLS 1.2 or higher.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
labels:
app: my-app-ingress
name: my-app-ingress
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
rules:
- host: my-app.example.com
http:
paths:
- path: /
backend:
serviceName: my-app-svc
servicePort: 50051
tls:
- secretName: tls-secret
hosts:
- my-app.example.com
The ingress-nginx controller then renders a server block that proxies with grpc_pass:
server {
server_name my-app.example.com;
listen 80;
listen 443 ssl http2;
# Allow websocket connections
grpc_set_header Upgrade $http_upgrade;
grpc_set_header Connection $connection_upgrade;
grpc_set_header X-Request-ID $req_id;
grpc_set_header X-Real-IP $the_real_ip;
grpc_set_header X-Forwarded-For $the_real_ip;
grpc_set_header X-Forwarded-Host $best_http_host;
grpc_set_header X-Forwarded-Port $pass_port;
grpc_set_header X-Forwarded-Proto $pass_access_scheme;
grpc_set_header X-Original-URI $request_uri;
grpc_set_header X-Scheme $pass_access_scheme;
...
...
grpc_pass grpc://upstream_balancer;
...
}
http://nginx.org/en/docs/http/ngx_http_grpc_module.html
Ingress Practice - Separate ingress groups for private / public (security)
[Diagram] A client (frontend app) reaches only the External Service through the public ingress group; Internal Services A/B/C are exposed only through the private one.
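One common way to realize the split (a sketch under assumptions; class names vary by installation): run two ingress-nginx controller deployments with different ingress classes, one reachable from outside and one only internally, and pick the class per Ingress:
# public-facing Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
# internal-only Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx-internal"  # hypothetical internal class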
Ingress Practice - Changing the log format
- For example, to emit the Ingress (Nginx) access log as JSON:
➜ $ kubectl edit cm ingress-nginx -n ingress-nginx
apiVersion: v1
kind: ConfigMap
data:
server-snippet: set $resp_body "";
http-snippet: |-
body_filter_by_lua '
local resp_body = string.sub(ngx.arg[1], 1, 1000)
ngx.ctx.buffered = (ngx.ctx.buffered or "") .. resp_body
if ngx.arg[2] then
ngx.var.resp_body = ngx.ctx.buffered
end
';
log-format-escape-json: "true"
log-format-upstream:
'{"time":"$time_iso8601","req_id":"$req_id","remote_address":"$proxy_add_x_forwarded_for","remote_port":$remote_port,"local_
address":"$server_addr","local_port":$server_port,"service_name":"$service_name","User-
Agent":"$http_user_agent","Host":"$host","Content-
Type":"$sent_http_content_type","body":"$request_body","status":$status,"body":"$resp_body","upstream_addr":"$upstream_addr"
,"request_time":"$request_time","upstream_response_time":"$upstream_response_time","request_method":"$request_method","url":
"$uri","server_protocol":"$server_protocol"}'
For more parameters, see: Log format - NGINX Ingress Controller (kubernetes.github.io)
Ingress Practice - Changing common settings
- Common Ingress (Nginx) settings are changed through its ConfigMap
apiVersion: v1
kind: ConfigMap
data:
generate-request-id: "false"
use-forwarded-headers: "true"
use-http2: "true"
...
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
Container image vulnerability checks (1/3)
- Fix container image security vulnerabilities (remove unneeded commands & update packages)
# update packages
RUN apk update && apk upgrade
# for other linux distributions
# or apt-get -y update
# or yum -y update
# remove unnecessary packages
RUN rm \
/usr/bin/wget \
/usr/bin/curl \
/usr/bin/nc \
/sbin/route \
/bin/ping \
/bin/ping6
Container image vulnerability checks (2/3)
- Run containers as a non-root user
# With Docker, add a new user in the Dockerfile and
# grant permissions on the paths that hold executables or are needed at runtime
RUN addgroup deploy && adduser -D -G deploy deploy \
&& chown -R deploy:deploy $DEPLOY_HOME \
&& chmod -R 740 $DEPLOY_HOME
USER deploy
Container image vulnerability checks (3/3)
- Run containers as a non-root user (JIB)
# If you build images with jib via Gradle,
# first create a base image with the nonroot user added (as on the previous slide),
# then set jib.container.user
jib {
from {
image = 'sangwonl/java-debian10:nonroot'
}
container {
user = 'deploy'
environment = []
jvmFlags = []
}
}
Migrating an existing live service to the k8s cluster
- The most nerve-racking part of the work; is there a way to shift traffic over gradually?
- Recommended: put an intermediate LB between the domain and the old/new LBs and switch routing by ratio
(a provisionable LB is ideal; otherwise you can build one yourself with Nginx or similar)
- This lets you ramp the share of traffic entering the new cluster gradually
(e.g., 1% → 5% → 25% → 50% → 75% → 100%)
Migrating an existing live service to the k8s cluster - Caveats
- If you pointed the domain at a new LB IP, intermediate DNS servers or client-side DNS caches may still hold the old record
- So confirm the old LB IP is fully removed from the domain and receives no traffic before deleting it
- Double-check the Ingress routing rules and TLS settings (secrets) one more time! (a common source of mistakes)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: "nginx" # is this the right class for the split external/internal Ingress?
name: my-public-service-ingress
spec:
rules:
- host: my-service.example.com # is this the domain you intend to serve?
http:
paths:
- path: /
backend:
serviceName: my-public-service
servicePort: http-port
tls:
- secretName: tls-example-com # is this the certificate for this domain?
hosts:
- my-service.example.com
Putting the service (frontend) into maintenance (1/3)
- Show a maintenance page during deployments to keep user traffic out
- Typical approach: serve the maintenance page at the proxy (Nginx) while allowing specific source IPs through
- A k8s Ingress is, in the end, a proxy server like Nginx, so this can be done purely with Ingress settings
Putting the service (frontend) into maintenance (2/3) - A separate maintenance service
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/whitelist-source-range: 172.0.0.0/8,127.0.0.1
nginx.ingress.kubernetes.io/server-snippet: |
error_page 403 @maintenance; # sources blocked by whitelist-source-range above get a 403; catch it here
location @maintenance {
proxy_pass http://my-app-repair.example.com; # and answer with the maintenance service's response
}
name: my-app-ingress
spec:
rules:
- host: my-app.example.com
http:
paths:
- path: /
backend:
serviceName: my-app-svc
servicePort: http-port
See the server-snippet above:
requests from sources that are not allowed get a 403 → proxy_pass them to the maintenance service
Putting the service (frontend) into maintenance (3/3) - The frontend app serves maintenance too
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/whitelist-source-range: 172.0.0.0/8,127.0.0.1/32
nginx.ingress.kubernetes.io/server-snippet: |
error_page 403 @maintenance; # sources blocked by whitelist-source-range above get a 403; catch it here
location @maintenance {
proxy_pass http://upstream_balancer; # pass through to this Ingress's original service (my-app-svc) as-is,
proxy_set_header X-Maintenance "on"; # but add the header X-Maintenance: "on" (the app can branch on it)
}
name: my-app-ingress
spec:
rules:
- host: my-app.example.com
http:
paths:
- path: /
backend:
serviceName: my-app-svc
servicePort: http-port
Requests from sources that are not allowed get a 403 → proxy_pass them to the original service
(but with a header added so the frontend app knows it is in maintenance mode)
Putting the service (frontend) into maintenance (3/3) - The frontend app serves maintenance too
if ($http_x_maintenance = "on") {
return 501;
}
error_page 501 @maintenance;
location @maintenance {
root /etc/nginx/html;
rewrite ^(.*)$ /maintenance.html break;
}
The frontend app checks the maintenance flag header and serves the maintenance page
Nginx example
5. Deployment Pipeline
Using ArgoCD
- GitOps (https://www.gitops.tech/#what-is-gitops)
The core idea of GitOps is having a Git repository that always contains declarative descriptions of the infrastructure
currently desired in the production environment and an automated process to make the production environment match the
described state in the repository.
- Tools: FluxCD vs ArgoCD vs JenkinsX
Using ArgoCD - Installation
- Follow the ArgoCD install guide (the core is install.yaml; customize and extend from there)
$ kubectl create namespace argocd
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
- Installing the CLI
$ brew install argocd
Using ArgoCD - Customization
- ConfigMap for LDAP settings
# argocd-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
data:
statusbadge.enabled: 'true'
url: https://argocd.example.com
accounts.jenkins: apiKey
accounts.jenkins.enabled: 'true'
dex.config: |
connectors:
- type: ldap
id: ldap
name: LDAP
config:
host: ldap.example.com:389
insecureNoSSL: true
startTLS: false
bindDN: ...
bindPW: ...
userSearch:
baseDN: 'o=identitymaster'
username: uid
idAttr: uid
emailAttr: mail
nameAttr: uid
Using ArgoCD - Customization
- ConfigMap for RBAC settings
# argocd-rbac-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-rbac-cm
data:
policy.default: role:admin
- Add a Secret for HTTPS & gRPC
# secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: tls-secret-example-com
type: kubernetes.io/tls
data:
tls.crt: LS0tLS1CRUdJTiBDRVJUS***
tls.key: LS0tLS1CRUdJTiBSU0EgU***
Using ArgoCD - Customization
- Ingress settings
# ingress-https.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: argocd-server-http-ingress
namespace: argocd
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
rules:
- http:
paths:
- backend:
serviceName: argocd-server
servicePort: http
host: argocd.example.com
tls:
- secretName: tls-secret-example-com
hosts:
- argocd.example.com
Using ArgoCD - Customization
- gRPC settings
# ingress-grpc.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: argocd-server-grpc-ingress
namespace: argocd
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
rules:
- http:
paths:
- backend:
serviceName: argocd-server
servicePort: https
host: argocd-grpc.example.com
tls:
- hosts:
- argocd-grpc.example.com
secretName: argocd-secret
Using ArgoCD - Using the CLI
- Forward localhost → the in-cluster argocd server
$ kubectl port-forward svc/argocd-server -n argocd 8080:443
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
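With the port-forward up, the CLI logs in against localhost (a sketch; obtaining the initial admin password is elided):
$ argocd login localhost:8080 --username admin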
Using ArgoCD - Using the CLI
- Listing / adding / removing k8s clusters
# list
$ argocd cluster list
SERVER NAME VERSION STATUS MESSAGE
https://my-dev-04-master-1.example.com:6443 my-dev-04-ctx 1.15 Successful
https://my-prod-03-master-1.example.com:6443 my-prod-03-ctx 1.15 Successful
# add
$ argocd cluster add my-devops-ctx
INFO[0000] ServiceAccount "argocd-manager" already exists in namespace "kube-system"
INFO[0000] ClusterRole "argocd-manager-role" updated
INFO[0000] ClusterRoleBinding "argocd-manager-role-binding" updated
Cluster 'https://my-devops-master-1.example.com:6443' added
# list again
$ argocd cluster list
SERVER NAME VERSION STATUS MESSAGE
https://my-dev-04-master-1.example.com:6443 my-dev-04-ctx 1.15 Successful
https://my-prod-03-master-1.example.com:6443 my-prod-03-ctx 1.15 Successful
https://my-devops-master-1.example.com:6443 my-devops-ctx 1.11 Successful
# remove (must use the server URL)
$ argocd cluster rm https://my-devops-master-1.example.com:6443
# list again
$ argocd cluster list
SERVER NAME VERSION STATUS MESSAGE
https://my-dev-04-master-1.example.com:6443 my-dev-04-ctx 1.15 Successful
https://my-prod-03-master-1.example.com:6443 my-prod-03-ctx 1.15 Successful
Using ArgoCD - Using the CLI
- Managing credentials for Git repository access
# list
$ argocd repocreds list
URL PATTERN USERNAME SSH_CREDS TLS_CREDS
https://github.com/sangwonl sangwonl false false
# add
$ argocd repocreds add https://github.com/sangwonl --username sangwonl --password "*********"
# remove
$ argocd repocreds rm https://github.com/sangwonl
Using ArgoCD - Using the CLI
- Listing / adding / removing Git repositories
# list
$ argocd repo list
TYPE NAME REPO INSECURE OCI LFS CREDS STATUS MESSAGE
git https://github.com/sangwonl/kube-recipes.git false false false true Successful
# add
$ argocd repo add https://github.com/sangwonl/k8s-apps.git
# list again
$ argocd repo list
TYPE NAME REPO INSECURE OCI LFS CREDS STATUS MESSAGE
git https://github.com/sangwonl/kube-recipes.git false false false true Successful
git https://github.com/sangwonl/k8s-apps.git false false false true Successful
# remove
$ argocd repo rm https://github.com/sangwonl/k8s-apps.git
Using ArgoCD - Pipeline
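The pipeline boils down to registering an Application that binds a repo path to a cluster/namespace; a sketch with illustrative names:
$ argocd app create my-app \
    --repo https://github.com/sangwonl/k8s-apps.git \
    --path my-app/overlays/sandbox \
    --dest-server https://my-dev-04-master-1.example.com:6443 \
    --dest-namespace my-app-sandbox
$ argocd app sync my-app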
The end.
Thank you.
Appendix
Pets vs Cattle
Pod?
 
Spinnaker Summit 2018: CI/CD Patterns for Kubernetes with Spinnaker
Spinnaker Summit 2018: CI/CD Patterns for Kubernetes with SpinnakerSpinnaker Summit 2018: CI/CD Patterns for Kubernetes with Spinnaker
Spinnaker Summit 2018: CI/CD Patterns for Kubernetes with Spinnaker
 
Build Your Own CaaS (Container as a Service)
Build Your Own CaaS (Container as a Service)Build Your Own CaaS (Container as a Service)
Build Your Own CaaS (Container as a Service)
 
Deploying Rails Applications with Capistrano
Deploying Rails Applications with CapistranoDeploying Rails Applications with Capistrano
Deploying Rails Applications with Capistrano
 

Recently uploaded

SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024Scott Keck-Warren
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
Azure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAzure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAndikSusilo4
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure servicePooja Nehwal
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?XfilesPro
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksSoftradix Technologies
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 3652toLead Limited
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphNeo4j
 
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetHyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetEnjoy Anytime
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxMaking_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxnull - The Open Security Community
 

Recently uploaded (20)

SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
Azure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAzure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & Application
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other Frameworks
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
 
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetHyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxMaking_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
 
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptxVulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
 

➜ my-app1$ kubectl apply -f ./app-sandbox.yaml

➜ $ cd my-app2/sandbox
➜ my-app2/sandbox$ kubectl apply -f .
or
➜ my-app2$ kubectl apply -f ./sandbox
Helm (1/4) - How it works

Helm (1/4) - But here, we use it simply as a templating tool
https://gowebexamples.com/templates/
Helm (2/4) - Creating a template

➜ k8s-test$ helm create my-app
Creating my-app

➜ k8s-test$ ls -al
total 0
drwxr-xr-x    3 sangwonl  staff    96 Apr  1 17:35 .
drwxr-xr-x  103 sangwonl  staff  3296 Apr  1 17:35 ..
drwxr-xr-x    7 sangwonl  staff   224 Apr  1 17:35 my-app

➜ k8s-test$ tree my-app
my-app
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

3 directories, 9 files

➜ k8s-test$ cat my-app/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "my-app.fullname" . }}
  labels:
{{ include "my-app.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "my-app.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "my-app.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        securityContext:
          {{- toYaml .Values.securityContext | nindent 12 }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
...

# my-app/values.yaml
replicaCount: 1

image:
  repository: nginx
  tag: stable
  pullPolicy: IfNotPresent

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
...
Helm (3/4) - Rendering the template

➜ my-app$ helm template . -f values.yaml
---
# Source: my-app/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-name-my-app
  labels:
    app.kubernetes.io/name: my-app
    helm.sh/chart: my-app-0.1.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
---
# Source: my-app/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-my-app
  labels:
    app.kubernetes.io/name: my-app
    helm.sh/chart: my-app-0.1.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.0"
...

Validate with a dry run:

➜ my-app$ helm template . -f values.yaml | kubectl apply -f - --dry-run=client --validate=true
serviceaccount/release-name-my-app created (dry run)
service/release-name-my-app created (dry run)
pod/release-name-my-app-test-connection created (dry run)
deployment.apps/release-name-my-app created (dry run)

Then apply:

➜ my-app$ helm template . -f values.yaml | kubectl apply -f -
serviceaccount/release-name-my-app created
service/release-name-my-app created
pod/release-name-my-app-test-connection created
deployment.apps/release-name-my-app created

➜ my-app$ kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
my-app-5dfb54954b-b7rvt   1/1     Running   0          10s
my-app-5dfb54954b-n6752   1/1     Running   0          13s
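Individual values can also be overridden on the command line with --set, without editing values.yaml. A minimal sketch (the tag value here is just an example, not from the deck):

➜ my-app$ helm template . -f values.yaml --set image.tag=1.58.1 | kubectl apply -f -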
Helm (4/4) - Organizing by phase

➜ k8s-test$ tree my-app
my-app
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   └── secrets.yaml
├── overrides
│   ├── dev.yaml
│   ├── sandbox.yaml
│   ├── cbt.yaml
│   └── production.yaml
└── values.yaml

3 directories, 12 files

➜ my-app$ helm template . -f values.yaml -f overrides/sandbox.yaml
---
# Source: my-app/templates/deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-app
  namespace: my-app-sandbox
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 2
  template:
...
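Since later -f files take precedence, each override file only needs the values that differ per phase, and rendering plus applying a phase is a single pipeline. A sketch, following the layout above:

➜ my-app$ helm template . -f values.yaml -f overrides/production.yaml | kubectl apply -f -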
Kustomize (1/4) - Creating a kustomization

➜ my-app$ ls
deployment.yaml  ingress.yaml  secret.yaml  service.yaml

➜ my-app$ kustomize create --autodetect

➜ my-app$ ls
deployment.yaml  ingress.yaml  kustomization.yaml  secret.yaml  service.yaml

➜ my-app$ cat kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- ingress.yaml
- secret.yaml
- service.yaml
Kustomize (2/4) - Patching (strategic merge)

➜ my-app$ ls
deployment.yaml  ingress.yaml  kustomization.yaml  patch-resources.yaml  secret.yaml  service.yaml

➜ my-app$ cat kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- ingress.yaml
- secret.yaml
- service.yaml
images:
- name: sangwonl/my-app
  newTag: 1.58.1
patchesStrategicMerge:
- patch-resources.yaml

➜ my-app$ cat patch-resources.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  template:
    spec:
      containers:
      - name: my-app
        resources:
          requests:
            cpu: 2000m
            memory: 2.5Gi
          limits:
            cpu: 2000m
            memory: 2.5Gi
...
Kustomize (2/4) - Patching (JSON6902)

➜ my-app$ cat kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- ingress.yaml
- secret.yaml
- service.yaml
images:
- name: sangwonl/my-app
  newTag: 1.58.1
patchesStrategicMerge:
- patch-resources.yaml
patchesJson6902:
- path: patch-ingress.yaml
  target:
    group: extensions
    kind: Ingress
    name: my-app-ingress
    version: v1beta1

➜ my-app$ cat patch-ingress.yaml
- op: replace
  path: /spec/rules/0/host
  value: my-app-sandbox.example.com
- op: replace
  path: /spec/tls/0/hosts/0
  value: my-app-sandbox.example.com

# ingress.yaml (the patch target)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: my-app-ingress
spec:
  rules:
  - host: plz-patch-me.example.com
    http:
      paths:
      - backend:
          serviceName: my-app-service
          servicePort: http-port
        path: /
  tls:
  - hosts:
    - plz-patch-me.example.com
    secretName: tls-secret-example-com
Kustomize (3/4) - Rendering the template

➜ my-app$ kustomize build
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-app
  name: my-app-service
spec:
  ports:
  - name: http-port
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: my-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-app
spec:
  replicas: 2
...

Validate with a dry run:

➜ my-app$ kustomize build | kubectl apply -f - --dry-run=client --validate=true
service/my-app-service created (dry run)
deployment.apps/my-app created (dry run)
...

Then apply:

➜ my-app$ kustomize build | kubectl apply -f -
service/my-app-service created
deployment.apps/my-app created
...

➜ my-app$ kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
my-app-5dfb54954b-b7rvt   1/1     Running   0          10s
my-app-5dfb54954b-n6752   1/1     Running   0          13s
Kustomize (4/4) - Organizing by phase

➜ k8s-test$ tree my-app
my-app
├── base
│   ├── deployment.yaml
│   ├── ingress.yaml
│   ├── kustomization.yaml
│   ├── service.yaml
│   └── secrets.yaml
└── overlays
    ...
    ├── sandbox
    │   ├── kustomization.yaml
    │   ├── patch-resources.yaml
    │   └── patch-ingress.yaml
    ├── cbt
    │   ├── kustomization.yaml
    │   ├── patch-resources.yaml
    │   └── patch-ingress.yaml
    └── production
        ├── kustomization.yaml
        ├── patch-environments.yaml
        ├── patch-resources.yaml
        └── patch-ingress.yaml

6 directories, 14 files

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: my-app
resources:
- secrets.yaml
- deployment.yaml
- service.yaml
- ingress.yaml
images:
- name: sangwonl/my-app
  newTag: latest

# overlays/<phase>/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patchesStrategicMerge:
- patch-resources.yaml
- patch-ingress.yaml
images:
- name: sangwonl/my-app
  newTag: 1.51.3

Build a phase:

➜ my-app$ cd overlays/sandbox
➜ my-app/overlays/sandbox$ kustomize build
or
➜ my-app$ kustomize build overlays/sandbox
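kubectl also ships with built-in kustomize support (the -k flag), so deploying a phase overlay is one command either way. A sketch:

➜ my-app$ kustomize build overlays/sandbox | kubectl apply -f -
or
➜ my-app$ kubectl apply -k overlays/sandbox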
Cluster separation and naming convention

- Separate clusters by service role:
  my-data-02   → data pipelines and data aggregation/analytics apps
  my-dev-04    → development cluster
  my-prod-02   → production cluster
  my-prod-03   → new production cluster
  my-devops-01 → DevOps apps such as Jenkins and ArgoCD
  my-pm-01     → cluster of PM workers, for apps that must run on physical machines

- Cluster naming rule:
  <org>-<purpose>-<index>  (the index increases with each cluster migration)
Namespace usage and naming convention

- Why namespaces?
  * They draw a boundary around k8s objects
  * Objects in different namespaces may share the same name
  * App configuration is usually nearly identical across phases
    → a namespace can therefore stand in for a phase

- Namespace naming rules:
  <app-name>-<phase>
  * phase → dev, sandbox, cbt, production
  * dev cluster → dev / sandbox; prod cluster → cbt / production
  <cluster>-<common>
  * for cluster-wide apps; default also works
  ex) my-app-sandbox, my-app-production, jenkins-common

  Cluster (my-dev-04):
    my-app-dev:     my-app-ingress / my-app-service / my-app-deploy
    my-app-sandbox: my-app-ingress / my-app-service / my-app-deploy
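Creating and switching to a phase namespace under this convention is then straightforward; a sketch using kubens from earlier:

➜ $ kubectl create namespace my-app-sandbox
➜ $ kubens my-app-sandbox
Context "my-dev-ctx" modified.
Active namespace is "my-app-sandbox".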
Placement / Scheduling (1/3) - Pod Resource

apiVersion: apps/v1
kind: Deployment
metadata: ...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - image: sangwonl/my-app:1.58.1
        name: my-app
        ...
        resources:
          requests:
            cpu: 2500m
            memory: 2Gi
          limits:
            cpu: 4
            memory: 4Gi

1 CPU = 1000m (millicores)

requests: the Pod can be scheduled onto any node with at least 2Gi of memory and 2.5 CPU cores to spare.

limits: if the Pod's CPU usage exceeds 4 cores it gets throttled; if its memory usage exceeds 4Gi the Pod may be killed.
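To see whether a node actually has that much headroom, its allocatable capacity and the current usage can be inspected. A sketch (the node name is a placeholder; kubectl top needs metrics-server installed in the cluster):

➜ $ kubectl describe node <node-name> | grep -A 6 Allocatable
➜ $ kubectl top pod -n my-app-sandbox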
Placement / Scheduling (2/3) - Node Selector

A Pod with nodeSelector gpu: enabled can be scheduled onto Node 1 and Node 2 (labeled gpu=enabled), but not onto Node 3 (gpu=disabled) or Node 4 (no label).

➜ my-app$ kubectl label node <node-1> gpu=enabled
➜ my-app$ kubectl label node <node-2> gpu=enabled

apiVersion: apps/v1
kind: Deployment
metadata: ...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - image: sangwonl/my-app:1.58.1
        name: my-app
        ...
      nodeSelector:
        gpu: enabled
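To verify which nodes carry the label, -L prints it as an extra column. A sketch:

➜ my-app$ kubectl get nodes -L gpu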
Placement / Scheduling (3/3) - Taint / Toleration

Node 1 and Node 2 are tainted with spot=true:NoSchedule, so only Pods carrying a matching toleration can land there; a Pod without the toleration still schedules onto the untainted Node 3 and Node 4. Note that a toleration only permits scheduling onto the tainted nodes, it does not pin the Pod to them.

➜ my-app$ kubectl taint nodes <node-1> spot=true:NoSchedule
➜ my-app$ kubectl taint nodes <node-2> spot=true:NoSchedule

apiVersion: apps/v1
kind: Deployment
metadata: ...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - image: sangwonl/my-app:1.58.1
        name: my-app
        ...
      tolerations:
      - key: spot
        operator: Equal
        value: "true"   # must be a quoted string, not a YAML boolean
        effect: NoSchedule
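To inspect a node's taints, or to remove one later (the trailing dash removes it), a sketch:

➜ my-app$ kubectl describe node <node-1> | grep Taints
➜ my-app$ kubectl taint nodes <node-1> spot=true:NoSchedule-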
Lifecycle (2/4) - Hook (postStart)
https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-details

Pod startup order: after the Pod is created, init containers run one at a time (Init 1/3 → 2/3 → 3/3); then each app container starts its entrypoint, with its postStart hook fired alongside. If a postStart hook takes too long or hangs, the container can't reach the RUNNING state.

Init containers:

apiVersion: apps/v1
kind: Deployment
metadata: ...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - image: sangwonl/my-app:1.58.1
        name: my-app
        ...
      initContainers:
      - name: init-data-home
        image: busybox
        command: ['sh', '-c', 'rm -rf /data/*']
      - name: init-lookup-db
        image: busybox
        command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done;']
      ...

postStart hook:

apiVersion: apps/v1
kind: Deployment
metadata: ...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - image: sangwonl/my-app:1.58.1
        name: my-app
        ...
        lifecycle:
          postStart:
            exec:  # or tcpSocket, httpGet
              command: ["rm", "-rf", "/data/db/lost+found"]
        ...
Lifecycle (2/4) - Hook (preStop)

apiVersion: apps/v1
kind: Deployment
metadata: ...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - image: sangwonl/my-app:1.58.1
        name: my-app
        ...
        lifecycle:
          preStop:
            exec:  # or tcpSocket, httpGet
              command: ["./stop-consuming.sh"]
        ...
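A common preStop pattern (not from the deck) is a short sleep, so in-flight requests drain while the Pod is being removed from the Service endpoints. A minimal sketch:

lifecycle:
  preStop:
    exec:
      command: ["sh", "-c", "sleep 10"]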
Lifecycle (3/4) - Readiness Probe (HTTP)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  ...
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: sangwonl/my-app:latest
        ports:
        - containerPort: 9090
        readinessProbe:
          httpGet:
            port: 9090
            path: /healthcheck
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3

GET http://localhost:9090/healthcheck

initialDelaySeconds: 60 — wait 60 seconds after the container starts before the first probe ping
periodSeconds: 10 — how often to probe; default 10s
timeoutSeconds: 1 — probe timeout; a probe taking over 1 second counts as a failure; default 1s
successThreshold: 1 — minimum number of successes for readiness to count as succeeded; default 1
failureThreshold: 3 — minimum number of failures for readiness to count as failed; default 3
Lifecycle (3/4) - Readiness Probe (gRPC)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  ...
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: sangwonl/my-app:latest
        ports:
        - containerPort: 9090
        readinessProbe:
          exec:
            command:             # exec-style probe
            - grpc_health_probe  # binary pre-installed in the Docker image
            - -addr=:9090        # argument to the binary
          initialDelaySeconds: 60

$ grpc_health_probe -addr=:9090

➜ my-app$ cat Dockerfile
...
# Install grpc_health_probe
RUN wget -qO/bin/grpc_health_probe https://github.com/grpc-ecosystem/grpc-health-probe/releases/download/v0.3.2/grpc_health_probe-linux-amd64 && \
    chmod +x /bin/grpc_health_probe
...
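Note that newer Kubernetes versions (1.24+, stable in 1.27) have a built-in gRPC probe type, which removes the need to bundle grpc_health_probe in the image. A sketch, assuming the cluster is new enough:

readinessProbe:
  grpc:
    port: 9090
  initialDelaySeconds: 60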
Lifecycle (4/4) - Termination Grace Period

With terminationGracePeriodSeconds: 120, a terminating Pod first receives SIGTERM, and the grace period (120s) gives the app (process) time to shut down cleanly; SIGKILL follows only if the process is still alive when the period expires. With terminationGracePeriodSeconds: 0, SIGTERM and SIGKILL arrive together, i.e. the process is killed immediately.

apiVersion: apps/v1
kind: Deployment
metadata: ...
spec:
  ...
  template:
    ...
    spec:
      terminationGracePeriodSeconds: 120
      containers:
      - image: sangwonl/my-app:1.58.1
        name: my-app
        ...
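The grace period can also be overridden per deletion from the CLI, which is handy when testing shutdown behavior. A sketch (pod name is a placeholder):

➜ $ kubectl delete pod <pod-name> --grace-period=30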
Volumes (1/4) - Basics / Pod-scoped ephemeral volume (emptyDir)

By default, containers cannot reach each other's filesystems: Container A's Volume A is invisible to Container B, and vice versa. An emptyDir volume is shared by all containers in the Pod: each container mounts it, and it is removed when the Pod terminates.

apiVersion: apps/v1
kind: Deployment
metadata: ...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - image: sangwonl/my-app:1.58.1
        name: my-app
        ...
        volumeMounts:
        - mountPath: /var/log/filebeat
          name: filebeat-log-volume
      volumes:
      - name: filebeat-log-volume
        emptyDir: {}
      ...
Volumes (2/4) - Persistent Volume (ReadWriteOnce)

A ReadWriteOnce persistent volume (here an OpenStack Cinder volume, mounted onto the worker host by the Kubernetes Cinder driver) is attached per Pod on its host and mounted into that Pod's shared volume; a second Pod cannot mount it at the same time.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-pvc
  namespace: my-app-sandbox
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

apiVersion: apps/v1
kind: Deployment
metadata: ...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - image: sangwonl/my-app:1.58.1
        name: my-app
        ...
        volumeMounts:
        - name: my-app-volume
          mountPath: "/var/lib/my-persistent-data"
      volumes:
      - name: my-app-volume
        persistentVolumeClaim:
          claimName: my-app-pvc
      ...
Volumes (3/4) - Persistent Volume (ReadWriteMany)

With ReadWriteMany, Pods on different workers (Host A and Host B) mount the same NFS volume (/nfsdata, exported by an NFS daemon on a dedicated host (Tenth2) and mounted by the Kubernetes NFS driver), so the data is shared across Pods.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-app-pv
spec:
  volumeMode: Filesystem
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /k8s_dev_nfs_pvc_test/nfs
    server: nfs.internal.example.com

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-pvc
spec:
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs

apiVersion: apps/v1
kind: Deployment
metadata: ...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - image: sangwonl/my-app:1.58.1
        name: my-app
        ...
        volumeMounts:
        - name: my-app-volume
          mountPath: "/var/lib/my-shared-data"
      volumes:
      - name: my-app-volume
        persistentVolumeClaim:
          claimName: my-app-pvc
      ...
Volumes (4/4) - ConfigMap / Secret as Volume

A ConfigMap can be mounted into a container as a file (here /deploy/backup.sh), and likewise a Secret (here /deploy/.ssh/id_rsa).

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-scripts
  namespace: my-app-sandbox
data:
  backup.sh: |
    #! /usr/bin/env bash
    mongodump --host my-app-rs/my-app-mongo.example.com -d my-app -o backups/
  restore.sh: |
    #! /usr/bin/env bash
    mongorestore ..

apiVersion: v1
kind: Secret
metadata:
  name: my-app-sshkey
  namespace: my-app-sandbox
data:
  id_rsa: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVk...  # base64 encoded string

apiVersion: apps/v1
kind: Deployment
metadata: ...
spec:
  containers:
  ...
    volumeMounts:
    - name: my-app-scripts-volume
      mountPath: /deploy/backup.sh
      subPath: backup.sh
    - name: my-app-sshkey-volume
      mountPath: "/deploy/.ssh"
      readOnly: true
  volumes:
  - name: my-app-scripts-volume
    configMap:
      name: my-app-scripts
      defaultMode: 0777
  - name: my-app-sshkey-volume
    secret:
      secretName: my-app-sshkey
  ...
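Both objects can also be created straight from local files instead of hand-writing the base64. A sketch (the file paths are placeholders):

➜ $ kubectl create configmap my-app-scripts --from-file=backup.sh --from-file=restore.sh
➜ $ kubectl create secret generic my-app-sshkey --from-file=id_rsa=./id_rsa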
CronJob (scheduled jobs)

Reusing the my-app-scripts ConfigMap from the previous slide as the job's script volume:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-app-cron
spec:
  concurrencyPolicy: Forbid
  startingDeadlineSeconds: 600
  schedule: "30 2 * * *"
  jobTemplate:
    spec:
      backoffLimit: 0
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: my-app-backup
            image: mongo:3.6
            command:
            - /deploy/backup.sh
            volumeMounts:
            - name: my-app-scripts-volume
              mountPath: /deploy/backup.sh
              subPath: backup.sh
          volumes:
          - name: my-app-scripts-volume
            configMap:
              name: my-app-scripts
              defaultMode: 0777
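To test the job without waiting for the 02:30 schedule, a one-off Job can be spawned from the CronJob. A sketch (the job name is arbitrary):

➜ $ kubectl create job --from=cronjob/my-app-cron my-app-cron-manual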
Singleton Pod - Headless Service

In the usual setup, Pods A/B/C (app: my-app) sit behind a ClusterIP Service (my-app-svc) that an Ingress points at. A single extra Pod X (app: my-store, name: my-store-pod) cannot be reached with connect("my-store-pod:3306") — a Pod name is not a DNS name. Putting a headless Service (my-store-svc, clusterIP: None) in front of it makes connect("my-store-svc:3306") resolve straight to the Pod.

apiVersion: v1
kind: Service
metadata:
  name: my-store-svc
spec:
  clusterIP: None
  selector:
    app: my-store
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
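Because the Service is headless, cluster DNS returns the Pod IP itself rather than a cluster IP; this can be checked from a throwaway Pod. A sketch:

➜ $ kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup my-store-svc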
Sidecar

An Nginx main container writes /var/log/nginx/access.log into an emptyDir volume mounted read/write; a Filebeat sidecar container mounts the same volume read-only and ships the log onward (e.g. to Kafka). The volume is removed when the Pod terminates.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  ...
  template:
    spec:
      containers:
      - name: my-app
        image: "nginx:latest"
        ports:
        - containerPort: 80
        volumeMounts:
        - name: my-app-logs
          mountPath: /var/log/nginx
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.2.4
        volumeMounts:
        - name: my-app-logs
          mountPath: /var/log/nginx
          readOnly: true
        command:
        - filebeat
        - -c
        - "/etc/filebeat.yml"
      volumes:
      - name: my-app-logs
        emptyDir: {}
      ...
HTTPS(TLS) Setup (1/3) - HTTP

apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-app-svc
  name: my-app-svc
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: my-app

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-app
  name: my-app
spec:
  template:
    spec:
      containers:
      - image: my-app:latest
        name: my-app
        ports:
        - containerPort: 8000

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: my-app-ingress
  name: my-app-ingress
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app-svc
          servicePort: 8000
HTTPS(TLS) Setup (1/3) - Adding the TLS config

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: my-app-ingress
  name: my-app-ingress
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app-svc
          servicePort: 8000
  tls:
  - secretName: tls-secret
    hosts:
    - my-app.example.com

apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJ...
  tls.key: LS0tLS1CRUdJTiBSU0E...

➜ $ cat STAR.example.com_crt.pem | base64
LS0tLS1CRUdJTiBDRVJ...
➜ $ cat STAR.example.com_key.pem | base64
LS0tLS1CRUdJTiBSU0E...
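kubectl can generate the same Secret directly from the PEM files, handling the base64 encoding itself. A sketch with the certificate files above:

➜ $ kubectl create secret tls tls-secret --cert=STAR.example.com_crt.pem --key=STAR.example.com_key.pem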
HTTPS(TLS) Setup (1/3) - Rollout

Traffic for my-app.example.com hits a VIP in front of the ingress nodes; on each, the ingress-nginx-controller Pod (listening on 80/443) renders the rule into its /etc/nginx/nginx.conf and proxies to the my-app Pods on port 8000 across the worker nodes:

server {
  server_name my-app.example.com;
  listen 80;
  listen 443 ssl http2;
  ...
HTTPS(TLS) Setup (2/3) - Serving HTTP and HTTPS together

Once TLS is configured, only HTTPS is served by default (HTTP requests get a 308 redirect).
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-side-https-enforcement-through-redirect

To support both HTTP and HTTPS, add an annotation to the Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: my-app-ingress
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
...

To keep the Ingress free of TLS config while forcing traffic to always come in SSL-offloaded through an external LB:

nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
HTTPS(TLS) Setup (3/3) - gRPC support

gRPC runs over HTTP/2, and HTTP/2 here must use TLS 1.2 or higher.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: my-app-ingress
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app-svc
          servicePort: 50051
  tls:
  - secretName: tls-secret
    hosts:
    - my-app.example.com

The generated nginx config then proxies with the grpc module
(http://nginx.org/en/docs/http/ngx_http_grpc_module.html):

server {
  server_name my-app.example.com;
  listen 80;
  listen 443 ssl http2;

  # Allow websocket connections
  grpc_set_header Upgrade $http_upgrade;
  grpc_set_header Connection $connection_upgrade;
  grpc_set_header X-Request-ID $req_id;
  grpc_set_header X-Real-IP $the_real_ip;
  grpc_set_header X-Forwarded-For $the_real_ip;
  grpc_set_header X-Forwarded-Host $best_http_host;
  grpc_set_header X-Forwarded-Port $pass_port;
  grpc_set_header X-Forwarded-Proto $pass_access_scheme;
  grpc_set_header X-Original-URI $request_uri;
  grpc_set_header X-Scheme $pass_access_scheme;
  ...
  grpc_pass grpc://upstream_balancer;
  ...
}
Ingress Practice - Separate private / public ingress groups (security)

A client (frontend app) should reach only the External Service through the public ingress group; Internal Services A/B/C stay behind a private ingress group.
Ingress Practice - Changing the log format

For example, to emit the Ingress (Nginx) logs as JSON:

➜ $ kubectl edit cm ingress-nginx -n ingress-nginx

apiVersion: v1
kind: ConfigMap
data:
  server-snippet: set $resp_body "";
  http-snippet: |-
    body_filter_by_lua '
      local resp_body = string.sub(ngx.arg[1], 1, 1000)
      ngx.ctx.buffered = (ngx.ctx.buffered or "") .. resp_body
      if ngx.arg[2] then
        ngx.var.resp_body = ngx.ctx.buffered
      end
    ';
  log-format-escape-json: "true"
  log-format-upstream: '{"time":"$time_iso8601","req_id":"$req_id","remote_address":"$proxy_add_x_forwarded_for","remote_port":$remote_port,"local_address":"$server_addr","local_port":$server_port,"service_name":"$service_name","User-Agent":"$http_user_agent","Host":"$host","Content-Type":"$sent_http_content_type","body":"$request_body","status":$status,"body":"$resp_body","upstream_addr":"$upstream_addr","request_time":"$request_time","upstream_response_time":"$upstream_response_time","request_method":"$request_method","url":"$uri","server_protocol":"$server_protocol"}'

For the full list of parameters, see: Log format - NGINX Ingress Controller (kubernetes.github.io)
Ingress Practice - Changing shared settings

Common Ingress (Nginx) settings are also changed through the ConfigMap:

apiVersion: v1
kind: ConfigMap
data:
  generate-request-id: "false"
  use-forwarded-headers: "true"
  use-http2: "true"
  ...

https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
Container image vulnerability checks (1/3)

- Fix image security vulnerabilities (remove unneeded commands & update packages)

# update packages
RUN apk update
# for other linux distributions:
# or apt-get -y update
# or yum -y update

# remove unnecessary packages
RUN rm /usr/bin/wget /usr/bin/curl /usr/bin/nc /sbin/route /bin/ping /bin/ping6
Container image vulnerability checks (2/3)

- Run the container as a non-root user

# With Docker, add a new user in the Dockerfile and grant it permissions
# on the paths holding executables or needed at runtime
RUN addgroup deploy && adduser -D -G deploy deploy && \
    chown -R deploy:deploy $DEPLOY_HOME && \
    chmod -R 740 $DEPLOY_HOME

USER deploy
Container image vulnerability checks (3/3)

- Run the container as a non-root user (JIB)

# If you build with jib via Gradle, first create a base image with a
# nonroot user added (as on the previous slide), then set jib.container.user
jib {
    from {
        image = 'sangwonl/java-debian10:nonroot'
    }
    container {
        user = 'deploy'
        environment = []
        jvmFlags = []
    }
}
Migrating a live service to the k8s cluster

- The most nerve-racking part of the work: is there a way to shift traffic over gradually?

- Recommended: place an intermediate LB between the existing LB and the domain and switch routing at a controlled ratio (a provisionable LB is ideal, but otherwise one can be built by hand with Nginx or similar).

- The share of traffic entering the new cluster can then be increased step by step (e.g. 1% → 5% → 25% → 50% → 75% → 100%).
Migrating a live service to k8s - Caveats

- If the domain was moved to a new LB IP, the old IP may still be cached by intermediate DNS servers or client-side DNS caches.
- So before deleting the old LB IP, confirm it has been fully removed from the domain and that no traffic is still arriving.
- Double-check the Ingress routing rules and the TLS settings (secrets) — this is where mistakes happen most often!

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"  # is this the right (external/internal) ingress class?
  name: my-public-service-ingress
spec:
  rules:
  - host: my-service.example.com  # is this the domain you intend to serve?
    http:
      paths:
      - path: /
        backend:
          serviceName: my-public-service
          servicePort: http-port
  tls:
  - secretName: tls-example-com  # is this the certificate for that domain?
    hosts:
    - my-service.example.com
Putting the (frontend) service into maintenance (1/3)

- Show a maintenance page during deploys to keep user traffic out.
- The usual approach: serve the maintenance page from the proxy (Nginx) while allow-listing specific source IPs.
- Since a k8s Ingress is ultimately a proxy server just like Nginx, this can be done with Ingress settings alone.
Putting the service into maintenance (2/3) - A separate maintenance service

Requests from non-allow-listed sources get a 403 → catch it and proxy_pass to the maintenance service.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/whitelist-source-range: 172.0.0.0/8,127.0.0.1
    nginx.ingress.kubernetes.io/server-snippet: |
      error_page 403 @maintenance;  # users blocked by whitelist-source-range get a 403; catch it here
      location @maintenance {
        proxy_pass http://my-app-repair.example.com;  # respond with the result of proxying to the maintenance service
      }
  name: my-app-ingress
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app-svc
          servicePort: http-port

(see the server-snippet docs)
  • 111. Putting a Service (Frontend) into Maintenance (3/3) - Frontend App Serves the Maintenance Page
- Requests from non-allowed sources get a 403 → proxy_pass them back to the original service,
  with a header added so the frontend app knows it is in maintenance mode

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/whitelist-source-range: 172.0.0.0/8,127.0.0.1/32
    nginx.ingress.kubernetes.io/server-snippet: |
      error_page 403 @maintenance;    # users blocked by the whitelist-source-range above get a 403; catch it here
      location @maintenance {
        proxy_pass http://upstream_balancer;     # hand it to this Ingress's original service (my-app-svc)
        proxy_set_header X-Maintenance "on";     # add an X-Maintenance: on header so the app can branch on it
      }
  name: my-app-ingress
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app-svc
          servicePort: http-port
  • 112. Putting a Service (Frontend) into Maintenance (3/3) - Frontend App Serves the Maintenance Page
- The frontend app checks the maintenance flag header and serves the maintenance page — Nginx example:

if ($http_x_maintenance = "on") {
  return 501;
}

error_page 501 @maintenance;
location @maintenance {
  root /etc/nginx/html;
  rewrite ^(.*)$ /maintenance.html break;
}
  • 115. Using ArgoCD - GitOps (https://www.gitops.tech/#what-is-gitops)
"The core idea of GitOps is having a Git repository that always contains declarative descriptions of the infrastructure currently desired in the production environment and an automated process to make the production environment match the described state in the repository."
- Tools: FluxCD vs ArgoCD vs JenkinsX
  • 117. Using ArgoCD - Installation
- Follow the ArgoCD install guide (the core is install.yaml; customize and extend from there)

$ kubectl create namespace argocd
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

- Install the CLI
$ brew install argocd
  • 118. Using ArgoCD - Customization
- ConfigMap for LDAP settings

# argocd-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  statusbadge.enabled: 'true'
  url: https://argocd.example.com
  accounts.jenkins: apiKey
  accounts.jenkins.enabled: 'true'
  dex.config: |
    connectors:
    - type: ldap
      id: ldap
      name: LDAP
      config:
        host: ldap.example.com:389
        insecureNoSSL: true
        startTLS: false
        bindDN: ...
        bindPW: ...
        userSearch:
          baseDN: 'o=identitymaster'
          username: uid
          idAttr: uid
          emailAttr: mail
          nameAttr: uid
  • 119. Using ArgoCD - Customization
- ConfigMap for RBAC settings

# argocd-rbac-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
data:
  policy.default: role:admin

- Secret for HTTPS & gRPC

# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret-example-com
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUS***
  tls.key: LS0tLS1CRUdJTiBSU0EgU***
  • 120. Using ArgoCD - Customization
- Ingress for HTTPS

# ingress-https.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: argocd-server-http-ingress
  namespace: argocd
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
  - host: argocd.example.com
    http:
      paths:
      - backend:
          serviceName: argocd-server
          servicePort: http
  tls:
  - secretName: tls-secret-example-com
    hosts:
    - argocd.example.com
  • 121. Using ArgoCD - Customization
- Ingress for gRPC

# ingress-grpc.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: argocd-server-grpc-ingress
  namespace: argocd
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  rules:
  - host: argocd-grpc.example.com
    http:
      paths:
      - backend:
          serviceName: argocd-server
          servicePort: https
  tls:
  - hosts:
    - argocd-grpc.example.com
    secretName: argocd-secret
  • 122. Using ArgoCD - CLI Usage
- Forward localhost → the in-cluster argocd server

$ kubectl port-forward svc/argocd-server -n argocd 8080:443
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
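With the port-forward active, the next step is to log in with the CLI; a sketch assuming a recent ArgoCD release that stores the initial admin password in the argocd-initial-admin-secret Secret:

# fetch the initial admin password
$ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

# log in through the forwarded port (--insecure because the server cert won't match localhost)
$ argocd login localhost:8080 --username admin --insecure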
  • 123. Using ArgoCD - CLI Usage
- List / add / remove k8s clusters

# list
$ argocd cluster list
SERVER                                        NAME            VERSION  STATUS      MESSAGE
https://my-dev-04-master-1.example.com:6443   my-dev-04-ctx   1.15     Successful
https://my-prod-03-master-1.example.com:6443  my-prod-03-ctx  1.15     Successful

# add
$ argocd cluster add my-devops-ctx
INFO[0000] ServiceAccount "argocd-manager" already exists in namespace "kube-system"
INFO[0000] ClusterRole "argocd-manager-role" updated
INFO[0000] ClusterRoleBinding "argocd-manager-role-binding" updated
Cluster 'https://my-devops-master-1.example.com:6443' added

# list again
$ argocd cluster list
SERVER                                        NAME            VERSION  STATUS      MESSAGE
https://my-dev-04-master-1.example.com:6443   my-dev-04-ctx   1.15     Successful
https://my-prod-03-master-1.example.com:6443  my-prod-03-ctx  1.15     Successful
https://my-devops-master-1.example.com:6443   my-devops-ctx   1.11     Successful

# remove - must be done by server URL
$ argocd cluster rm https://my-devops-master-1.example.com:6443

# list again
$ argocd cluster list
SERVER                                        NAME            VERSION  STATUS      MESSAGE
https://my-dev-04-master-1.example.com:6443   my-dev-04-ctx   1.15     Successful
https://my-prod-03-master-1.example.com:6443  my-prod-03-ctx  1.15     Successful
  • 124. Using ArgoCD - CLI Usage
- Managing Git repository access credentials

# list
$ argocd repocreds list
URL PATTERN                  USERNAME  SSH_CREDS  TLS_CREDS
https://github.com/sangwonl  sangwonl  false      false

# add
$ argocd repocreds add https://github.com/sangwonl --username sangwonl --password "*********"

# remove
$ argocd repocreds rm https://github.com/sangwonl
  • 125. Using ArgoCD - CLI Usage
- List / add / remove Git repositories

# list
$ argocd repo list
TYPE  NAME  REPO                                          INSECURE  OCI    LFS    CREDS  STATUS      MESSAGE
git         https://github.com/sangwonl/kube-recipes.git  false     false  false  true   Successful

# add
$ argocd repo add https://github.com/sangwonl/k8s-apps.git

# list again
$ argocd repo list
TYPE  NAME  REPO                                          INSECURE  OCI    LFS    CREDS  STATUS      MESSAGE
git         https://github.com/sangwonl/kube-recipes.git  false     false  false  true   Successful
git         https://github.com/sangwonl/k8s-apps.git      false     false  false  true   Successful

# remove
$ argocd repo rm https://github.com/sangwonl/k8s-apps.git
  • 126. Using ArgoCD - Pipeline
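The pipeline slide itself is a diagram, but the glue it depicts is an ArgoCD Application that points a registered repo at a registered cluster; a minimal sketch in which the app name, path, and target namespace are hypothetical:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app-production             # hypothetical app name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/sangwonl/k8s-apps.git  # repo registered on the previous slide
    targetRevision: main
    path: my-app/production           # hypothetical manifest path in the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app-production
  syncPolicy:
    automated:
      prune: true                     # delete resources removed from Git
      selfHeal: true                  # revert drift from the desired state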
  • 130. Pod?

Editor's Notes

  1. Key points on Pod phases:
   Pending - the Pod has not yet been scheduled onto a worker, a volume mount failed, or its containers could not start for some other reason
   Running - once at least one container in the Pod is running, it moves to this phase "unless other conditions apply"
   Succeeded - for Pods that run to completion and exit (CronJobs, one-off commands), the phase on normal termination
   Failed - the phase when the Pod ran (including retries) and exited with a failure
   Unknown - typically, when the worker hosting a Pod goes Not Ready, its Pods end up in this phase

   Let's look more closely at the "unless other conditions apply" caveat on Running. (Note) One-shot apps usually need no extra condition, but for an app behind a Service (Service → Pod), traffic starts flowing the moment the Pod is Running. The container may be up while the app is not yet ready to take traffic, so you may need a condition before the transition to Running. The check for this readiness is the Readiness Probe, configured in the Pod spec.

   Conversely, there are also things to consider when shutting the app down (including rolling updates during deployment). (Note) For example, if an HTTP handler or a queue consumer is mid-task when it receives a TERM signal and is killed immediately, that can cause problems. Setting a Termination Grace Period means that on TERM the container is not killed outright: the signal is delivered and the app gets time to shut down gracefully, and only after the grace period elapses may the Pod be fully terminated. Note that the graceful shutdown itself must be implemented in your app.
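A minimal sketch of these two knobs in a Pod spec; the container name, health endpoint, and timings below are illustrative:

spec:
  terminationGracePeriodSeconds: 60   # how long the app gets to shut down after SIGTERM
  containers:
  - name: my-app                      # hypothetical container
    image: my-app:latest
    readinessProbe:                   # the Service routes traffic only once this succeeds
      httpGet:
        path: /healthz                # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10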
  2. To briefly summarize the termination process: when a pod delete request occurs — whether from kubectl delete pod or a rolling update — the pod's status shows Terminating, and on the node where the pod is running the kubelet:
   1) runs each container's preStop hook, if defined (order guaranteed)
   2) sends SIGTERM to PID 1 of each container (delivery order across containers is not guaranteed)
   3) at the same time as 2), the Service removes the terminating pod from its endpoints so it stops receiving traffic
   4) then waits up to terminationGracePeriodSeconds — if everything exits within that window, done; past the grace period, any still-running containers are sent SIGKILL and force-terminated
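One common pattern built on step 1) — a short preStop sleep so that endpoint removal (step 3) has time to propagate before SIGTERM reaches the app; a sketch, where the 5-second value is arbitrary:

containers:
- name: my-app                        # hypothetical container
  image: my-app:latest
  lifecycle:
    preStop:
      exec:
        command: ["sh", "-c", "sleep 5"]   # delay SIGTERM until in-flight routing has drained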