November 2017
Seungkyu Ahn /
Taeil Choi (Robert Choi)
From Kubernetes to OpenStack
Index
▪ TACO Overview (SKT All Container OpenStack)
▪ Why Kubernetes?
▪ Software stack
▪ Demo (Installing OpenStack)
▪ Kubespray
▪ Kolla
▪ Helm
▪ OpenStack-Helm
▪ Deployment profiles
▪ Deploy OpenStack
▪ Challenges
▪ What’s missing
▪ TACO Milestone & Future Plan
TACO (SKT All Container OpenStack)
▪ OpenStack-Helm + Continuous Integration/Deployment
▪ OpenStack Lifecycle Management on Kubernetes
• Easy version upgrades
• Minimal service impact during deployment (rolling update)
• Scale out by simply adding a compute server
• Self-healing (automatic recovery when a process goes down)
Why Kubernetes?
▪ Automatic bin packing (container management)
▪ Horizontal scaling
▪ Automated rollouts and rollbacks
▪ Self-healing
▪ Service discovery and load balancing
▪ Secret and configuration management
Software stack
[Diagram: software stack layers; visible labels: Chart, Kubespray]
Demo System
• deploy node
• Kubernetes masters: k1-master01, k1-master02, k1-master03
• Label: openstack-control-plane=enabled, openvswitch=enabled on k1-node01, k1-node02, k1-node03
• Label: openstack-compute-node=enabled, openvswitch=enabled on k1-node04
• k1-node05
Demo
Installation order
1. Installing Kubernetes using Kubespray
2. Creating the Ceph user secret and storageclass
3. Setting labels on the nodes
4. Building OpenStack Docker images using Kolla
5. Packaging the OpenStack Helm charts
6. Deploying OpenStack
Kubespray
• Kubernetes incubator project
• Ansible
• Latest version support
✓ Kubernetes: v1.8.0
✓ Calico: v2.5.0 or Flannel: v0.8.0 or Weave: 2.0.1
✓ Helm: v2.6.1
✓ EFK (Elasticsearch, Fluentd, Kibana): v5.4.0, 1.22, v5.4.0
• Added features in TACO (SKT All Container OpenStack)
✓ CI / CD
✓ Prometheus for monitoring
Kubespray
• Files to change
✓ inventory/inventory.example
✓ inventory/group_vars/k8s-cluster.yml
• Install Kubernetes
✓ ansible-playbook -u taco -b -i inventory/inventory.example cluster.yml
• scale.yml : adding nodes
• upgrade-cluster.yaml : upgrading Kubernetes
• reset.yaml : uninstalling the Kubernetes cluster (invocation sketches for these follow below)
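A minimal sketch of how the other playbooks are invoked, assuming the same taco user and inventory as the cluster.yml run above:
ansible-playbook -u taco -b -i inventory/inventory.example scale.yml             # add the new nodes listed in the inventory
ansible-playbook -u taco -b -i inventory/inventory.example upgrade-cluster.yaml  # upgrade to the kube_version set in k8s-cluster.yml
ansible-playbook -u taco -b -i inventory/inventory.example reset.yaml            # tear the cluster down completely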
Kubespray
• Files to change
✓ inventory/inventory.example
✓ inventory/group_vars/k8s-cluster.yml
• Install Kubernetes
✓ ansible-playbook -u taco -b -i inventory/inventory.example cluster.yml
Inventory example
k1-master01 ansible_port=22 ansible_host=k1-master01 ip=192.168.30.13
k1-master02 ansible_port=22 ansible_host=k1-master02 ip=192.168.30.14
k1-master03 ansible_port=22 ansible_host=k1-master03 ip=192.168.30.15
k1-node01 ansible_port=22 ansible_host=k1-node01 ip=192.168.30.12
k1-node02 ansible_port=22 ansible_host=k1-node02 ip=192.168.30.17
k1-node03 ansible_port=22 ansible_host=k1-node03 ip=192.168.30.18
k1-node04 ansible_port=22 ansible_host=k1-node04 ip=192.168.30.21
[etcd]
k1-master01
k1-master02
k1-master03
[kube-master]
k1-master01
k1-master02
k1-master03
[kube-node]
k1-node01
k1-node02
k1-node03
k1-node04
[k8s-cluster:children]
kube-master
kube-node
Kubespray
• Files to change
✓ inventory/inventory.example
✓ inventory/group_vars/k8s-cluster.yml
• Install Kubernetes
✓ ansible-playbook -u taco -b -i inventory/inventory.example cluster.yml
k8s-cluster.yml example
kube_version: v1.8.0
kube_network_plugin: calico
kube_service_addresses: 10.96.0.0/16
kube_pods_subnet: 172.16.0.0/16
etcd_deployment_type: docker
kubelet_deployment_type: host
etcd_memory_limit: 8192M
dashboard_enabled: true
efk_enabled: true
helm_enabled: true
Kubespray
• Files to change
✓ inventory/inventory.example
✓ inventory/group_vars/k8s-cluster.yml
• Install Kubernetes
✓ ansible-playbook -u taco -b -i inventory/inventory.example cluster.yml
Storage - PV and PVC (w/ Ceph)
• Secret files (openstack namespace) - user
✓ ceph-secret-user.yml
• Storage class
✓ ceph-storageclass.yml
• Secret files (kube-system namespace) - admin, user
✓ ceph-secret-admin.yml
✓ ceph-secret-user.yml
Kubernetes storage (w/ Ceph)
• Static provisioning
✓ Manual rbd creation
✓ Manual PV creation: set the rbd image and storageclass
✓ Manual PVC creation: bind to the PV (by PV name or selector), set the storageclass (if none is set, the default storageclass is used)
• Dynamic provisioning
✓ Manual creation of a PVC (with a storageclass): the PV and rbd image are generated automatically (a PVC sketch follows this list)
✓ Automatic generation: StatefulSet (volumeClaimTemplates)
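A minimal PVC sketch for the dynamic case, assuming the "ceph" storageclass defined on the following slides; the claim name and size are illustrative:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glance-images        # hypothetical claim name
  namespace: openstack
spec:
  storageClassName: ceph     # omit to fall back to the default storageclass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi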
Storage - PV and PVC (w/ Ceph)
• Secret files (openstack namespace) - user
✓ ceph-secret-user.yml
• Storage class
✓ ceph-storageclass.yml
• Secret files (kube-system namespace) - admin, user
✓ ceph-secret-admin.yml
✓ ceph-secret-user.yml
Secret file - ceph-secret-admin.yml
apiVersion: v1
kind: Secret
metadata:
  name: "ceph-secret-admin"
  namespace: "kube-system"
type: "kubernetes.io/rbd"
data:
  key: "xxxxxxx=="
grep key /etc/ceph/ceph.client.admin.keyring | awk '{printf "%s", $NF}' | base64
Storage - PV and PVC (w/ Ceph)
• Secret files (openstack namespace) - user
✓ ceph-secret-user.yml
• Storage class
✓ ceph-storageclass.yml
• Secret files (kube-system namespace) - admin, user
✓ ceph-secret-admin.yml
✓ ceph-secret-user.yml
Secret file - ceph-secret-user.yml
apiVersion: v1
kind: Secret
metadata:
  name: "ceph-secret-user"
  namespace: "kube-system"
type: "kubernetes.io/rbd"
data:
  key: "xxxxxx=="
grep key /etc/ceph/ceph.client.kube.keyring | awk '{printf "%s", $NF}' | base64
Storage - PV and PVC (w/ Ceph)
• Secret files (openstack namespace) - user
✓ ceph-secret-user.yml
• Storage class
✓ ceph-storageclass.yml
• Secret files (kube-system namespace) - admin, user
✓ ceph-secret-admin.yml
✓ ceph-secret-user.yml
Storage class file - ceph-storageclass.yml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: "ceph"
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  monitors: "192.168.30.23:6789,192.168.30.24:6789,192.168.30.25:6789"
  adminId: "admin"
  adminSecretName: "ceph-secret-admin"
  adminSecretNamespace: "kube-system"
  pool: "kube"
  userId: "kube"
  userSecretName: "ceph-secret-user"
Storage - PV and PVC (w/ Ceph)
• Secret files (openstack namespace) - user
✓ ceph-secret-user.yml
• Storage class
✓ ceph-storageclass.yml
• Secret files (kube-system namespace) - admin, user
✓ ceph-secret-admin.yml
✓ ceph-secret-user.yml
Secret file - ceph-secret-user.yml
apiVersion: v1
kind: Secret
metadata:
  name: "ceph-secret-user"
  namespace: "openstack"
type: "kubernetes.io/rbd"
data:
  key: "xxxxxx=="
grep key /etc/ceph/ceph.client.kube.keyring | awk '{printf "%s", $NF}' | base64
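With the manifests above in hand, wiring them into the cluster is a few kubectl calls (file paths are illustrative; the openstack namespace must exist first):
kubectl create namespace openstack                 # target namespace for the OpenStack charts
kubectl apply -f ceph-secret-admin.yml             # admin secret (kube-system)
kubectl apply -f ceph-secret-user.yml              # user secret (kube-system variant)
kubectl apply -f openstack/ceph-secret-user.yml    # user secret (openstack-namespace variant)
kubectl apply -f ceph-storageclass.yml
kubectl get storageclass ceph                      # verify the default storageclass is registered
kubectl get secret -n openstack ceph-secret-user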
Label
kubectl label node k1-node01 openstack-control-plane=enabled
kubectl label node k1-node01 openvswitch=enabled
kubectl label node k1-node02 openstack-control-plane=enabled
kubectl label node k1-node02 openvswitch=enabled
kubectl label node k1-node03 openstack-control-plane=enabled
kubectl label node k1-node03 openvswitch=enabled
kubectl label node k1-node04 openstack-compute-node=enabled
kubectl label node k1-node04 openvswitch=enabled
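To confirm the labels landed where the OpenStack-Helm node selectors expect them:
kubectl get nodes -l openstack-control-plane=enabled   # should list k1-node01..03
kubectl get nodes -l openstack-compute-node=enabled    # should list k1-node04
kubectl get nodes --show-labels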
Kolla
● An OpenStack project: a tool that builds and manages Docker images for the OpenStack services
● Provides Docker images not only for the OpenStack services but also for various related applications
Kolla - Dockerfile example
[Slide shows a Kolla Dockerfile.j2 and the blocks it exposes for overrides]
Kolla build
• kolla-build -b ubuntu -t source --template-override template-overrides.j2 keystone
• The --template-override file (template-overrides.j2) overrides blocks defined in the Dockerfile.j2 (a sketch follows below)
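The override file on the slide is not reproduced; as a hedged sketch, a Kolla template override extends the parent Dockerfile template and replaces named blocks (the block name and the extra package are illustrative):
# template-overrides.j2 (sketch; available block names depend on the image and Kolla release)
{% extends parent_template %}

{% block keystone_footer %}
RUN pip --no-cache-dir install python-ldap    # example: bake an extra dependency into the keystone image
{% endblock %}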
Helm
• Automation tool for managing Kubernetes applications.
• Helm charts help you define, install, and upgrade Kubernetes applications.
[Diagram: Helm architecture (client / server)]
Helm chart structure
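The structure screenshot is not reproduced here; for orientation, the usual layout of a Helm (v2) chart, using the minio chart from the following slides as the example:
minio/
  Chart.yaml          # chart name and version
  values.yaml         # default configuration values
  requirements.yaml   # optional: dependency charts
  charts/             # bundled dependency charts
  templates/          # Kubernetes manifest templates rendered by Helm
    deployment.yaml
    _helpers.tpl      # named helpers such as "minio.fullname"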
Kubernetes manifest format
• Manifest file for deploying a minio pod
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: minio
  labels:
    app: minio
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: minio
    spec:
      affinity:
        nodeAffinity:
          …
      containers:
      - name: minio
        image: minio/minio:latest
        imagePullPolicy: Always
        args:
        - server
        - /storage
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "minio.fullname" . }}
  labels:
    app: {{ template "minio.fullname" . }}
spec:
{{- if eq .Values.mode "shared" }}
  replicas: {{ .Values.replicas }}
{{- end }}
  template:
    metadata:
      name: {{ template "minio.fullname" . }}
      labels:
        app: {{ template "minio.fullname" . }}
    spec:
      volumes:
        - name: export
{{- if .Values.persistence.enabled }}
          persistentVolumeClaim:
            claimName: {{ template "minio.fullname" . }}
{{- else }}
          emptyDir: {}
{{- end }}
        - name: minio-server-config
          configMap:
            name: {{ template "minio.fullname" . }}-config-cm
        - name: minio-user
          secret:
            secretName: {{ template "minio.fullname" . }}-user
      containers:
        - name: minio
          image: {{ .Values.image }}:{{ .Values.imageTag }}
          …
Helm Chart Template (above) and Values.yaml (below):
replicas: 1
image: "minio/minio:latest"
imagePullPolicy: "Always"
…
• Rendering: actual values are assigned from the separate values file
• The rendered manifest is passed to the k8s API (where is rendering done?)
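One way to inspect the rendered manifest before anything reaches the cluster (Helm v2 client; the release name is illustrative):
helm install ./minio --name minio-demo --values values.yaml --dry-run --debug   # print the rendered manifests only
helm install ./minio --name minio-demo --values values.yaml                     # install for real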
OpenStack-Helm
• Collection of charts for managing most OpenStack services.
• Started in November 2016 by AT&T
(https://github.com/openstack/openstack-helm)
OpenStack-Helm > Keystone chart structure
[Screenshot of the keystone chart tree; callouts: launches the keystone pod, test pod, contains values, keystone config]
SKT’s pipeline > Wrapper Chart
• Customizes values for the target environment.
• The original chart is not touched; the wrapper only holds values to override (sketch below).
• Values merged -> SKT chart generated -> pushed into the internal repository.
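A hedged sketch of the wrapper idea in Helm v2 terms: the upstream chart becomes a dependency and only the override values live in the wrapper (chart names, versions, repository URL, and value keys below are illustrative):
# wrapper-keystone/requirements.yaml (sketch)
dependencies:
  - name: keystone
    version: 0.1.0
    repository: http://charts.taco.local/openstack-helm   # hypothetical internal chart repository

# wrapper-keystone/values.yaml (sketch): only SKT-specific overrides, passed to the keystone subchart
keystone:
  images:
    api: registry.taco.local/kolla/ubuntu-source-keystone:0.1.0   # hypothetical internal Kolla image tag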
Deployment Profiles
• Needed to deploy OpenStack clusters into various environments
• Chart URLs + configuration overrides
(e.g., network config, repository URL, and so on)
• Open-source orchestration tools
• Landscaper
• Started in Nov 2016 by Eneco.
• Each config file covers a single chart -> many small configs
• Pretty stable, but only provides basic functionality.
• Armada
• Started in Feb 2017 by AT&T.
• One big global config file for all charts
• Not as stable as Landscaper yet, but has extra functionality.
(Pre/post actions, undeploy, chart grouping, and so on.)
• We’re trying to migrate from Landscaper to Armada (a manifest sketch follows this list).
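Since Armada drives the profile apply step, a rough sketch of its manifest format as we understand it; the schema names come from the upstream project, but the fields shown are illustrative and may not match a given Armada release exactly:
---
schema: armada/Chart/v1
metadata:
  schema: metadata/Document/v1
  name: keystone
data:
  chart_name: keystone
  release: keystone
  namespace: openstack
  source:
    type: git
    location: https://github.com/openstack/openstack-helm
    subpath: keystone
    reference: master
  values: {}            # env-specific overrides are merged in here
  dependencies: []
---
schema: armada/ChartGroup/v1
metadata:
  schema: metadata/Document/v1
  name: openstack-identity
data:
  description: identity services
  chart_group:
    - keystone
---
schema: armada/Manifest/v1
metadata:
  schema: metadata/Document/v1
  name: dev-profile     # the PROFILE_NAME applied with "armada apply"
data:
  release_prefix: taco
  chart_groups:
    - openstack-identity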
Deployment Profiles (cont.)
• Charts URL + env-specific configuration overrides
Deployment Profiles (cont.)
• Profile: Templates + original values + wrapper values + env-specific values
• To apply: “$ armada apply PROFILE_NAME”
[Diagram: each chart (Keystone, Glance, …) in the SKT wrapper chart carries templates, original values, values by wrapper, and per-environment values ('dev', 'stg'); the 'dev' profile selects the 'dev' values for every chart]
Deployment Profiles (cont.)
• Profile: Templates + original values + wrapper values + env-specific values
• To apply: “$ armada apply PROFILE_NAME”
[Diagram: each chart (Keystone, Glance, …) in the SKT wrapper chart carries templates, original values, values by wrapper, and per-environment values ('dev', 'stg'); the 'stg' profile selects the 'stg' values for every chart]
Challenges > Summary (#1)
▪ Too many artifacts to track or manage
▪ Tools (binaries)
▪ Docker, Kubernetes, Helm, Landscaper or Armada, …
▪ Docker images & sources
▪ Base OS images & Kolla images
▪ Wrapper images
▪ Helm charts & sources
▪ OpenStack-Helm charts
▪ Wrapper charts: additional templates, overriding SKT-specific values
▪ Deployment profiles
▪ For various environments (e.g., 'dev', 'stg', 'prod', …)
▪ Versioning and promotion policies for the above artifacts
▪ Isolated environment for each build job
▪ e.g., daemonset conflicts for OVS or libvirt
▪ etc.
Challenges > Summary (#2)
▪ Too many artifacts to track or manage
▪ Tools (binaries)
▪ Docker, Kubernetes, Helm, Landscaper or Armada, …
▪ Docker images & sources
▪ Base OS images & Kolla images
▪ Wrapper images
▪ Helm charts & sources
▪ OpenStack-Helm charts
▪ Wrapper charts: additional templates, overriding SKT-specific values
▪ Deployment profiles
▪ For various environments (e.g., 'dev', 'stg', 'prod', …)
▪ Versioning and promotion policies for the above artifacts
▪ Isolated environment for each build job
▪ e.g., daemonset conflicts for OVS or libvirt
▪ etc.
Challenge > Track upstream changes
▪ Problems: too many things to track
▪ Version upgrades of tools
▪ K8s or Helm upgrade -> broken build!
▪ OpenStack source, Kolla source (trivial)
▪ OpenStack-Helm project (the major one)
▪ Fast and actively moving target
▪ Hard to track upstream changes immediately by hand
▪ Periodic sync/merge -> too many changes -> broken build (painful to fix)
▪ Solution: automation (on Jenkins)
▪ Fetch hourly -> build SKT chart -> test -> merge if the test passes (sketch below)
▪ If the test fails, create a ticket and notify developers about the failure
▪ Jira plugin for creating the ticket
▪ Slack plugin for the notification
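The Jenkins job itself is not shown here; a minimal shell sketch of what the hourly sync stage amounts to, with every script and branch name hypothetical:
set -e                                              # abort the job at the first failing step
git fetch upstream                                  # upstream = the openstack-helm remote
git checkout -B upstream-sync upstream/master       # candidate branch with the latest upstream changes
./build-skt-charts.sh                               # hypothetical: rebuild the SKT wrapper charts
./run-deployment-tests.sh                           # hypothetical: deploy to an isolated env and validate
git checkout master
git merge --no-ff upstream-sync                     # only reached when the steps above succeed
On failure, the Jenkins Jira and Slack plugins handle the ticket and the notification, as noted above.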
Challenge > Versioning and Promotion
▪ Problems
▪ Need to be able to identify the relationship between related artifacts
(e.g., Kolla image <-> Helm chart)
▪ Solution
▪ Consistent versioning
▪ Dev: after the build stage
▪ Stage: after the daily integration test
▪ Release: on demand, by hand
                              Dev (hourly)   Stage (daily)   Release (manual)
Artifact      Kolla image     0.1.0          yy.mm.dd        1.0.0 -> … -> 1.0.x
              Helm chart
              & profile       0.1.0          yy.mm.dd        1.0.0 -> … -> 1.0.x
Source code   Branch          Master         stage           ReleaseX
              Tag             N/A            yy.mm.dd        1.0.0 -> … -> 1.0.x
What’s missing
▪ CI for Kubernetes itself
▪ Track Kubernetes version upgrades
▪ Apply new versions ASAP with some validation tests
▪ Resiliency test (like Chaos Monkey)
▪ Make sure the OpenStack cluster tolerates node failures
▪ Randomly terminate resources such as pods and daemonsets in the cluster at a specified interval and duration (a minimal sketch follows this list)
▪ TACO client tool
▪ A CLI tool that lets users access most TACO functions easily
▪ Deploy/undeploy/patch/upgrade OpenStack services
▪ Add/remove OpenStack nodes (usually compute nodes)
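As a flavor of the resiliency test referenced in the list above, a deliberately tiny sketch that deletes one random pod in the openstack namespace on a fixed interval; the namespace and interval are illustrative:
# kill one random OpenStack pod every 10 minutes and let Kubernetes/Helm recreate it
while true; do
  kubectl get pods -n openstack -o name | shuf -n 1 | xargs -r kubectl delete -n openstack
  sleep 600
done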
TACO Milestone
• Current Status
• Currently a beta release
• Upstream-related work
• Cooperating closely with members of the OpenStack-Helm project (e.g., AT&T, Intel)
• 3rd place in the OpenStack-Helm code contribution ranking (as of 11/02/17)
• OpenStack-Helm is now an official project: join us!
• Future plan
• Once the missing parts are done -> Production-Ready Release!
• Release Plan
• 2018: Greenfield Production Deployment (SKT Internal Private Cloud)
• 2018: Feasibility Test and PoC for Telco Infra (e.g., dataplane acceleration, security, etc.)
• 2019~ : Production Deployment for Telco Infra
• TBD: Infra Service that provides both VMs and Containers & Container-Based SW Delivery Platform
Q & A
Questions?
