Deploying Kubernetes on QingCloud CoreOS Virtual Machines
Felix.Liang
Kubernetes Cluster Topology
Kubernetes Master (10.60.33.151):
- etcd
- kube-apiserver
- kube-scheduler
- kube-controller-manager
Kubernetes Node (10.60.49.71):
- flannel
- docker
- kubelet
- kube-proxy
Kubernetes Node (10.60.135.238):
- flannel
- docker
- kubelet
- kube-proxy
Software Packages (CoreOS ships with etcd and docker by default)
Flannel, latest version at the time of writing: 0.5.2
wget https://github.com/coreos/flannel/releases/download/v0.5.2/flannel-0.5.2-linux-amd64.tar.gz
tar zxvf flannel-0.5.2-linux-amd64.tar.gz
mkdir -p /opt/bin    # /opt/bin does not exist on a stock CoreOS image
cp flannel-0.5.2/flanneld /opt/bin
Kubernetes, latest version at the time of writing: 1.0.1
wget https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v1.0.1/kubernetes.tar.gz
tar zxvf kubernetes.tar.gz
tar zxvf kubernetes/server/kubernetes-server-linux-amd64.tar.gz -C kubernetes/server    # extract under kubernetes/server so the path below matches
cd kubernetes/server/kubernetes/server/bin
cp kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubectl kubelet /opt/bin
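A quick sanity check that the binaries are in place and executable (flag spellings per the flannel 0.5 and Kubernetes 1.0 releases; if a flag differs in your build, running the binary without arguments also confirms it starts):
/opt/bin/flanneld -version
/opt/bin/kube-apiserver --version
/opt/bin/kubectl version --client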
Kubernetes Master Configuration
etcd unit file: /etc/systemd/system/k8setcd.service
[Unit]
Description=Etcd Key-Value Store for Kubernetes Cluster
[Service]
ExecStart=/usr/bin/etcd2 \
  --name 'default' \
  --data-dir '/root/Data/etcd/data' \
  --advertise-client-urls 'http://0.0.0.0:4001' \
  --listen-client-urls 'http://0.0.0.0:4001'
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
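Once k8setcd.service has been started (see the start section below), a simple check that etcd answers on the client port:
curl http://127.0.0.1:4001/version
# prints the etcd server version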
Kubernetes Master Configuration
kube-apiserver unit file: /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
After=k8setcd.service
Wants=k8setcd.service
[Service]
ExecStart=/opt/bin/kube-apiserver \
  --v=3 \
  --admission_control=NamespaceLifecycle,NamespaceAutoProvision,LimitRanger,ResourceQuota \
  --address=0.0.0.0 \
  --port=8080 \
  --etcd_servers=http://127.0.0.1:4001 \
  --service-cluster-ip-range=10.0.0.0/24
ExecStartPost=-/bin/bash -c "until /usr/bin/curl http://127.0.0.1:8080; do echo 'waiting for API server to come online...'; sleep 3; done"
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
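After the API server is running, its health and discovery endpoints give a quick go/no-go check:
curl http://127.0.0.1:8080/healthz
# expected output: ok
curl http://127.0.0.1:8080/api
# lists the supported API versions (v1 in this release)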
Kubernetes Master Configuration
kube-scheduler unit file: /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
After=k8setcd.service
After=kube-apiserver.service
Wants=k8setcd.service
Wants=kube-apiserver.service
[Service]
ExecStart=/opt/bin/kube-scheduler \
  --v=3 \
  --master=http://127.0.0.1:8080
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Kubernetes Master Configuration
kube-controller-manager unit file: /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
After=k8setcd.service
After=kube-apiserver.service
Wants=k8setcd.service
Wants=kube-apiserver.service
[Service]
ExecStart=/opt/bin/kube-controller-manager \
  --v=3 \
  --master=http://127.0.0.1:8080
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
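kube-scheduler and kube-controller-manager also expose local health endpoints; on their default ports in this Kubernetes version (10251 and 10252, assuming the defaults are not overridden) a running instance answers:
curl http://127.0.0.1:10251/healthz   # kube-scheduler
curl http://127.0.0.1:10252/healthz   # kube-controller-manager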
Starting the Kubernetes Master Components
systemctl enable k8setcd.service
systemctl enable kube-apiserver.service
systemctl enable kube-scheduler.service
systemctl enable kube-controller-manager.service
systemctl start k8setcd.service
systemctl start kube-apiserver.service
systemctl start kube-scheduler.service
systemctl start kube-controller-manager.service
# kubectl get services
NAME         LABELS                                    SELECTOR   IP(S)      PORT(S)
kubernetes   component=apiserver,provider=kubernetes   <none>     10.0.0.1   443/TCP
# kubectl get endpoints
NAME         ENDPOINTS
kubernetes   10.60.33.151:6443
# kubectl get nodes
NAME      LABELS    STATUS
(no nodes have registered yet)
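If any unit fails to come up, the systemd journal is the first place to look; reloading systemd is also worth doing whenever a unit file is edited after it has been started (generic systemd housekeeping, not specific to this setup):
systemctl daemon-reload
systemctl status kube-apiserver.service
journalctl -u kube-apiserver.service -n 50 --no-pager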
Other Kubernetes Master Settings
Configure the flannel network in etcd:
etcdctl set /coreos.com/network/config '{ "Network": "10.100.0.0/16", "Backend": { "Type": "udp", "Port": 8285 } }'
Configure firewall rules in the QingCloud console:
Kubernetes Nodes need to reach port 4001 (etcd) and port 8080 (kube-apiserver) on the Kubernetes Master.
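The stored flannel configuration can be read back to confirm it landed under the expected key:
etcdctl get /coreos.com/network/config
# { "Network": "10.100.0.0/16", "Backend": { "Type": "udp", "Port": 8285 } }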
Kubernetes Node Configuration
flannel unit file: /etc/systemd/system/flannel.service
[Unit]
Description=Flannel for Overlay Network
[Service]
ExecStart=/opt/bin/flanneld \
  -v=3 \
  -etcd-endpoints=http://10.60.33.151:4001
ExecStartPost=-/bin/bash -c "until [ -e /var/run/flannel/subnet.env ]; do echo 'waiting for write.'; sleep 3; done"
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
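Once flanneld has acquired a subnet lease it writes /var/run/flannel/subnet.env, the file the docker unit below consumes via EnvironmentFile; the values shown are illustrative for this 10.100.0.0/16 network with the udp backend:
cat /var/run/flannel/subnet.env
# FLANNEL_SUBNET=10.100.18.1/24
# FLANNEL_MTU=1472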
Kubernetes Node Configuration
docker unit file: /etc/systemd/system/docker.service
[Unit]
Description=Docker container engine configured to run with flannel
Requires=flannel.service
After=flannel.service
[Service]
EnvironmentFile=/var/run/flannel/subnet.env
ExecStartPre=-/usr/bin/ip link set dev docker0 down
ExecStartPre=-/usr/sbin/brctl delbr docker0
ExecStart=/usr/bin/docker -d -s=btrfs -H fd:// --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
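After docker restarts with the flannel-provided --bip, the docker0 bridge should sit inside that node's flannel subnet and the btrfs storage driver should be active; both are easy to eyeball:
ip addr show docker0 | grep inet
docker info | grep -i 'storage driver'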
Kubernetes Node Configuration
kubelet unit file: /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Wants=docker.service
[Service]
ExecStart=/opt/bin/kubelet \
  --v=3 \
  --chaos_chance=0.0 \
  --container_runtime=docker \
  --hostname_override=10.60.135.238 \
  --address=10.60.135.238 \
  --api_servers=10.60.33.151:8080 \
  --port=10250
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
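The unit above is written for node 10.60.135.238; the other node (10.60.49.71) uses the same file with only the address-related flags changed (a fragment of the ExecStart, not a complete unit):
  --hostname_override=10.60.49.71 \
  --address=10.60.49.71 \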
Kubernetes Node Configuration
kube-proxy unit file: /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes proxy server
After=docker.service
Wants=docker.service
[Service]
ExecStart=/opt/bin/kube-proxy --v=3 --master=http://10.60.33.151:8080
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
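kube-proxy in this release runs as a userspace proxy and installs redirect rules in the iptables nat table; once it is running, the KUBE-* chains it creates can be listed (chain names are an implementation detail, so a broad grep is used):
iptables -t nat -L -n | grep -i kube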
Starting the Kubernetes Node Components
systemctl enable flannel.service
systemctl enable docker.service
systemctl enable kubelet.service
systemctl enable kube-proxy.service
systemctl start flannel.service
systemctl start docker.service
systemctl start kubelet.service
systemctl start kube-proxy.service
# kubectl get nodes
NAME            LABELS                                  STATUS
10.60.135.238   kubernetes.io/hostname=10.60.135.238    Ready
10.60.49.71     kubernetes.io/hostname=10.60.49.71      Ready
Resulting network interfaces
Kubernetes Master:  eth0 10.60.33.151
Kubernetes Node:    eth0 10.60.49.71,   flannel0 10.100.18.0, docker0 10.100.18.1
Kubernetes Node:    eth0 10.60.135.238, flannel0 10.100.17.0, docker0 10.100.17.1
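A simple cross-node check of the overlay network, run from node 10.60.49.71 against the other node's docker0 address listed above:
ping -c 3 10.100.17.1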
Other Kubernetes Node Settings
Prepare the pause image. Kubernetes starts a pause container for every Pod, and the default image location (gcr.io) is blocked from mainland China:
docker pull docker.io/kubernetes/pause
docker tag docker.io/kubernetes/pause gcr.io/google_containers/pause:0.8.0
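Pre-pulling and re-tagging works; an alternative is to point the kubelet at a reachable registry through its pod infra container flag, whose default in this version is gcr.io/google_containers/pause:0.8.0 (flag name per the 1.0 kubelet; the mirror image is the same one pulled above):
  --pod_infra_container_image=docker.io/kubernetes/pause \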
Configure firewall rules in the QingCloud console:
flannel builds the overlay network by encapsulating IP packets in UDP, so UDP port 8285 must be open between the nodes.
Creating Pods
Create a Replication Controller:
# kubectl --server="http://121.201.63.213:8080" create -f ~/Repository/kubernetes-project/kubernetes-1.0.0/examples/replication.yaml
replicationcontrollers/felix-nginx-repcrl-001
# kubectl --server="http://121.201.63.213:8080" get replicationcontrollers
CONTROLLER               CONTAINER(S)   IMAGE(S)   SELECTOR                             REPLICAS
felix-nginx-repcrl-001   nginx          nginx      app=nginx,phase=test,role=frontend   2
# kubectl --server="http://121.201.63.213:8080" get pods
NAME                           READY   STATUS    RESTARTS   AGE
felix-nginx-repcrl-001-92wo1   1/1     Running   0          19s
felix-nginx-repcrl-001-9h7b6   1/1     Running   0          19s
replication.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: felix-nginx-repcrl-001
spec:
  replicas: 2
  selector:
    app: nginx
    role: frontend
    phase: test
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
        role: frontend
        phase: test
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
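Scaling the controller afterwards does not require editing the YAML; kubectl can adjust the replica count in place (same --server flag as above):
# kubectl --server="http://121.201.63.213:8080" scale --replicas=3 replicationcontroller felix-nginx-repcrl-001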
Creating a Service
Create a service reachable from outside the cluster:
# kubectl --server="http://121.201.63.213:8080" create -f ~/Repository/kubernetes-project/kubernetes-1.0.0/examples/service.yaml
services/felix-nginx-service-001
# kubectl --server="http://121.201.63.213:8080" get services
NAME                      LABELS                                    SELECTOR                             IP(S)       PORT(S)
felix-nginx-service-001   <none>                                    app=nginx,phase=test,role=frontend   10.0.0.88   10080/TCP
kubernetes                component=apiserver,provider=kubernetes   <none>                               10.0.0.1    443/TCP
# kubectl --server="http://121.201.63.213:8080" get endpoints
NAME                      ENDPOINTS
felix-nginx-service-001   10.100.17.4:80,10.100.18.2:80
kubernetes                10.60.33.151:6443
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: felix-nginx-service-001
spec:
  ports:
  - port: 10080
    targetPort: 80
    nodePort: 30576
  selector:
    app: nginx
    role: frontend
    phase: test
  type: NodePort
  clusterIP: 10.0.0.88
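Because the service is of type NodePort, nginx is already reachable on port 30576 of either node before the load balancer exists, which makes a useful intermediate check:
curl http://10.60.49.71:30576
curl http://10.60.135.238:30576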
Create a QingCloud load balancer: an HTTP listener on port 10080 balances traffic across the two Kubernetes Nodes.
Accessing nginx from the public internet (traffic flow):
Browser
  -> QingCloud LoadBalancer (121.201.63.213:10080)
  -> NodePort 30576 on either node: 10.60.49.71:30576 or 10.60.135.238:30576
  -> kube-proxy DNAT on the node
  -> Pod felix-nginx-repcrl-001-92wo1 (eth0: 10.100.18.2) on node 10.60.49.71,
     or Pod felix-nginx-repcrl-001-9h7b6 (eth0: 10.100.17.4) on node 10.60.135.238
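End-to-end check from any machine on the public internet:
curl -I http://121.201.63.213:10080
# expect an HTTP 200 response with the nginx Server header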
