Creating a multinode cluster using kubeadm
Cluster networking is the big-ticket item; this walkthrough uses Flannel as the pod network add-on.
Support is available to customers with an Oracle Linux Premier Support
subscription, and is restricted to the combination of Oracle Container Services for
use with Kubernetes and Oracle Container Runtime for Docker on Oracle Linux 7.
https://blogs.oracle.com/linux/announcing-oracle-container-services-119-for-use-with-kubernetes
https://docs.oracle.com/cd/E52668_01/E88884/html/requirements-network.html
Environment:
OS: Oracle Linux 7 (any RHEL 7-compatible distribution will be similar)
Hosts: master (master node), node01 and node02 (worker nodes)
Disable SELinux and swap:
First, check whether SELinux is enabled:
sestatus
If this command returns "SELinux status: enabled", you can disable SELinux with the
following command.
setenforce 0
The command above only disables SELinux for the current boot; the change is
reverted across reboots. Therefore, disable it permanently by opening the
file /etc/sysconfig/selinux and replacing the current SELINUX directive with a value
of disabled so it looks like this:
SELINUX=disabled
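The permanent edit can also be scripted. A minimal sketch, run here against a scratch copy rather than the real /etc/sysconfig/selinux:

```shell
# Work on a scratch copy for illustration; target /etc/sysconfig/selinux
# on the real hosts.
cfg=$(mktemp)
echo 'SELINUX=enforcing' > "$cfg"

# Rewrite whatever the current SELINUX directive is to 'disabled'.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
cat "$cfg"   # SELINUX=disabled
```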
Disable SWAP for kubernetes installation by running the following commands.
swapoff -a
And then edit the '/etc/fstab' file.
vim /etc/fstab
Comment out the line that mounts swap (the entry whose type is 'swap') so swap is not re-enabled at the next boot.
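Commenting the swap entry can likewise be scripted. A sketch, shown against a scratch copy rather than the real /etc/fstab:

```shell
# Scratch copy standing in for /etc/fstab on the real hosts.
fstab=$(mktemp)
printf '%s\n' \
  'UUID=1111-2222 /    ext4 defaults 0 0' \
  'UUID=3333-4444 swap swap defaults 0 0' > "$fstab"

# Prefix a '#' to every not-yet-commented line that mounts swap.
sed -ri '/\sswap\s/ s/^([^#])/#\1/' "$fstab"
grep swap "$fstab"   # #UUID=3333-4444 swap swap defaults 0 0
```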
Enable br_netfilter Kernel Module
The br_netfilter module is required for kubernetes installation. Enable this kernel
module so that the packets traversing the bridge are processed by iptables for filtering
and for port forwarding, and the kubernetes pods across the cluster can communicate
with each other.
[root@master ~]# lsmod|grep br_netfilter
[root@master ~]# modprobe br_netfilter
[root@master ~]# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
[root@master ~]#
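Loading the module alone does not guarantee the bridge-nf-call sysctls are turned on (the docker info output later in this guide even warns about them being disabled). A sketch of the usual companion step, assuming the standard /etc/sysctl.d mechanism:

```shell
# Persist the bridge sysctls kubeadm expects, then apply them now.
cat <<'EOF' > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
```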
Adjust hosts file:
On all nodes
[root@node01 ~]# cat /etc/hosts
10.142.0.2 node01
10.142.0.3 node02
10.142.0.4 master
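The same mapping can be appended in one step. A sketch against a scratch file (append to the real /etc/hosts on every node):

```shell
# Scratch file standing in for /etc/hosts.
hosts=$(mktemp)
cat <<'EOF' >> "$hosts"
10.142.0.2 node01
10.142.0.3 node02
10.142.0.4 master
EOF

# Hypothetical helper for this sketch: resolve a name from the file.
lookup() { grep -w "$1" "$hosts" | awk '{print $1}'; }
lookup master   # 10.142.0.4
```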
Install Docker-ce
It's time to install Docker. On all three machines, install the package
dependencies for docker-ce with the following command.
yum install -y yum-utils device-mapper-persistent-data lvm2
Add the docker repository to the system and install docker-ce using the yum command.
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
Wait for the docker-ce installation.
Install Kubernetes
This is also done on all three servers. First we need to create a repository
entry for yum
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Now install the kubernetes packages kubeadm, kubelet, and kubectl using the yum
command below.
yum install -y kubelet kubeadm kubectl
After the installation is complete, restart all those servers.
sudo reboot
Log in again to the server and start the services, docker and kubelet.
systemctl start docker && systemctl enable docker
systemctl start kubelet && systemctl enable kubelet
Change the cgroup-driver
We need to make sure docker-ce and kubernetes use the same 'cgroup' driver.
[root@master ~]# systemctl start docker && systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to
/usr/lib/systemd/system/docker.service.
[root@master ~]# docker info | grep -i cgroup
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Cgroup Driver: cgroupfs
[root@master ~]#
As you can see, docker is using 'cgroupfs' as its cgroup-driver.
Now run the command below to change the kubernetes cgroup-driver to 'cgroupfs'.
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Reload the systemd system and restart the kubelet service.
systemctl daemon-reload
systemctl restart kubelet
When using Docker, kubeadm will automatically detect the cgroup driver for the
kubelet and set it in the /var/lib/kubelet/kubeadm-flags.env file at runtime.
[root@node01 ~]# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1
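To script a consistency check between the two drivers, you can parse that line. A sketch against the sample contents shown above (on a real node, read the actual file and docker info instead):

```shell
# Sample line from /var/lib/kubelet/kubeadm-flags.env as shown above.
flags='KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1'

# Extract just the kubelet's cgroup driver value.
kubelet_driver=$(printf '%s\n' "$flags" | grep -o 'cgroup-driver=[a-z]*' | cut -d= -f2)
echo "$kubelet_driver"   # cgroupfs

# On a live node, compare against: docker info | grep -i 'cgroup driver'
```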
Now we're ready to configure the Kubernetes Cluster.
Kubernetes Cluster Initialization
In this step, we will initialize the kubernetes master cluster configuration.
Move the shell to the master server 'master' and run the command below to set up
the kubernetes master.
[root@master ~]# kubeadm init --apiserver-advertise-address=10.142.0.4 --pod-network-cidr=10.142.0.0/24
Note that the pod network CIDR chosen here overlaps the hosts' own 10.142.0.x
subnet; in your own environment pick an unused range (the Flannel manifest applied
later defaults to 10.244.0.0/16) and keep the manifest and this flag in agreement.
When the Kubernetes initialization is complete, you will get the result as below.
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 10.142.0.4:6443 --token y0epoc.zan7yp35sow5rorw --discovery-token-ca-cert-hash
sha256:f02d43311c2696e1a73e157bda583247b9faac4ffb368f737ee9345412c9dea4
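If you later lose the --discovery-token-ca-cert-hash value, it can be recomputed from the cluster CA certificate. A sketch of the pipeline kubeadm documents, run here against a throwaway self-signed certificate so it is self-contained (on the master, use /etc/kubernetes/pki/ca.crt):

```shell
# Throwaway cert standing in for /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -subj /CN=demo-ca \
    -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null

# SHA-256 of the DER-encoded public key, the format kubeadm expects.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```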
[root@node01 ~]# kubeadm join 10.142.0.4:6443 --token y0epoc.zan7yp35sow5rorw --discovery-token-
ca-cert-hash sha256:f02d43311c2696e1a73e157bda583247b9faac4ffb368f737ee9345412c9dea4
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable
docker.service'
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0.
Latest validated version: 18.06
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable
kubelet.service'
[discovery] Trying to connect to API Server "10.142.0.4:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.142.0.4:6443"
[discovery] Requesting info from "https://10.142.0.4:6443" again to validate TLS against the pinned
public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned
roots, will use API Server "10.142.0.4:6443"
[discovery] Successfully established connection with API Server "10.142.0.4:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the
kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object
"node01" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
[root@node01 ~]#
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
Give the flannel pods a few moments to start, then check the nodes; they report NotReady until the pod network is up.
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 7m53s v1.13.1
node01 NotReady <none> 4m13s v1.13.1
node02 NotReady <none> 3m36s v1.13.1
[root@master ~]#
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-86c58d9df4-xrpvf 1/1 Running 1 5h17m
kube-system coredns-86c58d9df4-z8pt8 1/1 Running 1 5h17m