
Kubernetes on Bare Metal at the Kitchener-Waterloo Kubernetes and Cloud Native Meetup


Charlie Drage discussed Kubernetes on bare metal at last week's Kubernetes and Cloud Native meetup in Kitchener-Waterloo. His presentation demonstrated how to deploy Kubernetes on bare-metal servers. Charlie is an active Kubernetes maintainer; his contributions include fixing common issues with bare-metal servers and using Ansible to build clusters with kubeadm.



  1. 1. Kubernetes on Bare-metal (the fun and sad parts) Charlie Drage Red Hat November 26th, 2018 Lightning-ish talk
  2. 2. I work on the Developer Tools team at Red Hat I deal with *a lot* of Kubernetes I maintain Kompose (Docker Compose to Kubernetes tool) I’m frugal and I don’t like using paid Kubernetes services I work on OpenShift tools (project called Odo) (short) Introduction
  3. 3. Why bare-metal?
  4. 4. You get to use your spare computers! Development cluster Home Monitoring
  5. 5. You get to learn about Kubernetes!
  6. 6. It’s free! (well, not totally if you pay for your electricity)
  7. 7. You can pick and choose whatever OS and environment you want!
  8. 8. Who’s using bare-metal clusters?
  9. 9. Ever visit Chick-Fil-A?
  10. 10. Seriously: https://medium.com/@cfatechblog/bare-metal-k8s-clustering-at-chick-fil-a-scale-7b0607bd3541 You’re visiting a Kubernetes datacenter!
  11. 11. At every restaurant! (2,200 restaurants, 6,600 devices!)
  12. 12. Who else
  13. 13. https://www.youtube.com/watch?v=7rqvRwfZHF4
  14. 14. Why Wikipedia created a Kubernetes infrastructure (summary) - Kubernetes is so good that it only takes 4 people to manage the entire infrastructure - Super versatile - Containers! Containers! Containers! - Single-node failure management
  15. 15. Okay, you’ve convinced me, let’s create a cluster
  16. 16. Wait! Let’s look at some cloud offerings first
  17. 17. It’s *so* easy to set up a cluster (if it’s paid for…) - Using Kops or KubeSpray kops create cluster --node-count=2 --node-size=t2.medium --zones=us-east-1a --name=${KOPS_CLUSTER_NAME} - Using Google Kubernetes Engine gcloud container clusters create - Using any other paid service (DigitalOcean, IBM Cloud, Oracle, etc…) That’s all it takes when you pay for Kubernetes as a Service
  18. 18. Everything is taken care of with the Clouuudddddd They take care of this for you: ● Deployment ● Volumes ● LoadBalancing ● Ingress ● Logging and monitoring ● Automatic Cluster Scaling ● Node Auto-Repair You pay them, and they take care of all of the above.
  19. 19. These gifs will make sense later
  20. 20. Let’s use all these awesome features!
  21. 21. Setting up bare metal
  22. 22. Easy since 2017! - Before kubeadm it was a pain in the butt. Now it’s painless! - Want to know how it used to be? Set it up using Kubernetes the Hard Way (https://github.com/kelseyhightower/kubernetes-the-hard-way) - Networking sucked before CNI (Container Network Interface); now we can choose between Flannel, Calico, Canal, etc. without having to worry about networking
  23. 23. Instructions from https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#before-you-begin
  24. 24. Debian, Ubuntu, CentOS, Fedora, HypriotOS (Raspberry Pi)
  25. 25. sudo apt-get install kubeadm or sudo yum install kubeadm
  26. 26. kubeadm init master
  27. 27. kubeadm init --pod-network-cidr=10.244.0.0/16
  28. 28. kubeadm join node(s)
  29. 29. kubeadm join --token TOKEN 192.168.1.100:6443 --discovery-token-ca-cert-hash HASH
  30. 30. kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml set up the networking
  31. 31. Done!
  32. 32. Extreme laziness - Using Ansible! - https://github.com/kairen/kubeadm-ansible - As long as you have either CentOS, Fedora, Ubuntu or Debian it will do it all for you
  33. 33. kubeadm-ansible $ vim hosts.ini [master] 192.16.35.12 [node] 192.16.35.[10:11] [kube-cluster:children] master node
  34. 34. kubeadm-ansible $ ansible-playbook site.yaml ... ==> master1: TASK [addon : Create Kubernetes dashboard deployment] ************************** ==> master1: changed: [192.16.35.12 -> 192.16.35.12] ==> master1: ==> master1: PLAY RECAP ********************************************************************* ==> master1: 192.16.35.10 : ok=18 changed=14 unreachable=0 failed=0 ==> master1: 192.16.35.11 : ok=18 changed=14 unreachable=0 failed=0 ==> master1: 192.16.35.12 : ok=34 changed=29 unreachable=0 failed=0
  35. 35. kubeadm-ansible $ scp k8s@k8s-master:/etc/kubernetes/admin.conf . $ export KUBECONFIG=~/admin.conf $ kubectl get node NAME STATUS AGE VERSION master1 Ready 22m v1.6.3 node1 Ready 20m v1.6.3 node2 Ready 20m v1.6.3
  36. 36. The state of bare-metal support within Kubernetes
  37. 37. So why aren’t there many people using bare-metal k8s?
  38. 38. GKE, AWS, DigitalOcean, etc. Bare metal users
  39. 39. I’ll explain why
  40. 40. Remember these? ● Deployment ● Volumes ● LoadBalancing ● Ingress ● Logging and monitoring ● Automatic Cluster Scaling ● Node Auto-Repair
  41. 41. You’ve got to set it up yourself ● Deployment ● Volumes ● LoadBalancing ● Ingress ● Logging and monitoring ● Automatic Cluster Scaling ● Node Auto-Repair
  42. 42. Deployment: Helm to the rescue! Which is an AWESOME tool
  43. 43. Helm: Install $ kubectl --namespace kube-system create serviceaccount tiller $ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller $ helm init --service-account tiller --upgrade
  44. 44. Helm: Usage # Deploying Wordpress $ helm install --name wordpress stable/wordpress
  45. 45. Volumes on Bare Metal - Volumes provide dynamic storage for containers - SO MANY OPTIONS TO CHOOSE FROM! (26 options) - For a home cluster, you’d go for either nfs or hostPath (mounting directly onto the cluster) - But even after setup… why can’t I dynamically create volumes? Only certain provisioners support dynamic provisioning, and most of those are cloud services. - We’ve got Dynamic NFS Volumes: https://github.com/kubernetes-incubator/external-storage
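Once a dynamic provisioner is in place, claiming storage is just a standard PersistentVolumeClaim against its StorageClass. A minimal sketch, assuming the `nfs-client` class created by the provisioner chart on the next slide (the claim name here is made up for illustration):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim              # hypothetical name, for illustration only
  namespace: default
spec:
  storageClassName: nfs-client  # class created by the nfs-client-provisioner chart
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

The provisioner watches for claims against its class and carves out a directory on the NFS export for each one, so a matching `pvc-…` PersistentVolume appears automatically.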
  46. 46. Volumes: Install # On an NFS host $ docker run -d --restart=always --net=host --name nfs --privileged -v /mnt/storage/k8s:/nfsshare -e SHARED_DIRECTORY=/nfsshare cdrage/nfs-server-alpine # Install nfs support on each node $ sudo apt-get install nfs-common -y # Finally, we setup the volumes! $ helm install stable/nfs-client-provisioner -n nfs-client --set nfs.server=192.168.1.91 --set nfs.path=/ --set storageClass.defaultClass=true
  47. 47. Volumes: Usage $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE data-loopy-hydra-mariadb-0 Bound pvc-ad2d3724-edce-11e8-895e-52540046b08b 8Gi RWO nfs-client 7d data-wordpress-mariadb-0 Bound pvc-81aeb087-edd1-11e8-895e-52540046b08b 8Gi RWO nfs-client 7d wordpress-wordpress Bound pvc-81a56a8e-edd1-11e8-895e-52540046b08b 10Gi RWO nfs-client 7d ~ $ kubectl get pv NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-81a56a8e-edd1-11e8-895e-52540046b08b 10Gi RWO Delete Bound default/wordpress-wordpress nfs-client 7d pvc-81aeb087-edd1-11e8-895e-52540046b08b 8Gi RWO Delete Bound default/data-wordpress-mariadb-0 nfs-client 7d pvc-ad2d3724-edce-11e8-895e-52540046b08b 8Gi RWO Delete Bound default/data-loopy-hydra-mariadb-0 nfs-client 7d
  48. 48. LoadBalancing on Bare Metal - LoadBalancing assigns an IP Address (ideally a public one) to a service - If not, you’re forced to use an Ingress, NodePort or ClusterIP (internal IP) instead. - Really only one option, and that’s MetalLB (https://github.com/google/metallb) - Uses local IPs (or optionally BGP routers) to distribute IP Addresses - Seems complicated, but it’s super easy to setup
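With MetalLB running, any Service of type LoadBalancer gets an address from the configured pool. A minimal sketch (the service name and selector are made up for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-web           # hypothetical name, for illustration only
  namespace: default
spec:
  type: LoadBalancer       # MetalLB assigns an external IP from its address pool
  selector:
    app: demo-web
  ports:
    - port: 80
      targetPort: 8080
```

`kubectl get svc` should then show an EXTERNAL-IP from the configured pool, as in the wordpress example two slides down.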
  49. 49. LoadBalancing: Install $ helm install --name metallb stable/metallb # Create a ConfigMap apiVersion: v1 kind: ConfigMap metadata: namespace: default name: metallb-config data: config: | address-pools: - name: default protocol: layer2 addresses: - 192.168.1.96-192.168.1.100
  50. 50. LoadBalancing: Usage $ kubectl get svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes 10.96.0.1 <none> 443/TCP 22d wordpress-mariadb 10.103.71.121 <none> 3306/TCP 7d wordpress-wordpress 10.99.189.46 192.168.1.98 80:30295/TCP,443:31509/TCP 7d
  51. 51. Ingress on Bare Metal - Ingress exposes https and http traffic routes - Kubernetes acts as a master port 80/443 HTTP server and routes traffic - Most popular implementation is kubernetes/nginx-ingress
  52. 52. Ingress: Install $ helm install stable/nginx-ingress --namespace nginx-ingress --set controller.hostNetwork=true,controller.kind=DaemonSet # Create an Ingress apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx name: test-ingress namespace: default spec: rules: - host: test.charliedrage.com http: paths: - path: /foobar backend: serviceName: myhttpservice servicePort: 8080
  53. 53. Ingress: Usage ▶ kubectl get ingress NAME HOSTS ADDRESS PORTS AGE test-ingress test.charliedrage.com 80, 443 6d
  54. 54. Monitoring and Alerts on Bare Metal - Using Prometheus for data collection - Grafana to create all those pretty graphs
  55. 55. Monitoring and Alerts: Install $ helm install --name prometheus stable/prometheus $ helm install --name grafana stable/grafana
  56. 56. Monitoring and Alerts: Usage $ export POD_NAME=$(kubectl get pods --namespace default -l "app=grafana" -o jsonpath="{.items[0].metadata.name}") $ kubectl --namespace default port-forward $POD_NAME 3000 $ kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
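The last command above needs the `base64 --decode` step because Kubernetes stores Secret values base64-encoded, and `-o jsonpath` returns the raw encoded string. A runnable sketch of just that decode step, using a made-up stand-in for the real admin password:

```shell
# Secret data comes back base64-encoded from the API; decode it locally.
# 'not-the-real-password' is a hypothetical value for illustration.
encoded=$(printf 'not-the-real-password' | base64)
echo "$encoded"
printf '%s' "$encoded" | base64 --decode
echo
```

The same pattern works for any key under `.data` in a Secret.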
  57. 57. Two more! ● Deployment ● Volumes ● LoadBalancing ● Ingress ● Logging and monitoring ● Automatic Cluster Scaling ● Node Auto-Repair
  58. 58. Automatic Cluster Scaling on Bare Metal - Haha - There’s https://github.com/kubernetes/autoscaler with support for only cloud providers. - Please update issue #1060 for me when you push a PR, it’s been inactive since July, thanks!
  59. 59. Node Auto Repair on Bare Metal - Haha x2 - Nope! But there’s support for it! - I swear, there is actually support for this
  60. 60. DollarShaveClub.com These are actually from one of their commercials
  61. 61. I’m serious, this is the only support
  62. 62. Why in the world is it like this?
  63. 63. The truth: Developers are lazy. It’s easier to let someone else take care of it.
  64. 64. It’s still a viable solution! Just with caveats and some setup
  65. 65. And most importantly, you’ll learn!
  66. 66. We’re getting there! (slowly) ● We’ve got: kubeadm, kubespray, kops with bare metal support to make it easier for us ● Kubernetes has been modularizing / splitting off parts of the ecosystem ● We’ve got Kubernetes SIGs (Special Interest Groups) adding new projects all the time ● Maintainers have added bare-metal support! For example, kops added bare-metal support when I requested it, but it was subsequently dropped in favour of kubeadm… ● Ansible is (sometimes) a decent solution for setting up bare metal ● Components are slowly coming out of beta / alpha (nfs AutoProvisioner, MetalLB)
  67. 67. Go try it out! Don’t be lazy!
  68. 68. Follow me on Twitter / Github @cdrage charliedrage.com/notes/kubernetes
  69. 69. Thanks for listening
  70. 70. Q&A?
