
Kubernetes Services are sooo Yesterday!


At the Kubernetes + Cloud Native meetup in Toronto in March 2019, Christopher Liljenstolpe, co-founder and CTO at Tigera, presented ‘Kubernetes Services are sooo yesterday!’ He also provided a demo of Tigera Secure. As Istio, MetalLB, and CoreDNS continue to be adopted en masse, Christopher’s review of the service landscape was especially relevant.


Kubernetes Services are sooo Yesterday!

  1. KUBERNETES SERVICES ARE SO LAST HOUR... > 6 MARCH 2019
  2. CHRISTOPHER LILJENSTOLPE, CTO, Solutions @ Tigera | @liljenstolpe | cdl@tigera.io | https://slack.projectcalico.org
  3. SO WHAT ARE WE TALKING ABOUT TONIGHT > Kubernetes is great - my pods are deploying, my deployments are life-cycling, and everything is scaling just as I… whoa, wait a minute… ok, all better now! > Now how do I find them and consume them in the cluster? ○ Ok - got that… > Now how about from outside? No cluster is an island, after all…
  4. LOGICAL SERVICES AND ENDPOINTS: Pod A, IP address 192.0.2.5/24; Pod B, IP address 198.51.100.3/24
  5. LOGICAL SERVICES AND ENDPOINTS: Pod A is scaled out to replicas Pod A’ (192.0.2.6/24), Pod A’’ (192.0.2.7/24), Pod A’’’ (192.0.2.8/24), Pod A’’’’ (192.0.2.9/24)
  6. LOGICAL SERVICES AND ENDPOINTS: the Pod A replicas are grouped as Service A
  7. LOGICAL SERVICES AND ENDPOINTS: (diagram build continues)
  8. LOGICAL SERVICES AND ENDPOINTS: the pod IPs behind Service A are tracked as Endpoints
  9. LOGICAL SERVICES AND ENDPOINTS: …specifically, in an Endpoints resource
  10. LOGICAL SERVICES AND ENDPOINTS: Service A is assigned a VIP, 172.16.0.5/24
  11. LOGICAL SERVICES AND ENDPOINTS: a Load Balancer resource fronts the VIP
  12. LOGICAL SERVICES AND ENDPOINTS: the load balancer is implemented by kube-proxy (or Istio)
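     To make the build-up above concrete: a minimal sketch of what "Service A" could look like as a manifest. The name service-a, the app: a label, and the port numbers are illustrative assumptions, not taken from the slides; the selector is what groups the Pod A replicas, Kubernetes keeps the matching pod IPs in the Endpoints object, and the VIP (172.16.0.5 in the diagram) is what clients actually dial.

        apiVersion: v1
        kind: Service
        metadata:
          name: service-a        # hypothetical name for "Service A"
        spec:
          selector:
            app: a               # assumed label carried by the Pod A replicas
          ports:
          - port: 80             # port exposed on the cluster VIP
            targetPort: 8080     # assumed port the pods listen on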
  13. KUBE-PROXY (diagram)
  14. KUBE-PROXY: kube-proxy watches Service A and its Endpoints (EP)
  15. KUBE-PROXY: kube-proxy watches Service A and its Endpoints, and programs iptables - crude load balancing with jumps (DNAT) for nodeports and pods
  16. KUBE-PROXY: kube-proxy watches Service A and its Endpoints, and programs IPVS for nodeports and pods
  17. KUBERNETES SERVICES > Provides logical “service” abstraction for a set of “endpoints” (pods) > Implemented via kube-proxy ○ Historically: userspace proxy ○ Now: iptables rules ○ Future: IPVS > Calico functions seamlessly with standard kube-proxy > Different types - for now, we will look at Cluster VIP
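     A quick, hedged way to see what kube-proxy has programmed on a node in each mode (chain names and tooling depend on the proxy mode and the host):

        # iptables mode: service VIPs and their DNAT jumps hang off the KUBE-SERVICES chain
        sudo iptables -t nat -L KUBE-SERVICES -n | head

        # IPVS mode: each service VIP shows up as a virtual server with pod IPs as real servers
        sudo ipvsadm -Ln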
  18. Kubernetes Services (cluster diagram: master running kube-apiserver, kube-scheduler, kube-controller-manager, and etcd; workers running kubelet and Calico CNI + IPAM): user/developer creates the service through YAML or ‘kubectl expose’, e.g. kubectl expose --namespace=ProjectPink deployment nginx --port=80
  19. Kubernetes Services: kube-proxy creates the corresponding NAT rules on each host
  20. Kubernetes Services: Cluster VIP - the cluster VIP is assigned from the IP range allocated for services
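     The ‘kubectl expose’ on the slide is roughly equivalent to applying a manifest like the sketch below. The namespace is lowercased (Kubernetes namespace names must be lowercase, so "ProjectPink" is illustrative) and the selector assumes the nginx deployment's pods carry an app: nginx label.

        apiVersion: v1
        kind: Service
        metadata:
          name: nginx
          namespace: projectpink   # "ProjectPink" from the slide, lowercased
        spec:
          type: ClusterIP          # the default; a VIP is allocated from the service IP range
          selector:
            app: nginx             # assumed pod label from the deployment
          ports:
          - port: 80
            targetPort: 80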
  21. SERVICE DISCOVERY - KUBERNETES DNS: In Kubernetes, DNS is used for service discovery > KubeDNS service (future possibly CoreDNS) > Kubernetes creates a DNS record for: > every Service (including the DNS server itself) ○ my-svc.my-namespace.svc.cluster.local > Pods (where configured) ○ pod-ip-address.my-namespace.pod.cluster.local
  22. Naming and Service Discovery: Kube-DNS (cluster diagram): KubeDNS deployment created (incl. Service); search path for names is set in /etc/resolv.conf within each pod, e.g. webserver.project-red.svc.cluster.local
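     One way to exercise these DNS names from inside the cluster, as a sketch (the busybox:1.28 image and the 10.96.0.10 nameserver address are common defaults, not values from the talk):

        # resolve a service name via the cluster DNS and the pod's search path
        kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never \
          -- nslookup webserver.project-red.svc.cluster.local

        # a pod's /etc/resolv.conf typically looks something like:
        #   search <namespace>.svc.cluster.local svc.cluster.local cluster.local
        #   nameserver 10.96.0.10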
  23. SERVICE TYPES > None: used to track service endpoints without load balancing; typically used by operators/controllers and other automation integrations > Cluster VIP: assigned to the service; kube-proxy instantiates SNAT and DNAT rules to translate pod traffic destined to the cluster VIP to a pod IP address and redirect the traffic > NodePort: (optionally) assigned to the service (default range 30000-32767); a static port exposed on each node, allowing access from outside the cluster > LoadBalancer: automatically creates rules on a (supported) cloud provider's load balancer > ExternalName: maps the service to the externalName value specified (using a DNS CNAME)
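     Minimal sketches of the two types that do not get their own diagram later, with illustrative names and hosts:

        # type "None" (headless): no VIP and no load balancing; DNS returns the pod IPs directly
        apiVersion: v1
        kind: Service
        metadata:
          name: my-headless        # hypothetical
        spec:
          clusterIP: None
          selector:
            app: my-app            # hypothetical
          ports:
          - port: 80

        # ExternalName: a cluster-internal name that CNAMEs to something outside the cluster
        apiVersion: v1
        kind: Service
        metadata:
          name: external-db        # hypothetical
        spec:
          type: ExternalName
          externalName: db.example.com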
  24. Kubernetes Services (cluster diagram, recap): user/developer creates the service through YAML or ‘kubectl expose’, e.g. kubectl expose --namespace=ProjectPink deployment nginx --port=80
  25. Kubernetes Services: kube-proxy creates the corresponding NAT rules on each host
  26. Kubernetes Services: Cluster VIP - the cluster VIP is assigned from the IP range allocated for services
  27. Kubernetes Services: NodePort - the nodeport is assigned from a special port range (typically 30000-32767)
  28. Kubernetes Services: LoadBalancer - rules are created on supported external load balancers
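     For completeness, sketches of the two externally reachable types from slides 27-28 (names and port numbers are assumptions):

        apiVersion: v1
        kind: Service
        metadata:
          name: nginx-nodeport
        spec:
          type: NodePort
          selector:
            app: nginx
          ports:
          - port: 80
            targetPort: 80
            nodePort: 30080        # must fall inside the node-port range (default 30000-32767)
        ---
        apiVersion: v1
        kind: Service
        metadata:
          name: nginx-lb
        spec:
          type: LoadBalancer       # the cloud provider (or MetalLB, later) provides the external IP
          selector:
            app: nginx
          ports:
          - port: 80
            targetPort: 80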
  29. So, What’s New? > DNS ○ CoreDNS ○ External-DNS > MetalLB > Istio Ingress Gateway > How might these all fit together in the ‘nearish’ future?
  30. CoreDNS > KubeDNS was a bit on the creaky side > CoreDNS ○ Based on the Caddy HTTP server ○ Very extensible ○ Much more durable and usable as an external DNS server ○ In 1.12, the default DNS service for Kubernetes
  31. CoreDNS USAGE > Drop-in replacement for KubeDNS > Can apply DNS rewriting rules > Can be configured to trade off memory usage against external resolution time > External CoreDNS servers can connect to one or more Kubernetes clusters and namespaces
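     To illustrate the "drop-in replacement" point, a Corefile of roughly the shape CoreDNS ships with as the cluster DNS; the forward and cache settings are typical defaults rather than values from the talk.

        .:53 {
            errors
            health
            kubernetes cluster.local in-addr.arpa ip6.arpa {
                pods insecure
                fallthrough in-addr.arpa ip6.arpa
            }
            prometheus :9153
            forward . /etc/resolv.conf   # send everything non-cluster to the node's resolvers
            cache 30
            loop
            reload
            loadbalance
        }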
  32. CoreDNS Autopath
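     The autopath plugin is the memory-for-speed trade-off mentioned on the previous slide: CoreDNS walks the pod's DNS search path server-side, so an external lookup costs one round trip instead of several; it requires pod verification, which uses more memory. A minimal sketch:

        .:53 {
            kubernetes cluster.local in-addr.arpa ip6.arpa {
                pods verified            # required by autopath; tracks pod IPs, costs memory
            }
            autopath @kubernetes         # answer search-path expansions on the server side
            forward . /etc/resolv.conf
            cache 30
        }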
  33. CoreDNS Rewrite and CoreDNS Multiple K8s Clusters > rewrite name regex (.*)-(us-west-1).example.org {1}.service.{2}.consul > rewrite name suffix .schmoogle.com. .google.com.
  34. EXTERNAL DNS > Incubator project ○ I.e. don’t expect the API to be static at this point > Allows a service, deployment, pod, etc. to be annotated with an external DNS name ○ $ kubectl annotate service nginx "external-dns.alpha.kubernetes.io/hostname=nginx.example.org." ○ Can also handle TTL ○ Can be authoritative for a zone, or just manage already-existing entries > Supports a number of backends
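     The TTL point maps to a second annotation; a sketch of a Service carrying both (the hostname comes from the slide, the TTL value is an assumption):

        apiVersion: v1
        kind: Service
        metadata:
          name: nginx
          annotations:
            external-dns.alpha.kubernetes.io/hostname: nginx.example.org.
            external-dns.alpha.kubernetes.io/ttl: "120"   # seconds; illustrative value
        spec:
          type: LoadBalancer
          selector:
            app: nginx
          ports:
          - port: 80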
  35. MetalLB > Kubernetes load balancer for non-cloud clusters > Deploys as a Kubernetes deployment > Can run in L2 or L3 (BGP) mode ○ Anycast-spreadable load balancer ○ Use affinities and anti-affinities wisely
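     A sketch of the ConfigMap-style configuration MetalLB used at the time, in BGP (L3) mode; the peer address, ASNs, and address pool are all assumptions (newer MetalLB releases configure this via CRDs instead).

        apiVersion: v1
        kind: ConfigMap
        metadata:
          namespace: metallb-system
          name: config
        data:
          config: |
            peers:
            - peer-address: 10.0.0.1     # upstream router to peer with (assumed)
              peer-asn: 64501
              my-asn: 64500
            address-pools:
            - name: default
              protocol: bgp
              addresses:
              - 203.0.113.0/24           # service VIP range announced to the fabric (assumed)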
  36. ISTIO INGRESS GATEWAY > Istio is great for your east-west service mesh ○ Backoffs and circuit breakers ○ Retries and blue/green ○ Metrics and controls > But what about north-south? ○ Use an existing load balancer - but now I have a different set of capabilities and data sets ○ What if I could use Istio as the ingress as well...
  37. ISTIO INGRESS GATEWAY (diagram) - Source: https://blog.jayway.com/2018/10/22/understanding-istio-ingress-gateway-in-kubernetes/
  38. ISTIO GATEWAY COMPONENTS > Istio Gateway Service ○ Comprised of one or more Istio Gateway pods > Defined by two objects ○ an Istio Gateway object ● Defines what ports and hosts we are listening for ○ and one or more VirtualService objects ● Define the routing rules and point to the underlying service
  39. ISTIO GATEWAY
        cat <<EOF | kubectl apply -f -
        apiVersion: networking.istio.io/v1alpha3
        kind: Gateway
        metadata:
          name: httpbin-gateway
        spec:
          selector:
            istio: ingressgateway # use Istio default gateway implementation
          servers:
          - port:
              number: 80
              name: http
              protocol: HTTP
            hosts:
            - "httpbin.example.com"
        EOF
  40. VIRTUAL SERVICE
        cat <<EOF | kubectl apply -f -
        apiVersion: networking.istio.io/v1alpha3
        kind: VirtualService
        metadata:
          name: httpbin
        spec:
          hosts:
          - "httpbin.example.com"
          gateways:
          - httpbin-gateway
          http:
          - match:
            - uri:
                prefix: /status
            - uri:
                prefix: /delay
            route:
            - destination:
                port:
                  number: 8000
                host: httpbin
        EOF
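     With both objects applied, traffic entering through the ingress gateway can be exercised with something like the following; INGRESS_HOST and INGRESS_PORT stand in for the gateway service's external address and port, however they are obtained in a given environment.

        curl -s -I -H "Host: httpbin.example.com" \
          "http://$INGRESS_HOST:$INGRESS_PORT/status/200"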
  41. PUTTING IT ALL TOGETHER (IN THEORY, ANYWAY) > Horizontally distributed MetalLB instances anycasting service VIPs to the infrastructure > Istio Ingress Gateways as the target of the MetalLB instances > External-DNS to select the well-known external name > Ex-cluster CoreDNS servers to serve those names to the rest of the world > CoreDNS rewrites to map external names to internal names
