How to join legacy VMs and bare-metal machines to a Kubernetes service mesh, so that VMs can consume Kubernetes services AND publish services used by Kubernetes-hosted applications
OSS Japan 2019: service mesh bridging Kubernetes and legacy
1. 2019
Bridging, not breaking tradition:
Use service mesh expansion to connect legacy
and Kubernetes workloads and services
Steven Wong, VMware
@cantbewong
2. 3
Steven Wong
Los Angeles
Open Source Engineer, VMware
Active in Kubernetes community since 2015 –
storage, IoT+Edge, running K8s on VMware
infrastructure.
GitHub: @cantbewong
Presenter
3. Agenda
4
Quick Service mesh recap: What and Why
Service Mesh for VMs: Benefits, Limitations
Overview: Deploy Mesh Expansion for VMs
Demo
• Send requests from VM workloads to Kubernetes services
• Run services on a VM that is part of a mesh expansion
4. 5
Why service mesh?
Issues:
• A lot of connected items (microservices)
• A lot of connection churn (short lived components)
• Deployed in many places (on premise, public cloud,
multi-cloud)
Photo by Andrew Wulf on Unsplash
6. 7
Issues:
What happens when you have version creep?
App developers are not usually networking or security experts.
It is costly to get them there, or
costly mistakes will be made
Apps or tiers of an app link to network-related code in libraries
Old model
7. 8
Service mesh:
• Get network code out of apps/services
• Move to “sidecar” as a reusable separate containerized implementation maintained by
network and security experts
• The Envoy implementation (associated with the Istio control plane) is commonly injected
in pods when used on Kubernetes
Get network code out of applications/services
New model
8. 9
Service mesh sounds really good..
But what about my legacy?
My legacy software still works
great,
but I do want to use Kubernetes for
some new things too
Is there no hope?
Photo by Carsten Pietzsch (own work),
de.Wikimedia.org, used under CC BY 3.0 license
9. 10
The Envoy implementation used in sidecar containers is also available as a .deb package
Good news
feature              Kubernetes containers   VMs / bare metal
Service discovery    ☑                       ☑
Canaries             ☑                       ☑
Security             ☑                       ☑
Central management   ☑                       ☑
Observability        ☑                       ☑
10. 11
Demo
Show preinstalled Istio in Kubernetes
Install Envoy in a VM
Consume a Kubernetes service from a VM
Publish a simulated legacy service in a VM and consume it
from Envoy
11. 12
Minikube:
• Kubernetes v1.14.3
• ingress & dashboard
enabled
• Hosts:
• MetalLB load balancer
• Istio “demo profile”
• Bookinfo example app
Kubernetes on minikube + a VM
Platform for this demo
Hardware: i7 CPU, 32 GB memory, Ubuntu 18.04 OS
Hypervisor: VMware Workstation
Minikube VM: 4 core,16GB
“Legacy” VM: Ubuntu 18.04
Virtual
network
Minikube VM actually uses
14GB+ for this
38 pods, 28 services
12. 13
Very fast look at install steps
too much to demo live,
but in the deck so you can come back later
Much of this is not currently documented.
Photo by Antoine Garnier on Unsplash
14. 15
curl -Lo docker-machine-driver-vmware https://github.com/machine-drivers/docker-machine-driver-vmware/releases/download/v0.1.0/docker-machine-driver-vmware_linux_amd64 && chmod +x docker-machine-driver-vmware
sudo install docker-machine-driver-vmware /usr/local/bin
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube
sudo install minikube /usr/local/bin
Recommended: set the hypervisor default NAT network CIDR to 192.168.99.0/24
• Adjust these instructions if another CIDR is used instead
Install binaries
Minikube install
15. 16
minikube config set vm-driver vmware
minikube config set cpus 4
minikube config set memory 16384
minikube config set disk-size 50g
minikube config set host-only-cidr 192.168.99.1/24
minikube config set kubernetes-version v1.14.3
minikube config view
minikube start --alsologtostderr -v=8
minikube addons enable ingress
minikube addons enable dashboard (optional)
Configure and start minikube
Minikube install
16. 17
Pilot, Mixer, Citadel and other services need to be accessed by mesh expansion
machines.
Kubernetes docs:
When creating a service, you have the option of automatically creating a cloud
network load balancer. This provides an externally-accessible IP address that sends
traffic to the correct port on your cluster nodes provided your cluster runs in a
supported environment and is configured with the correct cloud load balancer
provider package.
When Istio is deployed using Helm with mesh expansion enabled, it will attempt to
configure a load balancer. The network load balancer implementations that Kubernetes
does ship with are all glue code that calls out to various IaaS platforms. If you’re
not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will
remain in the “pending” state indefinitely when created.
Current mesh documentation and scripts are based on public cloud with a load balancer
Load balancer
17. 18
MetalLB runs inside Kubernetes with two parts: a cluster-wide controller, and a per-node
protocol speaker.
Install MetalLB by applying the manifest:
kubectl apply -f
https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml
This manifest creates a bunch of resources. Most of them are related to access control, so that
MetalLB can read and write the Kubernetes objects it needs to do its job.
Ignore those bits for now; the two pieces of interest are the “controller” Deployment and the
“speaker” DaemonSet. Wait for these to start by monitoring kubectl get pods -n metallb-system.
Eventually, you should see a controller pod plus a speaker pod for each worker node
(only one worker for minikube).
minikube tunnel might work for you – I gave up on getting it to work for me
Install MetalLB load balancer
18. 19
cat ~/metallbconfig.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.99.40-192.168.99.50
kubectl apply -f ~/metallbconfig.yaml
Docs have an example of deploying a load balancer to expose an nginx web server – use it to test
your load balancer
This configuration is based on L2 ARP – docs have an alternate BGP version
Configure MetalLB load balancer
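As a concrete version of the nginx test the MetalLB docs describe, a minimal sketch might look like the following (the nginx-lb-test name is hypothetical, chosen here for illustration):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-lb-test        # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-lb-test
  template:
    metadata:
      labels:
        app: nginx-lb-test
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb-test
spec:
  type: LoadBalancer         # MetalLB should assign an IP from the pool
  selector:
    app: nginx-lb-test
  ports:
  - port: 80
```

After kubectl apply, kubectl get svc nginx-lb-test should show an EXTERNAL-IP from the 192.168.99.40-50 pool; curl that IP to confirm the load balancer path works.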
19. 20
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.2.2 sh -
cd istio-1.2.2
export PATH=$PWD/bin:$PATH
curl -L https://git.io/get_helm.sh | bash
kubectl apply -f install/kubernetes/helm/helm-service-account.yaml
helm init --history-max 200 --service-account tiller
helm repo add istio.io https://storage.googleapis.com/istio-release/releases/1.2.2/charts/
get binaries and prepare helm
Istio install
21. 22
helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
Takes about 1 minute
Verify 23 CRDs are present
kubectl get crds | grep 'istio.io\|certmanager.k8s.io' | wc -l
Install configuration profile (demo configuration chosen to support BookInfo app)
helm install install/kubernetes/helm/istio --name istio --namespace istio-system --set global.meshExpansion.enabled=true --values install/kubernetes/helm/istio/values-istio-demo.yaml
Takes about 10 minutes
kubectl get svc -n istio-system
kubectl get pods -n istio-system
Istio is running if all pods show Running or Completed
With mesh expansion enabled
Install Istio control plane in Kubernetes
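A quick local check of the grep pattern used in the CRD count above: with basic (non -E) grep, the alternation must be escaped as \|, otherwise the pipe is matched literally and the count comes up short. The sample CRD names below stand in for live kubectl output:

```shell
# Demonstrate the escaped alternation used in the CRD count,
# against sample input instead of live kubectl output
crd_count=$(printf 'gateways.networking.istio.io\ncertificates.certmanager.k8s.io\nunrelated.crd\n' \
  | grep -c 'istio.io\|certmanager.k8s.io')
echo "$crd_count"   # prints 2: both CRD groups match, the unrelated line does not
```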
22. 23
Confirm this shows an EXTERNAL-IP which means a LoadBalancer is present for use by mesh expansion machines.
kubectl get svc istio-ingressgateway -n istio-system
Set environment variables - preparing for access to services from outside the Kubernetes cluster
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
echo $GATEWAY_URL
192.168.99.40:80
And gather addresses and ports for later steps
Check Istio installation
23. 24
The default Istio installation uses automatic sidecar injection. Label the namespace that will host the application with
istio-injection=enabled:
kubectl label namespace default istio-injection=enabled
Deploy BookInfo
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
Wait for all pods to start. To confirm that the Bookinfo application is running, send a request to it with a curl command
from some pod, for example from ratings - which should return a title:
kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"
Supported because we used the demo Istio profile – we will use it to test operation
Start BookInfo App
24. 25
An Istio Gateway is used to expose the application outside the cluster.
Define the ingress gateway for the application:
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
Confirm:
kubectl get gateway
To confirm that the Bookinfo application is accessible from outside the cluster, run the following curl command:
curl -s http://${GATEWAY_URL}/productpage | grep -o "<title>.*</title>"
You can open the application in a web browser
echo "http://"$GATEWAY_URL"/productpage"
Refresh will randomly cycle through versions with red, black and no stars
Make application accessible outside the Kubernetes cluster, from a browser
Expose the BookInfo App
25. 26
Define the namespace the VM joins
export SERVICE_NAMESPACE="default"
Determine and store the IP address of the Istio ingress gateway, since the mesh expansion machines access Citadel
and Pilot through this IP address. It is used in a later step.
export GWIP=$(kubectl get -n istio-system service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $GWIP
Enable machines outside the Kubernetes cluster
Mesh Expansion
26. 27
Generate a cluster.env file. The script in the current documentation is GCE specific and will fail in a minikube
deployment. This file contains the Kubernetes cluster IP address ranges to intercept and redirect via Envoy. It also
contains the inbound ports used by services hosted on the VM.
ISTIO_SERVICE_CIDR=10.96.0.0/12
ISTIO_SYSTEM_NAMESPACE=istio-system
ISTIO_CP_AUTH=MUTUAL_TLS
ISTIO_INBOUND_PORTS=3306,8080
Define Envoy intercept ranges in operation in a file to be deployed to expansion VM
Mesh Expansion
27. 28
Extract the initial keys the service account needs to use on the VMs (to files to be copied to VMs).
kubectl -n $SERVICE_NAMESPACE get secret istio.default -o jsonpath='{.data.root-cert.pem}' | base64 --decode > root-cert.pem
kubectl -n $SERVICE_NAMESPACE get secret istio.default -o jsonpath='{.data.key.pem}' | base64 --decode > key.pem
kubectl -n $SERVICE_NAMESPACE get secret istio.default -o jsonpath='{.data.cert-chain.pem}' | base64 --decode > cert-chain.pem
These will be installed on mesh expansion VMs
Prepare certs and keys for distribution
28. 29
• Fetch and install Istio sidecar package
• Install certificate files
• Install cluster CIDR definition
• Check connectivity
These packages are used for what is described here, plus a few extras to support
troubleshooting if needed.
sudo apt install curl host net-tools openssh-server openssl open-vm-tools-desktop python
Summary of prep steps for each VM joining cluster
Prepare VM
29. 30
From Kubernetes control plane host:
scp cluster.env root-cert.pem cert-chain.pem key.pem steve@192.168.99.129:/tmp
From mesh expansion machine:
cd /tmp
curl -L https://storage.googleapis.com/istio-release/releases/1.2.2/deb/istio-sidecar.deb -o istio-sidecar.deb
sudo dpkg -i istio-sidecar.deb
sudo mkdir -p /etc/certs
sudo cp {root-cert.pem,cert-chain.pem,key.pem} /etc/certs
sudo cp cluster.env /var/lib/istio/envoy
Transfer ownership of the files in /etc/certs/ and /var/lib/istio/envoy/ to the Istio proxy.
sudo chown -R istio-proxy /etc/certs /var/lib/istio/envoy
Certs and Envoy sidecar binary
Install pre-reqs on expansion machine
30. 31
echo "192.168.99.40 istio-citadel istio-pilot istio-pilot.istio-system" | sudo tee -a /etc/hosts
Verify the node agent works (note underscore):
sudo node_agent
“CSR is approved successfully. Will renew cert in 1079h59m59.84568493s”
Start Istio on VM using systemctl.
sudo systemctl start istio-auth-node-agent
sudo systemctl start istio
Verify Istio (envoy proxy) is running
sudo systemctl status istio
istio.service - istio-sidecar: The Istio sidecar
Loaded: loaded (/lib/systemd/system/istio.service; disabled; vendor preset: e
Active: active (running) since Wed 2019-07-17 22:38:05 UTC;
And test connectivity
Add Istio addresses to DNS for mesh expansion machine
31. 32
On the VM, open a secondary command session (it will be tied up running the web server) and
use Python to start an HTTP web server
python -m SimpleHTTPServer 8080
Run a service on mesh expansion machine
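On newer VM images where only Python 3 is installed, the SimpleHTTPServer module no longer exists (it was renamed http.server). A sketch of the equivalent, with a local curl to confirm the server answers before pointing the mesh at it:

```shell
# Python 3 equivalent of "python -m SimpleHTTPServer 8080"
python3 -m http.server 8080 >/dev/null 2>&1 &
server_pid=$!
sleep 1
# Confirm the server answers locally before registering it with the mesh
status=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8080/)
kill "$server_pid"
echo "$status"   # prints 200 when the server is up
```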
32. 33
You can add VM services to the mesh using a service entry.
Service entries let you manually add additional services to Pilot’s abstract model of the
mesh.
Once VM services are part of the mesh’s abstract model, other services can find and direct
traffic to them.
Each service entry configuration contains the IP addresses, ports, and appropriate labels of
all VMs exposing a particular service.
enable service discovery for services on an expansion machine
What is a ServiceEntry?
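This demo uses istioctl register on the next slide rather than a hand-written ServiceEntry, but for reference, a minimal sketch of a ServiceEntry for the VM's HTTP service might look like this under Istio 1.2's networking.istio.io/v1alpha3 schema (the vmhttp name and 192.168.99.129 address match the demo):

```
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: vmhttp
spec:
  hosts:
  - vmhttp.default.svc.cluster.local
  location: MESH_INTERNAL     # the VM is part of the mesh, not external
  ports:
  - number: 8080
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 192.168.99.129   # the demo VM's address
    ports:
      http: 8080
```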
34. 35
The workloads in a Kubernetes cluster need a mapping to resolve the domain names of VM
services. To integrate the mapping, use istioctl to register and create a Kubernetes
selector-less service:
istioctl register -n ${SERVICE_NAMESPACE} vmhttp 192.168.99.129 8080
2019-07-17T22:39:40.551948Z info No pre existing exact matching ports list found, created new subset
{[{192.168.99.129 <nil> nil}] [] [{http 8080 }]}
2019-07-17T22:39:40.557364Z info Successfully updated vmhttp, now with 1 endpoints
Using the Istio control plane, from the kubectl machine, not the VM
Map the service
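Under the hood, istioctl register creates (roughly) a selector-less Service plus a matching Endpoints object in the cluster, which is what the "created new subset" log line above reflects. A hedged sketch of the equivalent objects for the command shown:

```
apiVersion: v1
kind: Service
metadata:
  name: vmhttp
spec:
  ports:                 # no selector: endpoints are managed manually
  - name: http
    port: 8080
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: vmhttp
subsets:
- addresses:
  - ip: 192.168.99.129   # the demo VM
  ports:
  - name: http
    port: 8080
    protocol: TCP
```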
35. 36
The Istio sample directory has a spec for a helper pod with curl installed that sleeps until needed.
Deploy the pod in the Kubernetes cluster, and wait until it is ready:
kubectl apply -f samples/sleep/sleep.yaml
kubectl get pod
sleep-5fb55468cb-tkml7 2/2 Running 0 12s
We will use the pod to send a curl request from the sleep pod to the VM’s HTTP service -
demonstrating that the VM-hosted service is exposed to potential consumers in Kubernetes via the
service mesh:
kubectl exec -it sleep-5fb55468cb-tkml7 -c sleep -- curl vmhttp.${SERVICE_NAMESPACE}.svc.cluster.local:8080
Using the Istio control plane, from the kubectl machine, not the VM
Access the VM hosted service from Kubernetes
36. 37
On a machine managing Kubernetes cluster, get the virtual IP address (clusterIP) for the service:
kubectl get svc productpage -o jsonpath='{.spec.clusterIP}'
10.104.202.166
On the mesh expansion machine (VM), add the service name and virtual IP address (clusterIP) for the service to its
/etc/hosts file. You can then connect to the cluster service from the VM, as in the example below:
echo "10.104.202.166 productpage.default.svc.cluster.local" | sudo tee -a /etc/hosts
curl -v productpage.default.svc.cluster.local:9080
< HTTP/1.1 200 OK
< content-type: text/html; charset=utf-8
< content-length: 1836
< server: envoy
... html content ...
The “server: envoy” in the header indicates that envoy intercepted the traffic
Sending requests from VM workloads to Kubernetes
37. 38
• Envoy package is only released as a .deb
• An .rpm has been built but is not a release artifact at this time
• No Windows support
Istio mesh expansion has opportunities for enhancement
Limitations
40. 41
This deck http://bit.ly/2Y9flo4
Contacts
● Detailed instructions in the form of a draft document used to prepare this deck – if you follow
these instructions on your own, this may be easier to use than the deck
https://docs.google.com/document/d/1F8a2TCLoju7xed9JV4WDnVAkWNF4MItIFB7DACe8nAQ/edit?usp=sharing
Steven Wong
@cantbewong