6. Tanzu is VMware's brand name for its Kubernetes- and container-related products and
their associated services. It covers a range of products for modern application
development.
10. - Tanzu Kubernetes Grid, or TKG, is VMware's distribution of Kubernetes.
TKG comes in two primary variants: TKGm, which is more suitable for multi-cloud
environments and on-premises, and TKGs, which runs on vSphere 7.0 or above.
- It is a collection of pieces with the upstream Kubernetes binaries at its core,
specifically designed to integrate with and run best on VMware's platforms.
- Given that collection of pieces, TKG also gives you the freedom and flexibility to
run in the data center, in the public cloud, or at the edge.
- If you want that control but also want access to professional guidance, there is a
TKG+ (TKG Plus) offering that additionally gives you access to VMware's Customer
Reliability Engineering (CRE) group.
11. Vanilla Kubernetes vs TKG
- Vanilla Kubernetes is a K8s environment that runs the most basic components
required, but not much more than that.
- What you get when deploying vanilla Kubernetes:
• Upstream Kubernetes: the open-source core components.
• (Really, that's it. The rest is up to you.)
- What you get when deploying TKG:
• Upstream Kubernetes: the same binaries you'd get from GitHub.
• Cluster API, Cluster API Provider vSphere, and Cluster API Provider AWS:
the bits necessary to declaratively deploy Kubernetes clusters on vSphere
or AWS.
• Calico or Antrea for networking.
• cert-manager for certificates.
12. Vanilla Kubernetes vs TKG
• CSI and CPI for vSphere: for integrating with vSphere storage and topology.
• Contour as the Ingress controller.
• Fluent Bit for shipping logs and events.
• Crash-Diagnostics for troubleshooting.
• TKG CLI: an easy-to-use command-line interface for bootstrapping and
managing clusters.
• Pre-built OS templates (for quicker time-to-cluster).
• Everything approved and signed by VMware (for peace of mind).
• 24/7 support.
Many of these added components are necessary to run Kubernetes in a production
environment, and you'd otherwise have to figure them out yourself!
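By contrast, on vanilla Kubernetes you would pick, install, and maintain each of these pieces yourself. A sketch of what that looks like (release versions are illustrative; check each project's releases page for current manifests):

```shell
# cert-manager: CRDs plus controllers from the official release manifest
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml

# Contour ingress controller from the project's quickstart deployment
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

# ...and so on for the CNI, log shipping, monitoring, OS images, etc.
```

With TKG, these arrive pre-integrated and signed instead of being assembled by hand.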
13. Tanzu Kubernetes Grid multi-cloud (TKGm)
• TKGm provides a consistent experience running a Kubernetes cluster in
the cloud and on-premises.
• With TKGm, the first component that one has to create is the management
cluster. It is a Kubernetes cluster responsible for managing other
Kubernetes clusters for workloads.
• The management cluster hosts a Kubernetes project called Cluster API,
which allows cluster creation and management to be declarative using
Kubernetes-style APIs.
• The management cluster can host several Cluster API providers, each one for
deploying and managing workload clusters on a different cloud provider
(hyperscaler or on-premises), hence the name TKG multi-cloud.
• All the workload Kubernetes clusters share certain Kubernetes services,
like a container registry (Harbor), observability tools (Prometheus and Grafana),
ingress control (Contour), networking (Calico), and others.
• The management cluster, with its workload clusters and the shared
Kubernetes services collectively, is known as a Tanzu Kubernetes Grid
instance.
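Cluster API's declarative model means a workload cluster is itself described by Kubernetes objects applied to the management cluster. A minimal sketch of such a `Cluster` object (fields abbreviated; the name and CIDRs are illustrative, the CIDR values being common TKG defaults):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: workload-1                # illustrative cluster name
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["100.96.0.0/11"]    # pod network
    services:
      cidrBlocks: ["100.64.0.0/13"]    # service network
  infrastructureRef:              # points at the provider-specific object,
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereCluster          # e.g. VSphereCluster for the vSphere provider
    name: workload-1
```

Applying and deleting objects like this is how the management cluster creates and tears down workload clusters declaratively.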
14. Tanzu Kubernetes Grid multi-cloud (TKGm)
An overview of Tanzu Kubernetes Grid multi-cloud.
16. Tanzu Kubernetes Grid multi-cloud (TKGm) -- Demo
• tkg CLI tool on a workstation (Linux or Mac).
• You then have the choice of deploying the management cluster to either
vSphere or AWS EC2. After you populate the details of how you wish to
connect and the characteristics of your management cluster, tkg runs kind
on your local machine.
• kind serves as a single-node bootstrap cluster into which the Cluster API
components get loaded. With this, kind carries out the build of the management
cluster.
• Once the management cluster is up and running, kind hands off those
resources to the newly running management cluster and is then deleted.
• After the management cluster is up, a user then interacts with the tkg tool,
which speaks to the management cluster to deploy workload ("guest")
clusters.
Demo prerequisites for building on vSphere: a workstation with a Docker runtime and
access to a registry, node and HAProxy templates imported, and an SSH key pair.
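The bootstrap handoff described above can be observed from the workstation while it runs (cluster and context names vary per deployment):

```shell
# While tkg is bootstrapping, a temporary kind cluster exists locally
kind get clusters              # e.g. a tkg-kind-* cluster appears here

# Once the management cluster is up, the kind cluster is deleted and
# your kubeconfig points at the new management cluster instead
kubectl config current-context
```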
17. Tanzu Kubernetes Grid multi-cloud (TKGm) -- Demo
After downloading the tkg CLI, we run it in UI mode. This opens a browser tab:
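Assuming the TKG 1.x standalone CLI shown in this demo, launching the installer UI looks like:

```shell
# Start the installer wizard; tkg serves a local web UI
# and opens it in your default browser.
tkg init --ui
```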
20. Tanzu Kubernetes Grid multi-cloud (TKGm) -- Demo
Next, we populate our control plane. It's highly recommended that you deploy an HA
control plane of three nodes.
Note that even if you deploy a development control plane with a single node, the load
balancer still gets deployed.
This is to provide a consistent entry point and allows for easy scaling out on day 2.
Regardless of which one you choose, you'll get only a single worker.
22. Tanzu Kubernetes Grid multi-cloud (TKGm) -- Demo
Specify your resource pool, VM folder, and datastore and click next.
23. Tanzu Kubernetes Grid multi-cloud (TKGm) -- Demo
Pick the network that has DHCP enabled and either accept the defaults for the Kubernetes
CIDRs or change them to something that doesn't overlap in your environment.
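A quick way to verify the CIDRs don't overlap before deploying, using Python's standard ipaddress module (the pod and service values below are common TKG defaults; the node network is illustrative):

```shell
python3 - <<'EOF'
import ipaddress
pods  = ipaddress.ip_network('100.96.0.0/11')    # default pod CIDR
svcs  = ipaddress.ip_network('100.64.0.0/13')    # default service CIDR
nodes = ipaddress.ip_network('192.168.10.0/24')  # example DHCP-backed node network
for a, b in [(pods, svcs), (pods, nodes), (svcs, nodes)]:
    assert not a.overlaps(b), f'{a} overlaps {b}'
print('no overlap')
EOF
```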
24. Tanzu Kubernetes Grid multi-cloud (TKGm) -- Demo
Finally, select the node template you imported earlier and click next.
25. Tanzu Kubernetes Grid multi-cloud (TKGm) -- Demo
With everything confirmed, continue on to review and then deploy.
You'll begin to see the wizard kick into action, deploy kind, and then start the deployment
process to vSphere.
26. Tanzu Kubernetes Grid multi-cloud (TKGm) -- Demo
After a few minutes, the process should be complete and the management cluster is ready to go.
27. Tanzu Kubernetes Grid multi-cloud (TKGm) -- Demo
Back at your terminal, you'll see that the process has ended and your kubeconfig will be set to the
context of this management cluster, allowing you to immediately begin interacting with it.
You can see we've got our three control plane nodes and one worker. The load balancer
is not a Kubernetes node type, so it won't show. However, you can see it has been
configured as the entry point into the cluster.
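Inspecting that from the terminal looks roughly like this (node names and versions are illustrative, not actual demo output):

```shell
kubectl get nodes
# Expect three Ready control-plane nodes and one Ready worker,
# e.g. tkg-mgmt-control-plane-*, tkg-mgmt-md-0-* (names vary)

# The server address in the kubeconfig is the load balancer,
# not an individual control plane node:
kubectl config view --minify
```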
28. Tanzu Kubernetes Grid multi-cloud (TKGm) -- Demo
Now that the management cluster is set up, we can begin to very easily build workload clusters.
The dev plan is a single control plane node and a single worker, while the prod plan is
three control plane nodes and a single worker.
By issuing the command "tkg create cluster <cluster_name> -p dev" we can get such a cluster. If we
wanted to add a second worker node to get two rather than one, we can simply add the -w
flag along with the total number of workers (i.e., -w 2).
After a few minutes, the cluster from that plan has been created and our context is set to it.
Simple. Done!
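The workload-cluster workflow above, sketched end to end with the TKG 1.x CLI (the cluster name is illustrative):

```shell
# Create a dev-plan cluster with two workers instead of the default one
tkg create cluster demo-cluster -p dev -w 2

# Fetch its kubeconfig and switch to the new cluster's context
tkg get credentials demo-cluster
kubectl config use-context demo-cluster-admin@demo-cluster
```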
30. Tanzu Kubernetes Grid Service (TKGs)
Below is an overview of the key characteristics of TKGs:
• Tanzu Kubernetes Grid Service works with vSphere 7.0 or above.
• VMware completely overhauled the vSphere 7.0 architecture so that one can
deploy virtual machines, pods, and TKCs using Kubernetes constructs.
• Each vSphere cluster will have a Supervisor Cluster, and the relationship between
a vSphere cluster and a Supervisor Cluster is always 1:1.
• A Supervisor Cluster is a Kubernetes cluster running on vSphere that relies on
ESXi as its compute layer. In other words, the Supervisor Cluster is a Kubernetes
control plane inside the hypervisor that enables running container workloads on
ESXi.
• Once a Supervisor Cluster is enabled, you can create supervisor namespaces,
called vSphere Namespaces. A vSphere Namespace is not the same as a
Kubernetes namespace.
• With the supervisor namespace created, you can create a Tanzu Kubernetes
cluster, which acts as your workload cluster. The workload Kubernetes cluster,
like any other Kubernetes cluster, is where your container workloads run.
31. Tanzu Kubernetes Grid Service (TKGs)
• You can also have virtual machines and vSphere pods in the same supervisor namespace.
• vSphere pods differ from Kubernetes pods in that they are created directly on top of
the ESXi host.
• To run vSphere pods, you don't need a Tanzu Kubernetes cluster, but the Supervisor
Cluster is required.
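Creating a Tanzu Kubernetes cluster in a vSphere Namespace is itself declarative: you apply a TanzuKubernetesCluster manifest to the Supervisor Cluster. A minimal sketch (API version, VM classes, and storage class names vary by release and environment; all values here are illustrative):

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: demo-cluster             # illustrative workload cluster name
  namespace: dev-namespace       # the vSphere Namespace created above
spec:
  topology:
    controlPlane:
      count: 3                   # HA control plane
      class: best-effort-small   # a VM class defined in vSphere
      storageClass: vsan-default # a vSphere storage policy exposed as a StorageClass
    workers:
      count: 2
      class: best-effort-small
      storageClass: vsan-default
  distribution:
    version: v1.21               # a Tanzu Kubernetes release available in your environment
```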