Kubernetes is a deep and complex technology that is evolving fast with new functionality and a growing ecosystem of cloud-native solutions. While the public cloud delivers an almost frictionless user experience, configuring and managing a production Kubernetes environment is an enormous technical challenge for the majority of enterprises that choose to do so on premises. Without the right approach, operationalizing Kubernetes in the data center can take upwards of 6 months, jeopardizing developer productivity and speed-to-market.
In this webinar, you’ll learn from Nutanix cloud native experts how to fast-track your way to operationalizing a production-ready Kubernetes environment on prem.
Specifically, we’ll talk about:
How containerized applications use IT resources (and why legacy infrastructure isn’t built for Kubernetes);
The main advantages of running Kubernetes on prem (as part of a multi-cloud strategy);
Key aspects of Kubernetes lifecycle management that greatly benefit from automation.
2. Sean Roth, Director, Product Marketing - Cloud Native Solutions
AGENDA
• A look at ‘Cloud Native’ and the challenges of the journey
• Kubernetes, the cloud native ecosystem, and infrastructure
• Simplifying Kubernetes Management: 7 Areas To Focus On
• Q&A
3. What is a Cloud Native Application?
10 KEY ATTRIBUTES OF CLOUD-NATIVE APPLICATIONS
1. Packaged as lightweight containers
2. Developed with best-of-breed languages and frameworks
3. Designed as loosely coupled microservices
4. Centered around APIs for interaction and collaboration
5. Architected with a clean separation of stateless and stateful services
6. Isolated from server and operating system dependencies
7. Deployed on self-service, elastic cloud infrastructure
8. Managed through agile DevOps processes
9. Automated capabilities
10. Subject to defined, policy-driven resource allocation
Source: The New Stack, https://thenewstack.io/10-key-attributes-of-cloud-native-applications/
4. Challenges Of Going ‘Cloud Native’
• Kubernetes is deep and complex, and evolves fast with its growing ecosystem of technologies
• Legacy infrastructure isn’t built for Kubernetes
5. Why Run Kubernetes On Prem?
Enterprises are taking a multi-cloud approach to running cloud-native applications.
• Cost efficiency: the public cloud is not always cheaper for workloads at scale
• Compliance: many organizations are subject to regulation around data locality
• Improved data center efficiency: an opportunity to modernize and get more out of existing infrastructure investment
• Performance: certain workloads might require higher IOPS and lower latency than the public cloud can deliver
7. Kubernetes
“Kubernetes is the Linux of the cloud.”
-- Kelsey Hightower, Staff Developer Advocate, Google
What Kubernetes does:
• Assigns containers to machines (scheduling)
• Boots the specified containers through the container runtime
• Deals with upgrades, rollbacks, and the constantly changing nature of the system
• Responds to failures (container crashes, etc.)
• Creates cluster resources like service discovery, inter-VM networking, cluster ingress/egress, etc.
8. Kubernetes Under The Hood
• Designed for scalability, availability, security, and portability
• Optimizes cost of infrastructure: workloads are distributed across available resources
• Each component of a Kubernetes cluster (etcd, API server, nodes) can be configured for HA
• For apps, Kubernetes ensures HA by means of replica sets, replication controllers, etc.
• Kubernetes endpoints are secured with TLS
• Every operation that manages a process running on the cluster must be initiated by an authenticated user
[Diagram: users reach the control plane (API server, scheduler, controllers) via API, CLI, or UI; worker nodes 1 through n each run a kubelet]
9. Pods: Kubernetes’ Unit Of Execution
• Pods represent processes running on the Kubernetes cluster
• A pod encapsulates an application’s container(s), storage resources, unique network IP, and options
• Controllers run pods according to a user-created pod spec
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
  namespace: cpu-example
spec:
  containers:
  - name: cpu-demo-ctr
    image: test/stress
    resources:
      limits:
        cpu: "1"
      requests:
        cpu: "0.5"
    args:
    - -cpus
    - "2"
10. “Infrastructure As Code”
• Carving out CPU and memory resources uses a simple declarative model
…easy, right?
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
  namespace: cpu-example
spec:
  containers:
  - name: cpu-demo-ctr
    image: test/stress
    resources:
      limits:
        cpu: "1"
      requests:
        cpu: "0.5"
    args:
    - -cpus
    - "2"
11. …But What About:
Persistent storage?
Networking and load balancing?
Security?
Monitoring and logging?
Application management?
Availability?
12. Kubernetes And The Cloud Native Ecosystem
500+ open-source and commercial cloud-native technologies are rapidly evolving (www.cncf.io)
Categories: Security & Governance; Databases; Container Orchestration; CI/CD; Container Storage; Container Networking; Observability & Analysis; Proxy, Gateway & Service Mesh
14. Challenge #1: Kubernetes and Cluster Upgrades
Kubernetes Master Node Upgrade Process
1. Drain the first master node (which incurs downtime, unless two or more Kubernetes master nodes are running)
2. Upgrade the cluster orchestration tooling (typically kubeadm, but there are others) on that master node
3. Upgrade the master control plane
4. Upgrade the master kubelet and kubectl
5. Uncordon the upgraded master node
6. Repeat steps 1 through 5 for each of the remaining master nodes
Then, upgrade worker nodes…
…and etcd (the Kubernetes key-value store)
15. Challenge #1: Kubernetes and Node Upgrades
Upgrading the Host OS
• Upgrading the host OS is a similar process to upgrading the Kubernetes version
– Each node is drained one at a time, upgraded, rebooted, and then uncordoned
16. Simplifying Kubernetes/Host Upgrades
• Seek out a dedicated Kubernetes management solution
• Upgrades (as well as other undifferentiated heavy lifting) should be push-button processes
• Ensure your solution can execute non-disruptive upgrades
17. Challenge #2: Persistent Storage
• Containers are ephemeral, making storage a huge challenge
– Provisioned storage needs to remain connected to the pods hosting stateful applications
• CSI (Container Storage Interface) is the standard mechanism for exposing block and file storage to containerized workloads
• Big decisions:
– What type of storage will be used?
– How will it be made accessible to Kubernetes clusters?
– How will it be provisioned and used by applications?
18. Simplifying Persistent Storage
• Leverage a container storage solution that offers support for file, block, and object storage classes
• Different applications value different mediums:
– Performance-intensive app? Block storage
– Multiple pods need to access the same storage? File storage with read-write-many access
– Need simple configuration and enormous scale? Object storage
• Automate!
– Automatically install CSI drivers on every Kubernetes cluster, along with the creation of a default storage class
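As a sketch of that automation in the same YAML style as the pod specs earlier: a default storage class plus a claim an application can mount. The provisioner name is illustrative; substitute your CSI driver’s.

```yaml
# Default StorageClass backed by a CSI driver (provisioner name is hypothetical)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default-block
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.example.com
---
# A claim a stateful pod can mount; it stays bound even if the pod moves
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce        # block storage; use ReadWriteMany for shared file storage
  storageClassName: default-block
  resources:
    requests:
      storage: 10Gi
```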
19. Challenge #3: Managing Secrets
• Secret: a Kubernetes object used to store SSH keys, tokens, passwords, etc. that are required when containerized applications need to interface with other systems
– A critical responsibility for Kubernetes admins and security practitioners alike
• Kubernetes provides some basic security capabilities around secrets (encryption, policies, and whitelist access), but they require enforcement
• CAUTION: Secrets can break applications in production if they change!
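A minimal sketch of a Secret and a pod consuming it (names, image, and values are illustrative). Note that environment variables are read at container start, which is one way a changed secret can break a running application:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                      # stored base64-encoded in etcd; enable encryption at rest separately
  username: app-user
  password: example-only         # illustrative value, never commit real secrets
---
apiVersion: v1
kind: Pod
metadata:
  name: db-client
spec:
  containers:
  - name: client
    image: example/db-client     # hypothetical image
    env:
    - name: DB_PASSWORD          # injected once at container start
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
```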
20. Simplifying Secrets Management
• A dedicated secrets management tool is key!
– It should work on individual containers
• Change management capabilities are critical
– Automatically push changed secrets to the application containers that rely on them
21. Challenge #4: Service Discovery
• Networking in Kubernetes is a complex challenge
– A pod can be scheduled on one cluster node and later be moved to another, so any internal IPs assigned to that pod can change over time
– Another layer of abstraction is required
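That abstraction layer is the Kubernetes Service: a stable virtual IP and DNS name in front of whichever pods currently match a label selector. A minimal sketch (names are illustrative):

```yaml
# Clients address the Service, not individual pods, so pod moves are invisible
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # routes to any pod labeled app=my-app, wherever it runs
  ports:
  - port: 80           # stable port on the Service's virtual IP
    targetPort: 8080   # port the container actually listens on
```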
22. Simplifying Service Discovery
• Employ a load balancer
– Not natively part of Kubernetes functionality
– Gives a Service a stable IP accessible from outside the cluster
– Either rely on your infrastructure provider or a tool like MetalLB
• Leverage Kubernetes Ingress for business-critical applications
– Ingress is also complicated
– Check out a third-party Ingress controller such as NGINX, Traefik, or Istio
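A minimal Ingress sketch, assuming one of those Ingress controllers is already installed in the cluster; the host name and backing Service are illustrative:

```yaml
# Routes HTTP traffic for a host/path to a backend Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: app.example.com        # illustrative external host name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app         # hypothetical Service to route to
            port:
              number: 80
```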
23. Challenge #5: Managing Applications
• A Kubernetes application will likely consist of:
– Several services spanning dozens of containers
– Persistent volumes
– Secrets
– StatefulSets
• Grouping each application into a dedicated namespace for better cluster management doesn’t scale
• Need to be able to deploy, modify, track changes to, and upgrade containerized applications
24. Simplifying Application Management
• Leveraging the Helm package manager is a good start
– However, new challenges arise in preventing untracked changes
• Employ Kubernetes operators, especially for production workloads
– They take a long time to build, but it’s worth it!
– Operators allow IT team members to manage applications and initiate upgrades without needing expertise in the app
25. Challenge #6: Monitoring Cluster Health
• Kubernetes is highly dynamic and yields a tremendous amount of activity data
– How do you make sense of the data to identify and remediate issues?
• Deploying any open source monitoring and logging tool doesn’t solve the problem by itself
– You need a separate backend to store, analyze, and query logs
26. Simplifying Health Monitoring
• Deploy a stack to effectively store, search, analyze, and visualize Kubernetes environment data
– ELK (Elasticsearch, Logstash, Kibana)
– EFK (Elasticsearch, Fluentd, Kibana)
• Prometheus is also widely used for systems monitoring and alerting
• BE AWARE: Properly configuring, sizing, and utilizing logging stacks is challenging in its own right
• Cluster-level logging and application logging are generally separate processes
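For the Prometheus route, a minimal prometheus.yml sketch that discovers and scrapes the API server endpoints from inside the cluster (assumes Prometheus runs in-cluster with service-account credentials mounted at the standard paths):

```yaml
# Minimal in-cluster Prometheus configuration sketch
global:
  scrape_interval: 30s
scrape_configs:
- job_name: kubernetes-apiservers
  kubernetes_sd_configs:
  - role: endpoints              # discover scrape targets via the Kubernetes API
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
```

Properly sizing retention and storage for this stack is part of the challenge called out above.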
27. Challenge #7: Scaling the Cluster
• Kubernetes is capable of autoscaling applications, Pods, and clusters
– But how do you figure out the right approach?
28. Simplifying Scaling
• Automated application (pod) scaling
– First, ensure enough cluster capacity to support maximum scaling values
• Automated worker node scaling
– Lean on your cloud provider or on-prem Ops teams to help
– Be mindful of actual resource limits
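Automated pod scaling is typically expressed as a HorizontalPodAutoscaler. A sketch targeting a hypothetical Deployment (requires a metrics server so CPU utilization is available; the min/max values are where the capacity planning above comes in):

```yaml
# Scale a Deployment between 2 and 10 replicas based on average CPU utilization
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app               # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10              # cluster must have capacity for this maximum
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```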
29. All Kubernetes Offerings Aren’t Created Equal
Users should seek:
• A CNCF-certified Kubernetes distribution (conformance enables interoperability)
• A native Kubernetes user experience (no lock-in)
• Intelligent automation around lifecycle management features
• Easy integration of storage, networking, security, and monitoring solutions
30. Join The Academy!
• The Linux Foundation and CNCF offer a certification program for Kubernetes admins
• Training develops competency in:
– Application Lifecycle Management
– Installation, Configuration & Validation
– Core Concepts
– Networking
– Scheduling
– Security
– Cluster Maintenance
– Logging / Monitoring
– Storage
– Troubleshooting
31. Nutanix Karbon: Kubernetes Made Simple
Karbon is an enterprise Kubernetes management solution that enables turnkey provisioning, operations, and lifecycle management of Kubernetes. Karbon is Kubernetes Certified.
Simple
• Less than 20 minutes to deploy production-ready Kubernetes clusters
• Public cloud-like operations, on premises
• Automated scaling and upgrades
• Expert technical support covers the entire stack
Complete Solution
• Seamlessly integrates Kubernetes monitoring, logging, and alerting
• Integrated CSI delivers persistent block and file storage
No Lock-in
• Native Kubernetes user experience with standard APIs