This document provides an overview of Kubernetes and how it compares to VMware technologies. It begins with an analogy that containers are to operating systems what virtual machines are to server hardware. It then discusses how Kubernetes orchestrates multiple containers across nodes as applications are split into smaller services. The remainder of the document discusses key Kubernetes concepts like Pods, ReplicaSets, Deployments, and Services. It provides a mapping of how Kubernetes concepts compare to VMware concepts like vCenter and vSphere hosts. It also discusses considerations for installing Kubernetes and operating it at scale.
As a quick overview, we need to understand how containers are disrupting the current status quo.
Virtual machines simplified operating systems by providing common virtual hardware which abstracted the complexity of the underlying infrastructure. You can think of containers as abstracting operating system complexity from the application. Meaning, I can package up not only the application, but all the dependencies for that application regardless of the operating system it runs on.
There are plenty of websites out there that show the trajectory of container adoption over time compared to VM adoption, so you can see it's coming at a rapid pace.
The driving force behind any shift like this can be traced back to the application. The container movement is happening because of new application architectures. We are hearing stories of companies who are deconstructing their monolithic applications to create small services that can be maintained and upgraded independently. This in turn allows a person or a team to own a particular service and be responsible for its communication and hooks into the rest of the application. It also allows experimental product sets or features to be implemented without affecting the core components. And a container exposing its service becomes that core construct.
I learn by using analogies. Taking something I'm already familiar with and mapping it to a new idea.
Virtual machines took the constraints of physical hardware and made the hardware ubiquitous. This allowed the operating system to run on virtualized hardware, and in turn allowed the operating system to own the dependencies of the application. These dependencies could be tied to a certain version of Ruby, Node.js, or Go that the operating system needed to have installed. Of course, this limits multiple applications from running on a single VM because of version dependency or even language dependency. Once the dependency is in place, the application can be deployed in a multitude of ways.
With containers, the abstraction layer moves to the operating system. The container host is your operating system, and the only dependency it requires is a container runtime. From there, your application and its dependencies are wrapped inside the container. The container itself shares the kernel and its properties through the container engine, so we can have multiple containers, each having its own application with a different dependency as needed, such as Go 1.4 in one container and Go 1.12 in another.
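That kernel-sharing point can be made concrete with a quick sketch. Assuming a host with Docker installed (the image tags below are official golang tags on Docker Hub), two containers with different Go toolchains can run side by side on the same kernel:

```shell
# Two containers on one host, each carrying its own Go version,
# both sharing the host kernel through the container engine.
docker run --rm golang:1.4 go version
docker run --rm golang:1.12 go version
```

Each command pulls the image if it isn't cached locally, runs `go version` inside the container, and removes the container when it exits.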
Before, when you needed to deploy an application, you needed a VM image as your base OS, then some sort of configuration management technique to configure the OS and install dependencies, and then you had to lean on other configuration management tooling to install and run the application.
Now we can take something as simple as a Dockerfile, build our application through a series of build commands, and push it to a container registry such as Docker Hub; or if you're running locally in your own datacenter, you can use an open source project like Harbor. From there, the container host will issue a docker run command that pulls from the registry and runs that application with all the dependencies it needs. This makes applications super portable, in a way that virtual machines can't match.
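As a rough sketch of that workflow (the image and registry names below are hypothetical), the Dockerfile-to-registry-to-host flow looks something like this:

```shell
# Describe the application and its dependencies in a Dockerfile.
cat > Dockerfile <<'EOF'
FROM golang:1.12 AS build
WORKDIR /src
COPY . .
RUN go build -o /app .

FROM alpine:3.9
COPY --from=build /app /app
ENTRYPOINT ["/app"]
EOF

# Build the image and push it to a registry (Docker Hub, Harbor, etc.).
docker build -t registry.example.com/demo/app:1.0 .
docker push registry.example.com/demo/app:1.0

# Any container host can now pull and run it, dependencies included.
docker run -d registry.example.com/demo/app:1.0
```

Note that the only thing the host needs is the container runtime; the Go toolchain and base OS libraries travel inside the image.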
When looking at what type of applications you can containerize, this chart is helpful for understanding the level of complexity involved. Going from left to right, we see the progression. In bucket 1, you may have software that comes from an ISV in some form of binaries. You have no access to the source code, so you have to containerize it through a series of trial and error. In bucket 2, you are in the same situation, except you are looking to the vendor to provide the best possible way of running it in a container by giving you the images to make it happen. Bucket 3 is for software that many enterprises struggle to containerize, such as .NET applications. This is getting better as time progresses, but Windows support is always tricky. Swiftly moving over to bucket 5, we have more modern types of applications that are purposely built with containers in mind.
We've talked a lot about how Docker is making this all possible. But Docker alone only gives us part of the functionality we need to successfully run at scale. If we were to take a single application that has multiple container components, it can be run, but we miss out on those higher level pieces that give us more availability and easy scaling when needed. This is why we need an orchestrator.
Google has won the battle, and that's why we are here talking about Kubernetes. Kubernetes has emerged as the de facto container orchestrator, as every major container technology company is supporting it. Kubernetes provides the ability to use the Docker container runtime but adds higher level value such as scheduling, service discovery, scaling, resource management, and much more.
Now that we have an idea of why we need Kubernetes, let's look at the architectural components and how they translate into the vSphere environment we all know and love.
Kubernetes has two main pieces, there is the control plane which is our master nodes and the data plane which is our worker nodes. We will take a look at each at a fairly high level.
The Kubernetes scheduler is policy-rich and topology-aware. It makes snap decisions that affect availability, performance, and capacity of the cluster. The scheduler takes into account resource requirements, quality of service requirements, hardware, software, and policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines, and so on.
The API server is the central communication hub. It provides REST based services for the components to talk to one another as well as user interaction when deploying applications.
The Kubernetes controller manager is a service that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state towards the desired state.
The Cloud controller manager is a daemon that embeds the cloud specific components shipped with Kubernetes such as pieces relating to AWS or vSphere.
These two make up everything needed for management of the running state.
The scheduler watches for new pods as they are requested and created.
Etcd is pretty much our database. It saves the current state of the cluster.
This is pretty similar to what happens in vCenter.
Instead of etcd, we have our database, which is some flavor of SQL. There is the scheduler that places VMs in certain places. There are all kinds of services built into vCenter, such as the web client, inventory, licensing, and more. The difference is that not everything is communicating over a singular API construct. But there is still an API available for these services as well.
The control plane of Kubernetes can scale as well. There is more complex configuration that needs to take place that isn't shown in this diagram, such as fronting all these additional master nodes with a load balancer, but etcd will replicate changes across the master nodes, giving you a highly available solution.
The data plane is where our workloads are running. We start off with a base operating system that has a container runtime installed and our two Kubernetes components. The kubelet is like a Kubernetes agent. It's responsible for issuing commands on the local node that spin up pods. Each pod can contain one or more containers, and that's how our applications are packaged. The kube-proxy is exactly that, a network proxy. It can do simple or round-robin TCP, UDP, and SCTP stream forwarding across your choice of overlay networks. The kubelet is in constant communication with the API server for resource monitoring and heartbeating. This is analogous to our vSphere model. The ESXi worker is not something we typically interact with. It runs workloads, but vCenter is its main source of communication and orchestration.
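As a minimal sketch of that packaging (the pod and image names are made up for illustration), the kind of manifest the kubelet ultimately acts on looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:            # a pod can hold one or more containers
  - name: app
    image: registry.example.com/demo/app:1.0
    ports:
    - containerPort: 8080
```

The kubelet on whichever worker node the scheduler picks is what turns this declaration into running containers.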
Interacting with Kubernetes is a bit different from vSphere. Most of us have the ingrained instinct to use the vSphere Web Client for everything we need. Then we learn other tooling like PowerCLI to automate some things through a CLI-based control mechanism. Either way, vCenter is the main touch point.
In Kubernetes, kubectl is a binary used from any computer to access the Kubernetes API server. It is what issues commands to the API server, which then kicks off any application deployments. Today, nearly all of the work is done through this command-line tool. Kubernetes does come with a GUI, but it's read-only and mostly used for resource-consumption statistics. Other GUIs are being developed, like Scope from Weaveworks, but you will have to become comfortable with the CLI for a while.
If you're interested in what the CLI can do, I've highlighted the most common commands you will issue. Apply and create are very similar; these are what you will use most often when applying a policy or deployment to the API server. When you need more instances of an application, that's where scale comes into play. Looking to update your application? Use a rolling update to replace the pods in a fashion where there won't be any hiccups in the app. Lastly, if you need to get into a container for any reason, there is the exec command, similar to docker exec if you have used that in the past.
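The commands above might look like the following in practice; this is an illustrative sketch (resource names like hello are placeholders), and in current kubectl the rolling-update flow is expressed with set image and rollout rather than a single dedicated command:

```shell
# Apply or create resources from a manifest file
kubectl apply -f deployment.yaml
kubectl create -f deployment.yaml

# Scale an application out to more instances
kubectl scale deployment hello --replicas=5

# Rolling update: swap the container image, then watch it complete
kubectl set image deployment/hello web=nginx:1.26
kubectl rollout status deployment/hello

# Get a shell inside a running container, like docker exec
kubectl exec -it hello-7d4b9c-abcde -- /bin/sh
```

All of these talk to the API server, which does the actual work on the cluster.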
So, a final note on the architecture: the container wraps your application, the pod runs the containers for your application, the worker node runs the container runtime and the Kubernetes agents, and the control plane holds your management components. All of this can run on top of vSphere as well.
Going a bit deeper, we can map more components from Kubernetes to our infrastructure. Our application developer wants to provision a deployment and issues the apply command to the K8s API server. The application has specific requirements for its resources and affinity policies, its security policy, how it is accessed from the outside world through a load balancer, how its storage is managed through persistent volumes, and what application metrics are pushed out for continual monitoring.
As a vSphere admin, these can be tied back to components that exist today. The vSphere Cloud Provider within Kubernetes will help with workload placement. NSX-T can take care of networking security profiles, and it is one of the only on-premises solutions that provides Load Balancer primitives to Kubernetes. The vSphere Cloud Provider also manages where persistent volumes are stored by orchestrating all the steps needed to create, attach, and mount a VMDK on a worker node so data is preserved after the pod's lifecycle has ended. Lastly, integrations with Wavefront and vRealize Operations can continually monitor the application and the infrastructure.
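As a sketch of that persistent-volume flow, a StorageClass can point at the in-tree vSphere provisioner; the datastore name, class name, and size below are placeholders:

```yaml
# Hypothetical StorageClass backed by the in-tree vSphere provisioner;
# it creates a VMDK on the named datastore for each claim.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-thin
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  datastore: Datastore1      # placeholder datastore name
---
# A claim against that class; the cloud provider creates, attaches,
# and mounts the VMDK on whichever worker node runs the pod.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hello-data
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: vsphere-thin
  resources:
    requests:
      storage: 5Gi
```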
Building your own solution by selecting individual pieces is exciting, but where does the fun end?
Time spent researching integration and compatibility of components
Does the management or orchestration layer know how to interoperate with all its resources?
When an update is available, are there interdependency matrices to manage?
If there is a problem, where is a line of support?
What’s my organization’s level of maturity and willingness to spend time?
Quicker ROI
Updates and maintenance are verified by the assembler
Deterministic capabilities and feature set
Support becomes common instead of custom
Common components mean tighter integrations that develop enhanced capabilities
Easy manufactured repeatability
A better overall user experience
Now that we know about the architecture, how it maps to vSphere, and the level of difficulty involved in building a Kubernetes cluster on your own, let's examine the high-level constructs of deploying your first applications.
Labels are what help us tie components together. We can label particular volumes so only certain containers can access them. We can also map them to higher levels, such as saying a load balancer needs to tie itself to the service we call Front End for multiple types of applications; in this case we have one called hello.
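As a quick illustration of how labels and selectors behave on the command line (the pod and label names here are made up for the example):

```shell
# Attach an extra label to an existing pod
kubectl label pod hello-pod tier=frontend

# Query pods by label selector; other objects (services, load
# balancers) use the same selector mechanism to find their pods
kubectl get pods -l app=hello,tier=frontend
```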
In the vSphere world we use labels as well. Probably the most notable example is storage policies: when we create a new VM, we can attach a storage policy that only allows datastores meeting that policy. In addition, there are tags and custom attributes that can be used by other applications.
A ReplicaSet makes sure multiple copies of an application are running. It's fairly simple to see how this functions, but you are never going to deploy a ReplicaSet, or even a Pod, on its own.
That's why we look to higher-level constructs such as a Deployment. The Deployment takes these lower-level constructs and orchestrates the rollout based on our needs.
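A minimal Deployment manifest, as a sketch (the name, labels, and image are placeholders), shows how it wraps the ReplicaSet and pod template:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3              # the Deployment manages a ReplicaSet
                           # that keeps 3 pods running
  selector:
    matchLabels:
      app: hello           # ties the ReplicaSet to its pods by label
  template:                # the pod template stamped out per replica
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: web
        image: nginx:1.25
```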
Then, when we need to access an application, we create Services, which expose pods based on the labels we created previously. These all end up tying back to each other in meaningful ways.
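A hedged sketch of such a Service (names are placeholders): the label selector ties it to the pods, and the LoadBalancer type is the primitive that an integration like NSX-T would fulfill on-premises:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer       # fulfilled by the platform's LB integration
  selector:
    app: hello             # exposes any pod carrying this label
  ports:
  - port: 80
    targetPort: 80
```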
But we're only scratching the surface of application deployment types. There are far too many concepts to cover here, such as namespaces, ingress, autoscaling, and DaemonSets.