Bring modern, advanced load balancing to VMware Cloud (VMC) on AWS in a matter of minutes. EasyAvi is a new VMware Fling that enables day 0 automation and quick consumption of the public cloud.
Few-Clicks Deployment
Bring the ease of use of modern load balancing to VMC with a few simple clicks
Automated Operationalization
Automate day 0 and day 1 operations in your VMC on AWS and make it application-ready
Reduced Deployment Time
Reduce the deployment time from hours to minutes
From Day 0 to Application Ready
EasyAvi Fling Tool
VMware Cloud on AWS together with VMware Hybrid Cloud Extension enables customers to accelerate their cloud migration in the simplest, fastest, and lowest-risk way, with compelling TCO.
VMware Cloud™ on AWS brings VMware’s enterprise-class Software-Defined Data Center software to the AWS Cloud, and enables customers to run production applications across VMware vSphere®-based private, public, and hybrid cloud environments, with optimized access to AWS services. Delivered, sold, and supported by VMware and its partners as an on-demand service, VMware Cloud on AWS enables IT teams to manage their cloud-based resources with familiar VMware tools, without the hassle of learning new skills or adopting new tools. VMware Cloud on AWS integrates VMware’s flagship compute, storage, and network virtualization products (VMware vSphere®, VMware vSAN™, and VMware NSX®) along with VMware vCenter® management and robust disaster protection, and optimizes them to run on dedicated, elastic, Amazon EC2 bare-metal infrastructure that is fully integrated as part of the AWS Cloud. With the same architecture and operational experience on-premises and in the cloud, IT teams can quickly derive business value from the AWS and VMware hybrid cloud experience.
VMware SDDC running on AWS bare metal
Sold, operated & supported by VMware & its partners
On-demand capacity & flexible consumption
Full operational consistency with on-premises SDDC
Seamless workload portability and hybrid operations
Global AWS footprint, reach, availability
Direct access to native AWS services
VMware Hybrid Cloud Extension (HCX) provides application migration and infrastructure hybridity without application downtime or infrastructure retrofit. The VMware HCX service offers bi-directional application landscape mobility and datacenter extension capabilities between vSphere versions. HCX includes patent-pending capabilities to support VMware vSphere® vMotion®, Bulk Migration, High Throughput Network Extension, WAN optimization, traffic engineering, automated VPN with Strong Encryption (Suite B), and secured datacenter interconnectivity with built-in vSphere protocol proxies. VMware HCX enables cloud on-boarding without retrofitting source infrastructure, supporting migration from vSphere 5.0+ to VMware Cloud on AWS without introducing application risk or requiring complex migration assessments.
Key Ideas
Cloud provider LBs are not enterprise-grade, yet they provide good automation and elastic scale. Why not both?
Customers should not have to choose between features and automation/analytics, nor accept a lack of multi-cloud consistency
Virtual appliances are not cloud-native and carry the same architectural debt as legacy LBs
Transcription
A lot of the customers that we're talking to today, the majority of them, are multi-cloud. They may be on premises, but they're doing some exploration of Kubernetes, or also looking at something like Azure and AWS. They're looking at different environments and opportunities like these, where they may be able to change things up and try to improve efficiencies.
What you find is that they start thinking: if I'm going to go into a public cloud, maybe I'll just take a look at using that public cloud's load balancer. The automation tends to be much better; in fact, the elasticity is much better too. But they tend to lack in features, and they tend to lack in performance. They also tend to require a lot of do-it-yourself work: a lot of stitching together, a lot of tools to make them work and make them functional.
If you want to go multi-cloud, if you want to have different environments like this, on premises plus in the cloud, plus maybe something like vCenter as well as Kubernetes, as well as OpenStack and OpenShift, now you're finding that you're using a different load balancer in every environment. That lack of cloud consistency becomes very, very expensive to manage. And on top of all of this, these public cloud load balancers have very, very limited visibility. You're flying blind. So customers have the choice of either the cloud load balancer or that more legacy approach that they've been used to before. And those legacy approaches, yes, they have great features, but they have very minimal automation and very minimal elasticity, so customers are effectively torn between these two choices.
The Avi Networks Vantage platform, now known as VMware NSX Advanced Load Balancer, is a modern, software-defined, elastic application delivery fabric. It is composed of a central control plane and a distributed data plane.
Avi Controllers provide a centralized policy engine which delivers full life-cycle management for applications
Avi Service Engines are the load balancers, which can be deployed anywhere, on-premises or in public clouds, natively and in a fully orchestrated fashion by the Avi Controllers.
The Avi Controller consumes application intent via REST APIs and strives to realize that intent. As an example, the Avi Controller would:
- Create Avi SEs
- Acquire an IP for the VIP through IPAM
- Register the FQDN in DNS
- Manage application certificates
- and so on ...
All the user ever needs to do is convey the intent
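As a rough sketch, conveying intent to the Controller can be pictured as composing one declarative payload and handing it over. The field names below are illustrative only (the exact schema depends on the Avi API version), and the application name, FQDN, and endpoint are hypothetical:

```python
import json

def build_virtualservice_intent(app_name, fqdn, port):
    """Build a declarative intent payload for one application.

    Field names are illustrative; consult the Avi REST API reference
    for the exact virtual-service schema of your Controller version.
    """
    return {
        "name": f"vs-{app_name}",
        "services": [{"port": port, "enable_ssl": port == 443}],
        # The Controller realizes the rest of the intent: it creates SEs,
        # allocates a VIP through IPAM, and registers the FQDN in DNS.
        "vip": [{"auto_allocate_ip": True}],   # VIP acquired via IPAM
        "dns_info": [{"fqdn": fqdn}],          # FQDN registered in DNS
        "pool_ref": f"/api/pool?name=pool-{app_name}",
    }

intent = build_virtualservice_intent("storefront", "store.example.com", 443)
print(json.dumps(intent, indent=2))
# A real deployment would POST this to the Controller, e.g.:
#   requests.post(f"https://{controller}/api/virtualservice",
#                 json=intent, headers=auth_headers)
```

The point is the shape of the interaction: the user states the desired end state once, and the Controller performs the individual steps listed above.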
Eliminates the problems of overprovisioning and overspending by scaling load balancers elastically based on real-time traffic.
Provides a self-healing fabric. If an Avi SE fails, applications are dynamically moved to other available Avi SEs, ensuring that the required capacity for applications is always available, and the Controller creates new Avi SEs to replace the failed one.
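Conceptually, the self-healing step can be sketched as below. This is not Avi's actual placement algorithm; the names, the simple app-count load metric, and the replacement naming are all illustrative:

```python
def heal(service_engines, placements, failed_se):
    """Conceptual sketch of self-healing after a Service Engine failure.

    service_engines: list of SE names currently in the fabric
    placements: dict mapping app name -> SE name hosting it
    failed_se: name of the SE that just failed
    """
    survivors = [se for se in service_engines if se != failed_se]
    # Move each app that was on the failed SE to the least-loaded survivor.
    for app, se in placements.items():
        if se == failed_se:
            loads = {s: sum(1 for p in placements.values() if p == s)
                     for s in survivors}
            placements[app] = min(survivors, key=lambda s: loads[s])
    # Create a replacement SE so the required capacity stays available.
    survivors.append(f"{failed_se}-replacement")
    return survivors, placements

ses = ["se-1", "se-2"]
apps = {"web": "se-1", "api": "se-2", "db": "se-1"}
ses, apps = heal(ses, apps, failed_se="se-1")
# web and db now run on se-2; se-1-replacement joins the fabric
```

The design point the sketch captures is that recovery has two halves: immediate re-placement onto surviving capacity, plus asynchronous replacement of the lost SE.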
Provides a single point of control and multi-cloud support. This enables a universal solution for traditional, modern, and cloud-native use cases across all environments. Applications can reside on any cloud, and Avi Controllers provide the same level of automation regardless of where the application is provisioned.
Provides rich performance monitoring and visibility into client, security, and application insights, simplifying troubleshooting and automating decisions.
Key Message:
Developer-ready infrastructure comprises many components, and bringing so many pieces together can be a challenge. However, Cloud Foundation provides everything you need in a single, comprehensive solution.
Talk Track:
While VMware Cloud Foundation makes it easy to deploy Kubernetes in your datacenter, it is by no means the only way. So why run Kubernetes on Cloud Foundation?
Providing developer-ready infrastructure requires the integration of many different components. There's the compute infrastructure where you will host the Kubernetes control plane and worker nodes. There's the network infrastructure needed to provide connectivity, load balancing, and NAT, as well as supporting ingress and egress traffic flows. There's the underlying storage infrastructure needed to provide persistent storage services to the workloads deployed by Kubernetes. Sourcing these different components in a piecemeal fashion across multiple vendors is challenging and introduces complexity.
Cloud Foundation, on the other hand, provides a complete solution for running Kubernetes right out of the box. There are no third-party add-ons and no requirement to integrate external components.
Notes:
- We use the term “supervisor cluster” to denote a vSphere cluster where the native K8s functionality provided by vSphere 7 has been enabled. Customers can deploy container workloads directly on the supervisor cluster. However, it’s important to understand that the K8s version used on the supervisor cluster is tied to the version of vSphere. A more common use case for developers is to limit the use of the “supervisor cluster” to bootstrapping Tanzu Kubernetes Grid clusters, or what are referred to as “guest clusters”. With TKG clusters, developers can deploy upstream-conformant K8s using different K8s versions. As such, we call out TKG on the slide and are not calling out the native K8s capabilities.
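As an illustration of the guest-cluster workflow described above, a developer might request a TKG cluster from the supervisor cluster with a manifest along these lines. Treat this as a sketch: the cluster and namespace names are hypothetical, and the available VM classes, storage classes, and Kubernetes versions depend on your environment and vSphere release.

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: dev-guest-cluster        # hypothetical cluster name
  namespace: dev-namespace       # a vSphere Namespace on the supervisor cluster
spec:
  topology:
    controlPlane:
      count: 1
      class: best-effort-small          # VM class; environment-specific
      storageClass: vsan-default-policy # storage policy; environment-specific
    workers:
      count: 3
      class: best-effort-small
      storageClass: vsan-default-policy
  distribution:
    version: v1.18   # chosen from the Tanzu Kubernetes releases available
```

Applying a manifest like this against the supervisor cluster is what lets developers run an upstream-conformant K8s version that is decoupled from the vSphere version.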
- The focus of this slide is to show that we provide a complete solution for VMs and containers, with the flexibility to meet developers' needs through support for TKG.