My presentation on the role of abstraction in virtualization and cloud computing. This was the keynote presentation for the 10th anniversary event for eGroup in Charleston, SC, on April 16, 2010.
Unified means tightly coupled…applications tightly coupled to the operating system, which was tightly coupled to the underlying hardware
Data was tightly coupled to applications, making it hard to get data into or out of applications
Computing power was centralized; even the terminals lacked any processing power of their own
Had benefits, but it was monolithic (couldn’t replace individual components) and inflexible (not easily repurposed for new tasks)
The rise of the PC stemmed from a need to address these shortcomings, but PCs weren’t the ultimate answer
PCs were the start down a new path
Client-server computing came along and separated the various components of the computing environment
Three-tier client-server architectures added more components, introduced more flexibility in deployment—and introduced complexity
Various forms of client-server computing emerged, including server-based computing
The client-server model was everywhere, it seemed
But what is the client-server model if not just another form of… (advance slide)
The client-server model introduced abstraction
Abstraction is defined as “considering something independently of its associations or attributes”
Abstraction is inserted between layers of an application—abstraction between the clients and the application servers, the application servers and the database servers, the database servers and the data model itself
Server-based computing added abstraction between the location of a display and the location of the processing that produced the display
Web-based architectures are just another example of the client-server model
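The layering described above can be made concrete with a short sketch. This is a generic illustration, not code from the presentation, and the class names are hypothetical: the client is written against an abstract interface, so the concrete layer behind it can be swapped without the client ever knowing.

```python
from abc import ABC, abstractmethod

# The abstraction: clients depend only on this interface,
# never on a concrete back-end layer.
class DataStore(ABC):
    @abstractmethod
    def get(self, key: str) -> str: ...

# One concrete implementation behind the abstraction.
class InMemoryStore(DataStore):
    def __init__(self):
        self._data = {"greeting": "hello"}

    def get(self, key: str) -> str:
        return self._data[key]

# A different implementation can be layered in without client changes.
class UpperCaseStore(DataStore):
    def __init__(self, inner: DataStore):
        self._inner = inner

    def get(self, key: str) -> str:
        return self._inner.get(key).upper()

def client(store: DataStore) -> str:
    # The client codes against the abstraction only.
    return store.get("greeting")
```

Swapping `InMemoryStore` for `UpperCaseStore(InMemoryStore())` changes the behavior of the system without a single change to `client`, which is exactly the decoupling the client-server model introduced between its tiers.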
The client-server model had many great benefits, but it too was not without its problems
The biggest problem? Server sprawl
Organizations needed more and more servers to handle all these new layers of the client-server model…front-end servers, back-end servers, application servers, middleware servers, database servers, terminal servers, messaging servers…you name it
For better or worse, the x86 architecture and the operating systems of the day led companies to a “one application per server” approach, which further accelerated the server sprawl problem
Vendors responded by making servers smaller—first 2U and 1U rack-optimized servers, then blade servers (8, 14, or 16 servers in a chassis)
These were just stopgap solutions, though
Applications are still tightly coupled to the OS, and the OS is still coupled to the hardware
So how do we fix this problem?
Once again, we turn to abstraction to solve the problem
This time we need to insert a layer of abstraction in a different place—between the hardware and the operating system
Inserting abstraction between the hardware and the operating system leads to virtualization
Specifically, the machine virtualization made possible initially by VMware and later by other vendors
By leveraging the three key properties of virtualization—encapsulation, isolation, and partitioning—we were able to achieve the first goal of virtualization: consolidation
Consolidation was great—many workloads collapsed onto fewer servers, reductions in power, reductions in cooling, reductions in capital expenditures
Customers were able to eliminate lots of hardware, often doing this in conjunction with a hardware refresh
Millions of dollars saved in cost avoidance or in direct savings
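The consolidation savings are easy to quantify. Here is a minimal sketch with made-up figures; the consolidation ratio, wattage, and server cost are assumptions for illustration, not numbers from the talk:

```python
def consolidation_savings(physical_servers: int,
                          consolidation_ratio: int,
                          watts_per_server: float,
                          cost_per_server: float) -> dict:
    """Estimate hosts needed and savings after consolidating
    one-app-per-server workloads onto virtualization hosts."""
    # Hosts needed, rounding up to cover any partial host.
    hosts = -(-physical_servers // consolidation_ratio)
    eliminated = physical_servers - hosts
    return {
        "hosts": hosts,
        "servers_eliminated": eliminated,
        "power_saved_watts": eliminated * watts_per_server,
        "capital_avoided": eliminated * cost_per_server,
    }

# Example: 100 physical servers consolidated 10:1,
# assuming 400 W and $5,000 per eliminated server.
result = consolidation_savings(100, 10, 400.0, 5000.0)
```

With those assumed inputs, 90 of the 100 servers disappear, which is where the power, cooling, and capital reductions come from.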
But consolidation was only the first step; we needed something more
We needed the ability to have the infrastructure respond dynamically to changing workloads
We needed the ability to have an elastic infrastructure that we could expand and contract as needed
We needed resources to be pooled and allocated on-demand to workloads
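The pooling idea can be sketched in a few lines. This is a toy model under assumed names (nothing here comes from a real hypervisor's API): capacity sits in a shared pool, workloads draw from it on demand, and releasing a workload contracts its footprint back into the pool.

```python
class ResourcePool:
    """Toy model of pooled capacity allocated on demand to workloads."""

    def __init__(self, total_ghz: float):
        self.total = total_ghz
        self.allocations: dict[str, float] = {}

    def allocate(self, workload: str, ghz: float) -> bool:
        # Grant the request only if free capacity remains in the pool.
        if sum(self.allocations.values()) + ghz > self.total:
            return False
        self.allocations[workload] = self.allocations.get(workload, 0.0) + ghz
        return True

    def release(self, workload: str) -> None:
        # Elasticity: returning a workload's share frees it for others.
        self.allocations.pop(workload, None)
```

A pool of 10 GHz that grants 6 GHz to one workload must refuse a 5 GHz request until the first workload releases its share; that expand-and-contract behavior is the elasticity described above.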
The virtualization solutions adapted to meet these needs, adding features like VM templating, rapid deployment, live migration, workload mirroring, and dynamic workload placement
But all these features still didn’t take us the whole way…they only got us part of the way on our journey
Desktop virtualization is a further extension of this strategy out of the data center
This, by the way, is where most organizations find themselves today
In order to get to the “next level” we are seeking, there are still things that we need:
We need self-service—we are still expending too many human resources to manage the data center, even highly virtualized data centers (if your admins are still provisioning VMs, you haven’t gotten there yet)
We need greater levels of automation (again, to reduce the human footprint)
We need increased visibility into the workings of the virtualized environment, which will come through improved instrumentation, greater integration with the hardware, and improved management functionality
Perhaps most importantly, organizations need new operational models to take advantage of these features, to streamline efficiency and utilization (both electronic and human)
And really, what you get when you marry these additional capabilities with virtualization is cloud computing (as defined by VMware, Cisco, EMC, and the VCE Coalition)
The industry touts cloud computing as the evolution of virtualization
Virtualization + automation + orchestration = cloud computing
There are lots of different definitions of cloud computing, and not all of them mean running your workloads on the public Internet
Cloud computing is really nothing more than leveraging virtualization to build highly fluid, very elastic, extremely automated infrastructure that delivers “IT as a Service”
Some significant challenges still remain…how do we get there? Yep, you guessed it…
Abstraction once again becomes the key to how we move forward toward our vision of cloud computing, including building the private cloud
Abstraction will allow us to move to policy-driven storage, where the location of data is determined by policies placed on the data for performance or availability, increasing storage efficiency (e.g., EMC FAST, Fully Automated Storage Tiering)
Abstraction will simplify the creation of data center interconnects (e.g., Cisco OTV, Overlay Transport Virtualization). Data center interconnects are a key component of geographic workload portability.
Application virtualization enables application portability and makes JeOS (“just enough operating system”) possible.
Abstracting data access from data location enables new ways of thinking about workload placement (EMC data federation).