You have an application, you deploy servers to run it and you deploy storage to store the data. You have another application, another set of servers, another set of storage, etc.
Typically these silos are independent of each other, and separate choices are often made about servers and storage between the different silos. All sorts of defense mechanisms come up, such as approved vendor lists (SAN vendors, high-end tier two, etc.).
If you wanted to roll out a new application, step one was to roll out new hardware, and it would take months to put anything new into production.
It was hard to share resources. You had resources that were stranded and resources that weren't growing, while you were running out of resources on another application right next door.
Zones of Virtualization
Next came server virtualization. Server virtualization had a compelling value proposition, and a profound implication. The value proposition was simple: most of my servers are underutilized; if I could run multiple apps on the same server, I could reduce my server footprint, drive up my utilization, and save a lot of money. Some examples: British Telecom went from 3,000 servers down to 100-and-something blades. Citibank, when they had no money to spend, still rolled out VMware worldwide and put a petabyte of data storage behind it, because the savings to the company were compelling.
Virtualization allowed a decoupling of the application from the hardware. Now applications are mobile. Applications can move from server to server for load balancing, from data center to data center for disaster recovery, and into and out of the cloud for capacity planning, flexibility, and cost.
Virtualization enabled the ability to decouple applications from servers and build a broad, homogeneous, horizontal server infrastructure that's capable of running multiple applications simultaneously.
You no longer have to deploy hardware in order to deploy a new application. You can flow the resources to the demand. You have tremendous flexibility to move your applications around, and you can drive a degree of standardization.
Shared Storage Infrastructure
The same is true of storage. NetApp was early to recognize that this has tremendous implications for storage as well. Just as customers want to build a broad horizontal infrastructure running multiple applications on the server side, they want to do the exact same thing for storage.
If you look at the evolution of this chart, we saw the silo model giving way to the virtualization model. This broader model of having many applications running on the same infrastructure, optimized for flexibility, speed, and scale, is what we'll call shared infrastructure. There are other names: virtual data center, dynamic data center, virtual dynamic data center, internal cloud, whatever you want to call it. We'll use shared infrastructure.
Now, I think all of these models will continue to exist, but the bottom line is that the application-based silos are ultimately going to be relegated to legacy applications that nobody wants to migrate, or to the small set of key applications in the data center that people believe still warrant their own dedicated infrastructure. The vast majority of the storage and the vast majority of the applications are going to move to the shared infrastructure over time.
Our goal, and where we’re trying to go in the market, is to be the platform choice for the shared infrastructure. That is the design point on Data ONTAP 8.
This has implications. Just as virtualization had implications for the market when it came along, it drove a whole different set of purchasing criteria.