OPTIMIZING THE ECONOMICS OF STORAGE:
IT’S ALL ABOUT THE BENJAMINS
By
Jon Toigo
Chairman, Data Management Institute
jtoigo@toigopartners.com
INTRODUCTION
Even casual observers of the server virtualization trend have likely heard advocates' claims of the significant cost savings that accrue from workload virtualization and the consolidation of virtual machines onto fewer, more commoditized servers. One early survey from VMware placed the total cost of ownership savings at an average of 74%.i Other studies have measured greater (and sometimes lesser) benefits from the consolidation of idle resources, increased efficiency in IT operations, faster time to implement new services, enhanced availability, and the staff-size reductions enabled by the technology.
The same story has not, however, held true for the part of the infrastructure used to store data. Unfortunately, storage has mostly been treated as an afterthought by infrastructure designers, resulting in overprovisioned and underutilized capacity, a lack of uniform management, and the inefficient allocation of storage services to the workloads that require them. This situation has led to increasing capacity demand and higher cost, with storage, depending on the analyst one consults, consuming between 33 and 70 cents of every dollar spent on IT hardware acquisition.
At the same time, storage capacity demand is spiking – especially in highly virtualized
environments. In 2011, IDC pegged capacity demand growth at around 40% per year
OPTIMIZING THE ECONOMICS OF STORAGE 2
© COPYRIGHT 2015 BY THE DATA MANAGEMENT INSTITUTE, LLC. ALL RIGHTS RESERVED.
worldwide. At on-stage events in the summer of 2013, analysts from the same firm were noting that some of their customers – those with highly virtualized environments who were adopting the early "hyper-converged infrastructure" models proffered by their preferred hypervisor vendors – were actually seeing their storage capacity demand increase year over year by as much as 300%, driven by the minimum three-node storage clustering configurations embodied in proprietary hyper-converged architectures. Gartner analysts responded by nearly doubling that estimate to take into account the additional copies of data required for archive, data mining and disaster recovery.
Bottom line: in an era of frugal budgets, storage infrastructure stands out like a nail in search of a cost-reducing hammer. This paper examines storage cost of ownership and seeks to identify ways to bend the cost curve without shortchanging applications and their data of the performance, capacity, availability, and other services they require.
BUILDING STORAGE 101
Selecting the right storage infrastructure for the application infrastructure deployed by a
company requires that several questions be considered. These may include:
• Which technology will work with the applications, hypervisors and data that we have and are producing?
• Which technology will enhance application performance?
• Which technology will provide greater data availability?
• Which technology can be deployed, configured and managed quickly and effectively using available, on-staff skills?
• Which technology will provide greater, if not optimal, storage capacity efficiency?
• Which technology will enable storage flexibility; i.e., the ability to add capacity or performance in the future without impacting the applications?
To these questions, business-savvy IT planners will also add another: which technology will fit
with available budget? Truth be told, there are ideal ways to store data that will optimize
performance, preservation, protection and management, but probably far fewer ways that are
cost-efficient or even affordable. Moreover, obtaining budgetary approval for storage
technology is often challenged by the need to educate decision-makers about the nuances of
storage technology itself. While everyone accepts that data needs to be stored and that the
volume of data is growing, some technical knowledge is required to grasp the differences
between various storage products and the benefits they can deliver over a usage interval that
has grown from three years to five (or even seven) years in many organizations.
The simplest approach may seem to be to follow the lead of a trusted vendor. Hypervisor
vendors have proffered many strategies since the early 2000s for optimizing storage I/O. These
strategies, however, have generally proven to have limited longevity. For example, VMware’s
push for industry-wide adoption of its vStorage APIs for Array Integration (VAAI), a technology
intended to improve storage performance by offloading certain storage operations to
intelligent hardware controllers on arrays, now seems to have been relegated to the dustbin of
history by the vendor as it pushes a new “hyper-converged storage” model, VSAN, that doesn’t
support VAAI at all.
Indeed, a key challenge for IT planners is to see beyond momentary trends and fads in storage
and to arrive at a strategy that will provide a durable and performant storage solution for the
firm at an acceptable cost. This requires a clear-headed assessment of the components of
storage cost and of the alternative ways to deliver storage functionality while optimizing both CAPEX and OPEX.
WHY DOES STORAGE COST SO MUCH?
On its face, storage technology, like server technology, should be subject to commodity pricing.
Year after year the cost of a disk drive has plummeted while the capacity of the drive has
expanded. The latter trend has leveled out recently, according to analysts at Horison
Information Strategies, but the fact remains that the cost per gigabyte of the disk drive has
steadily declined since the mid-1980s.
At the same time, the chassis used to create disk arrays, the hardware controllers used to
organize disks into larger sets with RAID or JBOD technologies, and even the RAID software
itself have all become less expensive. However, finished arrays have increased in price by
upwards of 100% per year despite the commoditization of their components.
Part of the explanation has been hardware vendors' practice of adding "value-add software" to array controllers each year. A recent example saw an array comprising 300 1TB disk drives in commodity shelves (hardware costs totaling about $3,000) being assigned a manufacturer's suggested retail price of $410,000 – because the vendor had added de-duplication software to the array controller.ii Value-add software is used by array makers to differentiate their products from those of competitors, and it often adds significant cost irrespective of the actual value added to or realized from the kit.
Ultimately, hardware costs and value-add software licenses, plus leasing costs and other factors, contribute to an acquisition expense that companies seek to amortize over an increasingly lengthy useful life. IT planners now routinely buy storage with an eye toward a 5- to 7-year useful life, up from about 3 years only a decade ago. Interestingly, most storage kits ship with a three-year warranty and maintenance agreement, and re-upping that agreement when it reaches end of life costs about as much as an entirely new array!
But hardware acquisition expense is only about a fifth of the estimated annual cost of ownership in storage, according to leading analysts. Cost to acquire (CAPEX) is dwarfed by the cost to own, operate and manage (OPEX).
Source: Multiple storage cost of ownership studies from Gartner, Forrester, and Clipper Group
Per the preceding illustration, Gartner and other analysts suggest that management costs are
the real driver of storage total cost of ownership. More specifically, many suggest that
heterogeneity in storage infrastructure, which increases the difficulties associated with unified
management, is a significant cost accelerator. While these arguments may hold some validity, they do not justify replacing heterogeneous storage platforms with a homogeneous set of gear. As discussed later, heterogeneous infrastructure emerges in most data centers as a function of deliberate choice – to leverage a new, best-of-breed technology or to facilitate
storage tiering. Truth be told, few, if any, hardware vendors have diversified enough product
offerings to meet the varying storage needs of different workloads and data. Those that have a
variety of storage goods typically do not offer common management across all wares, especially
when some kits have become part of the vendor’s product line as the result of technology
acquisitions.
From a simplified standpoint, the annual cost of ownership of storage can be calculated by amortizing the acquisition cost over the equipment's useful life and adding the annual expense of operating and managing it.
In truth, expense manifests itself in terms of capacity allocation efficiency (CAE) and capacity
utilization efficiency (CUE). Allocation efficiency is a measure of how efficiently storage
capacity is allocated to data. Utilization efficiency refers to the placement of the right data on
the right storage based on factors such as re-reference frequency.
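These two metrics and the simplified annual-cost calculation can be sketched in Python. This is an illustrative sketch only: the formulas and dollar figures below are assumptions for demonstration (the paper's own calculation graphic is not reproduced here), not definitions taken from the text.

```python
# Illustrative sketch of the storage cost metrics discussed above.
# Formulas and figures are assumptions for demonstration purposes.

def capacity_allocation_efficiency(allocated_gb: float, raw_gb: float) -> float:
    """CAE: fraction of raw capacity actually allocated to data."""
    return allocated_gb / raw_gb

def capacity_utilization_efficiency(well_placed_gb: float,
                                    allocated_gb: float) -> float:
    """CUE: fraction of allocated capacity holding the right data on the
    right tier (e.g., judged by re-reference frequency)."""
    return well_placed_gb / allocated_gb

def annual_cost_of_ownership(capex: float, useful_life_years: float,
                             annual_opex: float) -> float:
    """Amortized acquisition cost plus yearly operate/manage expense."""
    return capex / useful_life_years + annual_opex

# Example: a $400k array amortized over 5 years with $320k/yr OPEX puts the
# CAPEX share at about a fifth of annual cost, matching the analysts' split.
print(annual_cost_of_ownership(400_000, 5, 320_000))
```

On these example numbers the amortized CAPEX ($80k/yr) is indeed about 20% of the $400k annual total, consistent with the one-fifth figure cited above.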
Businesses tend to purchase and deploy capacity in a sub-optimal manner, buying "tier one" storage to host data from applications that do not require the kit's expensive high-performance attributes. Moreover, the movement to "flatten" storage infrastructure, represented by Hadoop and many of the "hyper-converged" storage models, is eliminating the benefits of tiered storage altogether. Tiered storage is supposed to enable the placement of data on performance-, capacity- and cost-appropriate infrastructure, reducing the overall cost of storage. The best mix for most firms is fairly well understood.
Horison Information Strategies and other analysts have identified the traditional tiers of storage technology and the optimal percentages of business data that tend to occupy each tier. Ignoring storage tiering and failing to place data on access-appropriate tiers is a huge cost-of-ownership accelerator. Using the recommended percentages and storage cost-per-GB estimates in the chart above, building a 100 TB storage complex using only Tier 1 and 2 storage (all disk) would cost approximately $765,000. The same storage complex, with data segregated using the ratios described in the preceding model, would cost approximately $482,250.iii
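The tiering arithmetic can be reproduced as a rough check in Python. The per-GB prices and tier fractions below are illustrative points picked from the ranges in endnote iii; the exact totals depend on where in each range one lands, so the output will not match the paper's figures precisely.

```python
# Compare a tiered build vs. an all-Tier-1/2 build for a 100 TB storage complex.
# Fractions and $/GB are example points taken from the ranges in endnote iii.

TOTAL_GB = 100_000  # 100 TB

# (fraction of data, assumed $/GB) per tier
tiered = [
    (0.02, 50.0),   # tier 0: flash
    (0.18, 10.0),   # tier 1: fast disk
    (0.25, 3.0),    # tier 3: capacity disk
    (0.55, 0.50),   # tier 4: low-performance, high-capacity storage
]
flat = [
    (0.50, 13.0),   # tier 1 only
    (0.50, 2.0),    # tier 2 only
]

def build_cost(mix):
    """Total acquisition cost of a capacity mix: sum of fraction * GB * $/GB."""
    return sum(frac * TOTAL_GB * price for frac, price in mix)

print(f"tiered: ${build_cost(tiered):,.0f}")
print(f"flat:   ${build_cost(flat):,.0f}")
```

Whatever points are chosen within the published ranges, the tiered build comes out substantially cheaper than the flat one, which is the paper's point.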
Failure to leverage the benefits of Hierarchical Storage Management (HSM), the basic technology for moving data around infrastructure based on access frequency, update frequency and other criteria, shows up as dramatically poor utilization efficiency in most data centers today. A study of nearly 3,000 storage environments performed in 2010 by the Data Management Institute found that an average of nearly 70% of the space on every disk drive in a firm was being wasted – either allocated, forgotten and unused, or storing never-referenced data, data with no owner in metadata, or contraband.
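The kind of policy HSM applies can be sketched minimally as a last-access rule; the thresholds and tier labels below are invented for illustration and are not drawn from any particular product.

```python
# Toy HSM placement policy: assign each stored object to a tier based on
# days since last access. Thresholds and tier names are illustrative only.
from dataclasses import dataclass

@dataclass
class StoredObject:
    name: str
    days_since_access: int

def target_tier(obj: StoredObject) -> str:
    """Map access recency to a storage tier (hypothetical thresholds)."""
    if obj.days_since_access <= 7:
        return "tier-1 (fast disk)"
    if obj.days_since_access <= 90:
        return "tier-3 (capacity disk)"
    return "tier-4 (archive)"

objs = [StoredObject("invoice.db", 1),
        StoredObject("q2-report.pdf", 30),
        StoredObject("backup-2012.tar", 900)]
for o in objs:
    print(o.name, "->", target_tier(o))
```

A real HSM engine would also weigh update frequency, ownership metadata and retention policy, as the text notes, but the core mechanism is this sort of classification followed by migration.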
The argument could be made that a significant
amount of storage capacity could be recovered
and put to better use if companies practiced better storage management, data hygiene, and archiving. That would effectively reduce storage cost of ownership by lessening the amount of new capacity a company would need to acquire and deploy year over year. Instead, most of the attention in storage technology has lately been focused on new storage topologies intended, nominally at least, to drive cost out of storage by reducing kit to its commodity parts, removing value-add software from the controllers of individual arrays, and placing it in a common storage software layer: so-called "software-defined storage" (SDS).
Software-defined Storage, in theory, enables the common management of storage by removing
barriers to service interoperability, data migration, etc. that exist in proprietary, branded
arrays. Currently, storage arrays from different manufacturers, and even different models of
storage arrays from the same manufacturer, create isolated islands of storage capacity that are
difficult to share or to manage coherently. Standards-based efforts to develop common management schemes that transcend vendor barriers, including the SNIA's SMI-S specification and REST-based management interfaces, have met with limited adoption by the industry, leading Forrester and other analysts to insist that only by deploying homogeneous storage (all equipment from one vendor) can the OPEX costs of storage be managed.
SOFTWARE-DEFINED STORAGE TO THE RESCUE?
Software-defined Storage, while it holds out some hope for surmounting the problems of hardware proprietariness, has in some cases been hijacked by hypervisor vendors, who seek to portray their hyper-converged infrastructure offerings as a kind of "open" SDS. In fact,
hypervisor-specific storage models have done little more than institute a new “isolated island of
technology” problem. Each of the leading hypervisor software vendors has its own game plan
with respect to hyper-converged and its own proprietary technology that produces an
infrastructure that can only be used to store data from that hypervisor. VSAN from VMware
will only store data from workloads virtualized with VMware. Microsoft's Clustered Storage Spaces SDS model is intended for use only with Hyper-V workloads, though the company provides a conversion utility that converts a VMware VMDK file into a Hyper-V VHD file if the customer wants to use the storage space managed by the Hyper-V SDS utility to store "alien" workloads.
Of course, the SDS model is not a standard and there are many takes on the design and
function of this software stack. Third party developers and ISVs have taken the initiative to
improve upon the concepts and architecture articulated by leading hypervisor vendors to make
SDS more robust and affordable. These improvements include:
• The implementation of SDS using "bare bones" hardware in a direct-attached configuration: no more complex switched fabric or LAN-attached storage to deal with, no more proprietary storage gear
• The abstraction of value-add functions away from the array controller into a server software layer: no more proprietary software licenses and firmware levels to be concerned about; storage services can be applied to all capacity, not just media "trapped" behind a specific hardware array controller
• Ease of storage service management via a unified user interface: no searching for third-party tools or array-specific element managers to monitor or administer storage infrastructure
These attributes, while potentially improving on the problems with legacy storage
infrastructure, do not address all of the challenges that make storage so expensive to own and
operate. While a robust software-defined storage solution may remove barriers to “federated”
management and improve the agility with which storage services (like mirroring, continuous
data protection, de-duplication, etc.) can be applied to specific infrastructure and assigned to
specific workload, most SDS solutions do nothing whatsoever to aid in the management of the
existing storage resource – or of storage capacity generally. In fact, some SDS evangelists argue
that capacity management should remain the domain of the hardware array controller –
though a compelling explanation as to why is never offered. Truth be told, by excluding
capacity management from the list of functions provided by the software-defined storage layer,
the sharing of capacity to support workload data from virtual machines running under different
hypervisor brands (or workloads running without any sort of virtualization at all) is very
problematic. This leads to a problem of storage stove-piping and increases management
complexity and cost.
The good news is that some thought leaders in the SDS space are seeking to manage not only
services, but capacity, in a more comprehensive, infrastructure-wide manner. Such an
approach is evolving in the SDS offerings of IBM with its Spectrum Virtualize offering, but it is
already available from DataCore Software.
STORAGE VIRTUALIZATION ENHANCES SDS AND REDUCES TCO
Just as server virtualization was introduced into the server world to improve the cost-metrics
and allocation/utilization efficiencies of server kit, so too storage virtualization can be leveraged
to make the most out of storage infrastructure while reducing cost of ownership. This begins
with storage acquisition costs.
In a virtualized storage environment, all storage – whether direct attached or SAN attached (so-
called legacy storage) – can be included in a storage resource pool. This eliminates the need to
“rip and replace” infrastructure in order to adopt an “isolated island” hyper-converged storage
model and the costs associated with infrastructure replacement.
A virtualized storage environment reduces CAPEX and OPEX. Storage can be divided into virtual
pools, each comprising a different set of characteristics and services. A tier one pool may be
optimized for performance, while a tier zero pool may be composed entirely of silicon (flash). Similarly, high-capacity, low-cost, low-performance disks may be fashioned into a pool intended for the lion's share of data that evidences infrequent access or update. Virtualized storage thus enables the implementation of HSM and other processes that improve capacity utilization efficiency.
As for capacity allocation efficiency, virtualized storage provides the means for managing capacity, allocating it to workloads, and scaling it over time in an agile way. Technologies such as thin provisioning, compression and de-duplication can be applied across the entire infrastructure, rather than isolated behind specific hardware controllers, helping capacity to be used more efficiently. This in turn can slow the rate at which new capacity must be added and enables less expensive, even used, equipment to be added to the pool. Centralizing this functionality for ease of administration and allocation may also reduce the costs associated with software maintenance and renewal fees, which are currently misunderstood by many IT planners, according to Amazon Web Services and others. According to AWS, an important yet under-disclosed rationale for adding cloud storage to a company's storage infrastructure is to reduce storage software maintenance and renewal costs.iv
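A back-of-envelope sketch shows how these capacity services stretch raw disk. The ratios below are hypothetical examples chosen for illustration, not product claims or figures from the text.

```python
# Illustrative effect of thin provisioning plus data reduction on raw capacity.
# With thin provisioning, only written data consumes disk, regardless of how
# much capacity has been logically provisioned to applications.

def raw_gb_needed(written_gb: float, dedup_ratio: float,
                  compression_ratio: float) -> float:
    """Raw capacity consumed after de-duplication and compression."""
    return written_gb / (dedup_ratio * compression_ratio)

allocated_gb = 100_000   # 100 TB of volumes provisioned to applications
written_gb = 40_000      # only 40 TB actually written (thin provisioning)

# Hypothetical 3:1 dedup and 1.5:1 compression → about 8,889 GB of raw disk
print(raw_gb_needed(written_gb, dedup_ratio=3.0, compression_ratio=1.5))
```

The gap between `allocated_gb` and the raw figure is exactly the over-purchasing that the paper says infrastructure-wide capacity services can defer.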
There are many approaches to virtualizing storage capacity, from establishing a hardware
controller to which all storage attaches, to virtualizing the connection or mount points where
storage connects to a server and its operating system. DataCore was an early innovator in
mount-point virtualization, so any storage that can be connected to a Microsoft Windows server can be seen, used and virtualized by DataCore's Software-Defined Storage platform.
Virtualizing capacity builds on the SDS model by placing the configuration and allocation of physical storage infrastructure as virtual storage pools under centralized control and management. Additionally, DataCore uses DRAM on devices in its virtualized infrastructure to create a unified cache across servers that can be used to buffer and accelerate applications.
Moreover, DataCore’s SDS solution virtualizes the “plumbing” – the I/O pathways and channels
between storage devices and the server – and load balances across these connections to deliver
consistently the best possible interconnect performance between server and storage. These
are all advantages of storage virtualization that can contribute to making SDS a truly effective
driver of better storage at a lower cost.
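The path load-balancing idea can be illustrated with a toy least-outstanding-I/O selector. This is a conceptual sketch of the general technique, not DataCore's actual algorithm; the path names are invented.

```python
# Toy load-balanced I/O path selection: route each request down the path with
# the fewest outstanding I/Os. (Completions are not modeled in this sketch.)
import heapq

class PathBalancer:
    def __init__(self, paths):
        # Min-heap of (outstanding_ios, path_name) tuples.
        self.heap = [(0, p) for p in paths]
        heapq.heapify(self.heap)

    def submit(self) -> str:
        """Pick the least-busy path and account for the new in-flight I/O."""
        load, path = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (load + 1, path))
        return path

lb = PathBalancer(["hba0", "hba1", "hba2"])
print([lb.submit() for _ in range(6)])
# → ['hba0', 'hba1', 'hba2', 'hba0', 'hba1', 'hba2']
```

With equal path speeds this degenerates to round-robin; the benefit of tracking outstanding I/Os appears when one interconnect is slower or congested, since new requests naturally drain toward the faster paths.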
The rest of the traditional use case for SDS involves the delivery of storage services to the
storage shares. This was the original intent of SDS, to abstract storage services away from
hardware controllers and place them in a software layer that could be used across all storage
infrastructure. With DataCore, these services are of four types: performance, availability,
management and efficiency.
Source: DataCore Software
Efficiency and performance refer to processes and tools for accomplishing the objectives
already discussed: to wrangle all physical storage, memories and interconnects into a
comprehensive resource infrastructure that can be allocated and de-allocated, scaled and
managed readily. Performance involves the movement of data between tiers, the inclusion of
specific data in clearly defined data preservation and archive schemes and the migration of data
between volumes or pools. All of these activities, which can be time-consuming and expensive
in a heterogeneous equipment setting, are simplified in a virtualized storage environment, thereby saving operator time and potentially obviating the need to purchase specialized software for data migration and similar tasks.
Availability involves the allocation of special services like mirroring and replication, failover and
failback, and Continuous Data Protection (CDP) to workload that requires them. This can be
done in a wizard-based way when a virtual volume is carved out from a pool for use as a
repository for a specific workload and its data. The ability to associate specific data protection
services with specific workload has a tendency to reduce the cost of data protection overall by
custom-fitting the right services to the right data.
In the diagram, the block marked Management comprises additional functionality designed to
facilitate the integration of the DataCore infrastructure with NAS and cloud storage, and to
enable the federated management of multiple DataCore infrastructure installations, all under
one management console. That provides all of the controls, processes and tools required to drive down OPEX (the majority of storage cost) by making a heterogeneous storage environment as simple to manage as a homogeneous one. This is "true" SDS.
CHECKLIST FOR BUYERS
From the preceding discussion, it should be clear that bending the cost curve in storage requires a clear-headed assessment of options. As a general rule, CAPEX cost reductions derive from reducing proprietary hardware kit and implementing technology that enables unified management of the myriad moving parts of a storage infrastructure. Virtualizing storage is less expensive than simply ripping and replacing one kit for another.
OPEX cost reductions usually derive from managing the allocation of capacity in a more agile
way, managing the utilization of capacity in a more automated way, and managing all
infrastructure in a unified way, supported by wizards, etc. With the right software-defined
storage solution, especially one that includes storage virtualization technology, applying the
right software services to infrastructure is a much easier process.
Here is a list of additional items that should be on your buyer’s checklist when evaluating SDS
offerings with an eye toward reducing storage TCO:
• Support for currently deployed infrastructure
• Support for planned hardware technology acquisitions
• Viable application I/O acceleration technology that works with server infrastructure for improved performance
• Ease of installation
• Ease of storage capacity allocation
• Ease of storage service allocation
• Effective capacity efficiency capabilities (e.g., compression, de-duplication)
• Mirroring/replication services without hardware constraints
• Data migration between different virtual storage pools without hardware constraints
• Common management of centralized and distributed storage components
• Failover/failback capabilities
• Automatic I/O re-routing
ENDNOTES
i Reducing Server Total Cost of Ownership with VMware Virtualization Software, VMware Inc., Palo Alto, CA, 2/13/2006, p. 3.
ii This reference is to an early model of the Data Domain de-duplicating storage appliance, circa 2010. The unit featured 300 1TB SATA drives with a RAID 6 controller and commodity drive shelves and array enclosure. Estimated hardware costs were approximately $3,000 to $4,000. The addition of de-duplication software, which the vendor suggested would provide a 70:1 reduction ratio, contributed to a $410,000 MSRP.
iii 100TB of storage deployed at a cost of $50-$100 per GB for flash (1-3% as tier 0), plus $7-$20 per GB for fast disk (12-20% as tier 1), plus $1-$8 per GB for capacity disk (20-25% as tier 3), plus $0.20-$2 per GB for low-performance, high-capacity storage (40-60% as tier 4) totals approximately $482,250. Splitting the same capacity between only tier 1 and tier 2 at the estimated cost range per GB for each type of storage yields an estimated infrastructure acquisition cost of $765,000.
iv See http://www.slideshare.net/AmazonWebServices/optimizing-total-cost-of-ownership-for-the-aws-cloud-36852296
Stephen Kennett presentation
 
Vendor Landscape Small to Midrange Storage Arrays
Vendor Landscape Small to Midrange Storage ArraysVendor Landscape Small to Midrange Storage Arrays
Vendor Landscape Small to Midrange Storage Arrays
 
G11.2014 magic quadrant for general-purpose disk
G11.2014   magic quadrant for general-purpose diskG11.2014   magic quadrant for general-purpose disk
G11.2014 magic quadrant for general-purpose disk
 
Step 2: Back Up Less Datasheet
Step 2: Back Up Less DatasheetStep 2: Back Up Less Datasheet
Step 2: Back Up Less Datasheet
 
AWS #3 Storage Vendor in 2018, #1 in 2020
AWS #3 Storage Vendor in 2018, #1 in 2020AWS #3 Storage Vendor in 2018, #1 in 2020
AWS #3 Storage Vendor in 2018, #1 in 2020
 
Effectively Managing Your Historical Data
Effectively Managing Your Historical DataEffectively Managing Your Historical Data
Effectively Managing Your Historical Data
 
EMC InfoArchive Overview: Offered by Sigma
EMC InfoArchive Overview: Offered by SigmaEMC InfoArchive Overview: Offered by Sigma
EMC InfoArchive Overview: Offered by Sigma
 
Epic Migration to Software Defined Storage
Epic Migration to Software Defined StorageEpic Migration to Software Defined Storage
Epic Migration to Software Defined Storage
 

Similar to Optimizing The Economics of Storage: It's All About the Benjamins

Insiders Guide- Full Business Value of Storage Assets
Insiders Guide- Full Business Value of Storage AssetsInsiders Guide- Full Business Value of Storage Assets
Insiders Guide- Full Business Value of Storage AssetsDataCore Software
 
Hitachi white-paper-storage-virtualization
Hitachi white-paper-storage-virtualizationHitachi white-paper-storage-virtualization
Hitachi white-paper-storage-virtualizationHitachi Vantara
 
Josh Krischer - How to get more for less (4 november 2010 Storage Expo)
Josh Krischer - How to get more for less (4 november 2010 Storage Expo)Josh Krischer - How to get more for less (4 november 2010 Storage Expo)
Josh Krischer - How to get more for less (4 november 2010 Storage Expo)VNU Exhibitions Europe
 
How Savvy Firms Choose the best Hyperconverged Infrastructure for their Business
How Savvy Firms Choose the best Hyperconverged Infrastructure for their BusinessHow Savvy Firms Choose the best Hyperconverged Infrastructure for their Business
How Savvy Firms Choose the best Hyperconverged Infrastructure for their Business
DataCore Software
 
Netmagic the-storage-matrix
Netmagic the-storage-matrixNetmagic the-storage-matrix
Netmagic the-storage-matrix
Netmagic Solutions Pvt. Ltd.
 
The storage matrix netmagic
The storage matrix   netmagicThe storage matrix   netmagic
The storage matrix netmagic
Netmagic Solutions Pvt. Ltd.
 
Storage Resource Optimization Delivers “Best Fit” Resources for Your Applicat...
Storage Resource Optimization Delivers “Best Fit” Resources for Your Applicat...Storage Resource Optimization Delivers “Best Fit” Resources for Your Applicat...
Storage Resource Optimization Delivers “Best Fit” Resources for Your Applicat...
Capgemini
 
Scale-Out Architectures for Secondary Storage
Scale-Out Architectures for Secondary StorageScale-Out Architectures for Secondary Storage
Scale-Out Architectures for Secondary Storage
InteractiveNEC
 
Hitachi white-paper-future-proof-your-datacenter-with-the-right-nas-platform
Hitachi white-paper-future-proof-your-datacenter-with-the-right-nas-platformHitachi white-paper-future-proof-your-datacenter-with-the-right-nas-platform
Hitachi white-paper-future-proof-your-datacenter-with-the-right-nas-platformHitachi Vantara
 
Insiders Guide- Managing Storage Performance
Insiders Guide- Managing Storage PerformanceInsiders Guide- Managing Storage Performance
Insiders Guide- Managing Storage PerformanceDataCore Software
 
How to Sell Storage Virtualization to The CIO
How to Sell Storage Virtualization to The CIOHow to Sell Storage Virtualization to The CIO
How to Sell Storage Virtualization to The CIO
DataCore Software
 
Storage Virtualization isn’t About Storage
Storage Virtualization isn’t About StorageStorage Virtualization isn’t About Storage
Storage Virtualization isn’t About Storage
IBM India Smarter Computing
 
Product Brief Storage Virtualization isn’t About Storage
Product Brief Storage Virtualization isn’t About StorageProduct Brief Storage Virtualization isn’t About Storage
Product Brief Storage Virtualization isn’t About Storage
IBM India Smarter Computing
 
Enterprise Storage Solutions for Overcoming Big Data and Analytics Challenges
Enterprise Storage Solutions for Overcoming Big Data and Analytics ChallengesEnterprise Storage Solutions for Overcoming Big Data and Analytics Challenges
Enterprise Storage Solutions for Overcoming Big Data and Analytics Challenges
INFINIDAT
 
Enterprise Mass Storage TCO Case Study
Enterprise Mass Storage TCO Case StudyEnterprise Mass Storage TCO Case Study
Enterprise Mass Storage TCO Case Study
IT Brand Pulse
 
Backing Up Mountains of Data to Disk
Backing Up Mountains of Data to DiskBacking Up Mountains of Data to Disk
Backing Up Mountains of Data to Disk
IT Brand Pulse
 
critical_capabilities_for_ob_271719 copy
critical_capabilities_for_ob_271719 copycritical_capabilities_for_ob_271719 copy
critical_capabilities_for_ob_271719 copyChris Woeppel
 
Tape and cloud strategies for VM backups
Tape and cloud strategies for VM backupsTape and cloud strategies for VM backups
Tape and cloud strategies for VM backups
Veeam Software
 
A New Era in Midrange Storage IDC Analyst paper
A New Era in Midrange Storage IDC Analyst paperA New Era in Midrange Storage IDC Analyst paper
A New Era in Midrange Storage IDC Analyst paper
IBM India Smarter Computing
 
50 Shades of Grey in Software-Defined Storage
50 Shades of Grey in Software-Defined Storage50 Shades of Grey in Software-Defined Storage
50 Shades of Grey in Software-Defined Storage
StorMagic
 

Similar to Optimizing The Economics of Storage: It's All About the Benjamins (20)

Insiders Guide- Full Business Value of Storage Assets
Insiders Guide- Full Business Value of Storage AssetsInsiders Guide- Full Business Value of Storage Assets
Insiders Guide- Full Business Value of Storage Assets
 
Hitachi white-paper-storage-virtualization
Hitachi white-paper-storage-virtualizationHitachi white-paper-storage-virtualization
Hitachi white-paper-storage-virtualization
 
Josh Krischer - How to get more for less (4 november 2010 Storage Expo)
Josh Krischer - How to get more for less (4 november 2010 Storage Expo)Josh Krischer - How to get more for less (4 november 2010 Storage Expo)
Josh Krischer - How to get more for less (4 november 2010 Storage Expo)
 
How Savvy Firms Choose the best Hyperconverged Infrastructure for their Business
How Savvy Firms Choose the best Hyperconverged Infrastructure for their BusinessHow Savvy Firms Choose the best Hyperconverged Infrastructure for their Business
How Savvy Firms Choose the best Hyperconverged Infrastructure for their Business
 
Netmagic the-storage-matrix
Netmagic the-storage-matrixNetmagic the-storage-matrix
Netmagic the-storage-matrix
 
The storage matrix netmagic
The storage matrix   netmagicThe storage matrix   netmagic
The storage matrix netmagic
 
Storage Resource Optimization Delivers “Best Fit” Resources for Your Applicat...
Storage Resource Optimization Delivers “Best Fit” Resources for Your Applicat...Storage Resource Optimization Delivers “Best Fit” Resources for Your Applicat...
Storage Resource Optimization Delivers “Best Fit” Resources for Your Applicat...
 
Scale-Out Architectures for Secondary Storage
Scale-Out Architectures for Secondary StorageScale-Out Architectures for Secondary Storage
Scale-Out Architectures for Secondary Storage
 
Hitachi white-paper-future-proof-your-datacenter-with-the-right-nas-platform
Hitachi white-paper-future-proof-your-datacenter-with-the-right-nas-platformHitachi white-paper-future-proof-your-datacenter-with-the-right-nas-platform
Hitachi white-paper-future-proof-your-datacenter-with-the-right-nas-platform
 
Insiders Guide- Managing Storage Performance
Insiders Guide- Managing Storage PerformanceInsiders Guide- Managing Storage Performance
Insiders Guide- Managing Storage Performance
 
How to Sell Storage Virtualization to The CIO
How to Sell Storage Virtualization to The CIOHow to Sell Storage Virtualization to The CIO
How to Sell Storage Virtualization to The CIO
 
Storage Virtualization isn’t About Storage
Storage Virtualization isn’t About StorageStorage Virtualization isn’t About Storage
Storage Virtualization isn’t About Storage
 
Product Brief Storage Virtualization isn’t About Storage
Product Brief Storage Virtualization isn’t About StorageProduct Brief Storage Virtualization isn’t About Storage
Product Brief Storage Virtualization isn’t About Storage
 
Enterprise Storage Solutions for Overcoming Big Data and Analytics Challenges
Enterprise Storage Solutions for Overcoming Big Data and Analytics ChallengesEnterprise Storage Solutions for Overcoming Big Data and Analytics Challenges
Enterprise Storage Solutions for Overcoming Big Data and Analytics Challenges
 
Enterprise Mass Storage TCO Case Study
Enterprise Mass Storage TCO Case StudyEnterprise Mass Storage TCO Case Study
Enterprise Mass Storage TCO Case Study
 
Backing Up Mountains of Data to Disk
Backing Up Mountains of Data to DiskBacking Up Mountains of Data to Disk
Backing Up Mountains of Data to Disk
 
critical_capabilities_for_ob_271719 copy
critical_capabilities_for_ob_271719 copycritical_capabilities_for_ob_271719 copy
critical_capabilities_for_ob_271719 copy
 
Tape and cloud strategies for VM backups
Tape and cloud strategies for VM backupsTape and cloud strategies for VM backups
Tape and cloud strategies for VM backups
 
A New Era in Midrange Storage IDC Analyst paper
A New Era in Midrange Storage IDC Analyst paperA New Era in Midrange Storage IDC Analyst paper
A New Era in Midrange Storage IDC Analyst paper
 
50 Shades of Grey in Software-Defined Storage
50 Shades of Grey in Software-Defined Storage50 Shades of Grey in Software-Defined Storage
50 Shades of Grey in Software-Defined Storage
 

More from DataCore Software

Software-Defined Storage Accelerates Storage Cost Reduction and Service-Level...
Software-Defined Storage Accelerates Storage Cost Reduction and Service-Level...Software-Defined Storage Accelerates Storage Cost Reduction and Service-Level...
Software-Defined Storage Accelerates Storage Cost Reduction and Service-Level...
DataCore Software
 
NVMe and Flash – Make Your Storage Great Again!
NVMe and Flash – Make Your Storage Great Again!NVMe and Flash – Make Your Storage Great Again!
NVMe and Flash – Make Your Storage Great Again!
DataCore Software
 
Zero Downtime, Zero Touch Stretch Clusters from Software-Defined Storage
Zero Downtime, Zero Touch Stretch Clusters from Software-Defined StorageZero Downtime, Zero Touch Stretch Clusters from Software-Defined Storage
Zero Downtime, Zero Touch Stretch Clusters from Software-Defined Storage
DataCore Software
 
From Disaster to Recovery: Preparing Your IT for the Unexpected
From Disaster to Recovery: Preparing Your IT for the UnexpectedFrom Disaster to Recovery: Preparing Your IT for the Unexpected
From Disaster to Recovery: Preparing Your IT for the Unexpected
DataCore Software
 
How to Integrate Hyperconverged Systems with Existing SANs
How to Integrate Hyperconverged Systems with Existing SANsHow to Integrate Hyperconverged Systems with Existing SANs
How to Integrate Hyperconverged Systems with Existing SANs
DataCore Software
 
How to Avoid Disasters via Software-Defined Storage Replication & Site Recovery
How to Avoid Disasters via Software-Defined Storage Replication & Site RecoveryHow to Avoid Disasters via Software-Defined Storage Replication & Site Recovery
How to Avoid Disasters via Software-Defined Storage Replication & Site Recovery
DataCore Software
 
Cloud Infrastructure for Your Data Center
Cloud Infrastructure for Your Data CenterCloud Infrastructure for Your Data Center
Cloud Infrastructure for Your Data Center
DataCore Software
 
Building a Highly Available Data Infrastructure
Building a Highly Available Data InfrastructureBuilding a Highly Available Data Infrastructure
Building a Highly Available Data Infrastructure
DataCore Software
 
TUI Case Study
TUI Case StudyTUI Case Study
TUI Case Study
DataCore Software
 
Thorntons Case Study
Thorntons Case StudyThorntons Case Study
Thorntons Case Study
DataCore Software
 
Top 3 Challenges Impacting Your Data and How to Solve Them
Top 3 Challenges Impacting Your Data and How to Solve ThemTop 3 Challenges Impacting Your Data and How to Solve Them
Top 3 Challenges Impacting Your Data and How to Solve Them
DataCore Software
 
Business Continuity for Mission Critical Applications
Business Continuity for Mission Critical ApplicationsBusiness Continuity for Mission Critical Applications
Business Continuity for Mission Critical Applications
DataCore Software
 
Dynamic Hyper-Converged Future Proof Your Data Center
Dynamic Hyper-Converged Future Proof Your Data CenterDynamic Hyper-Converged Future Proof Your Data Center
Dynamic Hyper-Converged Future Proof Your Data Center
DataCore Software
 
Community Health Network Delivers Unprecedented Availability for Critical Hea...
Community Health Network Delivers Unprecedented Availability for Critical Hea...Community Health Network Delivers Unprecedented Availability for Critical Hea...
Community Health Network Delivers Unprecedented Availability for Critical Hea...
DataCore Software
 
Case Study: Mission Community Hospital
Case Study: Mission Community HospitalCase Study: Mission Community Hospital
Case Study: Mission Community Hospital
DataCore Software
 
Emergency Communication of Southern Oregon
Emergency Communication of Southern OregonEmergency Communication of Southern Oregon
Emergency Communication of Southern Oregon
DataCore Software
 
DataCore At VMworld 2016
DataCore At VMworld 2016DataCore At VMworld 2016
DataCore At VMworld 2016
DataCore Software
 
Integrating Hyper-converged Systems with Existing SANs
Integrating Hyper-converged Systems with Existing SANs Integrating Hyper-converged Systems with Existing SANs
Integrating Hyper-converged Systems with Existing SANs
DataCore Software
 
Fighting the Hidden Costs of Data Storage
Fighting the Hidden Costs of Data StorageFighting the Hidden Costs of Data Storage
Fighting the Hidden Costs of Data Storage
DataCore Software
 
Can $0.08 Change your View of Storage?
Can $0.08 Change your View of Storage?Can $0.08 Change your View of Storage?
Can $0.08 Change your View of Storage?
DataCore Software
 

More from DataCore Software (20)

Software-Defined Storage Accelerates Storage Cost Reduction and Service-Level...
Software-Defined Storage Accelerates Storage Cost Reduction and Service-Level...Software-Defined Storage Accelerates Storage Cost Reduction and Service-Level...
Software-Defined Storage Accelerates Storage Cost Reduction and Service-Level...
 
NVMe and Flash – Make Your Storage Great Again!
NVMe and Flash – Make Your Storage Great Again!NVMe and Flash – Make Your Storage Great Again!
NVMe and Flash – Make Your Storage Great Again!
 
Zero Downtime, Zero Touch Stretch Clusters from Software-Defined Storage
Zero Downtime, Zero Touch Stretch Clusters from Software-Defined StorageZero Downtime, Zero Touch Stretch Clusters from Software-Defined Storage
Zero Downtime, Zero Touch Stretch Clusters from Software-Defined Storage
 
From Disaster to Recovery: Preparing Your IT for the Unexpected
From Disaster to Recovery: Preparing Your IT for the UnexpectedFrom Disaster to Recovery: Preparing Your IT for the Unexpected
From Disaster to Recovery: Preparing Your IT for the Unexpected
 
How to Integrate Hyperconverged Systems with Existing SANs
How to Integrate Hyperconverged Systems with Existing SANsHow to Integrate Hyperconverged Systems with Existing SANs
How to Integrate Hyperconverged Systems with Existing SANs
 
How to Avoid Disasters via Software-Defined Storage Replication & Site Recovery
How to Avoid Disasters via Software-Defined Storage Replication & Site RecoveryHow to Avoid Disasters via Software-Defined Storage Replication & Site Recovery
How to Avoid Disasters via Software-Defined Storage Replication & Site Recovery
 
Cloud Infrastructure for Your Data Center
Cloud Infrastructure for Your Data CenterCloud Infrastructure for Your Data Center
Cloud Infrastructure for Your Data Center
 
Building a Highly Available Data Infrastructure
Building a Highly Available Data InfrastructureBuilding a Highly Available Data Infrastructure
Building a Highly Available Data Infrastructure
 
TUI Case Study
TUI Case StudyTUI Case Study
TUI Case Study
 
Thorntons Case Study
Thorntons Case StudyThorntons Case Study
Thorntons Case Study
 
Top 3 Challenges Impacting Your Data and How to Solve Them
Top 3 Challenges Impacting Your Data and How to Solve ThemTop 3 Challenges Impacting Your Data and How to Solve Them
Top 3 Challenges Impacting Your Data and How to Solve Them
 
Business Continuity for Mission Critical Applications
Business Continuity for Mission Critical ApplicationsBusiness Continuity for Mission Critical Applications
Business Continuity for Mission Critical Applications
 
Dynamic Hyper-Converged Future Proof Your Data Center
Dynamic Hyper-Converged Future Proof Your Data CenterDynamic Hyper-Converged Future Proof Your Data Center
Dynamic Hyper-Converged Future Proof Your Data Center
 
Community Health Network Delivers Unprecedented Availability for Critical Hea...
Community Health Network Delivers Unprecedented Availability for Critical Hea...Community Health Network Delivers Unprecedented Availability for Critical Hea...
Community Health Network Delivers Unprecedented Availability for Critical Hea...
 
Case Study: Mission Community Hospital
Case Study: Mission Community HospitalCase Study: Mission Community Hospital
Case Study: Mission Community Hospital
 
Emergency Communication of Southern Oregon
Emergency Communication of Southern OregonEmergency Communication of Southern Oregon
Emergency Communication of Southern Oregon
 
DataCore At VMworld 2016
DataCore At VMworld 2016DataCore At VMworld 2016
DataCore At VMworld 2016
 
Integrating Hyper-converged Systems with Existing SANs
Integrating Hyper-converged Systems with Existing SANs Integrating Hyper-converged Systems with Existing SANs
Integrating Hyper-converged Systems with Existing SANs
 
Fighting the Hidden Costs of Data Storage
Fighting the Hidden Costs of Data StorageFighting the Hidden Costs of Data Storage
Fighting the Hidden Costs of Data Storage
 
Can $0.08 Change your View of Storage?
Can $0.08 Change your View of Storage?Can $0.08 Change your View of Storage?
Can $0.08 Change your View of Storage?
 

Recently uploaded

Assure Contact Center Experiences for Your Customers With ThousandEyes
Assure Contact Center Experiences for Your Customers With ThousandEyesAssure Contact Center Experiences for Your Customers With ThousandEyes
Assure Contact Center Experiences for Your Customers With ThousandEyes
ThousandEyes
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
Safe Software
 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
Jemma Hussein Allen
 
Elizabeth Buie - Older adults: Are we really designing for our future selves?
Elizabeth Buie - Older adults: Are we really designing for our future selves?Elizabeth Buie - Older adults: Are we really designing for our future selves?
Elizabeth Buie - Older adults: Are we really designing for our future selves?
Nexer Digital
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Product School
 
PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)
Ralf Eggert
 
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
UiPathCommunity
 
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdfFIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
mikeeftimakis1
 
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
Product School
 
Key Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdfKey Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdf
Cheryl Hung
 
When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...
Elena Simperl
 
Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !
KatiaHIMEUR1
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
Kari Kakkonen
 
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfSAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf
Peter Spielvogel
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
Laura Byrne
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
Alison B. Lowndes
 
Elevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object CalisthenicsElevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object Calisthenics
Dorra BARTAGUIZ
 
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfObservability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Paige Cruz
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
ControlCase
 

Recently uploaded (20)

Assure Contact Center Experiences for Your Customers With ThousandEyes
Assure Contact Center Experiences for Your Customers With ThousandEyesAssure Contact Center Experiences for Your Customers With ThousandEyes
Assure Contact Center Experiences for Your Customers With ThousandEyes
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
 
Elizabeth Buie - Older adults: Are we really designing for our future selves?
Elizabeth Buie - Older adults: Are we really designing for our future selves?Elizabeth Buie - Older adults: Are we really designing for our future selves?
Elizabeth Buie - Older adults: Are we really designing for our future selves?
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
 
PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)
 
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
 
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdfFIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
 
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
 
Key Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdfKey Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdf
 
When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...
 
Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
 
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfSAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
 
Elevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object CalisthenicsElevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object Calisthenics
 
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfObservability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
 

Optimizing The Economics of Storage: It's All About the Benjamins

  • 1. OPTIMIZING THE ECONOMICS OF STORAGE: IT’S ALL ABOUT THE BENJAMINS By Jon Toigo Chairman, Data Management Institute jtoigo@toigopartners.com INTRODUCTION Even casual observers of the server virtualization trend have likely heard claims by advocates regarding significant cost savings that accrue to workload virtualization and virtual machine consolidation onto fewer, more commoditized gear. One early survey from VMware placed the total cost of ownership savings at an average of 74%.i Other studies have measured greater (and sometimes lesser) benefits accrued to the consolidation of idle resources, the increased efficiencies in IT operations, improvement in time to implement new services, enhanced availability, and staff size reductions enabled by the technology. The same story has not, however, held true for the part of the infrastructure used to store data. Unfortunately, storage has mostly been treated as an afterthought by infrastructure designers, resulting in the overprovisioning and underutilization of storage capacity and a lack of uniform management or inefficient allocation of storage services to the workload that requires them. This situation has led to increasing capacity demand and higher cost with storage, depending on the analyst one consults, consuming between .33 and .70 cents of every dollar spent on IT hardware acquisition. At the same time, storage capacity demand is spiking – especially in highly virtualized environments. In 2011, IDC pegged capacity demand growth at around 40% per year
much as 300%, driven by the minimum three-node storage clustering configurations embodied in proprietary hyper-converged architectures. Gartner analysts responded by nearly doubling that estimate to take into account the additional copies of data required for archive, data mining and disaster recovery.

Bottom line: in an era of frugal budgets, storage infrastructure stands out like a nail in search of a cost-reducing hammer. This paper examines storage cost of ownership and seeks to identify ways to bend the cost curve without shortchanging applications and their data of the performance, capacity, availability, and other services they require.

BUILDING STORAGE 101

Selecting the right storage infrastructure for the application infrastructure deployed by a company requires that several questions be considered. These may include:

• Which technology will work with the applications, hypervisors and data that we have and are producing?
• Which technology will enhance application performance?
• Which technology will provide greater data availability?
• Which technology can be deployed, configured and managed quickly and effectively using available, on-staff skills?
• Which technology will provide greater, if not optimal, storage capacity efficiency?
• Which technology will enable storage flexibility; i.e., the ability to add capacity or performance in the future without impacting the applications?
To these questions, business-savvy IT planners will also add another: which technology will fit within the available budget?

Truth be told, there are ideal ways to store data that will optimize performance, preservation, protection and management, but probably far fewer ways that are cost-efficient or even affordable. Moreover, obtaining budgetary approval for storage technology is often challenged by the need to educate decision-makers about the nuances of storage technology itself. While everyone accepts that data needs to be stored and that the volume of data is growing, some technical knowledge is required to grasp the differences between various storage products and the benefits they can deliver over a usage interval that has grown from three years to five (or even seven) years in many organizations.

The simplest approach may seem to be to follow the lead of a trusted vendor. Hypervisor vendors have proffered many strategies since the early 2000s for optimizing storage I/O. These strategies, however, have generally proven to have limited longevity. For example, VMware's
push for industry-wide adoption of its vStorage APIs for Array Integration (VAAI), a technology intended to improve storage performance by offloading certain storage operations to intelligent hardware controllers on arrays, now seems to have been relegated to the dustbin of history by the vendor as it pushes a new "hyper-converged storage" model, VSAN, that doesn't support VAAI at all.

Indeed, a key challenge for IT planners is to see beyond momentary trends and fads in storage and to arrive at a strategy that will provide a durable and performant storage solution for the firm at an acceptable cost. This requires a clear-headed assessment of the components of storage cost and of the alternative ways to deliver storage functionality that optimize both CAPEX and OPEX.

WHY DOES STORAGE COST SO MUCH?

On its face, storage technology, like server technology, should be subject to commodity pricing. Year after year, the cost of a disk drive has plummeted while the capacity of the drive has expanded. The latter trend has leveled out recently, according to analysts at Horison Information Strategies, but the fact remains that the cost per gigabyte of disk has steadily declined since the mid-1980s.
At the same time, the chassis used to create disk arrays, the hardware controllers used to organize disks into larger sets with RAID or JBOD technologies, and even the RAID software itself have all become less expensive. However, finished arrays have increased in price by upwards of 100% per year despite the commoditization of their components.

Part of the explanation has been the addition by hardware vendors of "value-add software" to array controllers each year. A recent example saw an array comprising 300 1TB disk drives in commodity shelves (hardware costs totaling about $3,000) being assigned a manufacturer's suggested retail price of $410,000 – because the vendor had added de-duplication software to the array controller.ii Value-add software is used by array makers to differentiate their products from those of competitors, and it often adds significant cost to products irrespective of the actual value added to or realized from the kit.

Ultimately, hardware costs and value-add software licenses, plus leasing costs and other factors, contribute to acquisition expense, which companies seek to amortize over an increasingly lengthy useful life. IT planners now routinely buy storage with an eye toward a five-to-seven-year useful life, up from about three years only a decade ago. Interestingly, most storage kits ship with a three-year warranty and maintenance agreement, and re-upping that agreement when it reaches end of life costs about as much as an entirely new array!
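The pricing distortion described above can be made concrete with a little arithmetic. The sketch below uses the figures from endnote ii (300 TB raw, a $410,000 MSRP, and a vendor-claimed 70:1 de-duplication ratio) to contrast raw and "effective" cost per gigabyte; it is a back-of-envelope illustration, not vendor pricing guidance.

```python
# Cost-per-GB of the de-duplicating array described above, using the
# figures from endnote ii. Illustrative arithmetic only.
RAW_GB = 300 * 1000       # 300 x 1TB drives, roughly 300,000 GB raw
MSRP = 410_000            # manufacturer's suggested retail price
DEDUPE_RATIO = 70         # vendor-claimed 70:1 reduction ratio

raw_cost_per_gb = MSRP / RAW_GB
effective_cost_per_gb = MSRP / (RAW_GB * DEDUPE_RATIO)

print(f"raw: ${raw_cost_per_gb:.2f}/GB")              # ~$1.37/GB
print(f"effective: ${effective_cost_per_gb:.4f}/GB")  # ~$0.0195/GB, if 70:1 holds
```

The point of the comparison: the vendor's price only looks reasonable per effective gigabyte if the claimed reduction ratio is actually achieved against the customer's data.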
But hardware acquisition expense is only about a fifth of the estimated annual cost of ownership of storage, according to leading analysts. The cost to acquire (CAPEX) is dwarfed by the cost to own, operate and manage (OPEX).

(Source: Multiple storage cost of ownership studies from Gartner, Forrester, and Clipper Group)

Per the preceding illustration, Gartner and other analysts suggest that management costs are the real driver of storage total cost of ownership. More specifically, many suggest that heterogeneity in storage infrastructure, which increases the difficulties associated with unified management, is a significant cost accelerator. While these arguments may hold some validity, they do not justify the replacement of heterogeneous storage platforms with a homogeneous set of gear. As discussed later, heterogeneous infrastructure emerges in most data centers as a function of deliberate choice – to leverage a new, best-of-breed technology or to facilitate
storage tiering.

Truth be told, few, if any, hardware vendors have diversified enough product offerings to meet the varying storage needs of different workloads and data. Those that do have a variety of storage goods typically do not offer common management across all wares, especially when some kits have become part of the vendor's product line as the result of technology acquisitions.

From a simplified standpoint, the annual cost of ownership of storage can be calculated as acquisition cost amortized over useful life, plus annual operating and management expense. In truth, expense manifests itself in terms of capacity allocation efficiency (CAE) and capacity utilization efficiency (CUE). Allocation efficiency is a measure of how efficiently storage capacity is allocated to data. Utilization efficiency refers to the placement of the right data on the right storage based on factors such as re-reference frequency.

Businesses tend to purchase and deploy capacity in a sub-optimal manner, purchasing "tier one" storage to host data from applications that do not require the expensive high-performance attributes of the kit. Moreover, the movement to "flatten" storage infrastructure represented by Hadoop and many of the "hyper-converged" storage models is eliminating the benefits of tiered storage altogether. Tiered storage is supposed to enable the placement of data on performance-, capacity- and cost-appropriate infrastructure, reducing the overall cost of storage. The best mix for most firms is fairly well understood.
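The simplified annual cost calculation mentioned above can be sketched as a one-line function. The dollar figures in the example are hypothetical, chosen only to reflect the analyst breakdowns cited earlier, in which OPEX dwarfs amortized CAPEX.

```python
def annual_storage_tco(capex: float, useful_life_years: int, annual_opex: float) -> float:
    """Simplified annual cost of ownership: acquisition cost amortized
    over the useful life, plus annual operating/management expense."""
    return capex / useful_life_years + annual_opex

# Illustrative numbers only: a $500K array amortized over 5 years, with
# OPEX at roughly four times the amortized CAPEX.
print(annual_storage_tco(500_000, 5, 400_000))  # 500000.0 per year
```

Note how lengthening the amortization period (five to seven years instead of three) shrinks only the first term; the OPEX term, which analysts identify as the bulk of the cost, is untouched.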
Horison Information Strategies and other analysts have identified the traditional tiers of storage technology and the optimal percentages of business data that tend to occupy each tier. Ignoring storage tiering and failing to place data on access-appropriate tiers is a huge cost of ownership accelerator. Using the recommended percentages and storage cost-per-GB estimates in the chart above, building a 100 TB storage complex using only Tier 1 and Tier 2 storage (all disk) would cost approximately $765,000. The same storage complex, with data segregated using the ratios described in the preceding model, would cost approximately $482,250.iii

Failure to leverage the benefits of Hierarchical Storage Management (HSM), which is the basic technology for moving data around infrastructure based on access frequency, update frequency and other criteria, shows up as dramatically poor utilization efficiency in most data centers today. A study of nearly 3,000 storage environments performed in 2010 by the Data Management Institute found that an average of nearly 70% of the space on every disk drive in a firm was being wasted – either allocated, forgotten and unused, or storing never-referenced data, data with no owner in metadata, or contraband.

The argument could be made that a significant amount of storage capacity could be recovered
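The tiered-versus-flat comparison above can be sketched in a few lines. The per-tier prices below are midpoints of the ranges in endnote iii, and the data mix is an illustrative assumption (the exact mix behind the paper's $482,250 figure is not recoverable from the published ranges), so the totals differ from the paper's; the shape of the result – tiering cutting the bill by more than half versus an all-fast-disk build – is the point.

```python
TOTAL_GB = 100 * 1000  # a 100 TB storage complex

# Assumed midpoint prices ($/GB) and an illustrative data mix per tier,
# loosely based on the ranges in endnote iii.
price_per_gb = {"tier0_flash": 75.0, "tier1_fast": 13.5,
                "tier3_capacity": 4.5, "tier4_archive": 1.0}
mix = {"tier0_flash": 0.02, "tier1_fast": 0.18,
       "tier3_capacity": 0.25, "tier4_archive": 0.55}

tiered_cost = sum(TOTAL_GB * mix[t] * price_per_gb[t] for t in mix)
flat_cost = TOTAL_GB * price_per_gb["tier1_fast"]  # everything on fast disk

print(f"tiered: ${tiered_cost:,.0f}")
print(f"flat:   ${flat_cost:,.0f}")
```

Even with only 2% of data on expensive flash, the archive tier holding the majority of the data is what drives the blended cost down.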
and put to better use if companies would practice better storage management, data hygiene, and archiving. That would effectively reduce storage cost of ownership by lessening the amount of new capacity a company would need to acquire and deploy year over year.

Instead, most of the attention in storage technology has lately been placed on new storage topologies that are intended, nominally at least, to drive cost out of storage by reducing kit down to its commodity parts, removing value-add software from the controllers of individual arrays, and placing it into a common storage software layer: so-called "software-defined storage" (SDS).

Software-defined storage, in theory, enables the common management of storage by removing the barriers to service interoperability, data migration, etc. that exist in proprietary, branded arrays. Currently, storage arrays from different manufacturers, and even different models of storage arrays from the same manufacturer, create isolated islands of storage capacity that are difficult to share or to manage coherently. Standards-based efforts to develop common management schemes that transcend vendor barriers, including SNIA's SMI-S and REST-based management interfaces, have met with limited adoption by the industry, leading Forrester and other analysts to insist that only by deploying homogeneous storage (all equipment from one vendor) can the OPEX costs of storage be managed.

SOFTWARE-DEFINED STORAGE TO THE RESCUE?

Software-defined storage, while it holds out some hope for surmounting the problems of proprietary hardware, has in some cases been hijacked by hypervisor vendors, who seek to portray their hyper-converged infrastructure offerings as a kind of "open" SDS. In fact, hypervisor-specific storage models have done little more than institute a new "isolated island of technology" problem.
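The reclamation argument from the preceding section is easy to quantify. The sketch below applies the DMI study's roughly 70% waste figure to a hypothetical 500 TB estate at an assumed blended price of $2/GB; both the estate size and the price are illustrative assumptions, not figures from the paper.

```python
# Quantifying the reclamation argument: if ~70% of deployed disk capacity
# is wasted (per the 2010 DMI study cited above), better data hygiene and
# archiving can defer a large new-capacity purchase. Figures are hypothetical.
deployed_tb = 500
waste_fraction = 0.70
cost_per_gb = 2.00  # assumed blended price of new capacity

reclaimable_tb = deployed_tb * waste_fraction
deferred_spend = reclaimable_tb * 1000 * cost_per_gb

print(reclaimable_tb)  # TB potentially recoverable
print(deferred_spend)  # dollars of new-capacity purchase deferred
```

At these assumed numbers, reclaiming wasted space defers roughly $700,000 of acquisition, which is why the paper treats capacity hygiene as a first-order cost lever.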
Each of the leading hypervisor software vendors has its own game plan with respect to hyper-convergence and its own proprietary technology that produces an infrastructure that can only be used to store data from that hypervisor. VSAN from VMware will only store data from workload virtualized with VMware. Microsoft's clustered Storage Spaces SDS model is intended for use only with Hyper-V workload, though the company provides a conversion utility that will convert a VMware VMDK file into a Hyper-V VHD file if the customer wants to use the storage space managed by the Hyper-V SDS utility to store "alien" workload.

Of course, the SDS model is not a standard, and there are many takes on the design and function of this software stack. Third-party developers and ISVs have taken the initiative to improve upon the concepts and architecture articulated by leading hypervisor vendors to make SDS more robust and affordable. These improvements include:

• The implementation of SDS using "bare bones" hardware in a direct-attached configuration: no more complex switched fabric or LAN-attached storage to deal with, no more proprietary storage gear
• The abstraction of value-add functions away from the array controller and into a server software layer: no more proprietary software licenses and firmware levels to be concerned about; storage services can be applied to all capacity, not just media "trapped" behind a specific hardware array controller
• Ease of storage service management via a unified user interface: no searching for third-party tools or array-specific element managers to monitor or administer storage infrastructure

These attributes, while potentially improving on the problems with legacy storage infrastructure, do not address all of the challenges that make storage so expensive to own and operate. While a robust software-defined storage solution may remove barriers to "federated" management and improve the agility with which storage services (like mirroring, continuous data protection, de-duplication, etc.) can be applied to specific infrastructure and assigned to specific workload, most SDS solutions do nothing whatsoever to aid in the management of the existing storage resource – or of storage capacity generally. In fact, some SDS evangelists argue that capacity management should remain the domain of the hardware array controller – though a compelling explanation as to why is never offered.

Truth be told, by excluding capacity management from the list of functions provided by the software-defined storage layer, sharing capacity to support workload data from virtual machines running under different hypervisor brands (or workloads running without any virtualization at all) becomes very problematic. This leads to storage stove-piping and increases management complexity and cost. The good news is that some thought leaders in the SDS space are seeking to manage not only services, but capacity, in a more comprehensive, infrastructure-wide manner.
Such an approach is evolving in IBM's Spectrum Virtualize offering, and it is already available from DataCore Software.

STORAGE VIRTUALIZATION ENHANCES SDS AND REDUCES TCO

Just as server virtualization was introduced into the server world to improve the cost metrics and allocation/utilization efficiencies of server kit, so too storage virtualization can be leveraged to make the most of storage infrastructure while reducing cost of ownership. This begins with storage acquisition costs.

In a virtualized storage environment, all storage – whether direct-attached or SAN-attached (so-called legacy storage) – can be included in a storage resource pool. This eliminates the need to "rip and replace" infrastructure in order to adopt an "isolated island" hyper-converged storage model, along with the costs associated with infrastructure replacement.

A virtualized storage environment reduces both CAPEX and OPEX. Storage can be divided into virtual pools, each comprising a different set of characteristics and services. A tier-one pool may be
optimized for performance, while a tier-zero pool may be composed entirely of silicon. Similarly, high-capacity, low-cost, low-performance disks may be fashioned into a pool intended for the lion's share of data that evidences infrequent access or update. Virtualized storage thus enables the implementation of HSM and other processes that have the effect of improving capacity utilization efficiency.

As for capacity allocation efficiency, virtualized storage provides the means for managing capacity, for allocating it to workload, and for scaling it over time in an agile way. Technologies such as thin provisioning, compression and de-duplication can be applied across the entire infrastructure, rather than isolated behind specific hardware controllers, to help capacity be used more efficiently. This in turn can slow the rate at which new capacity must be added and enables less expensive, and even used, equipment to be added to the pool. Centralizing this functionality for ease of administration and allocation may also reduce the costs associated with software maintenance and renewal fees, which are currently misunderstood by many IT planners, according to Amazon Web Services and others. According to AWS, an important yet under-disclosed rationale for adding cloud storage to a company's storage infrastructure is to reduce storage software maintenance and renewal costs.iv

There are many approaches to virtualizing storage capacity, from establishing a hardware controller to which all storage attaches, to virtualizing the connection or mount points where storage connects to a server and its operating system. DataCore was an early innovator in mount point virtualization, so any storage that can be connected to a Microsoft OS server can be seen, used and virtualized by the DataCore Software-Defined Storage platform.
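The thin provisioning effect mentioned above is worth a quick sketch: workloads are allocated generous virtual volumes, but physical capacity only has to cover what is actually written, plus a growth buffer. The workload names, sizes, and 25% headroom policy below are all illustrative assumptions.

```python
# Thin provisioning sketch: compare physical capacity required under
# thick (buy everything allocated up front) vs. thin (buy what is
# written, plus headroom) provisioning. Numbers are illustrative.
allocated_gb = {"erp": 40_000, "mail": 30_000, "file": 30_000}  # 100 TB allocated
written_gb   = {"erp": 15_000, "mail": 12_000, "file": 8_000}   # 35 TB in use
headroom = 1.25  # assumed 25% growth buffer policy

thick_physical_gb = sum(allocated_gb.values())
thin_physical_gb = sum(written_gb.values()) * headroom

print(thick_physical_gb)  # GB that must be purchased up front (thick)
print(thin_physical_gb)   # GB that suffices under thin provisioning
```

The gap between the two figures is the capacity purchase that can be deferred, which is precisely the "slow the rate at which new capacity must be added" effect described above.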
Virtualizing capacity builds on the SDS model by placing the configuration and allocation of physical storage infrastructure as virtual storage pools under centralized control and management. Additionally, DataCore uses DRAM on devices in its virtualized infrastructure to create a unified cache across servers that can be used to buffer and accelerate applications. Moreover, DataCore's SDS solution virtualizes the "plumbing" – the I/O pathways and channels between storage devices and the server – and load-balances across these connections to consistently deliver the best possible interconnect performance between server and storage. These are all advantages of storage virtualization that can contribute to making SDS a truly effective driver of better storage at a lower cost.

The rest of the traditional use case for SDS involves the delivery of storage services to storage shares. This was the original intent of SDS: to abstract storage services away from hardware controllers and place them in a software layer that could be used across all storage infrastructure. With DataCore, these services are of four types: performance, availability, management and efficiency.
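The I/O path load balancing described above can be illustrated with a minimal policy: send each request down the path with the fewest outstanding I/Os. This is a conceptual sketch of the general technique only, not DataCore's implementation; path names and the queue-depth policy are assumptions.

```python
# Conceptual least-queue-depth path selection across virtualized I/O
# paths. Illustrates the load-balancing idea, not any vendor's code.
from typing import Dict

def pick_path(queue_depths: Dict[str, int]) -> str:
    """Return the path with the fewest outstanding I/Os."""
    return min(queue_depths, key=queue_depths.get)

# Hypothetical paths between a server and its virtualized storage.
paths = {"hba0:port0": 0, "hba0:port1": 0, "hba1:port0": 0}
for _ in range(9):       # dispatch nine I/Os
    p = pick_path(paths)
    paths[p] += 1        # that I/O is now outstanding on the chosen path

print(paths)  # load spreads evenly across the three paths
```

With uniform paths the policy degenerates to round-robin, but it also adapts automatically when one path is slow or congested, which is the property that makes interconnect virtualization useful.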
(Source: DataCore Software)

Efficiency and performance refer to processes and tools for accomplishing the objectives already discussed: to wrangle all physical storage, memories and interconnects into a comprehensive resource infrastructure that can be allocated and de-allocated, scaled and managed readily. Efficiency involves the movement of data between tiers, the inclusion of specific data in clearly defined data preservation and archive schemes, and the migration of data between volumes or pools. All of these activities, which can be time-consuming and expensive in a heterogeneous equipment setting, are simplified in a virtualized storage environment, saving operator time and potentially eliminating the need to purchase specialized software for data migration and the like.

Availability involves the allocation of special services like mirroring and replication, failover and failback, and Continuous Data Protection (CDP) to workload that requires them. This can be done in a wizard-based way when a virtual volume is carved out from a pool for use as a repository for a specific workload and its data. The ability to associate specific data protection services with specific workload tends to reduce the overall cost of data protection by custom-fitting the right services to the right data.

In the diagram, the block marked Management comprises additional functionality designed to facilitate the integration of the DataCore infrastructure with NAS and cloud storage, and to enable the federated management of multiple DataCore infrastructure installations, all under one management console. That provides all of the controls, processes and tools required to drive down OPEX (the majority of cost) by making a heterogeneous storage environment as simple to manage as a homogeneous one. This is "true" SDS.
CHECKLIST FOR BUYERS

From the preceding discussion, it should be clear that bending the cost curve in storage requires a clear-headed assessment of options. As a general rule, CAPEX cost reductions derive from the reduction of proprietary hardware kit and the implementation of technology that will enable unified management of the myriad moving parts of a storage infrastructure. Virtualizing storage is less expensive than simply ripping and replacing one kit for another. OPEX cost reductions usually derive from managing the allocation of capacity in a more agile way, managing the utilization of capacity in a more automated way, and managing all infrastructure in a unified way, supported by wizards, etc. With the right software-defined storage solution, especially one that includes storage virtualization technology, applying the right software services to infrastructure is a much easier process.

Here is a list of additional items that should be on your buyer's checklist when evaluating SDS offerings with an eye toward reducing storage TCO:

• Support for currently deployed infrastructure
• Support for planned hardware technology acquisitions
• Viable application I/O acceleration technology that works with server infrastructure for improved performance
• Ease of installation
• Ease of storage capacity allocation
• Ease of storage service allocation
• Effective capacity efficiency capabilities (e.g., compression, de-duplication)
• Mirroring/replication services without hardware constraints
• Data migration between different virtual storage pools without hardware constraints
• Common management of centralized and distributed storage components
• Failover/failback capabilities
• Automatic I/O re-routing
ENDNOTES

i Reducing Server Total Cost of Ownership with VMware Virtualization Software, VMware Inc., Palo Alto, CA, 2/13/2006, p. 3.

ii This reference is to an early model of the Data Domain de-duplicating storage appliance, circa 2010. The unit featured 300 1TB SATA drives with a RAID 6 controller and commodity drive shelves and array enclosure. Estimated hardware costs were approximately $3,000 to $4,000. The addition of de-duplication software, which the vendor suggested would provide a 70:1 reduction ratio, contributed to a $410,000 MSRP.

iii 100 TB of storage deployed at a cost of $50-$100 per GB for flash (1-3% as tier 0), plus $7-$20 per GB for fast disk (12-20% as tier 1), plus $1-$8 per GB for capacity disk (20-25% as tier 3), plus $0.20-$2 per GB for low-performance, high-capacity storage (40-60% as tier 4) totals approximately $482,250. Splitting the same capacity between only tier 1 and tier 2 at the estimated cost range per GB for each type of storage yields an estimated infrastructure acquisition cost of $765,000.

iv See http://www.slideshare.net/AmazonWebServices/optimizing-total-cost-of-ownership-for-the-aws-cloud-36852296