
Fighting the Hidden Costs of Data Storage

Next to performance and scalability, cost efficiency is one of the top three reasons most companies cite as their motivation for acquiring storage technology. Businesses are struggling to control storage costs and to reduce OPEX for administrative staff, infrastructure and data management, and environmental and energy overhead. Every storage vendor, it seems, including most of the Software-defined Storage purveyors, is promising ROIs that require nothing short of a suspension of disbelief.

In this presentation, Jon Toigo of the Data Management Institute digs out the root causes of high storage costs and sketches out a prescription for addressing them. He is joined by Ibrahim “Ibby” Rahmani of DataCore Software, who addresses the specific cost-efficiency advantages being realized by customers of Software-defined Storage.



Fighting the Hidden Costs of Data Storage

  1. Today’s Presenters: Jon Toigo, Chairman, Data Management Institute; Ibrahim “Ibby” Rahmani, Director of Product Marketing, DataCore
  2. Before you got into the “ring”… • You probably wish that someone had told you… • That to realize the promised value of server virtualization you would need to “rip and replace” your storage infrastructure • You would need to isolate data from workloads virtualized with one hypervisor from all other data belonging to other hypervisors and non-virtualized workloads • You would need separate storage infrastructure, with separate management, for each storage silo • And that, until the above were accomplished, you would realize only minimal CAPEX/OPEX gains from your virtualization initiatives…
  3. Now your questions are pretty straightforward… • What kind of storage do you need to make your virtual applications more performant and available? • How do you justify budget requests for replacing “legacy” storage you bought only days, months, or years ago? • Is there any guarantee that this software-defined storage thing is going to make any difference? • How can you evaluate the contribution that SDS can make to bending the storage cost curve?
  4. Some tips to get you started • Analyze what your storage is currently costing you, so you have baseline data against which to measure the impact of change… • Not just how much capacity you are adding… • How you move data around on infrastructure to optimize its use… • How you manage the allocation and deallocation of resources and services… (A worked baseline sketch follows below.)
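A minimal sketch of such a cost baseline, in Python. Every quantity and price below is a hypothetical placeholder, not a figure from the presentation; the point is the shape of the calculation: total acquisition cost plus administrative OPEX, divided by the capacity actually in use.

    # Hypothetical storage cost baseline; all inputs are illustrative placeholders.
    raw_capacity_tb = 500            # installed raw capacity, TB (assumed)
    utilization = 0.30               # fraction of capacity actually holding data (assumed)
    cost_per_raw_tb = 1_500.00       # blended acquisition cost per raw TB, USD (assumed)
    admin_hours_per_tb_year = 2.0    # admin effort per TB per year (assumed)
    admin_hourly_rate = 75.00        # loaded hourly rate for storage admins, USD (assumed)

    capex = raw_capacity_tb * cost_per_raw_tb
    annual_opex = raw_capacity_tb * admin_hours_per_tb_year * admin_hourly_rate
    cost_per_used_tb = (capex + annual_opex) / (raw_capacity_tb * utilization)

    print(f"CAPEX: ${capex:,.0f}; annual OPEX: ${annual_opex:,.0f}")
    print(f"Effective cost per used TB: ${cost_per_used_tb:,.0f}")

Tracked over time, the last number (cost per used TB, not per raw TB) is what reveals whether tiering, thin provisioning, or management changes are actually bending the cost curve.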
  5. Storage costs a lot of money (in case you didn’t know), and the cost is growing… • Between $0.33 and $0.75 of every dollar, euro, or pound spent annually on IT hardware goes to storage • Capacity demand is growing with virtualization, according to IDC and Gartner… • Partly a result of data growth… • Partly from inefficient use and poor management… • Partly a function of server virtualization… (Sources: IDC, 2011; IDC, 2014; Gartner, 2014)
  6. How does server virtualization contribute to storage expense? • Three basic ways… • Hypervisor vendors vilifying “legacy” storage, arguing for rip and replacement… • “New” storage topologies preferred by hypervisor vendors require a minimum of three storage nodes, with data replicated on each node… • “New” storage topologies are collapsing storage tiers into a flat infrastructure…
  7. So, what’s wrong with “legacy” storage? • Blamed, often incorrectly, for slow application performance in virtualized environments… • Has its share of issues too… • High cost despite commoditization of component parts • On-array controller software licenses and fees add cost and limit manageability • No common management method, even when all gear comes from a single vendor
  8. Capacities grow and cost per GB decreases… yet the cost of an array grows by as much as 120% per year. (A compounding sketch follows below.)
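One way to reconcile falling unit prices with rising array spend: if capacity demand grows faster than the price per GB falls, total spend still climbs every year. A tiny sketch with assumed rates (illustrative only, not analyst data):

    # Illustrative only: decline/growth rates are assumptions, not sourced figures.
    price_per_gb = 0.10     # starting unit price, USD/GB (assumed)
    demand_gb = 100_000     # starting capacity demand, GB (assumed)
    price_decline = 0.25    # unit price falls 25% per year (assumed)
    demand_growth = 0.60    # capacity demand grows 60% per year (assumed)

    print(f"Year 0: spend = ${price_per_gb * demand_gb:,.0f}")
    for year in range(1, 4):
        price_per_gb *= 1 - price_decline
        demand_gb *= 1 + demand_growth
        print(f"Year {year}: spend = ${price_per_gb * demand_gb:,.0f}")

With these inputs, spend still grows 20% per year even as the unit price falls 25% per year; value-add software fees and replicated copies push real-world growth higher still.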
  9. Value-add software adds cost and resource/service isolation…
  10. …adding to OPEX (administration and management) cost!
  11. OPEX is an important part of total cost of ownership (TCO)… • But not necessarily a cost readily discovered in budget reports…
  12. Administering the storage resource is a key stumbling point… • Storage lacks a common set of management protocols (because the hardware industry doesn’t want one and users don’t press for one) • As a result, storage is oversubscribed and underutilized… the old one-two…
  13. Meanwhile, capacity is being wasted… • We don’t manage data very well, and we are losing the concept of storage tiering for capacity allocation and utilization efficiency… • On average, 70% of the capacity of a disk drive is wasted…
  14. And tiering will soon follow… • According to Horison Information Strategies, a well-disciplined storage infrastructure leverages tiering to control storage costs…
  15. Otherwise, you have a problem of balance… • Cost for 100 TB of storage capacity using Tiers 0 through 3: $482,250 • Cost for 100 TB of storage capacity using only Tiers 1 and 2: $765,000 (A blended-cost sketch follows below.)
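The arithmetic behind that comparison is a blended cost: each tier’s price per GB weighted by the fraction of capacity placed on it. The per-tier prices and the capacity mix below are assumptions for illustration, not Horison’s actual inputs, so the totals only approximate the slide’s figures:

    # Blended tiered-storage cost; per-GB prices and mix are assumed, not sourced.
    TIER_PRICE_PER_GB = {0: 25.00, 1: 9.00, 2: 5.00, 3: 0.50}  # USD/GB (assumed)

    def blended_cost(total_gb, mix):
        """mix maps tier -> fraction of total capacity placed on that tier."""
        assert abs(sum(mix.values()) - 1.0) < 1e-9
        return sum(total_gb * frac * TIER_PRICE_PER_GB[tier]
                   for tier, frac in mix.items())

    total_gb = 100 * 1024                            # 100 TB
    tiered = {0: 0.05, 1: 0.15, 2: 0.30, 3: 0.50}    # most data on cheap tiers
    flat = {1: 0.50, 2: 0.50}                        # tiers collapsed to 1 and 2

    print(f"Tiers 0-3:      ${blended_cost(total_gb, tiered):,.0f}")
    print(f"Tiers 1-2 only: ${blended_cost(total_gb, flat):,.0f}")

The savings come entirely from placing the bulk of infrequently accessed data on the cheapest tier, which is exactly what collapsing tiers into a flat infrastructure gives up.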
  16. Strategy? Option 1: Keep your defenses up and try not to get hit.
  17. Option 2: Go homogeneous. • Use only one vendor’s storage, and/or • Use only one vendor’s hypervisor and storage software stack, and/or • Manage storage services better… Like shadow boxing, it works in practice, but not necessarily in operation…
  18. Option 3: Get real, power up, and take the problem head-on…
  19. Simple to conceptualize… • First, go to a real software-defined storage stack… one that includes storage virtualization • May be able to retain legacy infrastructure, saving CAPEX • Can centralize storage services and capacity management, lowering OPEX • Can tier storage as virtual pools, preserving cost savings, and optimize performance using DRAM and flash memory • Can deliver availability from a single console
  20. Thank you. • The demand for more storage capacity is inevitable: data growth is inevitable • Storage costs, however, cannot be allowed to accelerate at the same rate • A software-defined storage architecture, leveraging storage capacity virtualization, can help to bend the storage cost curve
  21. Enterprise-class Storage without the Enterprise Cost. Ibrahim “Ibby” Rahmani, Director, Product & Solutions Marketing. (Copyright © 2015 DataCore Software Corp. – All Rights Reserved.)
  22. Today’s IT Administrator’s Challenges • Business challenges: customer expectations, limited budgets, limited resources (people) • Storage challenges: meeting application performance and SLAs, dealing with data growth, managing different storage silos • Storage accounts for more than 40% of IT hardware cost and is growing (Gartner)
  23. Address key cost challenges: Performance, Capacity, Management
  24. Adaptive Performance: maximize the hardware throughput • Accelerate application performance ► DRAM for caching ► Different I/O types (read vs. write) ► Mixed (random vs. sequential) IOPS • Optimize database performance ► Random Write Accelerator* ► SATA performance: 33x faster ► SSD performance: 3.6x faster • Cost-efficient performance allocation ► Automated storage tiering ► Migrates data fluidly between tiers of storage (Diagram: hosts atop EMC, HP, and commodity storage, with caching and the Random Write Accelerator.) *http://www.datacore.com/products/features/random-write-accelerator
  25. Performance benefits • Get more performance out of existing storage • Meet diverse application performance requirements • Increase performance without incurring cost (Slide legend: C = Capital Expense (CAPEX), O = Operational Expense (OPEX).)
  26. Address key cost challenges: Performance, Capacity, Management
  27. Maximize Capacity Utilization: data reduction technology on heterogeneous storage • Enhance storage capacity ► Deduplication/compression reduces the data footprint ► Up to 3x capacity reduction ► Up to 10x reduction of backup data • Eliminate oversubscription of storage ► Thin provisioning reduces storage costs by up to 3x • Efficient use of Tier 1 capacity ► Space-efficient snapshots from Tier 1 storage to commodity storage (Diagram: hosts atop EMC, HP, and commodity storage, showing snapshot, thin provisioning, and deduplication/compression. A back-of-the-envelope sketch of these ratios follows below.)
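As a back-of-the-envelope check on those ratios, here is how the cited “up to” reductions compound in the best case. Real-world results vary heavily with data type; the inputs are illustrative:

    # Best-case effective capacity from the ratios cited on the slide.
    raw_tb = 100                # purchased raw capacity, TB (assumed)
    data_reduction = 3.0        # deduplication/compression, "up to 3x" (slide figure)
    thin_provision_gain = 3.0   # allocated-but-unwritten space avoided, "up to 3x" (slide figure)

    effective_tb = raw_tb * data_reduction * thin_provision_gain
    print(f"{raw_tb} TB raw can serve up to {effective_tb:.0f} TB of logical demand")
    print(f"Effective cost per logical TB drops by up to "
          f"{data_reduction * thin_provision_gain:.0f}x in the best case")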
  28. Capacity benefits • Retain existing storage, extending its life by 3-5 years • Free up Tier 1 storage capacity by leveraging commodity storage • Leverage commodity storage without incurring a management penalty (Slide legend: C = Capital Expense (CAPEX), O = Operational Expense (OPEX).)
  29. Address key cost challenges: Performance, Capacity, Management
  30. Unified storage management: enterprise-class services across heterogeneous storage • Reduce operational expense ► Single management console with enterprise capability ► All storage under one management umbrella ► No additional software fees ► Consolidate diverse storage ► Flexibility through pooling of storage (including commodity storage) • Extend the life of existing storage ► One-time software investment ► No software charge for storage upgrades (Diagram: hosts atop EMC, HP, and commodity storage.)
  31. Management benefits • Single console to manage heterogeneous storage • One storage platform for all virtualized and non-virtualized workloads • No new software licensing when storage changes (Slide legend: C = Capital Expense (CAPEX), O = Operational Expense (OPEX).)
  32. The DataCore Software-defined Storage Platform provides availability, performance, efficiency, and management across hyper-converged, SAN, and cloud deployments
  33. DataCore Software-defined Storage Platform: works with all storage vendors
  34. Proven. Globally. • 25,000+ deployments worldwide • 10,000+ customers • 10th-generation technology • SMB to large enterprise • Market: Software-defined Storage • Technology: storage virtualization • Major support centers: Japan, UK, USA
  35. Summary & Next Steps • 75% reduction in storage costs • 10x performance increase • 4x capacity utilization • 100% reduction in storage-related downtime • 90% decrease in time spent on routine storage tasks • Get a free trial now: Virtual SAN (hyper-converged) or SANsymphony-V (SAN and Cloud) • Contact us: 877-780-5111 • info@datacore.com • www.datacore.com
  36. QUESTIONS? Contact: info@datacore.com www.datacore.com © 2015 DataCore Software Corporation. All Rights Reserved. DataCore, the DataCore logo and SANsymphony are trademarks or registered trademarks of DataCore Software Corporation. All other products, services and company names mentioned herein may be trademarks of their respective owners.

Editor's Notes

  • Thank you for your time.
    Final thought: the demand for storage capacity is inevitable because the growth of data in your business is inevitable.
    The one thing that we can’t afford is for the cost of storage to accelerate at the same pace as data growth. A storage hypervisor can help to bend the storage cost curve.
    It’s just the smarter way to build and manage storage infrastructure.
  • Updated: May 2, 2014. Key points:
    Introduce yourself, be yourself
    Sample talk track: Software-defined storage is a hot topic these days. EMC and NetApp have both unveiled their software-defined storage strategies in the last few months. VMware has announced their VSAN product. My goal today is for us to have an informative discussion around all this software-defined storage hype.
    Transition to next slide: DataCore is in a unique position to comment on this subject…
  • A brief summary of the current state of the business, industry, or technology.
    There are some wrong perceptions around data protection of virtual machines.
    Businesses grow, technology becomes obsolete, disasters happen.
    Complication: the challenge, obstacle, or problem.
    Review any issues that are impacting the situation just described and causing the customer pain.
    Myths continue to grow due to competitive FUD and long-standing perceptions among our current install base. You may know of many yourself!
    You are challenged by a growing business, an expanding IT estate, and the risk of disaster.
    Implication: what are the consequences to your customer and their business of failing to act on the complication?
    This VITAL element answers the “so what?” question, providing a logical transition and urgency to your core message.
  • There are different ways of backing up a virtual environment: backup client per VM, array snapshot, and VCB. We will continue supporting VCB and will talk about it in detail later.
    There were some challenges with non-VCB virtual backup before the vSphere launch. Let’s articulate these challenges to see what VMware has accomplished for the customer in vSphere around data protection.
    A customer who doesn’t want to implement advanced features ends up performing client-level backups. This method is well tested and similar to a physical deployment. It also guarantees consistency, and the restore is simple, without added complexity. The challenge with a backup client per VM is similar to the physical deployment as well: it requires multiple agents, which add cost and complexity. The server and network impact should also be taken into consideration.
    Some customers deploy the array snapshot. It reduces the load on the client server, as data is moved directly from the storage array to the backup server rather than from the client. It helps eliminate the client agent. However, it does require a separate ESX host to perform the backup to tape, and there is scripting involved to ensure this works in sync with the data protection tool.
  • With DataCore, it’s very easy to introduce mission-critical resilience.
    TechValidate did a detailed sample of 748 of our customers and found that 50% of them performed all their storage adjustments live, with zero downtime, for over 2 years. This means upgrading equipment, moving data around to different storage arrays, and bringing equipment offline, all with zero downtime for over 2 years.
    You can add metro-clustering or stretch clustering, which presents a single virtual disk to the applications and operates with zero-touch failover. Our customers that have been hit with disaster situations had their infrastructure seamlessly fail over to standby datacenters. DataCore also does active site replication over the WAN for DR, and many of our customers are using this to protect data across broader regions, across coasts, or between continents.
    [Pause for questions]
  • The first piece of the puzzle is infrastructure agility. If we think of software as the brains of an array, and the physical media, RAID controllers, and fabric as the body, we can elegantly separate the brains from the body. This is where DataCore comes in as a high-performance software layer, and it’s very non-disruptive. Before DataCore, workloads are trapped and the data is managed separately on each array or appliance. After DataCore, the data is managed across as many arrays or appliances as desired. Once again, this is very non-disruptive, and you solve vendor lock-in.
  • Storage solutions take a long time to harden. Unlike application solutions, where a reboot will do the trick, losing data or storage downtime is not an option. While startups like to tout innovation and cast their lack of features as simplification, DataCore has been committed to solving these challenges comprehensively. We’re proven in over 25,000 deployments worldwide, managing storage from EMC, HP, IBM, NetApp, and Dell.
