Cost analysis for the acquisition of 250 terabytes of storage growing at 25% per year for five years. Products from EMC, NetApp, NEC, and Dot Hill were compared to a software-defined storage solution based on SUSE Enterprise Storage software.
This document summarizes the findings of a case study comparing the 5-year total cost of ownership (TCO) for 4 disk array solutions and 4 software-defined storage solutions for backup to disk. The study found that SUSE Enterprise Storage 4 provided the lowest overall 5-year TCO, which was $181,457 less than the most expensive solution, from EMC. SUSE offered multiple layers of cost savings, including the use of standard hardware, low annual software licensing fees spread over 5 years, and support included in the license cost. The study concludes that software-defined storage solutions can provide disk backup for half the cost of branded storage arrays.
IME is DDN's software-defined elastic data services product that uses NVMe SSDs to accelerate I/O between compute clusters and parallel file systems. It intelligently virtualizes disparate NVMe SSDs into a shared memory pool for high performance. IME delivers better performance than other burst buffer solutions, particularly for small, random, and shared file I/O patterns. It also offers fault tolerance and data protection through distributed erasure coding.
Major trends impacting data centers include exploding data growth, the rise of virtual servers, and increasing compute demands. This is creating a "perfect storm" that is straining legacy storage systems and driving up costs. New technologies like server virtualization, storage virtualization, dynamic tiering, and capacity virtualization can help address these challenges by enabling more efficient utilization of resources, automated data placement, and non-disruptive operations to reduce costs.
The document provides an overview of Macroview Solution's data center virtualization offerings. It discusses their technology partners including VMware, Cisco, Citrix, Microsoft, and NetApp. It then summarizes their service catalog including virtualization, compute, storage, virtual desktop, enterprise mobility, disaster recovery, and multi-cloud capabilities. Specific storage solutions from NetApp are highlighted including all-flash arrays, snapshots, cloning, deduplication, encryption, quality of service, and data replication technologies.
This document discusses challenges with modern data infrastructure and how DataCore software addresses them. It summarizes that data is growing faster than storage budgets, storage silos waste capacity and are hard to manage, and applications often run slowly due to storage performance issues. DataCore software solves these problems by pooling storage, providing infrastructure services independently of hardware, separating software and hardware advances, and providing single-pane management of disparate infrastructure.
Virtual SAN vs Good Old SANs: Can't they just get along? (DataCore Software)
Dark forces in the IT industry like to polarize popular opinion; most recently they argue for keeping all the storage in the servers using Virtual SANs, leaving nothing external. These sudden mood swings, while attracting a young cult following, lose sight of lessons learned over the past 20 years.
Truth is, a blend of internal storage close to the apps with good old fashioned external secondary storage out on the network makes a heck of a lot of sense.
In this presentation, Senior Analyst Jim Bagley from SSG-NOW shows the not-so-black-and-white considerations driving customers to tap into the internal storage resources of clustered servers. It also provides practical guidance on how to incorporate existing storage arrays, and even public cloud capacity, into your Virtual SAN rollout.
The document discusses application mobility across data centers using VMware VMotion, Cisco networking technologies, and NetApp storage solutions. It describes how VMotion can be used over long distances for business continuity and disaster recovery. A joint validation by VMware, Cisco and NetApp tested VMotion over 200km and found application performance degradation of less than 3%. Infrastructure requirements, configuration options, and best practices are provided to support long distance VMotion across data centers.
Back up deduplicated data in less time with the Dell DR6000 Disk Backup Appliance (Principled Technologies)
Backing up data is a key component in data protection. However, long backup windows can cause headaches for IT and users while slowing down the network. We found that using source-side deduplication and Rapid CIFS technology to back up data to the Dell DR6000 Disk Backup Appliance was faster—with the average rate of data backup at 8.99 TB per hour. The backup to the DR6000 completed in two-thirds the time that the backup to the industry-leading deduplication appliance completed. Backing up to the DR6000 consumed less than one-sixth the bandwidth needed to back up to the industry-leading deduplication appliance. In addition, the DR6000 needed less rack space and cost a third less than the competition. The solution to lengthy backup windows is clear: Save time and network bandwidth with source-side deduplication built into the Dell DR6000 Disk Backup Appliance.
The skyrocketing costs of achieving continuous data availability, coping with exponential data growth, and providing timely data access rank among the most pressing challenges facing Healthcare IT organizations.
This presentation highlights how DataCore's Software-defined Storage solution can help Healthcare IT organizations increase uptime, optimize capacity and accelerate performance cost-effectively.
Delivering First Class Performance and Availability for Virtualized Tier 1 Apps (DataCore Software)
This document discusses how virtualization introduces performance barriers and availability issues for applications. It presents networked flash storage and storage virtualization as solutions to provide predictable, high performance and continuous availability for virtualized applications in a simple and cost-effective manner. Specifically, it allows introducing flash as a new high performance tier, provides continuous availability through data separation and mirroring across rooms, and offers a scalable platform to meet growing needs.
OLTP with Dell EqualLogic hybrid arrays: A comparative study with an industry... (Principled Technologies)
The effectiveness of your OLTP database environment can depend to an enormous degree on the storage system you select. We compared a database server solution using the Dell EqualLogic PS6210XS with one using competing industry-leading SAN storage.
The EqualLogic PS6210XS solution was overwhelmingly superior in all areas we tested. It delivered twice the performance with half the response time, and used a fraction of the power.
These factors make it clear that any business that relies on its database servers and wants to get the greatest return on its storage investment must consider the Dell EqualLogic PS6210XS.
Hitachi Virtual Storage Platform is the only 3D scaling storage platform designed for all data types. It is the only storage architecture that flexibly adapts for performance, capacity and multivendor storage. Combined with unique Hitachi Command Suite management software, it transforms the data center.
Capacity Efficiency: Identifying the Right Solutions for the Right Challenge (Hitachi Vantara)
Justin Augat, Hitachi Data Systems Senior Product Marketing Manager shares strategies to identify current storage costs, measure the unit cost of data storage, and set preliminary plans to reduce the total cost of storage.
The document discusses EMC's ViPR software-defined storage platform. ViPR abstracts physical storage into a single virtual storage pool that automates storage provisioning. It provides a unified platform to manage multiple storage arrays from different vendors through a single API. ViPR also includes object and HDFS data services to enable cloud-like capabilities and expand big data analytics. The goal of ViPR is to provide flexibility, choice and a path to the future for customers' evolving storage and data management needs.
Virtual SAN: It’s a SAN, it’s Virtual, but what is it really? (DataCore Software)
What do you think of when you hear the words “Virtual SAN”? For some, it may mean addressing application latency and infrastructure costs through consolidation. For others, it may be addressing potential single points of failure. Regardless of the use case, Virtual SANs are becoming one of the hottest software-defined storage solutions for IT organizations to maximize storage resources, lower overall TCO, and increase availability of critical applications and data.
This presentation introduces the concept of Virtual SAN and does a technical deep dive on the most common use cases and deployment models involved with a DataCore Virtual SAN solution.
S de0882 new-generation-tiering-edge2015-v3 (Tony Pearson)
IBM offers a variety of storage optimization technologies that balance performance and cost. This session covers Easy Tier, Storage Analytics, and Spectrum Scale.
An overview of Converged and Hyperconverged Systems, including VersaStack and IBM Hyperconverged Systems. Presented at IBM Technical University in Orlando, FL.
This document discusses DDN's optimization of Lustre and GPFS file systems. It provides an overview of DDN's extensive testing and benchmarking facilities and describes their long involvement with the Lustre file system, including major contributions to the open source code. It also presents performance results demonstrating the benefits of various DDN technologies and configurations.
The media company needed a NAS solution that could deliver over 10,000 IOPS to support hundreds of simultaneous users. Traditional HDDs could not meet this requirement cost-effectively. Netweb Technologies implemented an SSD caching solution within the NAS, allowing it to meet the high performance needs while keeping costs reasonable. This delivered over 10,000 IOPS to support over 200 users without lag or downtime, within the company's budget.
This document discusses techniques for implementing storage tiering to simplify management, lower costs, and increase performance. It describes using IBM's Easy Tier technology to automatically move data between tiers of flash, disk, and tape storage based on I/O density and age. The tiers include flash, solid state drives, enterprise HDDs, and nearline HDDs. Easy Tier measures activity every 5 minutes and moves hot data to faster tiers and cold data to slower tiers with little administration needed. Case studies show how storage tiering saved IBM Global Accounts $17 million in one year and $90 million over 5 years by optimizing data placement across tiers.
Provisioning server high_availability_considerations2 (Nuno Alves)
The purpose of this document is to give the target audience an overview of the critical components of a Citrix Provisioning Server infrastructure with regard to a high availability implementation. These considerations focus on the following areas:
• Virtual Disk (vDisk) Storage
• Write Cache Placement
• SQL Database
• TFTP Service
• DHCP Service
EMC Starter Kit - IBM BigInsights - EMC Isilon (Boni Bruno)
The document provides an overview of deploying IBM BigInsights v4.0 with EMC Isilon OneFS for HDFS storage. It includes a pre-installation checklist of supported software versions and hardware requirements. The installation overview section describes prerequisites and steps to prepare the Isilon storage, Linux compute nodes, and install IBM Open Platform and value packages. It also covers security configuration and administration after deployment.
Sizing Splunk SmartStore - Spend Less and Get More Out of Splunk (Paula Koziol)
Data is growing exponentially; however, IT budgets are not. Growth in internal use cases and additional data sources can put organizations under intense pressure to manage spiraling costs. The good news is that help is on the way. We will show how to size and configure Splunk SmartStore to yield significant cost savings for both current and future data growth. In addition, learn how to configure the Splunk deployment for optimal search performance.
Originally presented at Splunk .conf19 on October 22, 2019
IBM Spectrum Scale is software-defined storage that provides file storage for cloud, big data, and analytics solutions. It offers data security through native encryption and secure erase, scalability via snapshots, and high performance using flash acceleration. Spectrum Scale is proven at over 3,000 customers handling large datasets for applications such as weather modeling, digital media, and healthcare. It scales to over a billion petabytes and supports file sharing in on-premises, private, and public cloud deployments.
This Solutions Brief provides information about high-growth opportunities, All-Flash products from Nimbus, and resources available to help turn them into profits.
Data is being generated at rates never before encountered. The explosion of data threatens to consume all of our IT resources: People, budget, power, cooling and data center floor space. Are your systems coping with your data now? Will they continue to deliver as the stress on data centers increases and IT budgets dwindle?
Imagine if you could be ahead of the data explosion by being proactive about your storage instead of reactive. Now you can be, with NetApp's approach to the design and deployment of storage systems. With it, you can take advantage of NetApp's latest storage enhancements and take control of your storage. This will allow you to focus on gathering more insights from your data and deliver more value to your business.
NetApp's most advanced storage solutions are NetApp Virtualization and Scale-Out. By taking control of your existing storage platform with either solution, you get:
• An immortal storage system
• Infinite scalability
• The best possible ROI from your existing environment
NetApp provides an enterprise-grade all-flash storage solution called AFF (All Flash FAS) that delivers flash performance and data services. SolidFire is another all-flash storage platform in NetApp's portfolio that is designed for large-scale infrastructure and can guarantee performance to thousands of applications through its quality of service features. The document discusses the benefits of flash storage and how NetApp's solutions help customers transform their data centers and lower costs through flash innovation like inline data compaction in ONTAP 9.
This document provides information about IBM's IT Economics practice and no-charge studies they provide to help clients prove the value of IBM technology and make financially-based IT decisions. It discusses how IT Economics assesses total IT costs, compares infrastructure alternatives, and uses transparent models to identify cost savings and efficiencies. The document outlines the benefits clients have experienced from IT Economics studies, including reduced software licensing costs through workload consolidation on IBM Z systems. It invites readers to request a no-charge study to assess opportunities in their own IT environment.
Handle transaction workloads and data mart loads with better performance (Principled Technologies)
Database work is a big deal—in terms of its importance to your company, and the sheer magnitude of the work. Our tests with the Dell EMC PowerEdge R930 server and Unity 400F All-Flash storage array demonstrated that it could perform comparably to an HPE ProLiant DL380 Gen9 server and 3PAR array during OLTP workloads, with a better compression ratio (3.2-to-1 vs. 1.3-to-1). For loading large sets of data, the Dell EMC Unity finished 22 percent faster than the HPE 3PAR, which can result in less hassle for the administrator in charge of data marts. When running both OLTP and data mart workloads in tandem, the Unity array outperformed the HPE 3PAR in terms of orders processed per minute by 29 percent. For additional product information concerning the Unity 400F storage array, visit DellEMC.com/Unity.
Cloud computing is driving increased standardization, automation and centralized management across both public and private IT systems. Improvements in bandwidth are enabling a convergence of separate network types. Changing energy costs, economics and performance characteristics are shifting the roles that different storage solutions play.
Comparing Cost of Dell EMC Centera and HPE/SUSE/iTernity iCAS (IT Brand Pulse)
This document compares the total cost of ownership of two archival storage solutions over five years: Dell EMC Centera and HPE/SUSE/iTernity iCAS. It finds that the HPE solution has significantly lower costs: hardware is 47% cheaper, software is 97% cheaper, and Centera's support costs are 354% higher. As a result, the cumulative five-year cost is over $350,000 more for Centera, which is more than double the cost of the HPE solution. The document concludes the HPE solution better addresses the high costs of compliant archive storage.
Red Hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware (Red_Hat_Storage)
This document discusses how data growth driven by mobile, social media, IoT, and big data/cloud is requiring a fundamental shift in storage cost structures from scale-up to scale-out architectures. It provides an overview of key storage technologies and workloads driving public cloud storage, and how Ceph can help deliver on the promise of the cloud by providing next generation storage architectures with flash to enable new capabilities in small footprints. It also illustrates the wide performance range Ceph can provide for different workloads and hardware configurations.
Josh Krischer - How to get more for less (4 November 2010, Storage Expo) (VNU Exhibitions Europe)
1. Storage procurement accounts for a large percentage of data center costs, and new technologies are emerging to help reduce costs through improved efficiency and functionality.
2. When negotiating storage contracts, it is important to avoid restrictive damage limitations and carefully consider maintenance costs, upgrade options, and future price projections to maximize savings over the lifespan of the system.
3. Adopting strategies like tiered storage, deduplication, thin provisioning, and virtualization can help lower total storage costs through improved utilization and reduced power consumption.
IBM eX5 Workload Optimized x86 Servers (Cliff Kinard)
Learn about how these IBM eX5 servers are purposely built for workloads. This presentation shows how IBM's pre-configured solutions can reduce deployment time from months to weeks while saving clients over $100,000 in installation and setup costs.
This document compares Hitachi Virtual Storage Platform G1000 and Hitachi Storage Virtualization Operating System to other enterprise storage virtualization platforms. It summarizes that Hitachi's solution delivers enterprise-class storage virtualization to help manage more data more efficiently at higher performance and service levels with lower operating costs. It can reduce operating costs and increase value by simplifying storage management and enabling greater scalability. It also helps maximize the return on existing storage assets and reduces risks while maximizing availability.
The document compares Hitachi Virtual Storage Platform G1000 and other storage virtualization platforms. It summarizes that VSP G1000 offers enterprise-class storage virtualization to help manage data more efficiently at higher performance and lower costs than alternatives. It delivers features like active-active support across data centers, simplified management across vendors through a single interface, high availability, and accelerated innovation through non-disruptive upgrades. The document concludes that VSP G1000 can transform IT infrastructure into a flexible, cost-effective platform for business innovation and growth.
K5.Fujitsu World Tour 2016 - Winning with NetApp in Digital Transformation Age,... (Fujitsu India)
The document discusses next generation datacenters and flash storage. It provides an overview of how data and storage needs are growing exponentially. It then discusses how all-flash storage arrays from NetApp can help by providing much higher performance, lower total cost of ownership, and ability to consolidate infrastructure. The document outlines NetApp's broad portfolio of all-flash solutions and positioning across performance, scale, and data services.
1) SAS based storage provides advantages over SATA for enterprise environments by offering higher performance, reliability, and suitability for multi-drive systems through features like dual-port connectivity and enhanced data integrity.
2) As data center workloads increase in complexity due to trends like cloud computing and multi-core processing, the demands for storage performance will also grow, benefiting SAS which is designed for enterprise settings.
3) Choosing the right drive interface involves considering factors like workload requirements, capacity needs, robustness, and suitability for large scale deployments, where SAS excels over SATA particularly for performance-oriented applications.
This document discusses analyzing and optimizing costs when using AWS. It begins by addressing common misconceptions about AWS costs, such as that hardware costs are always cheaper than AWS or that cloud is not cost-effective for steady workloads. It then examines the total cost of ownership for on-premises infrastructure versus AWS, considering various fixed costs like hardware, software, facilities, administration, etc. The document provides examples of how tools like reserved instances, spot instances, and Trusted Advisor can help optimize costs over time. It emphasizes that AWS allows customers to scale resources up and down as needed to match actual demand.
Storwize V7000 Solution TCO White Paper - Alinean (SuzyIBM)
The document provides a three year total cost of ownership analysis comparing the IBM Storwize V7000 storage solution to a competitive solution. It finds that the Storwize V7000 can reduce hardware, software, maintenance and support costs by 30-65% through features like virtualization, thin provisioning, and integrated storage management software. Over three years, the Storwize V7000 is projected to save over $218,000, representing a 45% reduction in total storage costs.
Microsoft Azure intro: general information about cloud computing and virtual machines, comparing the A and D series by the numbers (CPU performance, RAM, storage) and variability, plus Web Apps (ex-Web Sites).
Similar to Enterprise Mass Storage TCO Case Study
The 2022 Flash brand leader surveys cover 9 Flash products.
This report includes the results of voting for six categories of brand leadership for each product: Market, Price, Performance, Reliability, Service & Support and Innovation.
The 2020 Storage Brand Leader Survey covers 14 storage products. This report includes the results of IT Pro voting for six categories of brand leadership for each product: Market, Price, Performance, Reliability, Innovation, and Service & Support.
The 2019 Servers Brand Leader Survey covers 11 server products. This report includes the results of IT Pro voting for six categories of brand leadership for each product: Market, Price, Performance, Reliability, Innovation, and Service & Support.
Industry's First Petabyte-Scale On-Prem STaaS (IT Brand Pulse)
Infinidat provides on-premises petabyte-scale storage as a service that allows customers to pay for only the storage capacity they need. Their solution uses commodity hardware and data reduction technologies to deliver high performance and low cost storage that can scale to multiple petabytes. Infinidat handles maintenance, support, and upgrades to provide customers a fully-managed storage service on site.
The 2019 Storage brand leader surveys cover 14 storage products. This report includes the results of IT Pro voting for six categories of brand leadership for each product: Market, Price, Performance, Reliability, Innovation, and Service & Support.
AWS is estimated to be the #3 enterprise storage vendor by revenue in 2018, with storage revenue estimated between $1.1-1.486 billion in Q4 2018. By extrapolating AWS's 52% market share of the IaaS market to the storage-as-a-service market, and estimating that 15-20% of AWS's revenue comes from storage, AWS is calculated to have overtaken traditional storage vendors by revenue. If AWS maintains 35-45% annual growth, it is projected to become the #1 storage vendor by 2020, with over $3 billion in storage revenue, cementing its position as a top storage leader through sustained large investments in R&D.
AWS #3 Storage Vendor in 2017 | #1 in 2020 (IT Brand Pulse)
This IT Brand Pulse industry brief uses crowd-sourced data about storage and cloud revenues to estimate the size and ranking of the AWS storage business. The bottom line is AWS is the #3 ranked storage vendor in 2017 and will be #1 in 2020.
The document discusses how server and storage utilization has changed with the adoption of virtualization, noting that a survey found over half of small to medium companies were using VMware hypervisors by 2016, especially among larger companies. It also explains how vMotion traffic can either share the Ethernet network with application servers or be isolated to a dedicated FC-SAS storage network. Finally, it lists some VMware and ATTO solutions related to using direct attached storage to create a SAN.
The document discusses how small to medium businesses have swung between using direct-attached storage (DAS) and storage area networks (SANs) for their VMware environments. It provides an example of a construction firm, Torcon, that converted their DAS setup into a SAN using ATTO technology to save costs and extend the usable life of their servers and storage. The conversion took less than three hours and provided benefits like isolated storage networking and easier expansion capacity. The document advocates that SAS-based SANs provide performance and flexibility comparable to fibre channel SANs at a lower cost that is suitable for cost-conscious small to medium businesses.
The 2018 IaaS brand leader surveys cover fourteen Infrastructure-as-a-Service products. This report includes the results of IT Pro voting for six categories of brand leadership for each service: Market, Price, Performance, Reliability, Service & Support and Innovation.
This IT Brand Pulse report includes data from the independent, non-sponsored annual survey on Scale-Out File Storage--voted on by IT professionals--covering six categories of brand leadership: Market, Price, Performance, Reliability, Service & Support and Innovation.
Please contact us at info@itbrandpulse.com for information on this or other technology product brand leader surveys.
2017 Servers for Software-Defined Storage Brand Leader Report (IT Brand Pulse)
This IT Brand Pulse report includes data from the independent, non-sponsored annual survey on Servers for SDS--voted on by IT professionals--covering six categories of brand leadership: Market, Price, Performance, Reliability, Service & Support and Innovation.
Please contact us at info@itbrandpulse.com for information on this or other technology product brand leader surveys.
This IT Brand Pulse report includes data from the independent, non-sponsored annual survey on Enterprise HDDs--voted on by IT professionals--covering six categories of brand leadership: Market, Price, Performance, Reliability, Service & Support and Innovation.
Please contact us at info@itbrandpulse.com for information on this or other technology product brand leader surveys.
2017 Flash Storage and NVMe Brand Leader Mini-Report (IT Brand Pulse)
This IT Brand Pulse mini-report includes only market leader data from the independent, non-sponsored survey covering six categories of brand leadership—Market, Price, Performance, Reliability, Service & Support and Innovation—for twelve Flash Storage and NVMe products.
Complete survey data for each product category is available. Please contact us at info@itbrandpulse.com for information and pricing.
2017 AI and Cloud Brand Leader Mini-Report (IT Brand Pulse)
This IT Brand Pulse mini-report includes only market leader data from the independent, non-sponsored survey covering six categories of brand leadership—Market, Price, Performance, Reliability, Service & Support and Innovation—for nine AI and Cloud products.
Complete survey data for each product category is available. Please contact us at info@itbrandpulse.com for information and pricing.
This document is a mini-report from a 2017 survey of IT professionals on brand leaders in networking and scale-out storage. It provides the methodology of the survey and charts showing the market leader voted for each of 12 product categories by the survey respondents. Categories included bare metal switch OS, embedded blade server networking, Ethernet NICs, file synchronization and sharing, Ethernet switches, open networking switches, scale-out file storage, scale-out object storage appliances and software, servers for software-defined storage, and WAN optimization.
2017 Server & Database Brand Leader Mini-Report (IT Brand Pulse)
This IT Brand Pulse mini-report includes only market leader data from the independent, non-sponsored survey conducted in February, 2017 covering six categories of brand leadership–Market, Price, Performance, Reliability, Service & Support and Innovation–for fourteen Server and Database products.
Complete survey data for each product category is available. Please contact us at info@itbrandpulse.com for information and pricing.
This IT Brand Pulse mini-report includes only market leader data from the independent, non-sponsored survey conducted in January 2017 covering six categories of brand leadership–Market, Price, Performance, Reliability, Service & Support and Innovation–for twelve Networked Storage products.
Complete survey data for each product category is available. Please contact us at info@itbrandpulse.com for information and pricing.
The document discusses emerging trends in cloud storage over the next 10 years. It predicts that storage will become fully automated and instrumented, allowing storage management tasks to be performed automatically based on policies. Artificial intelligence is expected to allow storage systems to recognize and respond to complex problems on their own. By 2026, neural networks and deep learning may allow storage systems to develop capabilities independently. The rise of technologies like artificial intelligence, machine learning, and deep learning will transform the storage industry and drive innovation in areas like computer vision, natural language processing, and medical diagnosis.
HCL Notes and Domino license cost reduction in the world of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new type of licensing works and what benefits it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can lead to more users being counted than necessary, and how to identify and remove redundant or unused accounts to save money. There are also some practices that can lead to unnecessary expenses, for example using a person document instead of a mail-in for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and know-how to keep track of things. You will be able to reduce your costs through an optimized Domino configuration and keep them low going forward.
Topics covered:
- Reducing license costs by finding and fixing misconfigurations and redundant accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices you can apply immediately
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series, part 6. In this session, we will cover Test Automation with generative AI and OpenAI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI?
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Full-RAG: A modern architecture for hyper-personalization (Zilliz)
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
GraphRAG for Life Science to increase LLM accuracy (Tomaz Bratanic)
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Best 20 SEO Techniques To Improve Website Visibility In SERP (Pixlogix Infotech)
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability, which can then be measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack (shyamraj55)
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Infrastructure Challenges in Scaling RAG with Custom AI models (Zilliz)
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced performance (SOFTTECHHUB)
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Document # TCO2015001 v1, March 2015
TCO Case Studies
IT professionals know the cost of owning servers, networking and storage equipment is more than the purchase price of the hardware. The total cost of IT equipment also includes installation, software licenses, service, support, training, upgrades, and other costs related to a specific product or situation.
TCO case studies are designed to provide busy IT Pros with vendor-independent data about the total cost of specific products. This case study examines eight comparably equipped enterprise storage solutions: two from AWS, five from disk array vendors, and one from a software-defined storage vendor. It turns out one of the vendors can deliver an HDD-based mass storage solution for less than 1 penny per GB per month. Read the rest of this report to find out who it is. Hint: it isn't Amazon.
Table of Contents
• Introduction
• The Cost of Owning Mass Storage
• The Storage Systems
• EMC VNXe 3200 Disk Array
• NetApp E2700 Disk Array
• AWS Glacier Storage-as-a-Service
• NEC M110 Disk Array
• Dot Hill Ultra56 Disk Array
• SUSE Enterprise Storage (Software Defined Storage)
• Side-by-Side Comparison
• Resources
The Cost of Owning Mass Storage
Cost Components
Below are the cost components used to calculate the total cost of a owning mass storage over a 5 year period.
Hardware Product Cost - The purchase price for storage array chassis, servers and HDDs.
Recurring Software License Fees - Annual license fees for software, if applicable.
Recurring Annual Service & Support Fees - The cost of a service agreement providing 24x7 on-site service and
spares with 4-hour response time.
Training - The cost of certifying one network engineer for this class of product (not applicable in this report).
Spare Parts - The cost of on-site spare power supplies and SFPs (not applicable in this report).
Total Cost of Ownership - The sum of the hardware product cost, software license fees, service and support
fees, training, and spare parts over a 5-year period.
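
To make the arithmetic concrete, below is a minimal sketch in Python of how these components combine. All dollar figures are illustrative placeholders of ours, not data from this study.

# Minimal sketch of the 5-year TCO calculation described above.
# All dollar figures are illustrative placeholders, not report data.
YEARS = 5
hardware_cost = 60_000        # one-time: chassis, servers, HDDs
annual_software_fee = 5_000   # recurring license fees, if applicable
annual_support_fee = 4_000    # 24x7 on-site service, 4-hour response
training_cost = 0             # not applicable in this report
spare_parts_cost = 0          # not applicable in this report

tco = (hardware_cost
       + YEARS * (annual_software_fee + annual_support_fee)
       + training_cost
       + spare_parts_cost)
print(f"5-year TCO: ${tco:,}")   # -> 5-year TCO: $105,000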
Getting the Cost Data
The product pricing data used in this case study comes from online resellers and from solution providers
responding to a request for quote (RFQ) issued by IT Brand Pulse.
Apples-to-Apples Comparison
The hardware, software and service products used in this case study were selected because they were
comparable to each other. Differences in the products and services are described in the product overviews.
Mass Storage
The application for the systems evaluated in this report is storing large quantities of data
that is infrequently accessed. Examples of applications that use mass storage are backup,
archive, and replication for disaster recovery. In all cases, the data must be online and highly
available. Other names for this application are bulk storage and nearline storage.
The Storage Systems
Cloud, Hardware and Software-Defined
Storage architects and administrators are now faced with three distinct classes of storage solutions
to evaluate: 1) cloud storage-as-a-service, 2) traditional hardware disk arrays, and 3) software defined
storage applications which run on industry standard servers.
This report examines one cloud offering from Amazon, four disk array systems, and one software defined
storage solution consisting of software from SUSE and servers from Supermicro.
Entry-level disk arrays were used because they met the performance, availability and usable-capacity
requirements of our application. If mid-range or high-end storage arrays had been used, the 5-year TCO
would have been significantly higher.
Dozens of features could have been added to all the configurations to enhance performance (SSD),
availability (RAID) and usable capacity (compression and dedup). But a simple storage configuration met our
requirements for bulk storage that is infrequently accessed.
Solution                   Type
Amazon AWS Glacier         Cloud Storage-as-a-Service
Dot Hill Ultra56           Disk Array
EMC VNXe 3200              Disk Array
NEC M110                   Disk Array
NetApp E2700               Disk Array
SUSE Enterprise Storage    Server-based Software Defined Storage

Configuration (common to all solutions):
Starting at 250TB, growing at 25% per year (approximately 600TB after 5 years)
Fully redundant
24x7 support with 4-hour on-site service (not applicable for cloud)
Cost of raw storage (no compression, dedup, etc.)
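
As a quick check of the growth assumption, compounding 250TB at 25% per year gives roughly 610TB entering year 5; a short Python sketch:

# Capacity growth check: 250TB compounding at 25% per year.
capacity_tb = 250.0
for year in range(1, 6):
    print(f"Year {year}: {capacity_tb:,.0f} TB")
    capacity_tb *= 1.25
# Year 1: 250 TB ... Year 5: 610 TB, i.e. the ~600TB cited above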
EMC VNXe 3200
EMC’s Most Affordable Unified, Hybrid Storage Array is Flat Out Expensive
We chose the VNXe 3200 because it's the most affordable unified, hybrid storage array available from EMC. A
single 2U Disk Processor Enclosure can support up to 150 HDDs and 500TB of raw capacity. To meet our initial
requirement for 250TB of raw capacity and growth of 25% per year, we had to purchase two VNXe systems
and a total of 13 chassis.
Highlights
This NAS and SAN array offers high-end features as standard, and includes a ton of software: VNXe Operating
Environment (OE), Unisphere Web Console, Unisphere Central (Multi-site management), Unified Snapshots,
Native Asynchronous Block Replication, FAST Suite, VNXe Monitoring and Reporting, Integrated Support/Dial-
Home Services, File Deduplication and Compression, Thin Provisioning, Event Enabler (common anti-virus)
and File-Level Retention (WORM).
Why it Wasn’t the Lowest Cost Solution
The VNXe 3200 may be EMC's most affordable array, but that is like saying the C-Class is Mercedes's most
affordable car, starting at $40,000. In addition, this system is not optimized for mass storage. You can scale,
but only with a dozen 4TB drives per chassis, resulting in the need for 2-3 new chassis every year.
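
That chassis count can be sanity-checked from the report's own figures: at a dozen 4TB drives per chassis, each chassis adds 48TB, so the roughly 610TB entering year 5 implies 13 chassis.

import math

# Sanity check of the VNXe chassis count from the report's own figures.
drives_per_chassis = 12
drive_size_tb = 4
year5_capacity_tb = 250 * 1.25 ** 4                     # ~610 TB entering year 5

tb_per_chassis = drives_per_chassis * drive_size_tb     # 48 TB per chassis
print(math.ceil(year5_capacity_tb / tb_per_chassis))    # -> 13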
5-Year Cost of Ownership: $376,976
NetApp E2700
Entry-Level System, High-End Cost
The E2700 is NetApp's entry-level block storage system, with an intuitive interface for administering E-Series
storage systems, so no storage expertise is required. Dynamic disk pools simplify the management of
traditional RAID groups by distributing data, parity information and spare capacity across a pool of drives.
Highlights
The NetApp E2700 block storage system is available in configurations designed for capacity-intensive
environments. At the center of these configurations are ultra-dense 60-drive 4U systems and disk shelves.
Why it Wasn’t the Lowest Cost Solution
The NetApp E2700 gets high marks for ease of use, and for offering high-density configurations that
should make scaling mass storage cost-effective. But customers are asked to pay a hefty premium for the
NetApp brand. For example, add-on 6TB drives for a NetApp E2700 cost $2,512, over four times the cost of
add-on drives for industry standard servers supporting software defined storage.
5-Year Cost of Ownership: $261,622
AWS Glacier
Hyperscale Storage for the Masses
Amazon Glacier is Amazon's lowest-cost storage offering, promoted as a storage service for data archiving and
online backup. According to Amazon, customers can reliably store large or small amounts of data for as little
as $0.01 per gigabyte per month, a significant savings compared to on-premises solutions. Amazon goes on to
say that to keep costs low, Glacier is optimized for infrequently accessed data for which a retrieval time of
several hours is suitable.
Highlights
With basic storage costing $0.01 per gigabyte per month, Amazon Glacier allows customers to archive large
amounts of data and pay only for what they need, with no minimum commitments or up-front fees.
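
To show how quickly a penny per gigabyte compounds, here is a rough model of ours: capacity steps up 25% at the start of each year, gigabytes are decimal, and only storage charges are counted. It lands within about 1% of the $245,611 figure reported below.

# Rough 5-year Glacier storage cost: $0.01 per GB per month, capacity
# stepping up 25% at the start of each year. Storage charges only; no
# request or transfer fees. An approximation, not the report's exact model.
PRICE_PER_GB_MONTH = 0.01
capacity_gb = 250_000          # 250 TB in decimal gigabytes
total = 0.0
for year in range(5):
    total += capacity_gb * PRICE_PER_GB_MONTH * 12
    capacity_gb *= 1.25
print(f"${total:,.0f}")        # -> $246,211 (report: $245,611)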
Why it Wasn’t the Lowest Cost Solution
Amazon says Glacier allows you to archive large amounts of data "at a very low cost", and $0.01 per
gigabyte per month doesn't sound like much, but it adds up fast. So fast that even without any additional
charges for requests or transfers, Glacier storage-as-a-service is the third most expensive mass storage
solution covered. Even if we discount 25% because storage-as-a-service eliminates the need to purchase
capacity headroom, the 5-year cost would be $184,000, and Glacier would be the fourth most expensive
solution. And if there are bursts of requests or large data transfers, which is likely, the costs would go up.
5-Year Cost of Ownership: $245,611
NEC M110
Middle-of-the-Road Among Disk Arrays
According to NEC, the NEC M110 SAN disk array is designed to serve as primary storage, high-capacity
secondary storage, or tiered storage. The M110 storage controller can support up to 120 HDDs.
Highlights
Among disk arrays, the NEC M110 is a middle-of-the-road product in terms of features and price. But for a
mass storage application which demands the lowest cost of ownership, the NEC M110 is more than double
the cost of our lowest cost solution.
Lowlights
In contrast to the lowest cost solutions evaluated in this report, which support 6TB drives, the highest-
capacity drive supported by the NEC M110 is 4TB. Combine that with a controller supporting only 120 drives
and enclosures fitting only 12 drives per chassis, and it becomes clear the NEC M110 is not the high-density
platform needed for scaling mass storage.
5-Year Cost of Ownership: $225,203
Dot Hill Ultra56
Optimized for Mass Storage
The Dot Hill Ultra56 is a member of the AssuredSAN Ultra Series of storage arrays. The products are designed
for datacenters that require the highest storage density. The Ultra56 chassis houses up to 56 3.5-inch large
form factor drives.
Highlights
The Dot Hill Ultra56 is optimized for scaling mass storage cost-effectively. The Ultra56 chassis offers 2 to 4
times the capacity of most general-purpose mid-range disk arrays.
Why it Wasn’t the Lowest Cost Solution
With a high-density chassis design, this product should have offered the lowest cost, at least among disk
arrays. And it was for the first 2 years. However, the drive pricing, which makes up such a huge part of the
overall cost, was 26% higher than the lowest cost disk array, and the Ultra56 fell behind starting in year 3.
5-Year Cost of Ownership: $138,894
SUSE Enterprise Storage
Hyperscale Storage for the Masses
SUSE Enterprise Storage is a self-managing, self-healing, distributed software-based storage solution for
enterprise customers. Based on the Firefly version of the Ceph open source project, the fully featured SUSE
Enterprise Storage is well suited for object, archival and bulk storage, with features including cache tiering,
thin provisioning, copy-on-write cloning and erasure coding.
Highlights
The scalability of a SAN is limited by the capability of the controller head in each system. In a software
defined storage architecture, storage nodes can be added to high-availability server clusters without limits,
while maintaining a single namespace. For IT organizations maintaining a mass storage environment, this is
the architecture of the future.
Why it is the Lowest Cost Solution
Software defined storage, hardened by hyperscale companies such as Amazon, Facebook and Google, brings
IT organizations into a world of open storage where the controller hardware is commodity x86 servers, and
disk drives can be acquired on the open market. The result is low-cost hyperscale storage for the masses.
5-Year Cost of Ownership: $108,607
Side-by-Side Comparison
Software Defined Storage Eliminates Branded Storage and Cloud Taxes
IT organizations have shown a strong preference for branded storage. Everyone knows they're paying a tax for
the EMC or NetApp logo, but they also know they won't get fired when something goes wrong, because they
deployed the Mercedes of storage arrays. This branded storage tax is applied to every disk drive a customer
purchases during the life of the system, and can be as much as 4x the cost of HDDs used in industry standard
servers and software defined storage systems.
At one penny per gigabyte per month, cloud storage-as-a-service is an attractive alternative below roughly
300TB. As you scale beyond 300TB, the cloud tax of one penny per gigabyte per month continues, while
storage arrays and software defined storage systems drive the cost per gigabyte per month well below one
penny. Based on a simple storage model that does not discount for unused storage, or add cost for requests
or transfers, the cloud tax per gigabyte makes AWS Glacier storage the third most expensive in total cost of
ownership. It's also worth noting that AWS Elastic Block Store (EBS) is 2-3 times more expensive than AWS Glacier.
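
For comparison, amortizing the SUSE solution's $108,607 5-year TCO over the same stepped capacity profile used in the Glacier model above (an approximation of ours, not a figure from the report) puts its effective rate well under half a penny per gigabyte per month:

# Effective cost per GB-month for the SUSE solution, amortizing its
# $108,607 5-year TCO over the same stepped capacity profile as above.
# An illustrative approximation, not a figure from the report.
tco = 108_607
gb_months = 0.0
capacity_gb = 250_000
for year in range(5):
    gb_months += capacity_gb * 12
    capacity_gb *= 1.25
print(f"${tco / gb_months:.4f} per GB-month")   # -> ~$0.0044, well under $0.01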
Cumulative Cost (chart not reproduced)

                SUSE     Dot Hill   NEC      EMC      NetApp
Drive Capacity  6TB      6TB        4TB      4TB      6TB
Drive Price     $589     $774       $678     $907     $2,512
Cost/GB         $0.098   $0.129     $0.169   $0.227   $0.419
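
The Cost/GB row is simply the drive price divided by the drive capacity in decimal gigabytes; for example, SUSE's $589 6TB drive works out to $589 / 6,000 GB, or about $0.098 per GB. A quick reproduction:

# Reproduce the Cost/GB row: drive price divided by capacity in decimal GB.
drives = {
    "SUSE":     (6000, 589),
    "Dot Hill": (6000, 774),
    "NEC":      (4000, 678),
    "EMC":      (4000, 907),
    "NetApp":   (6000, 2512),
}
for vendor, (capacity_gb, price) in drives.items():
    print(f"{vendor:8} ${price / capacity_gb:.4f}/GB")
# Prints each vendor's per-gigabyte drive cost, matching the table above.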
The Bottom Line
The Future is Software Defined Storage
The data in this report indicates that traditional enterprise storage is under tremendous price pressure from
cloud storage-as-a-service, which uses software defined storage on a massive scale, and from commercial
versions of the same storage technology available from vendors like SUSE.
This report also reveals that Amazon's claim of 1¢ per gigabyte per month is true, but that Glacier is not
necessarily the lowest cost solution.
The bottom line is this: IT organizations looking to lower the cost of archived data should evaluate software
defined storage solutions. Based on easy-to-service x86 servers, the technology is proven by hyperscale
public cloud providers and can be deployed in private clouds for 1/4 the cost of branded storage and 1/2 the
cost of cloud storage-as-a-service.
Related Links
Total Cost of Ownership Wiki
AWS Glacier Pricing Info
EMC VNXe 3200 Product Info
Dot Hill Ultra56 Product Info
NEC M110 Product Info
NetApp E2700 Product Info
SUSE Enterprise Storage Product Info
About the Author
Frank Berry is founder and senior analyst for IT Brand Pulse, a trusted source of data
and analysis about IT infrastructure, including servers, storage and networking. As
former vice president of product marketing and corporate marketing for QLogic, and
vice president of worldwide marketing for the automated tape library (ATL) division of
Quantum, Mr. Berry has over 30 years of experience in the development and marketing
of IT infrastructure. If you have any questions or comments about this report, contact
frank.berry@itbrandpulse.com.