Dell Storage Handbook




Storage Handbook
Your go-to for what’s now and what’s next in storage solutions.

“Flash makes it a software game now.”
Bob Plankers, page 24
The power to transform your storage.

At Dell, we’re constantly building a new breed of data management solutions that intelligently manage and automatically store data in the right place at the right time for the right cost. That’s why our award-winning storage arrays have changed the status quo for thousands of customers around the world.

This handbook is your definitive reference for unique, thoughtful information on the current and future landscape of storage technologies.

Please feel free to contact us for more information on how we can help your business reach new levels of storage efficiency and agility—see Contact information and resources on page 25.
Executive summary (page 4)
A storage state of the union by Bob Ganley, Senior Marketing Manager of Dell’s Storage Solutions Team.

Introduction to storage (page 8)
A brief history of the evolution of storage technologies.

Current landscape (page 10)
A look at what’s happening and what to consider in storage solutions.

The latest trends (page 12)
An overview of the storage industry.

Case studies (page 15)
Discover what Mazda, Navicure Inc., and other businesses are doing to tackle their storage needs with Dell solutions.

Storage buying decisions (page 19)
Gain insight to drive innovative approaches for end-to-end solutions with examples of storage-intensive workloads.

Seven words on storage (page 22)
The storage industry’s top influencers share their views on storage technologies.

Contact information and resources (page 25)
Executive summary
By Bob Ganley
Senior Manager, Dell Storage Solutions Team

What we’re trying to do for customers comes from where storage has been

Historically, storage has been about the physicality of the data. When you think about data, you think about a customer database. That customer database is like an old filing cabinet that’s been put online: it’s physical. As people began implementing storage systems, the hardware was tangible, made up of spinning disks with gravity and mass.

For those of us in the technology sector, that has meant a few things. First, you really want to make sure you don’t lose the physical hardware that’s storing your valuable data. Also, you’d better make a copy of it, which will leave you with two pieces of hardware. Now that you have two copies, you probably want one of those copies to be in a different building in case the creek rises and a tree falls on your power line.

That’s where we’ve been.

Storage technology has been slow to change. Many vendors are still producing storage systems based on legacy architectures that literally tie a conceptual object (your volume) to a physical device (a disk drive). But thanks to virtualization, this conceptual and physical bond is loosening at a rapid pace.
This is a crucial step toward the next generation of storage technology. Essentially, if you can truly break the bond between the hardware and the managed object, you can have an environment that allows you to act as quickly as you need to, whether you need to stay ahead of a change in business conditions or respond to a hardware failure.

That’s where we’re going.

Making boundaries frictionless

Implications of this tight bond between a conceptual object and a physical device are prominent in the way traditional storage has been purchased. Storage capacity was purchased three, four, even five years out, and since it takes a long time to fill up large amounts of expensive storage, a lot has gone unused.

When data gets created, it may never get used again. Ninety-five percent of data sitting on storage is cold, yet it’s being stored on the same old storage where it was put when it was first created. Because data can’t flow to the right storage and reach its own level (like water reaches its own level), resource allocation is not optimized. Over time, companies are buying more and more of the same storage to put that data on. Wouldn’t it be a better idea to have older data automatically move to more affordable storage?

Dell has a system that uses information about the data itself (metadata) to quickly and easily identify which data has been accessed recently or frequently, and to move the data that has been accessed less frequently (cold data) to a less expensive storage option. This system is called the Fluid Data Architecture.

The system has worked out quite nicely: as customers build out their systems over time, lower-performing data is moved to appropriate storage through a process known as automated data tiering. This process ensures data that doesn’t need fast drives and low latency remains easily available at a lower cost. Automated tiering translates into major performance benefits: applications run better, and dollars per terabyte (return on assets) improve over time.

Building a Fluid Data Architecture

Dell’s storage strategy is centered on virtualization. We’ve dramatically virtualized our storage infrastructure in such a way that data can be easily put in the right place for maximum performance and efficiency without regard to the physical location. This is what we mean by Fluid Data.

The Fluid Data Architecture allows storage to be managed in a way that takes the burden off the administrator. It’s efficient through intelligence, without having to incur extra labor.

Providing a more solid type of storage

Traditionally, two types of storage have existed: raw or block-based (database systems, etc.) provided by a SAN, and file-based (documents, videos, music, etc.) provided by a NAS. Now, we have the capability of providing a single pool of storage where storage capacity is no longer directly linked to the service that’s provided.

With a pool of shared storage, companies can access data through the appropriate set of services on the front end, while storage self-manages on the back end (hot vs. cold data). This aligns with Dell’s strategy of maximum data efficiency for our customers and is driven by dynamic tiering. That efficiency is taken to a new level when both file and block data can be tiered automatically in a unified storage environment. This reduces operating, management and capital expenses by fostering more efficient utilization of a pool of storage, without needing multiple skill sets to manage it.

Incorporating flash/hybrid storage

For a long time, storage was about hard drives. Engineering has increased the speed of the drives and the density of the bits, reaching a point of diminishing returns. Over time, the performance increases in servers have greatly outpaced improvements in storage. In the last few years, major advancements have been made in using flash memory—a chip of non-volatile memory with no moving parts—to replace slower disks. These are known as solid-state drives (SSDs).

These chips are combined in a way that makes them look just like a hard drive when they’re actually new media. SSDs yield dramatic improvements in speed, density and power. As data centers become more sensitive to power requirements, it’s critical that your data center uses less power. Eventually, we will see all active data move to flash or SSD storage; it is the only way forward to address the performance gap that has developed between servers and storage. However, not all data needs to be on flash. This is why dynamic tiering is so important to modern storage architecture.
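The tiering decision described above can be sketched in a few lines of Python. This is only an illustration of the recency-and-frequency idea, not Dell’s actual tiering logic; the tier names and thresholds are hypothetical.

```python
from datetime import datetime, timedelta

def choose_tier(last_access: datetime, access_count: int,
                now: datetime) -> str:
    """Pick a storage tier from recency and frequency metadata.

    Thresholds here are illustrative; a real array tunes them per
    workload and moves blocks of data, not whole files.
    """
    age = now - last_access
    if age < timedelta(days=7) and access_count > 100:
        return "ssd"            # hot: recent and frequent
    if age < timedelta(days=90):
        return "15k_sas"        # warm: touched this quarter
    return "7k_nearline"        # cold: archive-class disk
```

A background task applying this function to each block, then migrating blocks whose tier has changed, is the essence of automated tiering: hot data earns its place on flash, and everything else settles onto cheaper disks.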
Monitoring and analyzing storage growth

Users typically say they see maybe forty percent growth in their storage needs every year. This is obviously significant growth. What’s causing it?

Let’s say you’re frequently accessing an application like your customer database. Clearly you never want to lose that data, so you back it up, and that backup is supported by recovery points along the way. These points are created with different techniques depending on how important the data is (e.g., for a financial transaction, you might want a recovery point objective of minutes or seconds; but with email, you might not mind if you have hours to recover). Maybe you complete a backup once per week and set up disaster recovery sites with an extra data copy at a remote location. When you make a data copy, you often end up with more than two copies of it.

Growth in primary storage is accelerated by copies needed for data protection. The size of primary data sets is multiplied by recovery points, whether near-line, as in snapshots and disk-based backups, or off-line, as in tape-based backups and archives. This means that managing copies of data can have a direct effect on managing the growth of storage costs.

To that end, data protection strategy and operations is a strong solutions focus for Dell. We’re helping businesses become more efficient and cost effective with primary storage by developing advanced snapshot, backup and disaster recovery techniques in a way that balances the cost of protection with the value of data.

Working better together: Looking at storage as part of the whole system

Improving upon storage technology is ideal, but it’s important to remember that storage is part of a system. You have storage because you need to access data and use it towards some end goal. This touches on the notion of converged infrastructure, where we’re looking at server, storage and compute together.

Another central strategy in how Dell thinks about storage is ensuring that each piece in the system works better together. We’re working on moving data closer to the processor, improving performance while still preserving the ability to manage and protect the data in a familiar shared storage model. We especially see this convergence happening on an engineering level, where the focus is on taking advantage of synergies between the related components. We’re investing in optimizing information technology infrastructure as a system: servers, networks, storage, software and services.
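The way recovery points multiply primary storage, described under Monitoring and analyzing storage growth, can be made concrete with back-of-envelope arithmetic. The sketch below is purely illustrative; the retention counts and the five percent snapshot change rate are assumptions, not measurements.

```python
def protected_footprint(primary_tb: float, snapshots: int,
                        backup_copies: int, dr_copies: int,
                        snapshot_change_rate: float = 0.05) -> float:
    """Rough total storage consumed by one data set and its copies.

    Snapshots are assumed space-efficient, holding only changed
    blocks; backups and DR replicas are counted as full copies.
    """
    snap_tb = primary_tb * snapshot_change_rate * snapshots
    full_tb = primary_tb * (backup_copies + dr_copies)
    return primary_tb + snap_tb + full_tb

# 10 TB primary, 24 snapshots, 4 weekly backups, 1 DR replica:
# 10 + (10 * 0.05 * 24) + (10 * 5) = 72 TB consumed
```

Even with space-efficient snapshots, 10 TB of primary data can easily occupy seven times its size once protection copies are counted, which is why copy management has such a direct effect on storage costs.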
Introduction to storage
The history and evolution of storage technologies

Digital storage has been around since the beginning of computing. For most of that history, non-volatile digital storage (data that is retained even when the power is turned off) took the form of read/writeable magnetic media. The first generation was in the form of tape, which required sequential access to the data. This was slowly replaced by hard disk drives that provided direct access to data through spinning media with moving mechanical read heads.

During the last forty years, increases in the performance of logic processing in computer hardware have consistently outpaced performance gains in digital storage, leading to a performance gap. In the last few years, a new entrant in digital storage technology called solid-state storage has led to a narrowing of that performance gap. Solid-state storage retains the non-volatile nature of disk storage and features direct data access but has no moving parts. This lack of moving parts has increased access speeds, increased reliability and reduced power consumption.

One aspect of digital storage that has been slow to change is the relationship between the physical media where data is stored and the logical storage objects that represent managed information. Information to be managed (for example, a customer database) is stored in a volume, and that volume is instantiated as a disk or collection of disks. This close relationship between the physical storage and the logical storage object creates challenges.

In the last few years, the rise of server virtualization has led to huge benefits as the traditional tie-in between a physical server and the application running on it has been abstracted. Now multiple virtual servers can be combined in a single server to create efficiencies. Virtual servers can be easily moved between physical servers for the purposes of load balancing and high availability. The “friction” between workloads and servers has been dramatically reduced as a result.

Storage virtualization as a term has been around for a while, but from a practical perspective, most storage systems on the market today have not reached the frictionless state achieved by server virtualization.

Dell recognized this challenge a few years ago and has taken storage virtualization to a new level with Fluid Data Architecture. This has resulted in tremendous benefits for our customers through dramatic increases in efficiency, concrete improvements in the ability of Information Technology to respond to evolving requirements, and the protection of digital information assets.
Current landscape
Industry analysts peg storage capacity growth rates at forty percent per year. Most organizations today are wasting storage capacity, with average utilization rates hovering at around sixty percent. This waste is due to antiquated approaches to purchasing and managing storage. The challenge is propagated by architectures that place inflexible limits on growth and hinder free movement of data to unused drives. But what if you could recapture that wasted space?

Most data is accessed infrequently once it is created, yet organizations store most data on one or maybe two tiers of storage. This is because finding and moving old, cold data is a labor-intensive and disruptive process. If ninety-five percent of your data is cold, why not have that data automatically moved to cheap and deep storage by a non-disruptive background task?

Storage is a critical link in establishing and maintaining acceptable application performance. Analysis shows that a small percentage of data truly needs low-latency, high-performing storage to remove the storage bottleneck. Determining which data needs that performance, moving that data to the right storage and maintaining the right distribution over time is a complex task. What if your storage system could do that automatically, with no manual intervention?

There are typically two types of storage in use: file and block. Unstructured data in the form of files gets stored on a NAS or filer and now represents over two-thirds of storage capacity. Block storage for structured data is stored on a SAN, where it can be properly managed and protected. These disparate approaches to storage result in islands of capacity that have separate purchase cycles and management tasks. Wouldn’t it be better to have a single pool of managed storage capacity that can efficiently provide the repository for file and block data as needed?

Data protection and recovery require creating recovery points, replicas and backups to prevent data loss and mitigate disaster scenarios. These copies of production data contribute to storage growth, and the process of creating them weighs on application performance. Fifty percent of organizations now struggle to meet backup windows. Forty percent of organizations have more than one backup approach. How can you manage backup data growth? How can you streamline the creation of recovery points to meet rising service level expectations?

Data migrations are disruptive and costly. Many storage systems are replaced with a “forklift” every three years, requiring the purchase of new hardware, the re-purchase of software licenses and painful migrations. What if you could accommodate growth while preserving your investment in hardware and software?

Read on to find out how Dell is tackling the latest challenges in storage solutions.
The latest trends
Storage growth

Storage growth is a fact of life. Organizations produce and consume data at an increasing rate. Core business processes rely on digital data, and more data is being collected and stored as organizations realize the potential value of collecting and analyzing all manner of information. Collected data ranges from daily office communications to the output constantly flowing from instruments and sensors. When the volume of that data is amplified by copies made for protection and recovery, the trend can seem overwhelming. How can organizations keep up when storage growth outstrips budget growth by a factor of ten or more?

Consolidation presents clear opportunities for managing storage growth. Organizations have multiple repositories for data, and each repository must have some spare capacity to accommodate future growth. As the number of repositories multiplies, that spare capacity adds up to wasted capacity and inefficient storage utilization. Consolidating these repositories presents the opportunity to combine that excess capacity for use in production, resulting in an increase in storage utilization.

Separate systems for file and block storage also result in inefficient utilization. Unified storage solutions allow a single pool of storage to be allocated for use across either block or file protocols. This also creates utilization efficiencies that can reduce storage over-provisioning.

Deduplication and compression are two related techniques that help mitigate the impact of storage growth, and backup storage is a prime application for them. Backup storage frequently contains multiple copies of the same data, as frequent recovery points share multiple copies of data that has not changed during the time between backups.

Flash storage

The writing is on the wall. NAND flash-based non-volatile memory (NVM) storage, in the form of solid-state drives (SSDs) and solid-state cache cards, is poised to dominate the future of storage for active data. Very low latencies and very high transaction rates for solid-state storage provide the potential to close the performance gap between servers and storage. Given that most data is old and cold, active data represents a small portion of the storage capacity necessary to provide acceptable performance.

Automated data tiering moves hot data to the highest-performing storage without manual intervention. The storage system tracks usage patterns to determine how often each block of data is accessed. Frequently accessed data is moved to high-performance storage, while cold blocks of data are moved down to more cost-effective storage. This movement is done without the intervention of storage administrators. Automated data tiering enables the acceleration of workload performance with a small amount of flash storage because it moves to flash only the specific portion of data that requires high-performance storage.

Servers and storage converge

As organizations create the next-generation architecture for their information technology, some trends begin to emerge. One clear trend is that the solution to optimal performance for critical workloads requires close coordination between storage and server components.

One solution for the performance problem is to use NVM for caching of disk input/output (I/O). As the operating system calls for disk I/O, the data is read into the cache and kept there until it is overwritten. NVM is almost always used as read-only cache. Write caching risks data loss, since some transactions might not have been written to the disk at the moment of an interruption, leaving the disk storage in an inconsistent state. Write-through cache requires waiting for acknowledgement from the back-end storage (whether DAS or shared), providing no write performance advantage.

One improvement is to provide write-consistent cache with data protection. This provides the ability to accelerate reads as well as writes while preventing data loss in the event the cache card fails. The next step in this technology is to integrate this capability with shared storage. This step will extend the data protection and management benefits of a storage area network (SAN) to the data in the cache, essentially making server-attached flash a managed tier in the storage infrastructure. This development will blur the lines between server memory and storage.

Another area where storage and servers are coming together is in highly dense compute environments like blade enclosures. These solutions combine high-speed connectivity in the form of the backplane of a blade enclosure with the compute density of blade servers and bladed shared storage. This requires a high level of engineering sophistication and integration testing to ensure a complete solution that can maximize performance and efficiency within the high density of a blade enclosure.

Cloud-based architectures are a driving force for this type of convergence. Cloud computing environments promise to benefit enterprises in many different ways, including reduced capital costs through standardized building blocks, reduced operating costs through integrated management, and increased business agility through automated service delivery and rapid provisioning. To enable truly elastic cloud infrastructure, organizations must abandon the practice of custom-configuring each new virtualized environment. This shift is enabled by the adoption of standardized infrastructure building blocks which contain predefined sets of servers, storage, and networking that provide a desired level of service. These building blocks standardize virtualized infrastructure, reducing the time and effort involved to scale out the capacity of the cloud, and simplify the process of managing that infrastructure once it is deployed.

Storage value moves to software

Storage virtualization relies on moving value “up the stack” from the storage hardware itself to the software that abstracts the details of storage implementation and focuses on management of workloads across a pool of storage and compute. As more and more organizations are able to leverage enterprise-class storage components, the question becomes “Where is the value-add?” in enterprise NAS/SAN solutions. The answer is, increasingly, in software.

The trend toward “software-defined” networking, data centers and storage has been picking up momentum. It is important to understand that this trend has received a lukewarm reception from major storage vendors because of the possibility that storage hardware may become commoditized in the process. This knee-jerk reaction on the part of storage-only solution vendors ignores the hard reality that the lines between software and hardware are blurring. Companies like Dell are embracing the concept because highly virtualized storage is the future.

Network design becomes pivotal

Several trends are driving the importance of storage network design. Higher-powered servers drive increasingly large storage network requirements. Virtualization has driven higher levels of consolidation, and unpredictable workload peaks can combine to overload storage networks. Higher-performing storage, including SSD technology, is increasing the throughput needed in the storage network.

The high end of Fibre Channel storage networking has doubled with the introduction of 16Gb FC network components, while 10GbE is seeing widespread adoption. In order to fully realize the benefits of improved network bandwidth, customers need a full end-to-end solution involving servers, networking and storage that is designed to optimize performance.
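The caching behavior described under Servers and storage converge can be sketched with a toy write-through cache. Reads are served from fast NVM after the first access, while every write still waits on the backing store before acknowledging, which is exactly why write-through caching accelerates reads but not writes. This is an illustrative sketch, not any vendor’s implementation; the backing store is just a dictionary here.

```python
class WriteThroughCache:
    """Toy read cache over a slow backing store."""

    def __init__(self, backend: dict):
        self.backend = backend
        self.cache: dict = {}
        self.hits = 0
        self.misses = 0

    def read(self, block: int):
        if block in self.cache:
            self.hits += 1           # fast path: served from NVM
            return self.cache[block]
        self.misses += 1
        data = self.backend[block]   # slow path: disk/SAN read
        self.cache[block] = data     # keep for next time
        return data

    def write(self, block: int, data):
        self.backend[block] = data   # must complete before we ack
        self.cache[block] = data     # keep cache consistent
```

A write-back variant would acknowledge before updating the backend, accelerating writes at the cost of possible data loss if the cache fails, which is the trade-off the write-consistent, protected cache discussed above is meant to resolve.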
Case studies
Accelerating and protecting critical workloads

Mazda North American Operations was experiencing unacceptable performance with its core ERP applications and was suffering from very long backup windows, putting its crucial information assets at risk of data loss. Mazda chose to implement a virtual environment with Dell Compellent storage with an SSD tier. Because Dell servers are specifically geared to handle virtualization, the Mazda infrastructure department was confident of its ability to transition to a virtual IT department. “Even minimal downtime means being separated from critical cash flow,” said Jim DiMarzio, CIO at Mazda North American Operations. “So we picked the most reliable system available to support our virtualization efforts—Dell servers fit the virtual environment one-hundred percent.”

With an end-to-end virtualized server and storage architecture in place, powered by the Compellent SAN, the Mazda infrastructure services department has been able to substantially boost application performance. “We are now enjoying performance gains anywhere from eighty to four-hundred percent,” said Kai Sookwongse, Manager, Infrastructure Services, Mazda NA Operations. “Critical applications like SAP actually run better than on physical servers.”

Additionally, Mazda has reduced full backups from 16 hours to 6 hours, and its new setup now takes a complete system snapshot—including databases—in 30 seconds.

“Dell Compellent storage gave us the performance we needed to enter the virtual computing space and establish best practices. Our business units are stunned by the increase in application speed we have been able to deliver,” said Sookwongse.

Rapid response to changing business needs

Navicure, Inc. is a leading Internet-based medical claims clearinghouse with a need to store vast amounts of data. The company’s claims-processing platform relies on Oracle Real Application Clusters (RAC) 11g database technology on Oracle Solaris-based servers. It first launched the platform using outsourced Fibre Channel storage, but the solution couldn’t expand quickly or cost-effectively enough to meet Navicure’s needs. “We estimated at the time that adding three or four terabytes of usable redundant storage would cost us a quarter of a million dollars over the contract period,” said Donald Wilkins, Navicure’s IT director.

Navicure replaced the outsourced solution by deploying Dell EqualLogic PS Series storage arrays on premises. “We had our first EqualLogic SAN up and running within thirty minutes,” Wilkins reports. “Other storage vendors told us we would need to attend three or four days of classes to set up and use their systems, but we completely familiarized ourselves with the Dell EqualLogic PS Series array in a very short time, without training, as its user interface is very straightforward.”
The result is an agile and cost-effective storage infrastructure. “We’re continually adjusting our IT plans to accommodate growth projections for the business,” Wilkins said. “We’ve been able to grow our environment as our customer base grows by using highly scalable Dell EqualLogic storage. We don’t have to deal so much with forklift upgrades like we would with a traditional, frame-based SAN. As we add new arrays, we might move some of the older products down the line, from Tier 1 to Tier 2 or to our disaster recovery site. But the Dell EqualLogic arrays are never really outdated; we can update their firmware and keep them in the pool. The first model we bought seven years ago is still in service. In fact, we have one of almost every model that EqualLogic has ever produced, and they’re all running side-by-side.”

Taming explosive file growth

If a picture is worth a thousand words, a video is worth a million. That makes social video a hot market—one that Toronto-based startup Keek Inc. is poised to conquer. Keek is already an active social video community, allowing users to post video and text comments and share video updates via Twitter, Facebook and other networks, all at once. But it’s Keek’s mobile app that’s causing an explosion in the company’s growth. Users can upload video status updates (called “keeks”) using the Keek app for Android and iPhone.

Storage growth is a big deal for Dell EqualLogic user Jeremy Wilson, Keek’s Chief Technology Officer. Keek is looking at growth of 40TB per month in video storage alone. That growth doesn’t count the storage capacity expended on supporting two billion page views, 100 million monthly visits and 18 million monthly unique visitors. It all requires a storage solution that doesn’t tolerate downtime and accommodates massive growth, especially since an additional 200,000 new users are joining Keek a day. “Growth is exponential,” said Wilson. “Data is doubling every month. In fact, we’ve doubled the size of the storage system since we initially installed it in August of 2012.”

To fulfill his needs for a scalable and downtime-resistant storage system, Wilson and his IT crew installed Dell EqualLogic FS7500 Unified Storage Solutions and Dell EqualLogic PS6500E iSCSI SAN disk arrays. The EqualLogic FS7500 front-ends the EqualLogic PS6500E, serving as a Network File System (NFS) front end for the PS6500E file servers.

With this system, Keek Inc. has been able to absorb a 300% increase in user base in one month without any slowdown in uploading videos or any file contention. “Dell designed a storage solution that would scale non-disruptively without downtime,” says Wilson.

Accelerating virtual desktops

Northwest Mississippi Community College decided to implement a virtual desktop initiative, rolling out 48 virtualized desktops. They decided to use the Dell EqualLogic PS6000XVS hybrid SAN, which contains both spinning media and NAND flash-based solid-state drives. The PS6000XVS SAN intelligently tiers workloads between the SSDs and the lower-cost 15K SAS drives. “The ability to distinguish between data that is in high demand versus less important data saves us the cost of an all-SSD SAN,” said Michael Lamar, network technician at Northwest Mississippi. The result has made a significant impact on users’ experience. “We cut login times from 74 seconds to 54 seconds, which is twenty-six percent less time users have to spend waiting for their work session to start,” said Lamar. “This was all based on moving to the hybrid SAN.”

Simplified performance tuning

Another current example involves the databases which underpin many critical applications. Nelnet, Inc. provides loan processing outsourcing services. In order to maintain high performance for those applications, Nelnet decided to implement a Dell Compellent SAN with an SSD tier. Compellent features intelligent automated tiering called Data Progression.
“We only allow our main reporting server, a Dell PowerEdge R710 server running Microsoft SQL Server® 2008, to access our two terabytes of Tier 1 solid-state drives (SSD),” said Ryan Regnier, IT Manager of Operational Engineering at Nelnet. “But because of Data Progression, most of that data is actually sitting on Tier 2, which is 15K SAS. We’re not paying to have all of that data sitting on SSD, and we still get the performance benefit.”

Modernized data protection

One other recent initiative designed to streamline systems administration was upgrading data protection systems for Haggar. The company formerly used an Overland Storage REO virtual tape library (VTL). “We would store backup data on the VTL for one day, after which we would move it to tape,” said Matt Collins, Haggar’s Senior Network Administrator. “This made data restores challenging. If a user needed a file that was accidentally deleted two days earlier, we would have to travel offsite, look through a dozen tapes to find the right one, bring it back, load it up, find the right point in time on the tape and restore the file. The process took hours or even days.”

As the VTL approached end of life, Haggar planned to upgrade to a newer model. Then the Dell DR4000 Disk Backup Appliance caught the eye of Brad Coleman, Haggar’s infrastructure director. “The Dell DR4000 deduplicates data before running backups,” he said. “We really liked the idea of compacting our backups into less space and keeping more backup data locally, in an easily accessible format.” Another appealing feature was the unit’s use of Rapid Data Access for fast data recovery.

Haggar implemented a Dell DR4000; CommVault Simpana runs backups to the appliance. “We’ve reduced the amount of data in our daily backups by over eighty-five percent, thanks to the compression and deduplication technologies in the Dell DR4000,” Collins said. “We now retain data for 30 days before offloading to our Dell PowerVault TL2000 Tape Library, and we can restore any of that information in seconds. Even though we’re retaining data locally 30 times longer, we’re using only forty-eight percent of the total capacity on the DR4000. This solution gives us a lot of room to grow, and we can restore data up to seventy-five percent quicker than we could with our old VTL solution.”
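Reductions like Haggar’s come from storing each unique chunk of backup data only once. The idea can be sketched with content-addressed chunks: every chunk is named by its hash, and repeated chunks are stored a single time. This is a deliberately simplified sketch using fixed-size chunks; real backup appliances typically use variable-size chunking plus compression.

```python
import hashlib

def dedup_store(data: bytes, chunk_size: int = 4096):
    """Split data into chunks, storing each unique chunk once.

    Returns (store, recipe): the recipe lists the chunk hashes
    needed to reassemble the original data in order.
    """
    store: dict[str, bytes] = {}
    recipe: list[str] = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # identical chunks stored once
        recipe.append(digest)
    return store, recipe

def restore(store: dict, recipe: list) -> bytes:
    """Reassemble the original bytes from the chunk store."""
    return b"".join(store[h] for h in recipe)
```

Because successive recovery points share most of their chunks, each additional backup adds only the chunks that changed, which is why daily backup volumes can shrink so dramatically.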
Storage buying decisions
More and more organizations are trying to get out of the mode of purchasing point products for their IT needs, choosing instead to focus on workloads as the design point for system architectures and making storage buying decisions more project-based. Here are some examples of storage-intensive workloads, all of which are driving innovative approaches to end-to-end solutions.

Transactional systems

Many common applications are transactional in nature: web-based applications such as an online store, for example, or payroll processing applications. These types of workloads generate lots of small storage I/O requests. Microsoft Exchange is another example of an application that can produce a large number of I/O requests. If the response to this flood of I/O requests slows down, application response can be negatively affected.

With competitors just “one click away” and executives relying on rapid access to online data, negative experiences with application performance can have a serious impact on results. Database systems like SQL Server, Oracle and MySQL often underlie these types of systems. Understanding the intersection between transactional workloads and storage systems is the first step to designing a system that can withstand lots of I/O requests and provide the right service levels.
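To make “lots of small I/O requests” concrete, transactional sizing usually starts from a target IOPS figure, a read/write mix, and the RAID write penalty. The sketch below is a rough back-of-envelope calculation with hypothetical numbers, not a Dell sizing tool:

```python
import math

def disks_needed(target_iops: float, read_pct: float,
                 disk_iops: float, raid_write_penalty: int) -> int:
    """Rough spindle count for a transactional workload.

    raid_write_penalty: back-end I/Os generated per host write,
    e.g. 2 for RAID 10, 4 for RAID 5."""
    write_pct = 1.0 - read_pct
    # Reads hit one disk; each write fans out into several back-end I/Os
    backend_iops = target_iops * (read_pct + write_pct * raid_write_penalty)
    return math.ceil(backend_iops / disk_iops)

# 10,000 host IOPS, 70% reads, ~180 IOPS per 15K SAS drive, RAID 10
print(disks_needed(10_000, 0.70, 180, 2))  # prints: 73
```

The same arithmetic explains why a handful of SSDs, each delivering tens of thousands of IOPS, can stand in for whole shelves of spinning disks on this class of workload.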
Decision support systems

Most organizations are striving to mine the data they store to make better decisions. Business Intelligence, Data Warehousing, Online Analytical Processing and related systems present a very different challenge for systems design. These types of workloads tend to produce requests for large blocks of data to be read sequentially. This places more focus for system design on higher-throughput storage networks.

Server virtualization and consolidation

Most organizations are consolidating servers using virtualization. Before server virtualization, characterizing the I/O stream from a server for the purposes of optimizing storage performance was less complex. When a single server ran one workload, the I/O was optimized using techniques like caching and serialization. With virtualization, the I/O requests for many workloads are interleaved without optimization across the multiple VMs. This creates a highly randomized I/O stream that some refer to as the “I/O blender,” and a new level of challenges for architecting the end-to-end solution. Integration across the stack of servers, networks, storage and software in a virtualized environment can have a dramatic impact on performance and reliability.

High-performance computing

Scientific computing uses mathematical models and computer simulations to solve scientific problems. These simulations require large data sets to be read into a processor for number crunching. Imaging applications like picture archiving and communication systems (PACS) in the medical world also generate large data transfers between servers and storage. Understanding the I/O profile of this type of application is crucial for designing the right combination of servers, networks and storage for HPC.

Cloud

Cloud computing holds the promise of rapid provisioning and deprovisioning of blocks of compute, network and storage resources to provide efficient and agile infrastructure.
A key aspect of this type of flexibility is the ability to specify different service levels for the key components. In this type of environment, it is crucial to have tight integration between the management of hypervisors, servers, networks and storage to provide ease of configurability.

Mobile, BYOD and VDI

Most organizations are pursuing initiatives to provide their employees with more flexible access to enterprise tools through the device of their choice. These initiatives put a lot of pressure on IT infrastructure for several reasons. They move data storage from the desktop to the data center, creating growing demands for centralized storage. They depend on consistent network connectivity to provide the right responsiveness. And they can create bursts of activity that must be planned for when sizing components for performance.

The next generation of business will be built around a mobile device interface, whether desktop, tablet or phone. Flexible integration of the infrastructure components will enable a successful transition to this mobile future.
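The “I/O blender” effect described under server virtualization above is easy to demonstrate: merge several perfectly sequential per-VM streams and measure how sequential the combined stream looks from the array’s point of view. This is a toy sketch; the block addresses and the `seq_fraction` metric are invented for illustration:

```python
import random

random.seed(7)
# Each VM issues a perfectly sequential run of block addresses
vm_streams = [list(range(base, base + 8)) for base in (0, 1000, 2000)]

def seq_fraction(stream: list) -> float:
    """Fraction of requests that continue the previous request's address."""
    hits = sum(1 for a, b in zip(stream, stream[1:]) if b == a + 1)
    return hits / (len(stream) - 1)

# The hypervisor interleaves the streams: a random VM wins each time slot
pending = [list(s) for s in vm_streams]
blended = []
while any(pending):
    vm = random.choice([p for p in pending if p])
    blended.append(vm.pop(0))

print(seq_fraction(vm_streams[0]))  # prints: 1.0 (each VM alone is sequential)
print(seq_fraction(blended))        # well below 1.0: the array sees near-random I/O
```

Each individual stream is 100 percent sequential, yet the merged stream the storage array actually receives is largely random, which is why virtualized environments tend to be sized for random IOPS rather than sequential throughput.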
Seven words on storage
We sat down with the who’s who of storage solutions and challenged them to share their views on the current and future landscape of storage technologies, in only seven words. Here’s what they had to say.

Luigi Danakos, CEO, Blurt Media Group, Twitter: @NerdBlurt
Bruno José Ramalho e Sousa, Corporate IT
Roger Lund, Sr. Systems Administrator, Virtualization and Storage Evangelist (Dell Compellent, NetApp, EMC VNX, VNXe)
Bob Plankers, Virtualization & Cloud Architect
Barry Coombs, Blogger / Technical Architect Manager, Blog:
Contact information and resources

Link to Dell sites and content
Dell Storage Home
Dell Storage TechCenter Page
Tech Page One
Inside Enterprise IT Blog

Link to an extended conversation on storage through social channels
Dell Storage Facebook
Dell Storage Twitter
Dell EqualLogic Twitter
Dell Compellent Twitter

Events
Dell Storage Resources and Events
Dell Enterprise Forum Facebook
Dell Enterprise Forum Twitter
The IT Summit
TechTarget Storage Decisions
Fortune: Brainstorm Tech

Link to a few Dell experts for more information
Jason Boche, Technical Marketing Consultant, Twitter: @jasonboche, LinkedIn:
Lance Boley, Storage Evangelist, Dell TechCenter, Twitter: @LanceBoley, LinkedIn:
Bob Ganley, Senior Marketing Manager, Dell’s Storage Solutions Team, Twitter: @GanleyBob, LinkedIn:
Andy Hardy, EMEA Storage Sales Director, Twitter: @andyhardy, LinkedIn:
Locsin, Product Manager, Dell EqualLogic Storage, LinkedIn:
Urban, Technical Marketing Engineer, Twitter: @virtwillu, LinkedIn:
Vigil, Executive Director, Dell Storage Product Marketing, LinkedIn: