White Paper




Dedupe-Centric Storage
Hugo Patterson, Chief Architect, Data Domain


Abstract

Deduplication is enabling a new generation of data storage systems. While traditional storage vendors have tried to dismiss deduplication as a feature, it is proving to be a true storage fundamental. Faced with the choice of completely rebuilding their storage system, or simply bolting on a copy-on-write snapshot, most vendors choose the easy path that will get a check-box snapshot feature to market soonest.

There are a vast number of requirements that make it very challenging to build a storage system that actually delivers the full potential of deduplication. Deduplication is hard to implement in a way that runs fast with low overhead across a full range of protocols and application environments. When deduplication is done well, people can create, share, access, protect, and manage their data in new and easier ways, and IT administrators aren't struggling to keep up.

Dedupe-centric storage is engineered from the start to address these challenges. This paper gives historical perspective and a technological context to this revolution in storage management.





Table of Contents

Abstract
Deduplication: The Post-Snapshot Revolution in Storage
    Building a Better Snapshot
    Deduplication as a Point Solution
Dedupe as a Storage Fundamental
    Variable Length Duplicates
    Local Compression
    Format Agnostic
    Multi-Protocol
    CPU-Centric vs. Disk-Intensive Algorithm
    Deduplicated Replication
    Deduplicated Snapshots
Conclusion




Deduplication: The Post-Snapshot Revolution in Storage
The revolution to come with deduplication will eclipse the snapshot revolution of the early 1990s. Since at least the 1980s, systems had been
saving versions of individual files or snapshots of whole disk volumes or file systems. Technologies existed for creating snapshots reasonably
efficiently. But, in most production systems, snapshots were expensive, slow, or limited in number and so had limited use and deployment.

About 15 years ago, Network Appliance (NetApp) leveraged the existing technologies and added some of their own innovations to build
a file system that could create and present multiple snapshots of the entire system with little performance impact and without making a
complete new copy of all the data for every snapshot. It did not use deduplication to find and eliminate redundant data, but at least it
didn’t duplicate unchanged data just to create a new logical view of the same data in a new snapshot. Even this was a significant advance.
By not requiring a complete copy for each snapshot, it became much cheaper to store snapshots. And, by not moving the old version of
data preserved in a snapshot just to store a new version of data, there was no performance impact to creating snapshots. The efficiency
of their approach meant there was no reason not to create snapshots and keep a bunch around. Users responded by routinely creating
several snapshots a day and often preserving some snapshots for several days. The relative abundance of such snapshots revolutionized
data protection. Users could browse snapshots and restore earlier versions of files to recover from mistakes immediately, all by themselves.
Instead of being shut down to run backups, databases could be quiesced briefly to create a consistent snapshot and then freed to run
again while backups proceeded in the background. Such consistent snapshots could also serve as starting points for database recovery in
the event of a database corruption, thereby avoiding the need to go to tape for recovery.


Building a Better Snapshot
Despite the many benefits and commercial success of space and performance efficient snapshots, it was over a decade before the first competitors built comparable technology. Most competitors' snapshots still do not compare. Many added a snapshot feature that delivers space efficiency, but few matched the simplicity, performance, scalability and ultimately the utility of NetApp's design. Why?

The answer is that creating and managing the multiple virtual views of a file system captured in snapshots is challenging. By far the easiest and most widely adopted approach is to read the old version of data and copy it to a safe place before writing new data. This is known as copy-on-write and works especially well for block storage systems. Unfortunately, the copy operation imposes a severe performance penalty because of the extra read and write operations it requires. But doing something more sophisticated and writing new data to a new location without the extra read and write requires all new kinds of data structures and completely changes the model for how a storage system works. Further, block-level snapshots by themselves do not provide access to the file versions captured in the snapshots; you need a file system that integrates those versions into the name space.
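
To make the contrast concrete, here is a minimal sketch of the two write paths. The class and method names are hypothetical, chosen only for illustration, and this is not any vendor's actual implementation: copy-on-write pays an extra read and an extra write on every overwrite once a snapshot exists, while a redirect-on-write design writes new data to a new location and takes a snapshot by copying only a small block map.

```python
class CopyOnWriteVolume:
    """Overwrite blocks in place; preserve old contents for the snapshot first."""

    def __init__(self, nblocks: int):
        self.blocks = [b"\0"] * nblocks   # live data at fixed locations
        self.snapshot = {}                # block number -> old data saved at write time

    def write(self, block_no: int, data: bytes) -> None:
        if block_no not in self.snapshot:
            # The costly part: an extra read and an extra write on the update path.
            self.snapshot[block_no] = self.blocks[block_no]
        self.blocks[block_no] = data


class RedirectOnWriteVolume:
    """Never overwrite; write new data elsewhere and remap the logical block."""

    def __init__(self, nblocks: int):
        self.slots = [b"\0"] * nblocks           # every version of every block
        self.block_map = list(range(nblocks))    # logical block -> slot holding it
        self.snapshots = []                      # each snapshot is just a frozen map

    def take_snapshot(self) -> None:
        self.snapshots.append(list(self.block_map))  # copy the map, not the data

    def write(self, block_no: int, data: bytes) -> None:
        self.slots.append(data)                  # one write, no extra read
        self.block_map[block_no] = len(self.slots) - 1
```

The price of the second approach is exactly the one described above: the block map and its frozen copies are new data structures that the rest of the storage system has to understand.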
Faced with the choice of completely rebuilding their storage system, or simply bolting on a copy-on-write snapshot, most vendors choose the easy path that will get a check-box snapshot feature to market soonest. Very few are willing to invest the years and dollars to start over for what they view as just a single feature. By not investing, their snapshots were doomed to be second rate. They don't cross the hurdle that makes snapshots easy, cheap and most useful. They don't deliver on the snapshot revolution.
Deduplication as a Point Solution
On its surface, deduplication is a simple concept: find and eliminate redundant copies of data. But computing systems are complex, supporting many different kinds of applications. And data sets are vast, putting a premium on scalability. Ideally, a deduplication technology would be able to find and eliminate redundant data no matter which application writes it, even if different applications have written the same data. Deduplication should be effective and scalable so that it can find a small segment of matching data no matter where it is stored in a large system. Deduplication also needs to be efficient, or the memory and computing overhead of finding duplicates in a large system could negate any benefit. Finally, deduplication needs to be simple and automatic, or the management burden of scheduling, tuning, or generally managing the deduplication process could again negate any benefits of the data reduction. All of these requirements make it very challenging to build a storage system that actually delivers the full potential of deduplication.

Early efforts at deduplication did not attempt a comprehensive solution. One example is email systems that store only one copy of an attachment or a message even when it is sent to many recipients. Such email blasts are a common way for people to share their work, and early email implementations created a separate copy of the email and attachment for every recipient. The first innovation is to store just a single copy on the server and maintain multiple references to it. More sophisticated systems might detect when the same attachment is forwarded to further recipients and create additional references to the same data instead of storing a new copy. Nevertheless, when users download attachments to their desktops, the corporate environment still ends up with many copies to store, manage and protect.
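
This single-copy-with-references approach can be sketched in a few lines. The class below is an illustrative toy, not any mail server's real design: attachments are keyed by a content hash and reference-counted, so many mailbox entries share one stored copy.

```python
import hashlib

class SingleInstanceStore:
    """Store each unique attachment once; recipients hold references to it."""

    def __init__(self):
        self.data = {}      # content hash -> attachment bytes
        self.refcount = {}  # content hash -> number of references

    def add(self, payload: bytes) -> str:
        key = hashlib.sha256(payload).hexdigest()
        if key not in self.data:
            self.data[key] = payload          # only the first copy is actually stored
        self.refcount[key] = self.refcount.get(key, 0) + 1
        return key                            # mailbox entries keep only the key

    def get(self, key: str) -> bytes:
        return self.data[key]

store = SingleInstanceStore()
attachment = b"...contents of quarterly-report.pdf..."
keys = [store.add(attachment) for _ in range(50)]   # 50 recipients, one stored copy
assert len(store.data) == 1 and store.refcount[keys[0]] == 50
```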

At the end of the day, an email system that eliminates duplicate email attachments is a point solution. It helps that one application, but it doesn't keep many additional duplicates, even of the very same attachments, from proliferating through the environment.

The power of an efficient snapshot mechanism in a storage system is its generality. Databases could all build their own snapshot mechanism, but when the storage provides them they don't have to. They and any other application can benefit from snapshots for efficient, and independent, data protection. NetApp was dedicated to building efficient snapshots so all the applications didn't have to.

Dedupe as a Storage Fundamental
Data Domain is dedicated to building deduplication storage systems with a deduplication engine powerful enough to become a platform that general applications can leverage. That engine can find duplicates in data written by many different applications at every stage of the lifecycle of a file, can scale to store many hundreds of terabytes of data and find duplicates wherever they exist, can reduce data by a factor of 20 or more, can do so at high speed without huge memory or disk resources, and does so automatically while taking snapshots, so it requires a minimum of administrator attention.

Such an engine is only possible with an unconventional system architecture that breaks with traditional thinking. Here are some important features needed to build such an engine.

Variable Length Duplicates
Not all data changes are exactly 4KB in size and 4KB aligned. Sometimes people just replace one word with a longer word. Such a simple small replacement shifts all the rest of the data by some small amount. To a deduplication engine built on fixed-size blocks, the entire rest of the file would seem to be new, unique data even though it really contains lots of duplicates. Further, that file may exist elsewhere in the system, say in an email folder, at a random offset in a larger file. Again, to an engine organized around fixed blocks, the data would all seem to be new. Conventional storage systems, whether NAS or SAN, store fixed-size blocks. A system which attempts to bolt on deduplication as an afterthought will only be able to look for identical fixed-size blocks. Clearly, such an approach will never be as effective, comprehensive, and general purpose as a system that can handle small replacements and recognize and eliminate duplicate data no matter where in a file it appears.
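
A common way to find variable-length duplicates is content-defined chunking: a rolling hash over a small sliding window chooses segment boundaries from the data itself, so an insertion disturbs only the segment or two around the edit and the rest of the stream still produces identical segments. The sketch below illustrates the idea only; it is not Data Domain's actual segmenting algorithm, and the window size, mask, and segment-size limits are arbitrary example values.

```python
import hashlib
import os

WINDOW = 48                     # bytes the boundary decision depends on
MASK = (1 << 12) - 1            # boundary when the low 12 bits are zero: ~4 KiB average
MIN_SEG, MAX_SEG = 2048, 65536  # clamp segment sizes
BASE, MOD = 257, 1 << 32
OUT = pow(BASE, WINDOW, MOD)    # weight of the byte sliding out of the window

def chunk(data: bytes):
    """Split data into variable-length segments at content-defined boundaries."""
    segments, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = (h * BASE + b) % MOD
        if i - start >= WINDOW:
            h = (h - data[i - WINDOW] * OUT) % MOD   # drop the byte leaving the window
        seg_len = i + 1 - start
        if (seg_len >= MIN_SEG and (h & MASK) == 0) or seg_len >= MAX_SEG:
            segments.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        segments.append(data[start:])
    return segments

def fingerprints(data: bytes):
    return {hashlib.sha1(seg).hexdigest() for seg in chunk(data)}

original = os.urandom(256 * 1024)
edited = original[:1000] + b"one new word" + original[1000:]   # shifts all later bytes
shared = fingerprints(original) & fingerprints(edited)
print(f"{len(shared)} of {len(fingerprints(original))} segments unchanged after the insert")
```

With segment boundaries chosen this way, deduplication reduces to computing a fingerprint (here SHA-1) of each segment and checking it against an index of segments already stored; the insertion near the front of the file leaves almost every downstream segment, and therefore its fingerprint, unchanged.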

Local Compression
Deduplication across all the data stored in a system is necessary, but it should be complemented with local compression, which typically reduces data size by another factor of 2. This may not be as significant a factor as deduplication itself, but no system that takes data reduction seriously can afford to ignore it.
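
As a small sketch (the helper names are hypothetical, and the factor of 2 naturally varies with the data), each unique segment can be run through a standard local compressor before it is written, stacking a further reduction on top of deduplication:

```python
import zlib

def store_segment(container: dict, fingerprint: str, segment: bytes) -> None:
    """Keep one locally compressed copy per unique segment fingerprint."""
    if fingerprint not in container:                        # deduplication: skip known segments
        container[fingerprint] = zlib.compress(segment, 6)  # local compression

def load_segment(container: dict, fingerprint: str) -> bytes:
    return zlib.decompress(container[fingerprint])
```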
Format Agnostic
Data comes in many formats generated by many different applications. But embedded in those different formats is often the same duplicate data. A document may appear as an individual file generated by a word processor, or in an email saved in a folder by an email reader, or in the database of the email server, or embedded in a backup image, or squirreled away by an email archive application. A deduplication system that relies on parsing the formats generated by all these different applications can never be a general purpose storage platform. There are too many such formats and they change too quickly for a storage vendor to support them all. Even if a vendor could, such an approach would end up handcuffing application writers trying to innovate on top of the platform. Until their new format is supported, they'd gain no benefit from the platform. They would be better off sticking with the same old formats and not creating anything new. Storage platforms should unleash creativity, not squelch it. Thus, the deduplication engine must be data agnostic and find and eliminate duplicates no matter how the data is packaged and stored to the system.

Multi-Protocol
There are many standard protocols in use in storage systems today, from NFS and CIFS to blocks and VTL. For maximum flexibility, storage systems should support all these protocols, since different protocols are needed for different applications at the same time. User home directories may be in NAS. The Exchange server may need to run on blocks. And backups may prefer VTL. Over the course of its lifecycle, the same data may be stored with all of these protocols. A presentation that starts in a home directory may be emailed to a colleague and stored in blocks by the email server, then archived in NAS by an email archive application, and backed up from the home directory, the email server, and the archive application to VTL. Deduplication should be able to find and eliminate redundant data no matter how it is stored.

CPU-Centric vs. Disk-Intensive Algorithm
Over the last two decades, CPU performance has increased 2,000,000 times [1]. In that time, disk performance has only increased 11 times [1]. Today, CPU performance is taking another leap with every doubling of the number of cores in a chip. Clearly, algorithms developed today for deduplication should leverage the growth in CPU performance instead of being tied to disk performance. Some systems rely on a disk access to find every piece of duplicate data. In some systems, the disk access is to look up a segment fingerprint. Other systems can't find duplicate data except by reading the old data to compare it to the new. In either case, the rate at which duplicate data can be found and eliminated is bounded by the speed of these disk accesses. To go faster, such systems need to add more disks. But the whole idea of deduplication is to reduce storage, not grow it. The Data Domain SISL™ (Stream-Informed Segment Layout) scaling architecture does not have to rely on reading lots of data from disk to find duplicates; it organizes the segment fingerprints on disk in such a way that only a small number of accesses are needed to find thousands of duplicates. With SISL, back-end disks only need to deliver a few megabytes of data to deduplicate hundreds of megabytes of incoming data.
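
A simplified sketch of the idea follows. It is a toy in the spirit of a stream-informed layout, not the actual SISL implementation; the container size and data structures are invented for illustration. Fingerprints of segments written together land in the same container, so one disk read of a container's fingerprint list answers lookups for a long run of segments from the same stream, and a small in-memory summary (a plain set here, where a real system would use a Bloom filter) lets brand-new segments skip disk entirely.

```python
import hashlib

CONTAINER_SIZE = 256        # segments written together share one on-disk container

class LocalityIndex:
    """Toy locality-aware fingerprint index; not the actual SISL design."""

    def __init__(self):
        self.containers = []    # "on disk": one fingerprint set per sealed container
        self.summary = set()    # in memory; a real system would use a Bloom filter
        self.catalog = {}       # fingerprint -> container id, consulted on disk
        self.cache = set()      # fingerprints prefetched from recently read containers
        self.pending = set()    # fingerprints of the container currently being filled
        self.disk_reads = 0

    def is_duplicate(self, segment: bytes) -> bool:
        fp = hashlib.sha1(segment).hexdigest()
        if fp in self.cache or fp in self.pending:
            return True                        # answered from memory
        if fp not in self.summary:
            self._append(fp)                   # definitely new: no disk access at all
            return False
        cid = self.catalog[fp]
        self.disk_reads += 1                   # one read serves hundreds of neighbours
        self.cache |= self.containers[cid]     # prefetch the whole container's fingerprints
        return True

    def _append(self, fp: str) -> None:
        self.summary.add(fp)
        self.catalog[fp] = len(self.containers)
        self.pending.add(fp)
        if len(self.pending) >= CONTAINER_SIZE:
            self.containers.append(self.pending)
            self.pending = set()
```

Feeding the same segment stream through `is_duplicate` twice makes the point: the second pass finds nearly every segment in memory and issues roughly one disk read per container rather than one per segment.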

Deduplicated Replication
Data is protected from disaster only when a copy of it is safely at a remote location. Replication has long been used for high-value data, but without deduplication, replication is too expensive for the other 90% of the data. Deduplication should happen immediately, inline, and its benefits should be applied to replication in real time, so the lag until data is safely off site is as small as possible. Only systems designed for deduplication can run fast enough to deduplicate and replicate data right away. Systems which have bolted on deduplication as merely a feature at best impose unnecessary delays in replication and at worst don't deliver the benefits of deduplicated replication at all.
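
A deduplicated replication pass can be sketched as a fingerprint exchange: the source sends fingerprints, the replica answers with the ones it lacks, and only those segments cross the wire. The classes and protocol below are illustrative only, not Data Domain's replication protocol.

```python
import hashlib

def fingerprint(segment: bytes) -> str:
    return hashlib.sha1(segment).hexdigest()

class Replica:
    def __init__(self):
        self.store = {}                              # fingerprint -> segment bytes

    def missing(self, fps):
        """Report which fingerprints are not yet stored at the remote site."""
        return [fp for fp in fps if fp not in self.store]

    def receive(self, segments):
        for seg in segments:
            self.store[fingerprint(seg)] = seg

def replicate(segments, replica: Replica):
    """Ship only the segments the replica does not already have."""
    fps = [fingerprint(s) for s in segments]
    needed = set(replica.missing(fps))
    to_send = [s for s in segments if fingerprint(s) in needed]
    replica.receive(to_send)
    return len(to_send), len(segments)

replica = Replica()
day1 = [bytes([i]) * 4096 for i in range(100)]       # first backup: all segments are new
replicate(day1, replica)
day2 = day1[:98] + [b"new" * 1365, b"data" * 1024]   # next backup: two changed segments
sent, total = replicate(day2, replica)
print(sent, "of", total, "segments crossed the wire")  # -> 2 of 100
```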




Deduplicated Snapshots
Snapshots are very helpful for capturing different versions of data, and deduplication can store all those versions much more compactly. Both are fundamental to simplifying data management. Yet many systems that are cobbled together as a set of features either can't create snapshots efficiently or can't deduplicate snapshots, so users need to be careful when they create snapshots so as not to lock duplicates in. Such restrictions and limitations complicate data management, not simplify it.

Conclusion
Deduplication has leverage across the storage infrastructure for reducing data, improving data protection and, in general, simplifying data management. But deduplication is hard to implement in a way that runs fast with low overhead across a full range of protocols and application environments. Storage system vendors who treat deduplication merely as a feature will check off a box on a feature list, but are likely to fall short of delivering the benefits deduplication promises.

[1] Seagate Technology Paper, Economies of Capacity and Speed, May 2004.
Data Domain
2421 Mission College Blvd.
Santa Clara, CA 95054
866-WE-DDUPE; 408-980-4800
sales@datadomain.com
22 international offices: datadomain.com/company/contacts.html

Copyright © 2008 Data Domain, Inc. All rights reserved.

Data Domain, Inc. believes information in this publication is accurate as of its publication date. This publication could include technical inaccuracies or typographical errors. The information is subject to change without notice. Changes are periodically
added to the information herein; these changes will be incorporated in new editions of the publication. Data Domain, Inc. may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time.
Reproduction of this publication without prior written permission is forbidden.

The information in this publication is provided “as is.” Data Domain, Inc. makes no representations or warranties of any kind, with respect to the information in this publication, and specifically disclaims implied warranties of
merchantability or fitness for a particular purpose.

Data Domain and Global Compression are trademarks or registered trademarks of Data Domain, Inc. All other brands, products, service names, trademarks, or registered service marks are used to identify the products or
services of their respective owners. WP-DCS-0408



More Related Content

What's hot

MySQL 8.0 achitecture and enhancement
MySQL 8.0 achitecture and enhancementMySQL 8.0 achitecture and enhancement
MySQL 8.0 achitecture and enhancementlalit choudhary
 
Strategic imperative the enterprise data model
Strategic imperative the enterprise data modelStrategic imperative the enterprise data model
Strategic imperative the enterprise data modelDATAVERSITY
 
Which Change Data Capture Strategy is Right for You?
Which Change Data Capture Strategy is Right for You?Which Change Data Capture Strategy is Right for You?
Which Change Data Capture Strategy is Right for You?Precisely
 
Creating Beautiful Dashboards with Grafana and ClickHouse
Creating Beautiful Dashboards with Grafana and ClickHouseCreating Beautiful Dashboards with Grafana and ClickHouse
Creating Beautiful Dashboards with Grafana and ClickHouseAltinity Ltd
 
MySQL Data Encryption at Rest
MySQL Data Encryption at RestMySQL Data Encryption at Rest
MySQL Data Encryption at RestMydbops
 
Data Lakehouse, Data Mesh, and Data Fabric (r1)
Data Lakehouse, Data Mesh, and Data Fabric (r1)Data Lakehouse, Data Mesh, and Data Fabric (r1)
Data Lakehouse, Data Mesh, and Data Fabric (r1)James Serra
 
Amazon RDS with Amazon Aurora | AWS Public Sector Summit 2016
Amazon RDS with Amazon Aurora | AWS Public Sector Summit 2016Amazon RDS with Amazon Aurora | AWS Public Sector Summit 2016
Amazon RDS with Amazon Aurora | AWS Public Sector Summit 2016Amazon Web Services
 
Data cleansing and prep with synapse data flows
Data cleansing and prep with synapse data flowsData cleansing and prep with synapse data flows
Data cleansing and prep with synapse data flowsMark Kromer
 
Akili Data Integration using PPDM
Akili Data Integration using PPDMAkili Data Integration using PPDM
Akili Data Integration using PPDMrnaramore
 
Azure vm の可用性を見直そう
Azure vm の可用性を見直そうAzure vm の可用性を見直そう
Azure vm の可用性を見直そうShuheiUda
 
Sql serverインデックスの断片化と再構築の必要性について
Sql serverインデックスの断片化と再構築の必要性についてSql serverインデックスの断片化と再構築の必要性について
Sql serverインデックスの断片化と再構築の必要性について貴仁 大和屋
 
[db tech showcase Tokyo 2014] B26: PostgreSQLを拡張してみよう by SRA OSS, Inc. 日本支社 高塚遥
[db tech showcase Tokyo 2014] B26: PostgreSQLを拡張してみよう  by SRA OSS, Inc. 日本支社 高塚遥[db tech showcase Tokyo 2014] B26: PostgreSQLを拡張してみよう  by SRA OSS, Inc. 日本支社 高塚遥
[db tech showcase Tokyo 2014] B26: PostgreSQLを拡張してみよう by SRA OSS, Inc. 日本支社 高塚遥Insight Technology, Inc.
 
Conformed Dimensions of Data Quality – An Organized Approach to Data Quality ...
Conformed Dimensions of Data Quality – An Organized Approach to Data Quality ...Conformed Dimensions of Data Quality – An Organized Approach to Data Quality ...
Conformed Dimensions of Data Quality – An Organized Approach to Data Quality ...DATAVERSITY
 
Amazon Aurora Deep Dive (db tech showcase 2016)
Amazon Aurora Deep Dive (db tech showcase 2016)Amazon Aurora Deep Dive (db tech showcase 2016)
Amazon Aurora Deep Dive (db tech showcase 2016)Amazon Web Services Japan
 

What's hot (20)

MySQL 8.0 achitecture and enhancement
MySQL 8.0 achitecture and enhancementMySQL 8.0 achitecture and enhancement
MySQL 8.0 achitecture and enhancement
 
Data Domain-Driven Design
Data Domain-Driven DesignData Domain-Driven Design
Data Domain-Driven Design
 
Strategic imperative the enterprise data model
Strategic imperative the enterprise data modelStrategic imperative the enterprise data model
Strategic imperative the enterprise data model
 
Which Change Data Capture Strategy is Right for You?
Which Change Data Capture Strategy is Right for You?Which Change Data Capture Strategy is Right for You?
Which Change Data Capture Strategy is Right for You?
 
Locking in SQL Server
Locking in SQL ServerLocking in SQL Server
Locking in SQL Server
 
Creating Beautiful Dashboards with Grafana and ClickHouse
Creating Beautiful Dashboards with Grafana and ClickHouseCreating Beautiful Dashboards with Grafana and ClickHouse
Creating Beautiful Dashboards with Grafana and ClickHouse
 
MySQL Data Encryption at Rest
MySQL Data Encryption at RestMySQL Data Encryption at Rest
MySQL Data Encryption at Rest
 
Data Lakehouse, Data Mesh, and Data Fabric (r1)
Data Lakehouse, Data Mesh, and Data Fabric (r1)Data Lakehouse, Data Mesh, and Data Fabric (r1)
Data Lakehouse, Data Mesh, and Data Fabric (r1)
 
Learning postgresql
Learning postgresqlLearning postgresql
Learning postgresql
 
Amazon RDS with Amazon Aurora | AWS Public Sector Summit 2016
Amazon RDS with Amazon Aurora | AWS Public Sector Summit 2016Amazon RDS with Amazon Aurora | AWS Public Sector Summit 2016
Amazon RDS with Amazon Aurora | AWS Public Sector Summit 2016
 
Data cleansing and prep with synapse data flows
Data cleansing and prep with synapse data flowsData cleansing and prep with synapse data flows
Data cleansing and prep with synapse data flows
 
Modern data warehouse
Modern data warehouseModern data warehouse
Modern data warehouse
 
Akili Data Integration using PPDM
Akili Data Integration using PPDMAkili Data Integration using PPDM
Akili Data Integration using PPDM
 
Emc data domain
Emc data domainEmc data domain
Emc data domain
 
Azure vm の可用性を見直そう
Azure vm の可用性を見直そうAzure vm の可用性を見直そう
Azure vm の可用性を見直そう
 
Sql serverインデックスの断片化と再構築の必要性について
Sql serverインデックスの断片化と再構築の必要性についてSql serverインデックスの断片化と再構築の必要性について
Sql serverインデックスの断片化と再構築の必要性について
 
snowpro (1).pdf
snowpro (1).pdfsnowpro (1).pdf
snowpro (1).pdf
 
[db tech showcase Tokyo 2014] B26: PostgreSQLを拡張してみよう by SRA OSS, Inc. 日本支社 高塚遥
[db tech showcase Tokyo 2014] B26: PostgreSQLを拡張してみよう  by SRA OSS, Inc. 日本支社 高塚遥[db tech showcase Tokyo 2014] B26: PostgreSQLを拡張してみよう  by SRA OSS, Inc. 日本支社 高塚遥
[db tech showcase Tokyo 2014] B26: PostgreSQLを拡張してみよう by SRA OSS, Inc. 日本支社 高塚遥
 
Conformed Dimensions of Data Quality – An Organized Approach to Data Quality ...
Conformed Dimensions of Data Quality – An Organized Approach to Data Quality ...Conformed Dimensions of Data Quality – An Organized Approach to Data Quality ...
Conformed Dimensions of Data Quality – An Organized Approach to Data Quality ...
 
Amazon Aurora Deep Dive (db tech showcase 2016)
Amazon Aurora Deep Dive (db tech showcase 2016)Amazon Aurora Deep Dive (db tech showcase 2016)
Amazon Aurora Deep Dive (db tech showcase 2016)
 

Viewers also liked

EMC Data domain advanced features and functions
EMC Data domain advanced features and functionsEMC Data domain advanced features and functions
EMC Data domain advanced features and functionssolarisyougood
 
Emc data domain technical deep dive workshop
Emc data domain  technical deep dive workshopEmc data domain  technical deep dive workshop
Emc data domain technical deep dive workshopsolarisyougood
 
Data Domain Backup & Recovery
Data Domain Backup & RecoveryData Domain Backup & Recovery
Data Domain Backup & Recoverynetlogix
 
Avamar Run Book - 5-14-2015_v3
Avamar Run Book - 5-14-2015_v3Avamar Run Book - 5-14-2015_v3
Avamar Run Book - 5-14-2015_v3Bill Oliver
 
EMC Deduplication Fundamentals
EMC Deduplication FundamentalsEMC Deduplication Fundamentals
EMC Deduplication Fundamentalsemcbaltics
 
EMC Networker installation Document
EMC Networker installation DocumentEMC Networker installation Document
EMC Networker installation Documentuzzal basak
 
Vblock Infrastructure Packages — integrated best-of-breed packages from VMwar...
Vblock Infrastructure Packages — integrated best-of-breed packages from VMwar...Vblock Infrastructure Packages — integrated best-of-breed packages from VMwar...
Vblock Infrastructure Packages — integrated best-of-breed packages from VMwar...Eric Sloof
 
Net Worker 9 : une solution orientée Backup As a Service
Net Worker 9 : une solution orientée Backup As a ServiceNet Worker 9 : une solution orientée Backup As a Service
Net Worker 9 : une solution orientée Backup As a ServiceRSD
 
EMC NetWorker Module for Microsoft SQL Server Release 5.1 ...
EMC NetWorker Module for Microsoft SQL Server Release 5.1 ...EMC NetWorker Module for Microsoft SQL Server Release 5.1 ...
EMC NetWorker Module for Microsoft SQL Server Release 5.1 ...webhostingguy
 
DataDomain brochure
DataDomain brochureDataDomain brochure
DataDomain brochureCommaGroup
 
Using FCS Networker
Using FCS NetworkerUsing FCS Networker
Using FCS NetworkerLyman Lycan
 
Become the Ultimate Networker
Become the Ultimate NetworkerBecome the Ultimate Networker
Become the Ultimate NetworkerPaul Maynard
 
Implementing an NDMP backup solution using EMC NetWorker on IBM Storwize V700...
Implementing an NDMP backup solution using EMC NetWorker on IBM Storwize V700...Implementing an NDMP backup solution using EMC NetWorker on IBM Storwize V700...
Implementing an NDMP backup solution using EMC NetWorker on IBM Storwize V700...IBM India Smarter Computing
 
Blue Medora - VMware vROps Management Pack for VCE Vblock Overview
Blue Medora - VMware vROps Management Pack for VCE Vblock OverviewBlue Medora - VMware vROps Management Pack for VCE Vblock Overview
Blue Medora - VMware vROps Management Pack for VCE Vblock OverviewBlue Medora
 
Presentation deduplication backup software and system
Presentation   deduplication backup software and systemPresentation   deduplication backup software and system
Presentation deduplication backup software and systemxKinAnx
 

Viewers also liked (18)

EMC Data domain advanced features and functions
EMC Data domain advanced features and functionsEMC Data domain advanced features and functions
EMC Data domain advanced features and functions
 
Emc data domain technical deep dive workshop
Emc data domain  technical deep dive workshopEmc data domain  technical deep dive workshop
Emc data domain technical deep dive workshop
 
Data Domain Backup & Recovery
Data Domain Backup & RecoveryData Domain Backup & Recovery
Data Domain Backup & Recovery
 
Avamar Run Book - 5-14-2015_v3
Avamar Run Book - 5-14-2015_v3Avamar Run Book - 5-14-2015_v3
Avamar Run Book - 5-14-2015_v3
 
EMC Deduplication Fundamentals
EMC Deduplication FundamentalsEMC Deduplication Fundamentals
EMC Deduplication Fundamentals
 
EMC Networker installation Document
EMC Networker installation DocumentEMC Networker installation Document
EMC Networker installation Document
 
VCE VBLOCK SYSTEMS
VCE VBLOCK SYSTEMSVCE VBLOCK SYSTEMS
VCE VBLOCK SYSTEMS
 
Vblock Infrastructure Packages — integrated best-of-breed packages from VMwar...
Vblock Infrastructure Packages — integrated best-of-breed packages from VMwar...Vblock Infrastructure Packages — integrated best-of-breed packages from VMwar...
Vblock Infrastructure Packages — integrated best-of-breed packages from VMwar...
 
Net Worker 9 : une solution orientée Backup As a Service
Net Worker 9 : une solution orientée Backup As a ServiceNet Worker 9 : une solution orientée Backup As a Service
Net Worker 9 : une solution orientée Backup As a Service
 
EMC NetWorker Module for Microsoft SQL Server Release 5.1 ...
EMC NetWorker Module for Microsoft SQL Server Release 5.1 ...EMC NetWorker Module for Microsoft SQL Server Release 5.1 ...
EMC NetWorker Module for Microsoft SQL Server Release 5.1 ...
 
DataDomain brochure
DataDomain brochureDataDomain brochure
DataDomain brochure
 
Using FCS Networker
Using FCS NetworkerUsing FCS Networker
Using FCS Networker
 
Become the Ultimate Networker
Become the Ultimate NetworkerBecome the Ultimate Networker
Become the Ultimate Networker
 
Implementing an NDMP backup solution using EMC NetWorker on IBM Storwize V700...
Implementing an NDMP backup solution using EMC NetWorker on IBM Storwize V700...Implementing an NDMP backup solution using EMC NetWorker on IBM Storwize V700...
Implementing an NDMP backup solution using EMC NetWorker on IBM Storwize V700...
 
Avamar 7 2010
Avamar 7 2010Avamar 7 2010
Avamar 7 2010
 
Iasp Enablement
Iasp EnablementIasp Enablement
Iasp Enablement
 
Blue Medora - VMware vROps Management Pack for VCE Vblock Overview
Blue Medora - VMware vROps Management Pack for VCE Vblock OverviewBlue Medora - VMware vROps Management Pack for VCE Vblock Overview
Blue Medora - VMware vROps Management Pack for VCE Vblock Overview
 
Presentation deduplication backup software and system
Presentation   deduplication backup software and systemPresentation   deduplication backup software and system
Presentation deduplication backup software and system
 

Similar to Dedupe-Centric Storage Revolution

Dedupe-Centric Storage for General Applications
Dedupe-Centric Storage for General Applications Dedupe-Centric Storage for General Applications
Dedupe-Centric Storage for General Applications EMC
 
Symantec Netbackup Delivering Performance Value Through Multiple De-duplicati...
Symantec Netbackup Delivering Performance Value Through Multiple De-duplicati...Symantec Netbackup Delivering Performance Value Through Multiple De-duplicati...
Symantec Netbackup Delivering Performance Value Through Multiple De-duplicati...Symantec
 
Pivotal gem fire_twp_distributed-main-memory-platform_042313
Pivotal gem fire_twp_distributed-main-memory-platform_042313Pivotal gem fire_twp_distributed-main-memory-platform_042313
Pivotal gem fire_twp_distributed-main-memory-platform_042313EMC
 
Postgres plus cloud_database_getting_started_guide
Postgres plus cloud_database_getting_started_guidePostgres plus cloud_database_getting_started_guide
Postgres plus cloud_database_getting_started_guideice1oog
 
PAM g.tr 3832
PAM g.tr 3832PAM g.tr 3832
PAM g.tr 3832Accenture
 
Pivotal gem fire_wp_hardest-problems-data-management_053013
Pivotal gem fire_wp_hardest-problems-data-management_053013Pivotal gem fire_wp_hardest-problems-data-management_053013
Pivotal gem fire_wp_hardest-problems-data-management_053013EMC
 
Challenges and Best Practices for Storing/ Challenges and Best Practices for ...
Challenges and Best Practices for Storing/ Challenges and Best Practices for ...Challenges and Best Practices for Storing/ Challenges and Best Practices for ...
Challenges and Best Practices for Storing/ Challenges and Best Practices for ...NetApp
 
Big data week London Big data pipelining 0.2
Big data week London  Big data pipelining 0.2Big data week London  Big data pipelining 0.2
Big data week London Big data pipelining 0.2Simon Ambridge
 
Testing Delphix: easy data virtualization
Testing Delphix: easy data virtualizationTesting Delphix: easy data virtualization
Testing Delphix: easy data virtualizationFranck Pachot
 
8 considerations for evaluating disk based backup solutions
8 considerations for evaluating disk based backup solutions8 considerations for evaluating disk based backup solutions
8 considerations for evaluating disk based backup solutionsServium
 
NetIQ Disaster Recovery ebook
NetIQ Disaster Recovery ebookNetIQ Disaster Recovery ebook
NetIQ Disaster Recovery ebookpamolson
 
Juniper: Data Center Evolution
Juniper: Data Center EvolutionJuniper: Data Center Evolution
Juniper: Data Center EvolutionTechnologyBIZ
 

Similar to Dedupe-Centric Storage Revolution (20)

Dedupe-Centric Storage for General Applications
Dedupe-Centric Storage for General Applications Dedupe-Centric Storage for General Applications
Dedupe-Centric Storage for General Applications
 
Symantec Netbackup Delivering Performance Value Through Multiple De-duplicati...
Symantec Netbackup Delivering Performance Value Through Multiple De-duplicati...Symantec Netbackup Delivering Performance Value Through Multiple De-duplicati...
Symantec Netbackup Delivering Performance Value Through Multiple De-duplicati...
 
Pivotal gem fire_twp_distributed-main-memory-platform_042313
Pivotal gem fire_twp_distributed-main-memory-platform_042313Pivotal gem fire_twp_distributed-main-memory-platform_042313
Pivotal gem fire_twp_distributed-main-memory-platform_042313
 
Postgres plus cloud_database_getting_started_guide
Postgres plus cloud_database_getting_started_guidePostgres plus cloud_database_getting_started_guide
Postgres plus cloud_database_getting_started_guide
 
PAM g.tr 3832
PAM g.tr 3832PAM g.tr 3832
PAM g.tr 3832
 
Generic RLM White Paper
Generic RLM White PaperGeneric RLM White Paper
Generic RLM White Paper
 
Pivotal gem fire_wp_hardest-problems-data-management_053013
Pivotal gem fire_wp_hardest-problems-data-management_053013Pivotal gem fire_wp_hardest-problems-data-management_053013
Pivotal gem fire_wp_hardest-problems-data-management_053013
 
My sql
My sqlMy sql
My sql
 
thesis
thesisthesis
thesis
 
Challenges and Best Practices for Storing/ Challenges and Best Practices for ...
Challenges and Best Practices for Storing/ Challenges and Best Practices for ...Challenges and Best Practices for Storing/ Challenges and Best Practices for ...
Challenges and Best Practices for Storing/ Challenges and Best Practices for ...
 
Big data week London Big data pipelining 0.2
Big data week London  Big data pipelining 0.2Big data week London  Big data pipelining 0.2
Big data week London Big data pipelining 0.2
 
Bb sql serverdell
Bb sql serverdellBb sql serverdell
Bb sql serverdell
 
Testing Delphix: easy data virtualization
Testing Delphix: easy data virtualizationTesting Delphix: easy data virtualization
Testing Delphix: easy data virtualization
 
8 considerations for evaluating disk based backup solutions
8 considerations for evaluating disk based backup solutions8 considerations for evaluating disk based backup solutions
8 considerations for evaluating disk based backup solutions
 
NetIQ Disaster Recovery ebook
NetIQ Disaster Recovery ebookNetIQ Disaster Recovery ebook
NetIQ Disaster Recovery ebook
 
gfs-sosp2003
gfs-sosp2003gfs-sosp2003
gfs-sosp2003
 
gfs-sosp2003
gfs-sosp2003gfs-sosp2003
gfs-sosp2003
 
paper
paperpaper
paper
 
Juniper: Data Center Evolution
Juniper: Data Center EvolutionJuniper: Data Center Evolution
Juniper: Data Center Evolution
 
Gfs论文
Gfs论文Gfs论文
Gfs论文
 

Recently uploaded

Top 10 Hubspot Development Companies in 2024
Top 10 Hubspot Development Companies in 2024Top 10 Hubspot Development Companies in 2024
Top 10 Hubspot Development Companies in 2024TopCSSGallery
 
QCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesQCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesBernd Ruecker
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsNathaniel Shimoni
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Farhan Tariq
 
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Mark Goldstein
 
Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...
Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...
Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...Jeffrey Haguewood
 
Testing tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesTesting tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesKari Kakkonen
 
Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...
Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...
Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...Nikki Chapple
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality AssuranceInflectra
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityIES VE
 
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotesMuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotesManik S Magar
 
Infrared simulation and processing on Nvidia platforms
Infrared simulation and processing on Nvidia platformsInfrared simulation and processing on Nvidia platforms
Infrared simulation and processing on Nvidia platformsYoss Cohen
 
Zeshan Sattar- Assessing the skill requirements and industry expectations for...
Zeshan Sattar- Assessing the skill requirements and industry expectations for...Zeshan Sattar- Assessing the skill requirements and industry expectations for...
Zeshan Sattar- Assessing the skill requirements and industry expectations for...itnewsafrica
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Strongerpanagenda
 
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyesAssure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyesThousandEyes
 
Connecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfConnecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfNeo4j
 
Kuma Meshes Part I - The basics - A tutorial
Kuma Meshes Part I - The basics - A tutorialKuma Meshes Part I - The basics - A tutorial
Kuma Meshes Part I - The basics - A tutorialJoão Esperancinha
 
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sectoritnewsafrica
 
Accelerating Enterprise Software Engineering with Platformless
Accelerating Enterprise Software Engineering with PlatformlessAccelerating Enterprise Software Engineering with Platformless
Accelerating Enterprise Software Engineering with PlatformlessWSO2
 
Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...
Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...
Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...BookNet Canada
 

Recently uploaded (20)

Top 10 Hubspot Development Companies in 2024
Top 10 Hubspot Development Companies in 2024Top 10 Hubspot Development Companies in 2024
Top 10 Hubspot Development Companies in 2024
 
QCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesQCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architectures
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...
 
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
 
Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...
Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...
Email Marketing Automation for Bonterra Impact Management (fka Social Solutio...
 
Testing tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesTesting tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examples
 
Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...
Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...
Microsoft 365 Copilot: How to boost your productivity with AI – Part two: Dat...
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a reality
 
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotesMuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
 
Infrared simulation and processing on Nvidia platforms
Infrared simulation and processing on Nvidia platformsInfrared simulation and processing on Nvidia platforms
Infrared simulation and processing on Nvidia platforms
 
Zeshan Sattar- Assessing the skill requirements and industry expectations for...
Zeshan Sattar- Assessing the skill requirements and industry expectations for...Zeshan Sattar- Assessing the skill requirements and industry expectations for...
Zeshan Sattar- Assessing the skill requirements and industry expectations for...
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
 
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyesAssure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
 
Connecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfConnecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdf
 
Kuma Meshes Part I - The basics - A tutorial
Kuma Meshes Part I - The basics - A tutorialKuma Meshes Part I - The basics - A tutorial
Kuma Meshes Part I - The basics - A tutorial
 
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
4. Cobus Valentine- Cybersecurity Threats and Solutions for the Public Sector
 
Accelerating Enterprise Software Engineering with Platformless
Accelerating Enterprise Software Engineering with PlatformlessAccelerating Enterprise Software Engineering with Platformless
Accelerating Enterprise Software Engineering with Platformless
 
Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...
Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...
Transcript: New from BookNet Canada for 2024: BNC SalesData and LibraryData -...
 

Dedupe-Centric Storage Revolution

  • 1. White Paper Dedupe-Centric Storage Hugo Patterson, Chief Architect, Data Domain Abstract Deduplication is enabling a new generation of data storage systems. While traditional storage vendors have tried to dismiss deduplication as a feature, it is proving to be a true storage fundamental. Faced with the choice of completely rebuilding their storage system, or simply bolting on a copy-on-write snapshot, most vendors choose the easy path that will get a check-box snapshot feature to market soonest. There are a vast number of requirements that make it very challenging to build a storage system that actually delivers the full potential of deduplication. Deduplication is hard to implement in a way that runs fast with low overhead across a full range of protocols and application environments. When deduplication is done well, people can create, share, access, protect, and manage their data in new and easier ways, and IT administrators aren’t struggling to keep up. Dedupe-centric storage is engineered from the start to address these challenges. This paper gives historical perspective and a technological context to this revolution in storage management. DEDUPLICATION STORAGE
  • 2. Dedupe-Centric Storage Table of Contents AbSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 DEDUPlICATION: ThE POST-SNAPShOT REvOlUTION IN STORAGE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 bUIlDING A bETTER SNAPShOT . . . . . . . . . . . . . . . . . . . . . 3 DEDUPlICATION AS A POINT SOlUTION . . . . . . . . . . . . . . 3 DEDUPE AS A STORAGE FUNDAmENTAl . . . . . . . . . . . . . . . . . 4 vARIAblE lENGTh DUPlICATES . . . . . . . . . . . . . . . . . . . . . 4 lOCAl COmPRESSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 FORmAT AGNOSTIC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 mUlTI-PROTOCOl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 CPU-CENTRIC vS . DISk-INTENSIvE AlGORIThm . . . . . . . . 5 DEDUPlICATED REPlICATION . . . . . . . . . . . . . . . . . . . . . . . 5 DEDUPlICATED SNAPShOTS . . . . . . . . . . . . . . . . . . . . . . . . 5 CONClUSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2 DEDUPE-CENTRIC STORAGE
Deduplication: The Post-Snapshot Revolution in Storage

The revolution to come with deduplication will eclipse the snapshot revolution of the early 1990s. Since at least the 1980s, systems had been saving versions of individual files or snapshots of whole disk volumes or file systems. Technologies existed for creating snapshots reasonably efficiently. But in most production systems, snapshots were expensive, slow, or limited in number, and so had limited use and deployment.

About 15 years ago, Network Appliance (NetApp) leveraged the existing technologies and added some of their own innovations to build a file system that could create and present multiple snapshots of the entire system with little performance impact and without making a complete new copy of all the data for every snapshot. It did not use deduplication to find and eliminate redundant data, but at least it didn't duplicate unchanged data just to create a new logical view of the same data in a new snapshot. Even this was a significant advance. By not requiring a complete copy for each snapshot, it became much cheaper to store snapshots. And by not moving the old version of data preserved in a snapshot just to store a new version of data, there was no performance impact to creating snapshots.

The efficiency of their approach meant there was no reason not to create snapshots and keep a bunch around. Users responded by routinely creating several snapshots a day and often preserving some snapshots for several days. The relative abundance of such snapshots revolutionized data protection. Users could browse snapshots and restore earlier versions of files to recover from mistakes immediately, all by themselves. Instead of being shut down to run backups, databases could be quiesced briefly to create a consistent snapshot and then freed to run again while backups proceeded in the background. Such consistent snapshots could also serve as starting points for database recovery in the event of a database corruption, thereby avoiding the need to go to tape for recovery.

Building a Better Snapshot

Despite the many benefits and commercial success of space- and performance-efficient snapshots, it was over a decade before the first competitors built comparable technology. Most competitors' snapshots still do not compare. Many added a snapshot feature that delivers space efficiency, but few matched the simplicity, performance, scalability, and ultimately the utility of NetApp's design. Why?

The answer is that creating and managing the multiple virtual views of a file system captured in snapshots is challenging. By far the easiest and most widely adopted approach is to read the old version of data and copy it to a safe place before writing new data. This is known as copy-on-write and works especially well for block storage systems. Unfortunately, the copy operation imposes a severe performance penalty because of the extra read and write operations it requires. But doing something more sophisticated, writing new data to a new location without the extra read and write, requires all new kinds of data structures and completely changes the model for how a storage system works. Further, block-level snapshots by themselves do not provide access to the file versions captured in the snapshots; you need a file system that integrates those versions into the name space.

Faced with the choice of completely rebuilding their storage system or simply bolting on a copy-on-write snapshot, most vendors choose the easy path that will get a check-box snapshot feature to market soonest. Very few are willing to invest the years and dollars to start over for what they view as just a single feature. By not investing, their snapshots were doomed to be second rate. They don't cross the hurdle that makes snapshots easy, cheap, and most useful. They don't deliver on the snapshot revolution.

Deduplication as a Point Solution

On its surface, deduplication is a simple concept: find and eliminate redundant copies of data. But computing systems are complex, supporting many different kinds of applications. And data sets are vast, putting a premium on scalability. Ideally, a deduplication technology would be able to find and eliminate redundant data no matter which application writes it, even if different applications have written the same data. Deduplication should be effective and scalable so that it can find a small segment of matching data no matter where it is stored in a large system. Deduplication also needs to be efficient, or the memory and computing overhead of finding duplicates in a large system could negate any benefit. Finally, deduplication needs to be simple and automatic, or the management burden of scheduling, tuning, or generally managing the deduplication process could again negate any benefits of the data reduction. All of these requirements make it very challenging to build a storage system that actually delivers the full potential of deduplication.

Early efforts at deduplication did not attempt a comprehensive solution. One example is email systems that store only one copy of an attachment or a message even when it is sent to many recipients. Such email blasts are a common way for people to share their work, and early email implementations created a separate copy of the email and attachment for every recipient. The first innovation is to store just a single copy on the server and maintain multiple references to it. More sophisticated systems might detect when the same attachment is forwarded to further recipients and create additional references to the same data instead of storing a new copy. Nevertheless, when users download attachments to their desktops, the corporate environment still ends up with many copies to store, manage, and protect.
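As a rough illustration of that single-instance idea (a hypothetical sketch, not the design of any particular mail server), the store below keeps one copy of each unique attachment, keyed by a content hash, and reference-counts the messages that point to it.

```python
import hashlib


class AttachmentStore:
    """Toy single-instance store: each unique attachment is kept once,
    keyed by a content hash, with a reference count per message."""

    def __init__(self):
        self.blobs = {}      # content hash -> attachment bytes (stored once)
        self.refcount = {}   # content hash -> number of messages referencing it

    def add(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blobs:               # first copy of this attachment
            self.blobs[digest] = data
        self.refcount[digest] = self.refcount.get(digest, 0) + 1
        return digest                               # the message stores only this reference

    def release(self, digest: str) -> None:
        self.refcount[digest] -= 1
        if self.refcount[digest] == 0:              # last referencing message deleted
            del self.refcount[digest]
            del self.blobs[digest]                  # reclaim the space


store = AttachmentStore()
ref_a = store.add(b"Q3 report.pdf bytes")           # sender's copy
ref_b = store.add(b"Q3 report.pdf bytes")           # another recipient: no new copy
assert ref_a == ref_b and len(store.blobs) == 1
```

The same pattern appears in the point-solution systems described above: the savings exist only inside the one application that implements it, which is precisely its limitation.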
At the end of the day, an email system that eliminates duplicate email attachments is a point solution. It helps that one application, but it doesn't keep many additional duplicates, even of the very same attachments, from proliferating through the environment.

The power of an efficient snapshot mechanism in a storage system is its generality. Databases could all build their own snapshot mechanism, but when the storage provides them they don't have to. They and any other application can benefit from snapshots for efficient, and independent, data protection. NetApp was dedicated to building efficient snapshots so all the applications didn't have to.

Dedupe as a Storage Fundamental

Data Domain is dedicated to building deduplication storage systems with a deduplication engine powerful enough to become a platform that general applications can leverage. That engine can find duplicates in data written by many different applications at every stage of the lifecycle of the file, can scale to store many hundreds of terabytes of data and find duplicates wherever they exist, can reduce data by a factor of 20 or more, can do so at high speed without huge memory or disk resources, and does so automatically while taking snapshots, so it requires a minimum of administrator attention.

Such an engine is only possible with an unconventional system architecture that breaks with traditional thinking. Here are some important features needed to build such an engine.

Variable Length Duplicates

Not all data changes are exactly 4KB in size and 4KB aligned. Sometimes people just replace one word with a longer word. Such a simple small replacement shifts all the rest of the data by some small amount. To a deduplication engine built on fixed size blocks, the entire rest of the file would seem to be new unique data even though it really contains lots of duplicates. Further, that file may exist elsewhere in the system, say in an email folder, at a random offset in a larger file. Again, to an engine organized around fixed blocks, the data would all seem to be new. Conventional storage systems, whether NAS or SAN, store fixed sized blocks. Such a system, which attempts to bolt on deduplication as an afterthought, will only be able to look for identical fixed size blocks. Clearly, such an approach will never be as effective, comprehensive, and general purpose as a system that can handle small replacements and recognize and eliminate duplicate data no matter where in a file it appears. (A simplified sketch of content-defined chunking, one common way to find such variable-length duplicates, follows the Multi-Protocol section below.)

Local Compression

Deduplication across all the data stored in a system is necessary, but it should be complemented with local compression, which typically reduces data size by another factor of 2. This may not be as significant a factor as deduplication itself, but no system that takes data reduction seriously can afford to ignore it.

Format Agnostic

Data comes in many formats generated by many different applications. But embedded in those different formats is often the same duplicate data. A document may appear as an individual file generated by a word processor, or in an email saved in a folder by an email reader, or in the database of the email server, or embedded in a backup image, or squirreled away by an email archive application. A deduplication system that relies on parsing the formats generated by all these different applications can never be a general purpose storage platform. There are too many such formats, and they change too quickly for a storage vendor to support them all. Even if a vendor could, such an approach would end up handcuffing application writers trying to innovate on top of the platform. Until their new format is supported, they'd gain no benefit from the platform. They would be better off sticking with the same old formats and not creating anything new. Storage platforms should unleash creativity, not squelch it. Thus, the deduplication engine must be data agnostic and find and eliminate duplicates in data no matter how it is packaged and stored to the system.

Multi-Protocol

There are many standard protocols in use in storage systems today, from NFS and CIFS to blocks and VTL. For maximum flexibility, storage systems should support all these protocols, since different protocols are needed for different applications at the same time. User home directories may be in NAS. The Exchange server may need to run on blocks. And backups may prefer VTL. Over the course of its lifecycle, the same data may be stored with all of these protocols. A presentation that starts in a home directory may be emailed to a colleague and stored in blocks by the email server, then archived in NAS by an email archive application, and backed up from the home directory, the email server, and the archive application to VTL. Deduplication should be able to find and eliminate redundant data no matter how it is stored.
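Variable-length deduplication is commonly built on content-defined chunking, in which boundaries are chosen by the bytes themselves rather than by fixed offsets, so a small insertion shifts only the chunks around the edit. The sketch below illustrates that general technique under assumed parameters; the window size, checksum, and cut rule are arbitrary choices for the example, not Data Domain's algorithm.

```python
from collections import deque
import hashlib
import os


def chunk_data(data, window=48, divisor=1024, min_size=1024, max_size=8192):
    """Content-defined chunking sketch: cut wherever a checksum over a sliding
    window of bytes hits a chosen value, so boundaries follow the content
    rather than sitting at fixed offsets."""
    chunks = []
    win = deque()
    rolling = 0
    start = 0
    for i, byte in enumerate(data):
        win.append(byte)
        rolling += byte
        if len(win) > window:
            rolling -= win.popleft()          # drop the byte leaving the window
        size = i - start + 1
        if (size >= min_size and rolling % divisor == 0) or size >= max_size:
            chunks.append(data[start:i + 1])  # emit a chunk at a content-chosen cut
            start = i + 1
            win.clear()
            rolling = 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks


# A small insertion near the front shifts every later byte, yet most chunk
# fingerprints survive because later cut points realign with the content.
base = os.urandom(200_000)
edited = base[:500] + b"one small insertion" + base[500:]
old = {hashlib.sha256(c).hexdigest() for c in chunk_data(base)}
new = {hashlib.sha256(c).hexdigest() for c in chunk_data(edited)}
print(f"{len(old & new)} of {len(new)} chunks unchanged after the edit")
```

On typical runs only the chunk containing the edit, and occasionally a neighbor, changes; the rest of the file still deduplicates. A fixed-block scheme run on the same pair of files would see almost every block after the insertion as new.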
CPU-Centric vs. Disk-Intensive Algorithm

Over the last two decades, CPU performance has increased 2,000,000 times [1]. In that time, disk performance has only increased 11 times [1]. Today, CPU performance is taking another leap with every doubling of the number of cores in a chip. Clearly, algorithms developed today for deduplication should leverage the growth in CPU performance instead of being tied to disk performance. Some systems rely on a disk access to find every piece of duplicate data. In some systems, the disk access is to look up a segment fingerprint. Other systems can't find duplicate data except by reading the old data to compare it to the new. In either case, the rate at which duplicate data can be found and eliminated is bounded by the speed of these disk accesses. To go faster, such systems need to add more disks. But the whole idea of deduplication is to reduce storage, not grow it. The Data Domain SISL™ (Stream-Informed Segment Layout) scaling architecture does not have to rely on reading lots of data from disk to find duplicates; it organizes the segment fingerprints on disk in such a way that only a small number of accesses are needed to find thousands of duplicates. With SISL, back-end disks only need to deliver a few megabytes of data to deduplicate hundreds of megabytes of incoming data. (Simplified sketches of this locality idea and of deduplicated replication follow the Conclusion below.)

Deduplicated Replication

Data is protected from disaster only when a copy of it is safely at a remote location. Replication has long been used for high-value data, but without deduplication, replication is too expensive for the other 90% of the data. Deduplication should happen immediately, inline, and its benefits should be applied to replication in real time, so the lag until data is safely off site is as small as possible. Only systems designed for deduplication can run fast enough to deduplicate and replicate data right away. Systems which have bolted on deduplication as merely a feature at best impose unnecessary delays in replication and at worst don't deliver the benefits of deduplicated replication at all.

Deduplicated Snapshots

Snapshots are very helpful for capturing different versions of data, and deduplication can store all those versions much more compactly. Both are fundamental to simplifying data management. Yet many systems that are cobbled together as a set of features either can't create snapshots efficiently, or they can't deduplicate snapshots, so users need to be careful when they create snapshots so as not to lock duplicates in. Such restrictions and limitations complicate data management, not simplify it.

Conclusion

Deduplication has leverage across the storage infrastructure for reducing data, improving data protection and, in general, simplifying data management. But deduplication is hard to implement in a way that runs fast with low overhead across a full range of protocols and application environments. Storage system vendors who treat deduplication merely as a feature will check off a box on a feature list, but are likely to fall short of delivering the benefits deduplication promises.

[1] Seagate Technology Paper, Economies of Capacity and Speed, May 2004.
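Returning to the CPU-centric versus disk-intensive contrast above, the following is a minimal sketch of the locality idea: group the fingerprints of segments written together, and on a miss prefetch the whole group into memory so that a long run of duplicate segments costs one disk read rather than one per segment. The class, its container grouping, and the numbers in the example are illustrative assumptions, not the actual SISL implementation.

```python
class LocalityIndex:
    """Toy fingerprint lookup that exploits stream locality.

    Segments written together land in the same container, and their
    fingerprints are stored together on disk. A lookup that misses in memory
    pays one simulated disk read, then prefetches every fingerprint from that
    container, so the rest of the backup stream resolves from memory."""

    def __init__(self):
        self.on_disk = {}      # fingerprint -> container id (full index, "on disk")
        self.containers = {}   # container id -> fingerprints written together
        self.cache = set()     # prefetched fingerprints held in memory
        self.disk_reads = 0

    def write_segment(self, fp, container_id):
        self.on_disk[fp] = container_id
        self.containers.setdefault(container_id, []).append(fp)

    def is_duplicate(self, fp):
        if fp in self.cache:              # common case: answered from memory
            return True
        if fp not in self.on_disk:        # new segment; a real system would consult a
            return False                  # compact in-memory summary to avoid disk here
        self.disk_reads += 1              # one read fetches the whole neighbor group
        self.cache.update(self.containers[self.on_disk[fp]])
        return True


idx = LocalityIndex()
for i in range(1000):
    idx.write_segment(f"fp{i}", container_id=i // 100)   # 100 segments per container
for i in range(1000):                                     # re-ingest the same stream
    assert idx.is_duplicate(f"fp{i}")
print(idx.disk_reads)   # 10 container reads resolve 1000 duplicate lookups
```

A per-segment disk lookup would have cost 1000 reads for the same stream; locality-aware prefetching is what lets throughput scale with CPU rather than with spindle count.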
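And for deduplicated replication, the sketch below shows the general shape of sending only what the remote site lacks. The function names, the two callbacks, and the simulated replica are hypothetical illustrations, not Data Domain's replication protocol.

```python
import hashlib


def replicate(segments, remote_has, send_segment, send_reference):
    """Ship a deduplicated stream: send a short reference for every segment the
    replica already stores, and send data only for segments it is missing."""
    bytes_sent = 0
    for seg in segments:
        fp = hashlib.sha256(seg).hexdigest()
        if remote_has(fp):             # replica already holds this segment
            send_reference(fp)         # a fingerprint is tens of bytes, not the data
        else:
            send_segment(fp, seg)      # only genuinely new data crosses the WAN
            bytes_sent += len(seg)
    return bytes_sent


# Simulate the replica with a plain dictionary.
remote = {}
sent = replicate(
    segments=[b"segment A", b"segment B", b"segment A"],
    remote_has=lambda fp: fp in remote,
    send_segment=lambda fp, seg: remote.__setitem__(fp, seg),
    send_reference=lambda fp: None,    # replica rebuilds the stream from the fp sequence
)
print(sent)   # bytes for the two unique segments; the repeat travels as a reference
```

Because duplicates are identified inline as data arrives, the decision of what to ship can be made immediately, which is why the lag until data is safely off site can stay small.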
Data Domain
2421 Mission College Blvd., Santa Clara, CA 95054
866-WE-DDUPE; 408-980-4800
sales@datadomain.com
22 international offices: datadomain.com/company/contacts.html
www.datadomain.com

Copyright © 2008 Data Domain, Inc. All rights reserved. Data Domain, Inc. believes information in this publication is accurate as of its publication date. This publication could include technical inaccuracies or typographical errors. The information is subject to change without notice. Changes are periodically added to the information herein; these changes will be incorporated in new editions of the publication. Data Domain, Inc. may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time. Reproduction of this publication without prior written permission is forbidden. The information in this publication is provided "as is." Data Domain, Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Data Domain and Global Compression are trademarks or registered trademarks of Data Domain, Inc. All other brands, products, service names, trademarks, or registered service marks are used to identify the products or services of their respective owners. WP-DCS-0408