Data deduplication is a hot topic in storage, and it can save significant disk space in many environments, with some trade-offs. We'll discuss what deduplication is and where the open source solutions stand versus commercial offerings. The presentation leans toward the practical: where you can use deduplication in real-world projects (what works, what doesn't, and whether you should use it in production).
Open Source Data Deduplication
1. Open Source Data Deduplication
Nick Webb
nickw@redwireservices.com www.redwireservices.com @RedWireServices
(206) 829-8621
Last updated 8/10/2011
2. Introduction
● What is Deduplication? Different kinds?
● Why do you want it?
● How does it work?
● Advantages / Drawbacks
● Commercial Implementations
● Open Source implementations, performance, reliability, and stability of each
3. What is Data Deduplication
Wikipedia:
. . . data deduplication is a specialized data compression technique for eliminating coarse-grained redundant data, typically to improve storage utilization. In the deduplication process, duplicate data is deleted, leaving only one copy of the data to be stored, along with references to the unique copy of data. Deduplication is able to reduce the required storage capacity since only the unique data is stored. Depending on the type of deduplication, redundant files may be reduced, or even portions of files or other data that are similar can also be removed . . .
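To make the definition concrete, here is a minimal Python sketch of hash-based, fixed-block deduplication (an illustration only, not any particular product's implementation; the DedupeStore class and the 8 KB block size are assumptions for the example). Blocks are keyed by their SHA-256 digest, so a block seen before costs only a reference:

import hashlib, os

BLOCK_SIZE = 8192  # 8 KB fixed blocks (an assumption, matching a later slide)

class DedupeStore:
    """Toy block-level dedupe store: each unique block is kept once, by digest."""
    def __init__(self):
        self.blocks = {}   # digest -> block bytes (stored once)
        self.files = {}    # filename -> ordered list of digests (references)

    def write(self, name, data):
        refs = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # a duplicate block stores nothing new
            refs.append(digest)
        self.files[name] = refs

    def read(self, name):
        return b"".join(self.blocks[d] for d in self.files[name])

store = DedupeStore()
data = os.urandom(50_000)
store.write("backup-monday.img", data)
store.write("backup-tuesday.img", data)    # identical "full backup"
assert store.read("backup-tuesday.img") == data
print(len(store.blocks), "unique blocks stored for 2 files")   # 7, not 14

Writing the same full backup twice stores each unique block once; the second copy adds only references, which is exactly the "additional full backups take little space" effect described on a later slide.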
4. Why Dedupe?
● Save disk space and money (fewer disks)
● Fewer disks = less power, cooling, and space
● Improve write performance (of duplicate data)
● Be efficient: don't re-copy or store previously stored data
5. Where does it Work Well?
● Secondary Storage
● Backups/Archives
● Online backups with limited bandwidth/replication
● Save disk space – additional full backups take little space
● Virtual Machines (Primary & Secondary)
● File Shares
6. Not a Fit
● Random data
● Video
● Pictures
● Music
● Encrypted files
– many vendors dedupe, then encrypt
7. Types
● Source / Target
● Global
● Fixed/Sliding Block
● File Based (SIS – single-instance storage)
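The fixed-versus-sliding-block distinction above matters most when data shifts: with fixed blocks, inserting a single byte moves every later block boundary, so nothing downstream deduplicates, while sliding-block (content-defined) chunking cuts chunks where the content itself dictates, so boundaries survive edits. Below is a deliberately simple sketch of the idea; it hashes a 48-byte window at every position, which is slow but clear, whereas real implementations use an O(1) rolling fingerprint such as a Rabin hash, and the chunk-size parameters here are arbitrary assumptions (~8 KB average):

import hashlib, os

def chunks(data, window=48, mask=0x1FFF, min_c=2048, max_c=65536):
    """Cut a chunk wherever a hash of the last `window` bytes has its low
    13 bits zero (average chunk ~8 KB), within min/max size bounds."""
    start = 0
    for i in range(len(data)):
        size = i - start + 1
        if size < min_c:
            continue
        h = int.from_bytes(hashlib.sha256(data[i - window + 1:i + 1]).digest()[:4], "big")
        if (h & mask) == 0 or size >= max_c:
            yield data[start:i + 1]
            start = i + 1
    if start < len(data):
        yield data[start:]

base = os.urandom(200_000)
edited = b"X" + base                       # insert one byte at the front
a = {hashlib.sha256(c).hexdigest() for c in chunks(base)}
b = {hashlib.sha256(c).hexdigest() for c in chunks(edited)}
print(f"{len(a & b)} of {len(a)} chunks still match after the edit")  # most of them
# With 8 KB *fixed* blocks, the same one-byte insert would leave ~0 matches.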
8. Drawbacks
● Slow writes, slower reads
● High CPU/memory utilization (a dedicated server is a must)
● Increased risk of data loss / corruption
● Collision risk of 1.3x10^-49% per PB (256-bit hash & 8KB blocks)
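The collision bullet above can be sanity-checked with the standard birthday bound p ≈ n(n−1)/2 · 2^(−k) for n blocks and a k-bit hash. The exact constant depends on the assumptions (the slide's 1.3x10^-49% figure evidently uses slightly different ones), but any reasonable variant lands dozens of orders of magnitude below disk error rates, which are typically specified around one uncorrectable bit error per 10^15 bits read:

# Birthday-bound sanity check (assumptions: 1 PB of data, 8 KB blocks, SHA-256)
k = 256                        # hash width in bits
n = 2**50 // (8 * 1024)        # 8 KB blocks in 1 PB = 2**37 blocks
p = n * (n - 1) / 2 / 2**k     # probability of any collision within the PB
print(f"p ~ {p:.1e} per PB, i.e. {p * 100:.1e}%")   # ~8e-56, or ~8e-54%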
12. Block Reclamation
● In general, blocks are not removed/freed when a file is removed
● We must periodically check blocks for references; a block with no references can be deleted, freeing allocated space
● The process can be expensive, so it is scheduled during off-peak hours
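Continuing the toy DedupeStore from the slide 3 sketch, the reclamation pass described above amounts to mark-and-sweep over block references (the reclaim helper below is hypothetical; production systems often maintain persistent reference counts instead of scanning):

def reclaim(store):
    """Off-peak reclamation pass: mark every digest still referenced by a
    live file, then sweep (delete) the unreferenced blocks."""
    live = {d for refs in store.files.values() for d in refs}    # mark
    dead = [d for d in store.blocks if d not in live]            # sweep
    for d in dead:
        del store.blocks[d]
    return len(dead)

del store.files["backup-monday.img"]       # deleting a file frees nothing yet
print(reclaim(store), "blocks freed")      # 0: tuesday still references them
del store.files["backup-tuesday.img"]
print(reclaim(store), "blocks freed")      # 7: now truly unreferenced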
13. Commercial Implementations
● Just about every backup vendor
● Symantec, CommVault
● Cloud: Asigra, Barracuda, Dropbox (global), JungleDisk, Mozy
● NAS/SAN/Backup Targets
● NEC HydraStor
● DataDomain/EMC Avamar
● Quantum
● NetApp
15. How Good is it?
● Many see 10-20x deduplication, meaning 10-20 times more logical object storage than physical
● Especially true in backup or virtual environments
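A quick way to read those ratios: physical space saved is 1 − 1/ratio, so going from 10x to 20x dedupe only raises savings from 90% to 95%:

# Space saved is 1 - 1/ratio, so ratio gains flatten out quickly.
for ratio in (2, 5, 10, 20):
    print(f"{ratio:>2}x dedupe -> {1 - 1 / ratio:.0%} less physical disk")
# 2x -> 50%, 5x -> 80%, 10x -> 90%, 20x -> 95%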
16. SDFS / OpenDedupe
www.opendedup.org
● Java 7 based / platform agnostic
● Uses FUSE
● S3 storage support
● Snapshots
● Inline or batch-mode deduplication
● Supposedly fast (290 MBps+ on great hardware)
● Support for global/clustered dedupe
● Probably the most mature OSS dedupe (IMHO)
19. SDFS
● Pro
● Works when configured properly
● Appears to be multithreaded
● Con
● Slow / resource intensive (CPU/memory)
● Fragile: easy to mess up options, leading to crashes, with little user feedback
● Standard POSIX utilities do not show accurate data (e.g. df; you must use getfattr -d <mount point> and calculate bytes → GB/TB and % free yourself)
● Slow with 4k blocks, the size recommended for VMs
20. LessFS
www.lessfs.com
● Written in C = less CPU overhead
● Have to build yourself (configure && make && make install)
● Has replication, encryption
● Uses FUSE
21. LessFS Install
wget http://...lessfs-1.4.2.tar.gz
tar zxvf *.tar.gz
wget http://...db-4.8.30.tar.gz
yum install buildstuff…
. . .
# disable transparent hugepage defragmentation (RHEL-specific paths)
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
echo no > /sys/kernel/mm/redhat_transparent_hugepage/khugepaged/defrag
22. LessFS Go
sudo vi /etc/lessfs.cfg
BLOCKDATA_PATH=/mnt/data/dta/blockdata.dta
META_PATH=/mnt/meta/mta
BLKSIZE=4096 # only 4k supported on centos 5
ENCRYPT_DATA=on
ENCRYPT_META=off
mklessfs -c /etc/lessfs.cfg          # initialize the filesystem
lessfs /etc/lessfs.cfg /mnt/dedupe   # mount it via FUSE
23. LessFS
● Pro
● Does inline compression by default as well
● Reasonable VM compression with 128k blocks
● Con
● Fragile
● Stats/FS info hard to see (per file accounting, no totals)
● Kernel >= 2.6.26 required for blocks > 4k (RHEL6 only)
● Running with 4k blocks is not really feasible
25. Other OSS
● ZFS?
● Tried it, and empirically it was a drag, but I have no hard data (got ~3x dedupe with identical full backups of VMs)
● At least it's stable…
26. Kick the Tires
● Test data set: ~330GB of data
● 22GB of documents, pictures, music
● Virtual Machines
– 220GB Windows 2003 Server with SQL Data
– 2003 AD DC ~60GB
– 2003 Server ~8GB
– Two OpenSolaris VMs, 1.5 & 2.7GB
– 3GB Windows 2000 VM
– 15GB XP Pro VM
27. Kick the Tires
● Test Environment
● AWS High CPU Extra Large Instance
● ~7GB of RAM
● ~Eight Cores ~2.5GHz each
● ext4
28. Compression Performance
● First round (all “unique” data)
● If another copy was put in (like another full), we should expect 100% reduction for that non-unique data (1x dedupe per run)

FS              Home Data   % Home Reduction   VM Data   % VM Reduction   Combined   % Total Reduction   MBps
SDFS 4k         21 GB       4.5%               109 GB    64%              128 GB     61%                 16
lessfs 4k       24 GB       -9%                N/A       51%              N/A        50%                 4 (est.)
SDFS 128k       21 GB       4.5%               255 GB    16%              276 GB     15%                 40
lessfs 128k     21 GB       4.5%               130 GB    57%              183 GB     44%                 24
tar/gz --fast   21 GB       4.5%               178 GB    41%              199 GB     39%                 35
31. Kick the Tires: Part 2
● Test data set – two ~204GB full backup archives from a popular commercial vendor
● Test Environment
● VirtualBox VM, 2GB RAM, 2 Cores, 2x7200RPM SATA drives (meta & data separated for LessFS)
● Physical CPU: Quad Core Xeon
32. Write Performance
[Bar chart: write throughput in MBps (y-axis 0–40) for raw disk, SDFS 128k write, SDFS 128k re-write, LessFS 128k write, and LessFS 128k re-write]
35. Open Source Dedupe
● Pro
● Free
● Can be stable, if well managed
● Con
● Not in repos yet
● Efforts behind them seem very limited (one developer each)
● No or poor documentation
37. Conclusion/Recommendations
● Dedupe is great, if it works and it meets your performance and storage requirements
● OSS dedupe has a way to go
● SDFS/OpenDedupe is the best OSS option right now
● JungleDisk is good and cheap, but not OSS
38. About Red Wire Services
If you found this presentation helpful, consider Red Wire Services for your next backup, archive, or IT disaster recovery planning project.
Learn more at www.RedWireServices.com
39. About Nick Webb
Nick Webb is the founder of Red Wire Services, in Seattle, WA. Nick is available to speak on a variety of IT disaster recovery related topics, including:
● Preserving Your Digital Legacy
● Getting Started with your Small Business Disaster Recovery Plan
● Archive Storage for SMBs
If interested in having Nick speak to your group, please call (206) 829-8621 or email info@redwireservices.com
Editor's Notes
Different types of deduplication levels: file level, block level, and variable block versus fixed block (Quantum/DD use variable blocks).