Big Unstructured Data

Presentation at CloudExpo West 2011. Topic: object "cloud" storage for Big Unstructured Data

  • Slide notes
  • A Quick History of Big Data & Hadoop
    ▪ Facebook, Yahoo! and Google found themselves collecting data on an unprecedented scale. They were the first massive companies collecting tons of data from millions of users.
    ▪ They quickly overwhelmed traditional data systems and techniques like Oracle and MySQL. Even the best, most expensive vendors using the biggest hardware could barely keep up, and certainly couldn’t give them tools to powerfully analyze their influx of data.
    ▪ In the early 2000s their armies of PhDs developed new techniques like MapReduce, BigTable and the Google File System to handle their big data. Initially these techniques were kept proprietary. But…
    ▪ Around 2005 Facebook, Yahoo! and Google started sharing whitepapers describing their big data technologies.
    ▪ In 2006 Doug Cutting started the Hadoop project as an open source version of these technologies.
    ▪ Companies in every industry now find themselves with big data problems because their ability to collect data grows every day.
    ▪ A thriving ecosystem of companies, projects and individuals has emerged to tackle big data problems.

    What is Big Data? Big Data generally means having so much data that you overwhelm your traditional systems and techniques. Systems that worked last year, and that felt nimble when launched, suddenly feel sluggish as the burden of massive data loads crushes them. The systems still work… engineers make heroic efforts to guarantee that they do… but they never feel agile or responsive again. Although big data is often in the many-terabyte, petabyte and exabyte range, there is no official size threshold. In fact, some of the best big data problems don’t involve massive amounts of data… they just require massive amounts of processing on that data.

    Signs of a Big Data Problem
    ▪ Batch jobs that take too long to run… what if you had that business intelligence in a matter of minutes?
    ▪ CPU-bound database or data warehouse servers
    ▪ Repeated emergency meetings to discuss scaling of the data systems
    ▪ Long waits just to move data around
    ▪ Business managers asking for insight that IT can’t provide

    What is Hadoop? Hadoop is an open source software platform that makes big data look like normal data. It makes it possible to do very complex analysis against very large data sets that would overwhelm even the biggest and most expensive database installations.

    The problem with traditional databases and techniques is that they invariably centralize data. A massive Java/MySQL app is architected such that many Java compute machines sit around a central MySQL machine (or even a MySQL cluster). Scaling to thousands of Java compute machines means that your central MySQL installation gets hammered by requests. In essence you’re running a distributed denial of service (DDoS) attack on your own systems! At a fundamental level it’s the disk I/O bottleneck that prevents your system from scaling.

    Hadoop changes the core architecture of computing problems. Instead of a centralized data store, it chunks the massive data sets and stores those chunks all across a cluster of machines. Then, and this is key, it sends compute jobs *out to* the data, so the compute jobs run where the data is. This leverages disk I/O across the entire cluster by putting the data close to the CPU that needs it.

    Hadoop is built on two main components: MapReduce and the Hadoop File System. MapReduce is what chunks processing code out to the cluster. The Hadoop File System is what chunks data out to the cluster.
MapReduce is a way of programming computational problems. While MapReduce jobs can be written in many languages, most are written in Java. So MapReduce isn’t a language; it’s a way of thinking about computing problems, another way to skin a cat. If you have business computations encoded in PL/SQL, Java, stored procedures or some arcane XML BI syntax, MapReduce can accomplish the same task (a minimal word-count sketch follows these notes).

The Hadoop File System (HDFS) allows tens, hundreds or thousands of servers to share files. It’s like creating one hard drive from thousands. It’s redundant… mission-critical data is always stored on at least three machines, and if one machine goes down, HDFS automatically shuffles a new copy of your data onto another machine. And it’s smart… HDFS knows how to move segments of data closer to the computing processes that need them.

Hadoop provides the framework to use MapReduce and HDFS to run massive compute jobs. When you stand up a cluster of 1,000 machines, Hadoop keeps track of which one is running which job, where its data is stored, and so on.

Massive Investment Momentum in the Big Data Space
Big deals are flowing within the Big Data space as enterprises across all industries encounter the same data problems that Facebook, Yahoo! and Google did ten years ago. This is good for all enterprises, as it means the tools will continue to mature and industry-specific solutions will emerge.
▪ EMC to spend $3B on big data in 2011 after spending $3B in 2010
▪ IBM invests $100M in big data
▪ Yahoo! spins HortonWorks out at a $200M valuation
▪ HP acquires Vertica

The list of deals, big and small, goes on and on. Venture capital is pursuing the space more aggressively than it did the social media space because big data pain points aren’t tied to discretionary marketing budgets… they’re core to an enterprise’s existence. The space is still nascent, with many investments being made in toolsets that will compete with, and often lose against, open source community-developed solutions. SocketWare chooses a strategy of deep industry insight to create hard-to-replicate products.
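To make the MapReduce model described above concrete, here is a minimal word-count sketch in plain Python rather than the Hadoop Java API; the documents and counts are invented for illustration. A map step emits key/value pairs, a shuffle groups them by key, and a reduce step aggregates each group, which is the division of labor Hadoop spreads across a cluster.

```python
# Conceptual word-count sketch of the MapReduce programming model
# (plain Python, not the Hadoop API; input documents are made up).
from collections import defaultdict

def map_phase(line):
    # Each mapper emits a ("word", 1) pair for every word it sees.
    for word in line.split():
        yield word.lower(), 1

def reduce_phase(word, counts):
    # Each reducer sums the values collected for one key.
    return word, sum(counts)

documents = ["big data is big", "hadoop makes big data look like normal data"]

# Shuffle: group mapper output by key (Hadoop does this across the whole
# cluster, moving the computation to wherever HDFS stores the data chunks).
grouped = defaultdict(list)
for line in documents:
    for word, count in map_phase(line):
        grouped[word].append(count)

print(dict(reduce_phase(w, c) for w, c in grouped.items()))
# {'big': 3, 'data': 3, 'is': 1, 'hadoop': 1, 'makes': 1, 'look': 1, 'like': 1, 'normal': 1}
```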
  • Transcript

    • 1. Big “Unstructured” Data in the Cloud: A Case for Optimized Object Storage
    • 2. Agenda
      • Introduction: storage facts and trends
      • Big Data for Analytics vs. Big “Unstructured” Data
      • Object Storage for Big “Unstructured” Data
      • AmpliStor: Optimized Object Storage
      • Cost Reduction through Erasure Coding
      • Use Case: Montreux Jazz
      • Questions
    • 3.
      • Introduction: storage facts and trends
    • 4. Introduction, facts and trends
      • Studies show that data storage capacities will likely increase by over 30X in the coming decade to over 35 Zettabytes
      [Chart: storage consumption grows ~30X to over 35 ZB by 2020, driven by high-capacity drives, less staff per TB and unstructured data]
    • 5. Introduction, facts and trends
      • The number of qualified people to manage this data will stay flat (~1.5X)
      [Chart: storage requirements outgrow the storage budget over time; efficiency: automate & reduce overhead]
    • 6. Introduction, facts and trends
      • Much of that growth (80%) is driven by unstructured data : billions of large objects
      [Icons: active archives, online images, large files, medical images, online storage, online movies]
    • 7. Introduction, facts and trends
      • Storage currently accounts for 37-40% of overall data center energy consumption from hardware
      • Energy consumption will influence technology procurement criteria
      [Chart: data center power usage]
    • 8. Introduction, facts and trends
      • Data migration will soon take longer than the lifetime of media
      • “It’s like painting the Golden Gate Bridge, but the bridge is continuously getting longer”
    • 9. Introduction, facts and trends
      • There is a growing interest in Object Storage
      • Erasure coding is the proclaimed successor of RAID
    • 10.
      • Big Data for Analytics vs. Big “Unstructured” Data
    • 11. Big Data for Analytics
      • In the 1990s, we experienced an explosion of data captured for analytics purposes:
        • Academic Research
        • Chemical R&D facilities
        • Geo-industry, oil & gas
    • 12. Big Data for Analytics
      • Data is captured as many small log files & concatenated as “Big Data”
      • Relational databases were not optimal:
        • Too much data, too big
        • Not performant for analytics
      • This stimulated innovations:
        • Hadoop, MapReduce, GFS
      • => Big Data for Analytics
    • 13. Big Data Evolution
      • Today, the Big Data trend refers to both Big Data for Analytics and Big Unstructured Data:
        • Fundamentally different
        • Lots of similarities
      • Unstructured data is traditionally stored on host file systems, but:
        • File systems do not scale up to the size we need
        • File systems do not meet performance requirements
    • 14. Big Unstructured Data
      • 80% of data growth comes from unstructured data
      • Unstructured data takes many shapes depending on the industry
        • Healthcare: medical images
        • Travel and hospitality: surveillance video footage
        • Retail and manufacturing: design data and product images
        • Huge amounts of documents are generated in any corporation
      Source: Oraclestorageguy
    • 15. Big Unstructured Data
      • Most unstructured data is archived, often to tape (cost)
      • Data archives are a burden (Grandma’s Attic)
    • 16. Big Unstructured Data
      • Big Unstructured Data represents the next generation of analytics that can help businesses make more informed decisions related to:
        • Product strategy
        • Marketing
        • Research
        • Historical trends
    • 17. Big Unstructured Data
      • Companies are starting to see the value of the data in their archives:
        • Documents of individuals can be valuable for others
      • Some companies have legal reasons to keep data available
      • Unexplored analytics opportunities
    • 18. Big Unstructured Data
      • But how do we store all this data in a cost-efficient way?
    • 19. Big Unstructured Data
      • What are the requirements?
        • Tape is not an option: latency is key
        • Data has to be always available online
        • Direct interface to the applications
        • Petabyte scalability
        • Extreme reliability, integrity
        • Cost-efficient
        • Security
        • Disk Storage
        • + REST API, Cloud-enabled
        • + Erasure Coding
        • = Optimized Object Storage
    • 20.
      • Object Storage for
      • Big Unstructured Data
    • 21. Disk vs. Tape
      • Tape has several obvious advantages over disk & there will always be use cases for tape
      • But disks enable live archives with instant data accessibility
      • More arguments for disk-based archives
        • Disks can be powered down
        • Tape requires replication
        • Data integrity?
        • Massive migration projects
    • 22. Storage Clouds
      • Storage Cloud infrastructures
        • Private or public setup
        • Provide highest availability
      • Applications
        • File systems are obsolete
        • Use REST API
      [Diagram: multiple applications accessing the storage cloud through a REST API]
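To illustrate the point above, an application that uses a REST API stores and retrieves objects with plain HTTP verbs instead of file-system calls. This is only a sketch of the access pattern: the endpoint, namespace and object name are hypothetical, and the exact URL scheme and authentication depend on the product.

```python
# Sketch of REST-style object access (hypothetical endpoint and namespace).
import requests

ENDPOINT = "https://storage.example.com"   # hypothetical storage cloud endpoint
NAMESPACE = "media-archive"                # hypothetical namespace ("bucket")

# Store an object with an HTTP PUT ...
with open("concert-1991.mov", "rb") as f:
    resp = requests.put(f"{ENDPOINT}/{NAMESPACE}/concert-1991.mov", data=f)
    resp.raise_for_status()

# ... and read it back with an HTTP GET. No mount points, drive letters or
# POSIX file systems are involved on the application side.
resp = requests.get(f"{ENDPOINT}/{NAMESPACE}/concert-1991.mov")
resp.raise_for_status()
print(len(resp.content), "bytes retrieved")
```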
    • 23. Petabyte Scalability
      • Object Storage systems will scale:
        • Beyond petabytes of data
        • Beyond billions of data objects
      • Systems should scale uniformly
        • Add resources incrementally
        • Scale performance and capacity separately
    • 24. Petabyte Scalability
      • Scalable metadata repository (capacity & performance)
      • Lightweight metadata, designed to scale up to billions of objects
      • Flat namespace
    • 25. Data Integrity
      • Ensuring the integrity of a long-term unstructured data archive requires new data protection algorithms, to:
        • Address the increasing capacity of disk drives
        • Solve issues related to long RAID rebuild windows
      • “Object storage systems based on erasure-coding can not only protect data from higher numbers of drive failures, but also against the failure of entire storage modules.”
    • 26. Cost-efficient
      • Power, cooling and floor-space requirements are paramount concerns: erasure coding drastically reduces storage overhead
      • Systems need to be self-managing
      • The system needs to be hardware independent: data migration needs to be an automatic, continuous background process.
    • 27. Cost-efficient
      • Eliminate the need for manual disk swaps: move to higher-level container management tasks.
      • The system should automatically manage allocation to the underlying disks
    • 28. Security
      • Multi-tenant authentication/authorisation
        • Read
        • Read/Write
        • List
      • Auditing & Logging
      • Secure protocols/encryption (HTTPS)
      • Individual disks cannot be misused
        • Data is encoded and spread
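A purely illustrative sketch of the multi-tenant Read / Read+Write / List permissions listed above; the tenant names and the policy layout are hypothetical and do not reflect any actual product configuration format.

```python
# Hypothetical per-namespace access policy: which tenant may do what.
PERMISSIONS = {
    "media-archive": {
        "research-portal": {"read", "list"},           # read-only consumer
        "ingest-service":  {"read", "write", "list"},  # application that writes new objects
    },
}

def is_allowed(namespace: str, tenant: str, action: str) -> bool:
    """Return True if the tenant may perform the action on the namespace."""
    return action in PERMISSIONS.get(namespace, {}).get(tenant, set())

print(is_allowed("media-archive", "research-portal", "write"))  # False
print(is_allowed("media-archive", "ingest-service", "write"))   # True
```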
    • 29.
      • Amplidata Object Storage
    • 30. AmpliStor for Big Unstructured Data
      • Turnkey storage solution for BIG Unstructured Data
        • System scales beyond petabytes with a Global Object Namespace
        • Throughput scales with amount of resources
      • Policy-Driven Storage Durability
        • “Ten 9’s” of Durability (99.99999999%) and beyond through policies
        • Eliminates the reliability exposures of RAID on high-density disk drives
        • Eliminates data corruption or loss due to bit errors
      • 50-70% improvement in Storage Efficiency
        • 70% reduction in storage footprint compared to “Three copies in the cloud”
        • 50% reduction in storage footprint compared to mirrored RAID
        • Drives proportional reductions in data center floor space & power
      • Automated Management
        • Self-healing design manages data integrity assurance and auto-repairs data
      • 50-70% reduction in TCO
        • Storage footprint (Capex), power, data center space & management costs
    • 31. Big Unstructured Data Use Cases
      • Online Applications
        • SaaS applications managing large-scale rich media
        • Photography & video within social media
        • Tens of petabytes are becoming common – RAID is insufficient, triple-mirrors too expensive
      • Storage Clouds
        • Online file sharing & backup services
        • Cloud Service Providers building competitors to Amazon S3
        • Corporate private cloud repositories for unstructured data
      • Media & Entertainment
        • Online video repositories (HD video driving huge capacities)
        • New tier that fills the void between fast/expensive SAN (post-production) & tape archives
      • Others
        • Video surveillance, medical imaging, satellite imaging, backups & BIG DATA archives
    • 32. Erasure Coding, simply explained
      • BitSpread encodes data in linear equations
      • Distributes the equations across disks, storage nodes, racks, data centers
      • Original data can always be uniquely determined from a subset of the equations
      • BitSpread uses 4K variables independent of object size
      • Extra blocks can be generated without knowing what is missing
      Simplified mathematics: the original object “75” is decomposed into X = 7 and Y = 5 and stored as a series of equations (X + Y = 12, X - Y = 2, 2X + Y = 19); any 2 out of 3 equations uniquely determine the original object.
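The “any 2 out of 3 equations” idea on this slide can be checked directly. The snippet below is only an illustration of the principle, not Amplidata’s actual BitSpread encoding: every pair of stored equations recovers X = 7 and Y = 5, so losing any one equation (disk) loses no data.

```python
# Toy illustration of the slide's linear-equation example (not the real BitSpread codec).
from itertools import combinations

# Each stored equation a*X + b*Y = c is kept as a tuple (a, b, c).
equations = [
    (1,  1, 12),   # X + Y  = 12
    (1, -1,  2),   # X - Y  = 2
    (2,  1, 19),   # 2X + Y = 19
]

def solve_pair(eq1, eq2):
    """Solve two linear equations in X and Y using Cramer's rule."""
    (a1, b1, c1), (a2, b2, c2) = eq1, eq2
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# Every 2-equation subset reconstructs the original values (7, 5).
for pair in combinations(equations, 2):
    print(solve_pair(*pair))   # prints (7.0, 5.0) three times
```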
    • 33. Core Software Technology Components
      • BitSpread – Distributed Encoder/Decoder
        • RAID replacement technology based on unique variant of Erasure Coding
        • “Dial-in” fault tolerance through namespace-level policies (see the overhead sketch after this slide)
          • Namespace1: 16/4 policy protects against any 4 failures in 16 disks
          • Namespace2: 18/6 policy protects against any 6 failures in 18 disks
        • Provides availability and reliability even during failures
        • Policies can be dynamically changed
      • BitDynamics – Maintenance & Self-Healing Agent
        • Out of band operations agent for disk monitoring, integrity verification & object self-healing
        • Performs automated tasks: scrubs, verifies, self-heals, repairs & optimizes data on disk
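As referenced above, the storage overhead implied by an n/k spread policy can be estimated with simple arithmetic: n blocks are written and only n - k of them are needed to read the data back, so the raw capacity consumed per unit of usable data is n / (n - k). This is a back-of-the-envelope sketch, not Amplidata’s published sizing.

```python
# Illustrative overhead arithmetic for n/k spread policies (not vendor sizing data).
def raw_per_usable(n: int, k: int) -> float:
    """Raw capacity consumed per unit of usable data for an n/k policy."""
    return n / (n - k)

for n, k in [(16, 4), (18, 6)]:
    print(f"{n}/{k} policy: {raw_per_usable(n, k):.2f}x raw per usable TB")
# 16/4 -> 1.33x and 18/6 -> 1.50x, compared with 3.00x for "three copies
# in the cloud" and at least 2.00x for mirrored RAID.
```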
    • 34. AmpliStor System
      • Controller Nodes (3+)
        • Dual, quad-core Xeon processors, 16GB RAM, 2 x 200GB SSD, 2 x 10 Gigabit Ethernet network interfaces
        • Object Based Interfaces: http/REST API, C API, Python CLI, WebDav
        • 3 Controllers per System (minimum) – can be scaled up for performance (fully shared metadata & storage pool)
      • AS20 Low Power Storage Nodes (8+)
        • 1 U rack mount chassis with 20TB capacity
        • 2 x 1 Gigabit network interfaces
        • Low power processor (Intel Atom)
        • 10 x 2 TB low-power “Green” SATA disk drives
        • Low power: 65-140 watts per node (3.5-7 watts per TB)
    • 35. AmpliStor: Dense, Fast & Power-efficient
      • High-Density Rack Definition
        • Single 44U rack:
        • (3) Controller Nodes & (36) Storage Nodes
        • OR
        • (42) Storage Nodes
        • (2) 48-port Ethernet switches
      • Storage Density
        • Up to 420 disk drives in a single rack
        • 840TB raw capacity / 525TB usable capacity protected against 4 simultaneous failures
      • Power
        • Nominal / peak usage: 4.2 / 6.6 kW
        • 2 x 30A / 240VAC circuit power supplies
      • Performance
        • 3 x 10GbE ports to customer network
        • This provides 1.3 GB/sec aggregate throughput
      [Rack diagram: controller and storage nodes connected through redundant Ethernet switches, with 3 x 10GbE uplinks to the customer network and 2 x 10GbE links to expansion racks]
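A quick sanity check of the raw-capacity figure above, using the node configuration from slide 34 (10 x 2 TB drives per AS20 node) and the all-storage rack of 42 nodes. The usable capacity depends on the chosen encoding policy, so only the resulting ratio is computed here.

```python
# Back-of-the-envelope check of the rack figures quoted on slides 34-35
# (the encoding policy behind the usable number is not stated on the slide).
storage_nodes   = 42      # all-storage rack configuration
drives_per_node = 10      # per AS20 node
tb_per_drive    = 2

raw_tb    = storage_nodes * drives_per_node * tb_per_drive   # 840 TB raw
usable_tb = 525                                               # usable capacity quoted on the slide
print(f"{raw_tb} TB raw, {usable_tb / raw_tb:.1%} usable after encoding overhead")
# 840 TB raw, 62.5% usable after encoding overhead
```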
    • 36. AmpliStor Summary Advantages
      • Ultra-Durable & Efficient platform
        • Our erasure-coding implementation provides the most flexible & efficient storage durability
        • Dial-in Ten 9’s durability and higher through policies
      • Performance
        • We can demonstrate throughput of 1.3 GB/sec per rack today and scale-up controllers for higher throughput
      • Power & Density
        • AmpliStor provides 50-70% better power efficiency & density than competitors
      • Pricing & TCO
        • 50-70% TCO reduction compared to alternative storage with high-durability
    • 37. Amplidata Background
      • Technology incubated at Incubaid (www.incubaid.com) since 2005
      • Amplidata Incorporated in 2008
      • Designed by Founders of DCT (became NetBackup Puredisk deduplication technology - acquired by Veritas/Symantec)
      • Belgium-based R&D (Lochristi, outside Gent)
      • US Headquarters in Redwood City, CA
      • Worldwide support centers in Redwood City, CA; Belgium, Egypt, India (Taiwan in Q4)
    • 38.
      • AmpliStor Use Case:
      • Montreux Jazz
    • 39. Montreux Jazz, an invaluable research asset
      • 45 years of Montreux Jazz festivals
        • 5000 hours of video (2000 critical)
        • 5000 hours of high quality audio
        • 3000 concert descriptions
        • High-def video formats used since 1991
        • Also a collection of photos, press releases, …
      • Selected AmpliStor as the scale-out Archive system
        • Collaboration with the University of Lausanne, Switzerland (EPFL)
        • Acquired a 1PB AmpliStor system
      • The main objectives:
        • Save the recordings in a secure archive (static archive)
        • Make the archive available for cultural and scientific projects (live archive)
        • Scale and maintain the archives
        • Enable end-user access in a series of Jazz Cafés
    • 40. Tom Leyden, Director of Alliances & Marketing Twitter.com/tomme Thank You!
