Large Scale Data With Hadoop
 

  • So everyone knows what data processing is, but what do we mean by “scale”?
  • Simply: Data. Is. Big. … So this is the trend. The amount of data we can collect is increasing exponentially, and most companies aren’t capable of handling it. Patterson likes to call this “the data tsunami.” Let’s talk about a real example of this…
  • Okay, so data is big. No big deal, I’ve got a processor with four cores that will chew through anything. However, the speed of my application is constrained by the speed at which I can get data. It is not going to fit in memory, so it’s going to be living on a hard disk. This brings us to problem number 2.
  • Hard drive speed comes from two numbers. Disk seek time: the time to move the read head on a drive to where the data is stored. Data transfer: the speed at which I can get information off the disk. Hard drives are wonderful because they are random access devices, and I can get data anywhere off the disk any time I want it just by seeking and reading. Seeking takes a while, though. Fortunately, I can take advantage of locality and read a page of data into a buffer. Let’s look at an example…
  • This example has nice round numbers. I have a fictional hard drive with a 10ms seek time and a 10 meg/second transfer speed. On it, there’s 1TB of data, made up of 100 byte records in 10KB pages. That’s 10 billion entries over a billion pages, and I want to update 10% of this data set – a gig. …So seeks are slow, but I can transfer the whole file in a single day. Again, my application is always bound by the slowest piece in the pipe, so I get the most benefit by speeding that part up – the disk.
  • Here are some real drives. I grabbed the specs that are advertised on Newegg. …Solid state drives are expensive, though: $4k per terabyte. I can’t afford to buy a new one every month for my sensor collection.
  • …With this observation in mind, let’s consider treating a hard disk (a random access device) like tape (a sequential device). …So we get closer to 1 day instead of a thousand.
  • I bet a lot of you know where this is going already – parallelism! So let’s say I’ve got one of those IDE drives from earlier. I can sequentially process 100TB of data in 16 days. Alright, let’s get a thousand of them and run them in parallel – 75 gigs per second, and I’m done in 22 minutes. Alright, parallel processing solves our problem, let’s call it a day!
  • There’s an issue, though. Parallelism is really, really hard. …And that’s not the only problem, either.
  • Even if you have an OS that you know won’t crash, and code that won’t kill it either, reliability is still an issue because hardware fails.
  • So let’s buy some expensive fault-tolerant hardware…
  • A system that is robust in the face of machine failure. A platform that allows multiple groups to collaborate. A solution that scales linearly with respect to cost. A vision that will not lock us into a single vendor over time.
  • Alright, now we’re going to do some examples. I find it is most useful to look at what happens to the data instead of what the code looks like. Let’s review MR, but think about the data. So I have a big file that I want to process. It is split up into blocks and spread over my cluster. I start a job and Hadoop initiates a bunch of map tasks – this processing occurs where the data already exists. The mapper reads in a part of the file and emits several key/value pairs. These are collected, sorted into buckets based on key, and each bucket goes to a reduce task. Each reducer processes a bucket and outputs the result. Of course, I can chain these steps together. Word count is the ‘Hello World’ of MapReduce. I’m interested in the frequency of words in a dataset. …Word count isn’t entirely silly, by the way. Consider the suggestions that pop up when you start a Google search. What you see is a list of search strings that people use frequently. Think of it as phrase count instead of word count.
  • I’ve got some code here, but I’m going to skip going over it in detail. The slides will be available if you want to pore over it. We talked about MR being accessible for a programmer when compared to an MPI approach, and this is the entire map class for word count.
  • Here’s another example to illustrate that my map process can do more than just read data in and push it back out. Here’s a file with information about stock prices – the ticker symbol, a date, the open price, the high and low prices for the day, and what it closed at. Since we’re talking about big data sets here, I want you to imagine that it’s got every stock for the last 50 years and there’s not enough room on my slide to include it all. I’m interested in volatility or something, so I want the biggest change in price for a particular stock. Let’s look at the data.
  • My mapper reads in a record, filters out the information I’m not interested in (date and open/close prices), and emits the delta for each day.
  • I think that collecting data without doing anything interesting with it is a big sin. So, here’s a business case for someone in the room, perhaps. Say you want to grep through some server logs that you’ve been collecting forever but never got around to doing anything with. Amazon EC2 supports Hadoop, so you can run your job without having to buy any hardware at all. And here’s a list of stuff that is built on top of Hadoop. …You don’t have to write your jobs in Java. I know that I love Python, and I bet you do too. …We’ll be contributing some of our time series stuff to the Mahout project.
  • So let’s conclude with a quote from Peter Norvig that I think justifies our entire presentation.

Large Scale Data With Hadoop: Presentation Transcript

  • Large Scale Data with Hadoop
    Galen Riley and Josh Patterson
    Presented at DevChatt 2010
  • Agenda
    Thinking at Scale
    Hadoop Architecture
    Distributed File System
    MapReduce Programming Model
    Examples
  • Data is Big
    The Data Deluge (2/25/2010)
    “Eighteen months ago, Li & Fung, a firm that manages supply chains for retailers, saw 100 gigabytes of information flow through its network each day. Now the amount has increased tenfold.”
    http://www.economist.com/opinion/displaystory.cfm?story_id=15579717
  • Data is Big
    Sensor data collection
    128 sensors
    37 GB/day
    10 bytes/sample, 30 per second
    Increasing 10x by 2012
    http://jpatterson.floe.tv/index.php/2009/10/29/the-smartgrid-goes-open-source
  • Disks are Slow
    Disk Seek, Data Transfer
    Reading Files
    Disk seek for every access
    Buffered reads, locality → still seeking every disk page
  • Disks are Slow
    10ms seek, 10MB/s transfer
    1TB file, 100B records, 10KB pages
    10B entries, 1B pages
    1GB of updates
    Seek for each update, 1000 days
    Seek for each page, 100 days
    Transfer entire TB, 1 day
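    A quick back-of-envelope check of these figures (an editorial sketch, not part of the original deck): one 10ms seek per entry, one seek per page, and one full sequential pass at 10MB/s land at roughly the 1000 / 100 / 1 day marks on this slide.

    public class DiskMath {
        public static void main(String[] args) {
            double seekSec = 0.010;        // 10ms per seek
            double transferPerSec = 10e6;  // 10MB/s sequential transfer
            double secPerDay = 86400.0;

            double entries = 10e9;         // 10B entries of 100 bytes = 1TB
            double pages = 1e9;            // 1B pages (slide's figure)
            double fileBytes = 1e12;       // the whole 1TB file

            System.out.printf("seek per entry: %.0f days%n", entries * seekSec / secPerDay);           // ~1157
            System.out.printf("seek per page:  %.0f days%n", pages * seekSec / secPerDay);             // ~116
            System.out.printf("stream the TB:  %.1f days%n", fileBytes / transferPerSec / secPerDay);  // ~1.2
        }
    }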
  • Disks are Slow
    IDE drive – 75 MB/sec, 10ms seek
    SATA drive – 300MB/s, 8.5ms seek
    SSD – 800MB/s, 2 ms “seek”
    (1TB = $4k!) 
  • // Sidetrack
    Observation: transfer speed improves at a greater rate than seek speed
    Improvement by treating disks like tapes
    Seek as little as possible in favor of sequential reads
    Operate at transfer speed
    http://weblogs.java.net/blog/2008/03/18/disks-have-become-tapes
  • An Idea: Parallelism
    1 drive – 75 MB/sec
    16 days for 100TB
    1000 drives – 75 GB/sec
    22 minutes for 100TB
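    The arithmetic behind this slide, worked out the same way (my sketch, using only the 75 MB/sec figure above):

    public class ParallelMath {
        public static void main(String[] args) {
            double totalBytes = 100e12;  // 100TB
            double perDrive = 75e6;      // 75 MB/sec per drive

            double oneDriveSec = totalBytes / perDrive;                                 // ~1.33M seconds
            System.out.printf("1 drive:     %.1f days%n", oneDriveSec / 86400);         // ~15.4 days
            System.out.printf("1000 drives: %.1f minutes%n", oneDriveSec / 1000 / 60);  // ~22.2 minutes
        }
    }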
  • A Problem: Parallelism is Hard
    Issues
    Synchronization
    Deadlock
    Limited bandwidth
    Timing issues
    Apples v. Oranges, but… MPI
    Data distribution, communication between nodes done manually by the programmer
    Considerable effort achieving parallelism compared to actual processing
  • A Problem: Reliability
    Computers are complicated
    Hard drive
    Power supply
    Overheating
  • A Problem: Reliability
    1 Machine
    3 years mean time between failures
    1000 Machines
    1 day mean time between failures
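    Where the one-day figure comes from (my arithmetic, assuming independent failures): with 1000 machines, the cluster as a whole sees a failure about 1000 times as often as a single machine does.

    public class MtbfMath {
        public static void main(String[] args) {
            double perMachineDays = 3 * 365.0;  // ~3-year MTBF for one machine
            double machines = 1000;
            System.out.printf("cluster MTBF: %.1f days%n", perMachineDays / machines);  // ~1.1 days
        }
    }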
  • Requirements
    Backup
    Reliable
    Partial failure, graceful decline rather than full halt
    Data recoverability, if a node fails, another picks up its workload
    Node recoverability, a fixed node can rejoin the group without a full group restart
    Scalability, adding resources adds load capacity
    Easy to use
  • Hadoop: Robust, Cheap, Reliable
    Apache project, open source
    Designed for commodity hardware
    Can lose whole nodes and not lose data
    Includes MapReduce programming model
  • Why Commodity Hardware?
    Single large computer systems are expensive and proprietary
    High initial costs, plus lock-in with vendor
    Existing methods do not work at petabyte-scale
    Solution: Scale “out” instead of “up”
  • Hadoop Distributed File System
    Throughput Good, Latency Bad
    Data Coherency
    Write-once, read-many access model
    Files are broken up into blocks
    Typically 64MB or 128MB block size
    Each replicated on multiple DataNodes on write
    Intelligent Client
    Client can find location of blocks
    Client accesses data directly from DataNode
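    To make the “intelligent client” bullets concrete, here is a minimal sketch of reading a file through the HDFS API (my example, not from the deck; it assumes the cluster configuration is on the classpath and takes a hypothetical HDFS path as its argument):

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class HdfsCat {
        public static void main(String[] args) throws IOException {
            // Picks up fs.default.name (the NameNode address) from the config on the classpath.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // The client asks the NameNode where the blocks live, then streams the
            // bytes directly from the DataNodes that hold them.
            FSDataInputStream in = fs.open(new Path(args[0]));
            try {
                IOUtils.copyBytes(in, System.out, 4096, false);
            } finally {
                IOUtils.closeStream(in);
            }
        }
    }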
  • Source: http://wiki.apache.org/hadoop/HadoopPresentations?action=AttachFile&do=get&target=hdfs_dhruba.pdf
  • HDFS: Performance
    Robust in the face of multiple machine failures through aggressive replication of data blocks
    High Performance
    Checksum of 100 TB in 10 minutes, ~166 GB/sec
    Built to house petabytes of data
  • MapReduce
    Simple programming model that abstracts parallel programming complications away from data processing logic
    Made popular at Google, drives their processing systems, used on 1000s of computers in various clusters
    Hadoop provides an open source version of MR
  • MapReduce Data Flow
  • Using MapReduce
    MapReduce is a programming model for efficient distributed computing
    It works like a Unix pipeline:
    cat input | grep | sort | uniq -c | cat > output
    Input | Map | Shuffle & Sort | Reduce | Output
    Efficiency from
    Streaming through data, reducing seeks
    Pipelining
    A good fit for a lot of applications
    Log processing
    Web index building
  • Hadoop In The Field
    Yahoo
    Facebook
    Twitter
    Commercial support available from Cloudera
  • Hadoop In Your Backyard
    openPDC project at TVA
    http://openpdc.codeplex.com
    Cluster is currently:
    20 nodes
    200TB of physical drive space
    Used for
    Cheap, redundant storage
    Time series data mining
  • Examples – Word Count
    Hello, World!
    Map
    Input:
    foo
    foo bar
    Output all words in a dataset as: { key, value }
    {“foo”, 1}, {“foo”, 1}, {“bar”, 1}
    Reduce
    Input: {“foo”, (1, 1)}, {“bar”, (1)}
    Output: {“foo”, 2}, {“bar”, 1}
  • Word Count: Mapper
    public static class MapClass extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output,
                        Reporter reporter) throws IOException {
            String line = value.toString();
            StringTokenizer itr = new StringTokenizer(line);
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                output.collect(word, one);
            }
        }
    }
  • Word Count: Reducer
    public static class Reduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {

        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output,
                           Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }
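    For completeness, a driver that wires these two classes into a runnable job would look roughly like this, following the classic word count example on the old org.apache.hadoop.mapred API used in the slides (the enclosing WordCount class name and the input/output arguments are my assumptions):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class WordCount {
        // MapClass and Reduce from the previous two slides are nested here.

        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(WordCount.class);
            conf.setJobName("wordcount");

            conf.setOutputKeyClass(Text.class);        // reducer emits {word, count}
            conf.setOutputValueClass(IntWritable.class);

            conf.setMapperClass(MapClass.class);
            conf.setCombinerClass(Reduce.class);       // local pre-aggregation on each map node
            conf.setReducerClass(Reduce.class);

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            JobClient.runJob(conf);
        }
    }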
  • Examples – Stock Analysis
    Input dataset:
    Symbol,Date,Open,High,Low,Close
    GOOG,2010-03-19,555.23,568.00,557.28,560.00
    YHOO,2010-03-19,16.62,16.81,16.34,16.44
    GOOG,2010-03-18,564.72,568.44,562.96,566.40
    YHOO,2010-03-18,16.46,16.57,16.32,16.56
    Interested in biggest delta for each stock
  • Examples – Stock Analysis
    Map
    Output
    {“GOOG”, 10.72},
    {“YHOO”, 0.47},
    {“GOOG”, 5.48},
    {“YHOO”, 0.25}
    Reduce
    Input: {“GOOG”, (10.72, 5.48)}, {“YHOO”, (0.47, 0.25)}
    Output: {“GOOG”, 10.72}, {“YHOO”, 0.47}
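    A sketch of what the map and reduce classes for this example could look like, in the same old-API style as the word count code (mine, not from the deck; it assumes the CSV layout shown two slides back and skips the header row; imports mirror the word count example plus org.apache.hadoop.io.DoubleWritable):

    public static class DeltaMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, DoubleWritable> {
        public void map(LongWritable key, Text value,
                        OutputCollector<Text, DoubleWritable> output,
                        Reporter reporter) throws IOException {
            // Symbol,Date,Open,High,Low,Close
            String[] fields = value.toString().split(",");
            if (fields.length < 6 || fields[0].equals("Symbol")) {
                return;  // skip the header row and malformed lines
            }
            double delta = Double.parseDouble(fields[3]) - Double.parseDouble(fields[4]);
            output.collect(new Text(fields[0]), new DoubleWritable(delta));  // {symbol, high - low}
        }
    }

    public static class MaxDeltaReducer extends MapReduceBase
            implements Reducer<Text, DoubleWritable, Text, DoubleWritable> {
        public void reduce(Text key, Iterator<DoubleWritable> values,
                           OutputCollector<Text, DoubleWritable> output,
                           Reporter reporter) throws IOException {
            double max = Double.NEGATIVE_INFINITY;
            while (values.hasNext()) {
                max = Math.max(max, values.next().get());
            }
            output.collect(key, new DoubleWritable(max));  // biggest single-day range per symbol
        }
    }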
  • Examples – Time Series Analysis
    Map:
    {pointId, Timestamp + 30s of data}
    Reduce:
    Data mining!
    Classify samples based on training dataset
    Output samples that fall into interesting categories, index in database
  • Other Stuff
    Compatibility with Amazon Elastic Compute Cloud (EC2)
    Hadoop Streaming
    MapReduce with anything that uses stdin/stdout
    HBase, distributed column-store database
    Pig, data analysis (transforms, filters, etc.)
    Hive, data warehousing infrastructure
    Mahout, machine learning algorithms
  • Parting Thoughts
    “We don't have better algorithms than anyone else. We just have more data.”
    Peter Norvig
    Artificial Intelligence: A Modern Approach
    Chief scientist at Google
  • Contact
    Galen Riley
    http://galenriley.com
    @TotallyGreat
    Josh Patterson
    http://jpatterson.floe.tv
    @jpatanooga