Large Scale Data with Hadoop
Galen Riley and Josh Patterson
Presented at DevChatt 2010

Speaker notes
  • So everyone knows what data processing is, but what do we mean by “scale” ?
  • Simply put: Data. Is. Big. … So this is the trend. The amount of data we can collect is increasing exponentially, and most companies aren’t capable of handling it. Patterson likes to call this “the data tsunami.” Let’s talk about a real example of this…
  • Okay, so data is big. No big deal, I’ve got a processor with four cores that will chew through anything. However, the speed of my application is constrained by the speed at which I can get data. It is not going to fit in memory, so it’s going to be living on a hard disk. This brings us to problem number 2.
  • Hard drive speed comes from two numbers. Disk seek time: the time to move the read head on the drive to where the data is stored. Data transfer: the speed at which I can get information off the disk. Hard drives are wonderful because they are random access devices, and I can get data anywhere off the disk any time I want it just by seeking and reading. Seeking takes a while, though. Fortunately, I can take advantage of locality and read a page of data into a buffer. Let’s look at an example…
  • This example has nice round numbers. I have a fictional hard drive that has a 10ms seek time and a 10 meg/second transfer speed. On it, there’s 1TB of data, made up of 100-byte records in 10K pages. That’s 10 billion entries over a billion pages, and I want to apply a gigabyte of updates to this data set. … So seeks are slow, but I can transfer the whole file in a single day. Again, my application is always bound by the slowest piece in the pipe, so I get the most benefit by speeding that part up – the disk.
  • Here are some real drives. I grabbed the specs that are advertised on Newegg. … Solid state drives are expensive though, $4k per terabyte. I can’t afford to buy a new one every month for my sensor collection.
  • … With this observation in mind, let’s consider treating a hard disk (a random access device) like tape (a sequential device). … So we get closer to 1 day instead of a thousand.
  • I bet a lot of you know where this is going already – parallelism! So let’s say I’ve got one of those IDE drives from earlier. I can sequentially process 100TB of data in 16 days. Alright, let’s get a thousand of them and run them in parallel – 75 gigs per second, and I’m done in 22 minutes. Alright, parallel processing solves our problem, let’s call it a day!
  • There’s an issue, though. Parallelism is really really hard.…And that’s not the only problem, either.
  • Even if you have an OS that you know won’t crash, and code that won’t kill it either, reliability is still an issue because hardware fails.
  • So let’s buy some expensive fault-tolerant hardware…
  • A system that is robust in the face of machine failure. A platform that allows multiple groups to collaborate. A solution that scales linearly with respect to cost. A vision that will not lock us into a single vendor over time.
  • Alright, now we’re going to do some examples. I find it is most useful to look at what happens to the data instead of what the code looks like. Let’s review MR, but think about the data. So I have a big file that I want to process. It is split up in blocks and spread over my cluster. I start a job and Hadoop initiates a bunch of map tasks – this processing occurs where the data already exists. The mapper reads in a part of the file and emits several key/value pairs. These are collected, sorted into buckets based on key, and each bucket goes to a reduce task. Each reducer processes a bucket and outputs the result. Of course, I can chain these steps together. Word count is the ‘Hello World’ of MapReduce. I’m interested in the frequency of words in a dataset. … Word count isn’t entirely silly, by the way. Consider the suggestions that pop up when you start a Google search. What you see is a list of search strings that people use frequently. Think of it as phrase count instead of word count.
  • I’ve got some code here, but I’m going to skip going over it in detail. The slides will be available if you want to pore over it. We talked about MR being accessible for a programmer when compared to an MPI approach, and this is the entire map class for word count.
  • Here’s another example to illustrate that my map process can do more than just read data in and push it back out. Here’s a file with information about stock prices – the ticker symbol, a date, the open price, the high and low prices for the day, and what it closed at. Since we’re talking about big data sets here, I want you to imagine that it’s got every stock for the last 50 years and there’s not enough room on my slide to include it all. I’m interested in volatility or something, so I want the biggest change in price for a particular stock. Let’s look at the data.
  • My mapper reads in a record, filters out the information I’m not interested in (date and open/close prices), and emits the delta for each day.
  • I think that collecting data without doing anything interesting with it is a big sin. So, here’s a business case for someone in the room, perhaps. Say you want to grep through some server logs that you’ve been collecting forever but never got around to doing anything with. Amazon EC2 supports Hadoop, so you can run your job without having to buy any hardware at all. And a list of stuff that is built on top of Hadoop. … You don’t have to write your jobs in Java. I know that I love Python, and I bet you do too. … We’ll be contributing some of our time series stuff to the Mahout project.
  • So let’s conclude with a quote from Peter Norvig that I think justifies our entire presentation.
  • Transcript of "Large Scale Data With Hadoop"

1. Large Scale Data with Hadoop
   Galen Riley and Josh Patterson
   Presented at DevChatt 2010

2. Agenda
   Thinking at Scale
   Hadoop Architecture
   Distributed File System
   MapReduce Programming Model
   Examples

3. Data is Big
   The Data Deluge (2/25/2010)
   “Eighteen months ago, Li & Fung, a firm that manages supply chains for retailers, saw 100 gigabytes of information flow through its network each day. Now the amount has increased tenfold.”
   http://www.economist.com/opinion/displaystory.cfm?story_id=15579717

4. Data is Big
   Sensor data collection
   128 sensors
   37 GB/day
   10 bytes/sample, 30 per second
   Increasing 10x by 2012
   http://jpatterson.floe.tv/index.php/2009/10/29/the-smartgrid-goes-open-source

5. Disks are Slow
   Disk Seek, Data Transfer
   Reading Files
   Disk seek for every access
   Buffered reads, locality → still seeking every disk page

6. Disks are Slow
   10ms seek, 10MB/s transfer
   1TB file, 100-byte records, 10KB pages
   10 billion entries, 1 billion pages
   1GB of updates
   Seek for each update: 1000 days
   Seek for each page: 100 days
   Transfer entire TB: 1 day
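A quick sanity check of the last two figures above, as a sketch using the slide’s own inputs (10 ms seeks, 10 MB/s transfer, a 1 TB file, and its figure of roughly a billion pages); decimal units are assumed:

   // Back-of-the-envelope check of the "Disks are Slow" figures.
   public class SeekVsTransfer {
     public static void main(String[] args) {
       double seekSeconds = 0.010;          // 10 ms per seek
       double transferBytesPerSec = 10e6;   // 10 MB/s
       double fileBytes = 1e12;             // 1 TB
       double pages = 1e9;                  // ~1 billion pages (the slide's figure)
       double secondsPerDay = 86400;

       // Seeking once per page: ~10^7 seconds, on the order of 100 days.
       System.out.printf("Seek for each page: %.0f days%n",
           pages * seekSeconds / secondsPerDay);

       // Streaming the whole terabyte at transfer speed: ~10^5 seconds, about a day.
       System.out.printf("Transfer entire TB: %.1f days%n",
           fileBytes / transferBytesPerSec / secondsPerDay);
     }
   }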
7. Disks are Slow
   IDE drive – 75 MB/sec, 10ms seek
   SATA drive – 300MB/s, 8.5ms seek
   SSD – 800MB/s, 2 ms “seek”
   (1TB = $4k!)

8. // Sidetrack
   Observation: transfer speed improves at a greater rate than seek speed
   Improvement by treating disks like tapes
   Seek as little as possible in favor of sequential reads
   Operate at transfer speed
   http://weblogs.java.net/blog/2008/03/18/disks-have-become-tapes
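To make the disks-as-tapes idea concrete, here is a toy sketch (not from the talk) that streams a file sequentially in large chunks and then performs a batch of small seek-and-read operations on the same file; pass any large file as the argument, and on a spinning disk the sequential pass wins by a wide margin:

   import java.io.BufferedInputStream;
   import java.io.File;
   import java.io.FileInputStream;
   import java.io.IOException;
   import java.io.InputStream;
   import java.io.RandomAccessFile;
   import java.util.Random;

   // Sequential streaming vs. seek-heavy access on the same file.
   public class TapeStyleReads {
     public static void main(String[] args) throws IOException {
       File file = new File(args[0]);         // any large existing file
       int randomReads = 10000;

       long t0 = System.nanoTime();
       InputStream in = new BufferedInputStream(new FileInputStream(file), 1 << 20);
       byte[] chunk = new byte[1 << 20];
       while (in.read(chunk) != -1) { /* stream straight through */ }
       in.close();
       System.out.printf("sequential scan: %d ms%n", (System.nanoTime() - t0) / 1000000);

       t0 = System.nanoTime();
       Random rnd = new Random(42);
       RandomAccessFile raf = new RandomAccessFile(file, "r");
       byte[] record = new byte[100];         // roughly one "record" per seek
       long range = Math.max(1L, raf.length() - record.length);
       for (int i = 0; i < randomReads; i++) {
         raf.seek((rnd.nextLong() & Long.MAX_VALUE) % range);
         raf.readFully(record);
       }
       raf.close();
       System.out.printf("%d random reads: %d ms%n", randomReads, (System.nanoTime() - t0) / 1000000);
     }
   }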
9. An Idea: Parallelism
   1 drive – 75 MB/sec
   16 days for 100TB
   1000 drives – 75 GB/sec
   22 minutes for 100TB
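The arithmetic behind this slide, again as a small sketch with decimal units assumed:

   // Scanning 100 TB at 75 MB/s per drive.
   public class ParallelScan {
     public static void main(String[] args) {
       double dataBytes = 100e12;       // 100 TB
       double perDriveBps = 75e6;       // 75 MB/s per drive
       double seconds = dataBytes / perDriveBps;

       System.out.printf("1 drive:     %.0f days%n", seconds / 86400);         // ~15-16 days
       System.out.printf("1000 drives: %.0f minutes%n", seconds / 1000 / 60);  // ~22 minutes
     }
   }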
10. A Problem: Parallelism is Hard
    Issues
    Synchronization
    Deadlock
    Limited bandwidth
    Timing issues
    Apples v. Oranges, but… MPI
    Data distribution and communication between nodes done manually by the programmer
    Considerable effort achieving parallelism compared to actual processing

11. A Problem: Reliability
    Computers are complicated
    Hard drive
    Power supply
    Overheating

12. A Problem: Reliability
    1 Machine: 3 years mean time between failures
    1000 Machines: 1 day mean time between failures
    (assuming independent failures, roughly 3 years ÷ 1000 ≈ 1 day)

13. Requirements
    Backup
    Reliable
    Partial failure: graceful decline rather than full halt
    Data recoverability: if a node fails, another picks up its workload
    Node recoverability: a fixed node can rejoin the group without a full group restart
    Scalability: adding resources adds load capacity
    Easy to use

14. Hadoop: Robust, Cheap, Reliable
    Apache project, open source
    Designed for commodity hardware
    Can lose whole nodes and not lose data
    Includes the MapReduce programming model

15. Why Commodity Hardware?
    Single large computer systems are expensive and proprietary
    High initial costs, plus lock-in with a vendor
    Existing methods do not work at petabyte scale
    Solution: Scale “out” instead of “up”

16. Hadoop Distributed File System
    Throughput Good, Latency Bad
    Data Coherency
    Write-once, read-many access model
    Files are broken up into blocks
    Typically 64MB or 128MB block size
    Each block is replicated on multiple DataNodes on write
    Intelligent Client
    Client can find the location of blocks
    Client accesses data directly from the DataNode
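As a concrete illustration of the “intelligent client” idea, here is a minimal sketch using the HDFS client API (org.apache.hadoop.fs); the NameNode address and paths are placeholders, and error handling is omitted:

   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FSDataInputStream;
   import org.apache.hadoop.fs.FSDataOutputStream;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;

   // Minimal HDFS client sketch. The client asks the NameNode where blocks live,
   // then streams block data directly to/from DataNodes.
   public class HdfsClientSketch {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       conf.set("fs.default.name", "hdfs://namenode.example.com:9000"); // placeholder cluster

       FileSystem fs = FileSystem.get(conf);

       // Write: the file is split into blocks, each replicated to several DataNodes.
       Path path = new Path("/user/demo/hello.txt");
       FSDataOutputStream out = fs.create(path);
       out.writeBytes("hello, hdfs\n");
       out.close();

       // Read: block locations come from the NameNode, bytes come from the DataNodes.
       FSDataInputStream in = fs.open(path);
       byte[] buf = new byte[64];
       int n = in.read(buf);
       in.close();
       System.out.println(new String(buf, 0, n, "UTF-8"));

       fs.close();
     }
   }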
17. HDFS architecture diagram
    Source: http://wiki.apache.org/hadoop/HadoopPresentations?action=AttachFile&do=get&target=hdfs_dhruba.pdf
18. HDFS: Performance
    Robust in the face of multiple machine failures through aggressive replication of data blocks
    High Performance
    Checksum of 100 TB in 10 minutes, ~166 GB/sec
    Built to house petabytes of data

19. MapReduce
    Simple programming model that abstracts parallel programming complications away from data processing logic
    Made popular at Google, drives their processing systems, used on 1000s of computers in various clusters
    Hadoop provides an open source version of MR

20. MapReduce Data Flow

21. Using MapReduce
    MapReduce is a programming model for efficient distributed computing
    It works like a Unix pipeline:
    cat input | grep | sort | uniq -c | cat > output
    Input | Map | Shuffle & Sort | Reduce | Output
    Efficiency from
    Streaming through data, reducing seeks
    Pipelining
    A good fit for a lot of applications
    Log processing
    Web index building
22. Hadoop In The Field
    Yahoo
    Facebook
    Twitter
    Commercial support available from Cloudera

23. Hadoop In Your Backyard
    openPDC project at TVA: http://openpdc.codeplex.com
    Cluster is currently:
    20 nodes
    200TB of physical drive space
    Used for
    Cheap, redundant storage
    Time series data mining

24. Examples – Word Count
    Hello, World!
    Map
    Input: foo foo bar
    Output all words in a dataset as { key, value }:
    {“foo”, 1}, {“foo”, 1}, {“bar”, 1}
    Reduce
    Input: {“foo”, (1, 1)}, {“bar”, (1)}
    Output: {“foo”, 2}, {“bar”, 1}
25. Word Count: Mapper

    public static class MapClass extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {

      private final static IntWritable one = new IntWritable(1);
      private Text word = new Text();

      public void map(LongWritable key, Text value,
                      OutputCollector<Text, IntWritable> output,
                      Reporter reporter) throws IOException {
        String line = value.toString();
        StringTokenizer itr = new StringTokenizer(line);
        while (itr.hasMoreTokens()) {
          word.set(itr.nextToken());
          output.collect(word, one);
        }
      }
    }
26. Word Count: Reducer

    public static class Reduce extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, IntWritable> {

      public void reduce(Text key, Iterator<IntWritable> values,
                         OutputCollector<Text, IntWritable> output,
                         Reporter reporter) throws IOException {
        int sum = 0;
        while (values.hasNext()) {
          sum += values.next().get();
        }
        output.collect(key, new IntWritable(sum));
      }
    }
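The slides show only the Mapper and Reducer; a driver to wire them into a job would look roughly like this with the same old-style org.apache.hadoop.mapred API (a sketch assuming the two classes above are nested in a WordCount class and that input/output paths come from the command line):

   import org.apache.hadoop.fs.Path;
   import org.apache.hadoop.io.IntWritable;
   import org.apache.hadoop.io.Text;
   import org.apache.hadoop.mapred.FileInputFormat;
   import org.apache.hadoop.mapred.FileOutputFormat;
   import org.apache.hadoop.mapred.JobClient;
   import org.apache.hadoop.mapred.JobConf;

   // Driver sketch for the word count job on the previous two slides.
   public class WordCount {
     public static void main(String[] args) throws Exception {
       JobConf conf = new JobConf(WordCount.class);
       conf.setJobName("wordcount");

       conf.setOutputKeyClass(Text.class);          // types emitted by map and reduce
       conf.setOutputValueClass(IntWritable.class);

       conf.setMapperClass(MapClass.class);
       conf.setCombinerClass(Reduce.class);         // the reducer also works as a combiner
       conf.setReducerClass(Reduce.class);

       FileInputFormat.setInputPaths(conf, new Path(args[0]));
       FileOutputFormat.setOutputPath(conf, new Path(args[1]));

       JobClient.runJob(conf);                      // submit and block until the job finishes
     }
   }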
27. Examples – Stock Analysis
    Input dataset:
    Symbol,Date,Open,High,Low,Close
    GOOG,2010-03-19,555.23,568.00,557.28,560.00
    YHOO,2010-03-19,16.62,16.81,16.34,16.44
    GOOG,2010-03-18,564.72,568.44,562.96,566.40
    YHOO,2010-03-18,16.46,16.57,16.32,16.56
    Interested in the biggest delta for each stock

28. Examples – Stock Analysis
    Map
    Output:
    {“GOOG”, 10.72},
    {“YHOO”, 0.47},
    {“GOOG”, 5.48},
    {“YHOO”, 0.25}
    Reduce
    Input: {“GOOG”, (10.72, 5.48)}, {“YHOO”, (0.47, 0.25)}
    Output: {“GOOG”, 10.72}, {“YHOO”, 0.47}
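A sketch of what the map and reduce steps for this example might look like with the same old-style API as the word count code (field layout taken from the input slide; class names are illustrative): the mapper emits the daily high-low delta per symbol, and the reducer keeps the maximum.

   import java.io.IOException;
   import java.util.Iterator;
   import org.apache.hadoop.io.DoubleWritable;
   import org.apache.hadoop.io.LongWritable;
   import org.apache.hadoop.io.Text;
   import org.apache.hadoop.mapred.*;

   // Sketch of the stock-delta job using the old (org.apache.hadoop.mapred) API.
   public class StockDelta {

     // Map: parse "Symbol,Date,Open,High,Low,Close" and emit (symbol, high - low).
     public static class DeltaMapper extends MapReduceBase
         implements Mapper<LongWritable, Text, Text, DoubleWritable> {
       public void map(LongWritable key, Text value,
                       OutputCollector<Text, DoubleWritable> output,
                       Reporter reporter) throws IOException {
         String[] f = value.toString().split(",");
         if (f.length < 6 || f[0].equals("Symbol")) return;  // skip header / bad rows
         double delta = Double.parseDouble(f[3]) - Double.parseDouble(f[4]); // High - Low
         output.collect(new Text(f[0]), new DoubleWritable(delta));
       }
     }

     // Reduce: keep the largest daily delta seen for each symbol.
     public static class MaxReducer extends MapReduceBase
         implements Reducer<Text, DoubleWritable, Text, DoubleWritable> {
       public void reduce(Text key, Iterator<DoubleWritable> values,
                          OutputCollector<Text, DoubleWritable> output,
                          Reporter reporter) throws IOException {
         double max = Double.NEGATIVE_INFINITY;
         while (values.hasNext()) {
           max = Math.max(max, values.next().get());
         }
         output.collect(key, new DoubleWritable(max));
       }
     }
   }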
29. Examples – Time Series Analysis
    Map:
    {pointId, Timestamp + 30s of data}
    Reduce:
    Data mining!
    Classify samples based on a training dataset
    Output samples that fall into interesting categories, index in database

30. Other Stuff
    Compatibility with Amazon Elastic Compute Cloud (EC2)
    Hadoop Streaming: MapReduce with anything that uses stdin/stdout
    HBase, distributed column-store database
    Pig, data analysis (transforms, filters, etc.)
    Hive, data warehousing infrastructure
    Mahout, machine learning algorithms

31. Parting Thoughts
    “We don't have better algorithms than anyone else. We just have more data.”
    Peter Norvig
    Artificial Intelligence: A Modern Approach
    Chief scientist at Google

32. Contact
    Galen Riley
    http://galenriley.com
    @TotallyGreat
    Josh Patterson
    http://jpatterson.floe.tv
    @jpatanooga