Large Scale Data With Hadoop

Speaker notes:
  • So everyone knows what data processing is, but what do we mean by “scale”?
  • Simply: Data. Is. Big. … So this is the trend. The amount of data we can collect is increasing exponentially, and most companies aren’t capable of handling it. Patterson likes to call this “the data tsunami.” Let’s talk about a real example of this…
  • Okay, so data is big. No big deal, I’ve got a processor with four cores that will chew through anything. However, the speed of my application is constrained by the speed at which I can get data. It is not going to fit in memory, so it’s going to be living on a hard disk. This brings us to problem number 2.
  • Hard drive speed comes from two numbers. Disk seek time: the time to move the read head on a drive to where the data is stored. Data transfer: the speed at which I can get information off the disk. Hard drives are wonderful because they are random access devices, and I can get data anywhere off the disk any time I want it just by seeking and reading. Seeking takes a while, though. Fortunately, I can take advantage of locality and read a page of data into a buffer. Let’s look at an example…
  • This example has nice round numbers. I have a fictional hard drive with a 10 ms seek time and a 10 MB/second transfer speed. On it there’s 1TB of data, made up of 100-byte records in 10KB pages. That’s 10 billion entries over a billion pages, and I want to apply a gigabyte of updates to this data set. … So seeks are slow, but I can transfer the whole file in about a day. Again, my application is always bound by the slowest piece in the pipe, so I get the most benefit by speeding that part up: the disk.
  • Here are some real drives. I grabbed the specs that are advertised on Newegg. … Solid state drives are expensive, though: $4k per terabyte. I can’t afford to buy a new one every month for my sensor collection.
  • …With this observation in mind, let’s consider treating a hard disk (a random access device) like tape (a sequential device). … So we get closer to one day instead of a thousand.
  • I bet a lot of you know where this is going already: parallelism! So let’s say I’ve got one of those IDE drives from earlier. I can sequentially process 100TB of data in 16 days. Alright, let’s get a thousand of them and run them in parallel: 75 GB per second, and I’m done in 22 minutes. Alright, parallel processing solves our problem, let’s call it a day!
  • There’s an issue, though. Parallelism is really, really hard. … And that’s not the only problem, either.
  • Even if you have an OS that you know won’t crash, and code that won’t kill it either, reliability is still an issue because hardware fails.
  • So let’s buy some expensive fault-tolerant hardware…
  • A system that is robust in the face of machine failure. A platform that allows multiple groups to collaborate. A solution that scales linearly with respect to cost. A vision that will not lock us into a single vendor over time.
  • Alright, now we’re going to do some examples. I find it is most useful to look at what happens to the data instead of what the code looks like. Let’s review MR, but think about the data. So I have a big file that I want to process. It is split up into blocks and spread over my cluster. I start a job and Hadoop initiates a bunch of map tasks; this processing occurs where the data already exists. The mapper reads in a part of the file and emits several key/value pairs. These are collected, sorted into buckets based on key, and each bucket goes to a reduce task. Each reducer processes a bucket and outputs the result. Of course, I can chain these steps together. Word count is the ‘Hello World’ of MapReduce. I’m interested in the frequency of words in a dataset. … Word count isn’t entirely silly, by the way. Consider the suggestions that pop up when you start a Google search. What you see is a list of search strings that people use frequently. Think of it as phrase count instead of word count.
  • I’ve got some code here, but I’m going to skip going over it in detail. The slides will be available if you want to pore over it. We talked about MR being accessible for a programmer when compared to an MPI approach, and this is the entire map class for word count.
  • Here’s another example to illustrate that my map process can do more than just read data in and push it back out. Here’s a file with information about stock prices: the ticker symbol, a date, the open price, the high and low prices for the day, and what it closed at. Since we’re talking about big data sets here, I want you to imagine that it’s got every stock for the last 50 years and there’s not enough room on my slide to include it all. I’m interested in volatility or something, so I want the biggest change in price for a particular stock. Let’s look at the data.
  • My mapper reads in a record, filters out the information I’m not interested in (date and open/close prices), and emits the delta for each day.
  • I think that collecting data without doing anything interesting with it is a big sin. So, here’s a business case for someone in the room, perhaps. Say you want to grep through some server logs that you’ve been collecting forever but never got around to doing anything with. Amazon EC2 supports Hadoop, so you can run your job without having to buy any hardware at all. And here’s a list of stuff that is built on top of Hadoop. … You don’t have to write your jobs in Java. I know that I love Python, and I bet you do too. … We’ll be contributing some of our time series stuff to the Mahout project.
  • So let’s conclude with a quote from Peter Norvig that I think justifies our entire presentation.
  • Transcript

    • 1. Large Scale Data with Hadoop
      Galen Riley and Josh Patterson
      Presented at DevChatt 2010
    • 2. Agenda
      Thinking at Scale
      Hadoop Architecture
      Distributed File System
      MapReduce Programming Model
      Examples
    • 3. Data is Big
      The Data Deluge (2/25/2010)
      “Eighteen months ago, Li & Fung, a firm that manages supply chains for retailers, saw 100 gigabytes of information flow through its network each day. Now the amount has increased tenfold.”
      http://www.economist.com/opinion/displaystory.cfm?story_id=15579717
    • 4. Data is Big
      Sensor data collection
      128 sensors
      37 GB/day
      10 bytes/sample, 30 per second
      Increasing 10x by 2012
      http://jpatterson.floe.tv/index.php/2009/10/29/the-smartgrid-goes-open-source
    • 5. Disks are Slow
      Disk Seek, Data Transfer
      Reading Files
      Disk seek for every access
      Buffered reads, locality → still seeking every disk page
    • 6. Disks are Slow
      10ms seek, 10MB/s transfer
      1TB file, 100B records, 10KB pages
      10B entries, 1B pages
      1GB of updates
      Seek for each update, 1000 days
      Seek for each page, 100 days
      Transfer entire TB, 1 day (see the back-of-envelope check below)
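      The day figures above are rounded. A quick back-of-envelope check (a sketch added here, not part of the original deck) using the slide's own numbers, 10 ms per seek and 10 MB/s transfer:

      // Sanity check of the slide-6 figures; not from the original presentation.
      public class DiskMath {
          static final double SEEK_SECONDS = 0.010;           // 10 ms per seek
          static final double TRANSFER_BYTES_PER_SEC = 10e6;  // 10 MB/s sequential transfer
          static final double SECONDS_PER_DAY = 86400.0;

          public static void main(String[] args) {
              double entries = 10e9;    // 10 billion 100-byte records (slide's figure)
              double pages = 1e9;       // 1 billion pages (slide's figure)
              double fileBytes = 1e12;  // 1 TB

              double seekPerEntryDays = entries * SEEK_SECONDS / SECONDS_PER_DAY;        // ~1157 days ("1000 days")
              double seekPerPageDays = pages * SEEK_SECONDS / SECONDS_PER_DAY;           // ~116 days ("100 days")
              double streamDays = fileBytes / TRANSFER_BYTES_PER_SEC / SECONDS_PER_DAY;  // ~1.2 days ("1 day")

              System.out.printf("seek per entry: %.0f days%n", seekPerEntryDays);
              System.out.printf("seek per page:  %.0f days%n", seekPerPageDays);
              System.out.printf("stream 1 TB:    %.1f days%n", streamDays);
          }
      }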
    • 7. Disks are Slow
      IDE drive – 75 MB/sec, 10ms seek
      SATA drive – 300MB/s, 8.5ms seek
      SSD – 800MB/s, 2 ms “seek”
      (1TB = $4k!)
    • 8. // Sidetrack
      Observation: transfer speed improves at a greater rate than seek speed
      Improvement by treating disks like tapes
      Seek as little as possible in favor of sequential reads
      Operate at transfer speed
      http://weblogs.java.net/blog/2008/03/18/disks-have-become-tapes
    • 9. An Idea: Parallelism
      1 drive – 75 MB/sec
      16 days for 100TB
      1000 drives – 75 GB/sec
      22 minutes for 100TB
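      A quick check of these throughput numbers (a sketch added here, not in the original deck), using the 75 MB/s drive from the previous slide:

      // Sanity check of the slide-9 figures; not from the original presentation.
      public class ParallelMath {
          public static void main(String[] args) {
              double driveBytesPerSec = 75e6;   // one drive, 75 MB/s sequential
              double dataBytes = 100e12;        // 100 TB
              double oneDriveDays = dataBytes / driveBytesPerSec / 86400;               // ~15.4 days ("16 days")
              double thousandDriveMinutes = dataBytes / (1000 * driveBytesPerSec) / 60; // ~22 minutes at 75 GB/s
              System.out.printf("1 drive:     %.1f days%n", oneDriveDays);
              System.out.printf("1000 drives: %.1f minutes%n", thousandDriveMinutes);
          }
      }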
    • 10. A Problem: Parallelism is Hard
      Issues
      Synchronization
      Deadlock
      Limited bandwidth
      Timing issues
      Apples v. Oranges, but… MPI
      Data distribution, communication between nodes done manually by the programmer
      Considerable effort achieving parallelism compared to actual processing
    • 11. A Problem: Reliability
      Computers are complicated
      Hard drive
      Power supply
      Overheating
    • 12. A Problem: Reliability
      1 Machine
      3 years mean time between failures
      1000 Machines
      1 day mean time between failures
    • 13. Requirements
      Backup
      Reliable
      Partial failure, graceful decline rather than full halt
      Data recoverability, if a node fails, another picks up its workload
      Node recoverability, a fixed node can rejoin the group without a full group restart
      Scalability, adding resources adds load capacity
      Easy to use
    • 14. Hadoop: Robust, Cheap, Reliable
      Apache project, open source
      Designed for commodity hardware
      Can lose whole nodes and not lose data
      Includes MapReduce programming model
    • 15. Why Commodity Hardware?
      Single large computer systems are expensive and proprietary
      High initial costs, plus lock-in with vendor
      Existing methods do not work at petabyte-scale
      Solution: Scale “out” instead of “up”
    • 16. Hadoop Distributed File System
      Throughput Good, Latency Bad
      Data Coherency
      Write-once, read-many access model
      Files are broken up into blocks
      Typically 64MB or 128MB block size
      Each replicated on multiple DataNodes on write
      Intelligent Client
      Client can find location of blocks
      Client accesses data directly from DataNode
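      The deck doesn't include client code, so here is a minimal sketch (an addition, not from the original slides) of the "intelligent client" idea using Hadoop's Java FileSystem API: the client asks the NameNode where the blocks live, then streams them directly from the DataNodes. The input path is illustrative.

      // Minimal HDFS read sketch; Hadoop 0.20-era API (matches the 2010 deck).
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataInputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.io.IOUtils;

      public class HdfsCat {
          public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();           // reads core-site.xml / hdfs-site.xml
              FileSystem fs = FileSystem.get(conf);               // HDFS when fs.default.name points at a NameNode
              FSDataInputStream in = fs.open(new Path(args[0]));  // block locations come from the NameNode;
                                                                  // bytes stream straight from the DataNodes
              try {
                  IOUtils.copyBytes(in, System.out, 4096, false);
              } finally {
                  IOUtils.closeStream(in);
              }
          }
      }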
    • 17. Source: http://wiki.apache.org/hadoop/HadoopPresentations?action=AttachFile&do=get&target=hdfs_dhruba.pdf
    • 18. HDFS: Performance
      Robust in the face of multiple machine failures through aggressive replication of data blocks
      High Performance
      Checksum of 100TB in 10 minutes, ~166 GB/sec
      Built to house petabytes of data
    • 19. MapReduce
      Simple programming model that abstracts parallel programming complications away from data processing logic
      Made popular at Google, drives their processing systems, used on 1000s of computers in various clusters
      Hadoop provides an open source version of MR
    • 20. MapReduce Data Flow
    • 21. Using MapReduce
      MapReduce is a programming model for efficient distributed computing
      It works like a Unix pipeline:
      cat input | grep | sort | uniq -c | cat > output
      Input | Map | Shuffle & Sort | Reduce | Output
      Efficiency from
      Streaming through data, reducing seeks
      Pipelining
      A good fit for a lot of applications
      Log processing
      Web index building
    • 22. Hadoop In The Field
      Yahoo
      Facebook
      Twitter
      Commercial support available from Cloudera
    • 23. Hadoop In Your Backyard
      openPDC project at TVA
      http://openpdc.codeplex.com
      Cluster is currently:
      20 nodes
      200TB of physical drive space
      Used for
      Cheap, redundant storage
      Time series data mining
    • 24. Examples – Word Count
      Hello, World!
      Map
      Input:
      foo foo bar
      Output all words in a dataset as: { key, value }
      {“foo”, 1}, {“foo”, 1}, {“bar”, 1}
      Reduce
      Input:{“foo”, (1, 1)}, {“bar”, (1)}
      Output:{“foo”, 2}, {“bar”, 1}
    • 25. Word Count: Mapper
      public static class MapClass extends MapReduceBase
          implements Mapper<LongWritable, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output,
                        Reporter reporter) throws IOException {
          // Tokenize the line and emit {word, 1} for every token.
          String line = value.toString();
          StringTokenizer itr = new StringTokenizer(line);
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            output.collect(word, one);
          }
        }
      }
    • 26. Word Count: Reducer
      public static class Reduce extends MapReduceBase
          implements Reducer<Text, IntWritable, Text, IntWritable> {

        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output,
                           Reporter reporter) throws IOException {
          // Sum the counts for this word and emit {word, total}.
          int sum = 0;
          while (values.hasNext()) {
            sum += values.next().get();
          }
          output.collect(key, new IntWritable(sum));
        }
      }
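      The slides show only the Mapper and Reducer. For completeness, a minimal driver sketch (not in the original deck) that wires them together with the same old org.apache.hadoop.mapred API; input and output paths are taken from the command line:

      // Driver sketch for the word count job above; old (0.20 "mapred") API.
      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.io.IntWritable;
      import org.apache.hadoop.io.Text;
      import org.apache.hadoop.mapred.FileInputFormat;
      import org.apache.hadoop.mapred.FileOutputFormat;
      import org.apache.hadoop.mapred.JobClient;
      import org.apache.hadoop.mapred.JobConf;

      public class WordCount {
          // MapClass and Reduce are the classes from the two previous slides,
          // assumed to be nested in (or visible to) this WordCount class.
          public static void main(String[] args) throws Exception {
              JobConf conf = new JobConf(WordCount.class);
              conf.setJobName("wordcount");

              conf.setOutputKeyClass(Text.class);           // reducer emits {word, count}
              conf.setOutputValueClass(IntWritable.class);

              conf.setMapperClass(MapClass.class);
              conf.setCombinerClass(Reduce.class);          // optional: pre-sum counts on the map side
              conf.setReducerClass(Reduce.class);

              FileInputFormat.setInputPaths(conf, new Path(args[0]));   // input directory in HDFS
              FileOutputFormat.setOutputPath(conf, new Path(args[1]));  // must not already exist

              JobClient.runJob(conf);                       // submit and block until the job finishes
          }
      }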
    • 27. Examples – Stock Analysis
      Input dataset:
      Symbol,Date,Open,High,Low,Close
      GOOG,2010-03-19,555.23,568.00,557.28,560.00
      YHOO,2010-03-19,16.62,16.81,16.34,16.44
      GOOG,2010-03-18,564.72,568.44,562.96,566.40
      YHOO,2010-03-18,16.46,16.57,16.32,16.56
      Interested in biggest delta for each stock
    • 28. Examples – Stock Analysis
      Map
      Output
      {“GOOG”, 10.72},
      {“YHOO”, 0.47},
      {“GOOG”, 5.48},
      {“YHOO”, 0.25}
      Reduce
      Input: {“GOOG”, (10.72, 5.48)},{“YHOO”, (0.47, 0.25)}
      Output:{“GOOG”, 10.72},{“YHOO”, 0.47}
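      The deck describes this job only in terms of its data flow; here is a minimal Mapper/Reducer sketch in the same style as the word count code (an addition, not from the original slides). Field positions follow the Symbol,Date,Open,High,Low,Close layout shown above; class names are illustrative.

      // Sketch of the stock-delta job; old mapred API, same style as the word count slides.
      import java.io.IOException;
      import java.util.Iterator;
      import org.apache.hadoop.io.DoubleWritable;
      import org.apache.hadoop.io.LongWritable;
      import org.apache.hadoop.io.Text;
      import org.apache.hadoop.mapred.*;

      public class StockDelta {
          // Map: Symbol,Date,Open,High,Low,Close  ->  {symbol, high - low}
          public static class DeltaMapper extends MapReduceBase
                  implements Mapper<LongWritable, Text, Text, DoubleWritable> {
              public void map(LongWritable key, Text value,
                              OutputCollector<Text, DoubleWritable> output,
                              Reporter reporter) throws IOException {
                  String[] f = value.toString().split(",");
                  if (f.length < 6 || f[0].equals("Symbol")) return;  // skip header/malformed lines
                  double delta = Double.parseDouble(f[3]) - Double.parseDouble(f[4]);
                  output.collect(new Text(f[0]), new DoubleWritable(delta));
              }
          }

          // Reduce: keep the biggest daily delta seen for each symbol.
          public static class MaxDelta extends MapReduceBase
                  implements Reducer<Text, DoubleWritable, Text, DoubleWritable> {
              public void reduce(Text key, Iterator<DoubleWritable> values,
                                 OutputCollector<Text, DoubleWritable> output,
                                 Reporter reporter) throws IOException {
                  double max = 0.0;
                  while (values.hasNext()) {
                      max = Math.max(max, values.next().get());
                  }
                  output.collect(key, new DoubleWritable(max));
              }
          }
      }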
    • 29. Examples – Time Series Analysis
      Map:
      {pointId, Timestamp + 30s of data}
      Reduce:
      Data mining!
      Classify samples based on training dataset
      Output samples that fall into interesting categories, index in database
    • 30. Other Stuff
      Compatibility with Amazon Elastic Compute Cloud (EC2)
      Hadoop Streaming
      MapReduce with anything that uses stdin/stdout
      HBase, distributed column-store database
      Pig, data analysis (transforms, filters, etc)
      Hive, data warehousing infrastructure
      Mahout, machine learning algorithms
    • 31. Parting Thoughts
      “We don't have better algorithms than anyone else. We just have more data.”
      Peter Norvig
      Artificial Intelligence: A Modern Approach
      Chief scientist at Google
    • 32. Contact
      Galen Riley
      http://galenriley.com
      @TotallyGreat
      Josh Patterson
      http://jpatterson.floe.tv
      @jpatanooga
