TheEdge10 : Big Data is Here - Hadoop to the Rescue


A presentation from TheEdge10 about Hadoop and Big Data.


Speaker notes:
  • Let's say we have a huge log file that we need to analyze, or a large amount of data that we need to store – what shall we do? Add more CPUs or memory to a single machine (node)? Or add more nodes to the system?
  • Talk about commodity hardware. "Commodity" hardware is hardware that is easily and affordably available. A device that is said to use "commodity hardware" is one that uses components that were previously available or designed, and are thus not necessarily unique to that device. Unfortunately, at some point there won't be a big enough machine available for the larger data sets. More importantly, high-end machines are not cost effective for many applications. For example, a machine with four times the power of a standard PC costs a lot more than putting four such PCs in a cluster. Spreading and dividing the data between many machines will provide a much higher throughput - a distributed software application.
  • Because it runs on commodity hardware. With twice the machines it will run close to twice as fast. Talk about it being mature and very popular (then move to the next slide).
  • The New York Times converted 11 million articles from TIFF images to PDF. Twitter uses Hadoop to store and process tweets and log files.
  • Cutting is also the founder of Apache Lucene, a popular text search library.
  • Although we've spoken about keys and values, we have yet to discuss their types. Speak about why Java serialization is bad, about pluggable frameworks, and about using the predefined Hadoop types.
  • Those 2 static classes will reside in a single file. These inner classes are independent – during job execution the Mapper and Reducer are replicated and run on various nodes in different JVMs.
  • Executing 2 jobs manually is possible, but it's more convenient to automate it. The input of the 2nd job will be the output of the first. JobClient.runJob() is blocking.
  • Useful for writing simple short programs that are rapidly developed in scripts, or writing programs that can take advantage of non-Java libraries.
  • It becomes necessary to partition it across a number of separate machines. Filesystems that manage the storage across a network of machines are called distributed filesystems. Since they are network-based, all the complications of network programming kick in, thus making distributed filesystems more complex than regular disk filesystems. For example, one of the biggest challenges is making the filesystem tolerate node failure without suffering data loss.
  • Let's say you have a 100TB file – HDFS abstracts the complexity and gives you the illusion that you're dealing with a single file. Very large files: "very large" in this context means files that are hundreds of megabytes, gigabytes, or terabytes in size. There are Hadoop clusters running today that store petabytes of data. Streaming data access: HDFS is built around the idea that the most efficient data processing pattern is a write-once, read-many-times pattern. A dataset is typically generated or copied from source, then various analyses are performed on that dataset over time. Each analysis will involve a large proportion, if not all, of the dataset, so the time to read the whole dataset is more important than the latency in reading the first record. Commodity hardware: Hadoop doesn't require expensive, highly reliable hardware to run on. It's designed to run on clusters of commodity hardware (commonly available hardware available from multiple vendors) for which the chance of node failure across the cluster is high, at least for large clusters. HDFS is designed to carry on working without a noticeable interruption to the user in the face of such failure.
  • Some of them exist only on one server and some across all servers
  • The most important daemon. Make sure that the server hosting the NameNode will not store any data locally or perform any computations for a MapReduce program.
  • Constantly reports to the namenode – informs the namenode of which blocks it is currently storing. Datanodes also poll the namenode to provide information regarding local changes, as well as receiving instructions to create, move, or delete blocks.
  • All blocks in a file except the last block are the same size.
  • One important aspect of this design is that the client contacts datanodes directly to retrieve data and is guided by the namenode to the best datanode for each block. This design allows HDFS to scale to a large number of concurrent clients, since the data traffic is spread across all the datanodes in the cluster. The namenode meanwhile merely has to service block location requests (which it stores in memory, making them very efficient) and does not, for example, serve data, which would quickly become a bottleneck as the number of clients grew.
  • One per cluster. Takes snapshots by communicating with the namenode at an interval specified by the configuration.
  • Relaunch, possibly on a different node – up to the configured number of retries.
  • If the JobTracker fails to receive a message from a TaskTracker, it assumes failure and submits the task to other nodes.
  • Use -help to look for help.
  • Local mode is to assist debugging and creating the logic. Usually local mode and pseudo-distributed mode will work on a subset of the data. Pseudo-distributed is a "cluster of one". Switching is easy.
  • If there are bugs that sometimes cause a task to hang or slow down, then relying on speculative execution to avoid these problems is unwise and won't work reliably, since the same bugs are likely to affect the speculative task.
  • You may have a few large data processing jobs that occasionally take advantage of hundreds of nodes, but those same nodes will sit idle the rest of the time. You may be new to Hadoop and want to get familiar with it first before investing in a dedicated cluster. You may own a startup that needs to conserve cash and wants to avoid the capital expense of a Hadoop cluster. In these and other situations, it makes more sense to rent a cluster of machines rather than buy it. You can rent computing and storage services from AWS on demand as your requirements scale. As of this writing, renting a compute unit with the equivalent power of a 1.0 GHz 32-bit Opteron with 1.7 GB RAM and 160 GB disk storage costs $0.10 per hour (it varies between Windows and Unix instances). Using a cluster of 100 such machines for an hour will cost a measly $10!
  • Supported operating systems on EC2 include more than six variants of Linux, plus Windows Server and OpenSolaris. Other images include one of the operating systems plus pre-installed software, such as a database server, Apache HTTP server, Java application server, and others. AWS offers preconfigured images of Hadoop running on Linux.
  • Load users, load pages. Filter by age. Join by name. Group on URL. Count clicks, sort clicks, get top 5.
  • 1. Under the covers Pig turns the transformations into a series of MapReduce jobs, but we as programmers are unaware of this, which allows us to focus on the data rather than the nature of execution. 2. Pig Latin is a data flow programming language, whereas SQL is a declarative programming language. A Pig Latin program is a step-by-step set of operations on an input, whereas SQL statements are a set of statements taken together that produce output. 3. A script file – also, a PigPen plugin for Eclipse exists.
  • HBase – uses HDFS as the underlying file system. Supports billions of rows and millions of columns.
Transcript:

    1. Big Data is Here – Hadoop to the Rescue!
       Shay Sofer, AlphaCSP
    2. On Today's Menu...
       Today we will:
       Understand what is Big Data
       Get to know Hadoop
       Experience some MapReduce magic
       Persist very large files
       Learn some nifty tricks
    3. Data is Everywhere
    4. Facts and Numbers
       IDC: "Total data in the universe: 1.2 Zettabytes" (May, 2010)
       1 ZB = 1 Trillion Gigabytes (or: 1,000,000,000,000,000,000,000 bytes = 10²¹)
       60% growth from 2009
       By 2020 – we will reach 35 ZB
       Data is Everywhere
    5. Facts and Numbers
       Data is Everywhere
       Source:
    6. Facts and Numbers
       234M Web sites
       7M New sites in 2009
       New York Stock Exchange – 1 TB of data per day
       Web 2.0
       147M Blogs (and counting…)
       Twitter – ~12 TB of data per day
       Data is Everywhere
    7. Facts and Numbers - Facebook
       500M users
       40M photos per day
       More than 30 billion pieces of content (web links, news stories, blog posts, notes, photo albums etc.) shared each month
       Data is Everywhere
    8. Why are you here?
       Big data are datasets that grow so large that they become awkward to work with using on-hand database management tools
       Where and how do we store this information?
       How do we perform analyses on such large datasets?
       Data is Everywhere
    9. Scale-up Vs. Scale-out
       Data is Everywhere
    10. Scale-up Vs. Scale-out
        Scale-up: Adding resources to a single node in a system, typically involving the addition of CPUs or memory to a single computer
        Scale-out: Adding more nodes to a system, e.g. adding a new computer with commodity hardware to a distributed software application
        Data is Everywhere
    11. Introducing… Hadoop!
    12. What is Hadoop?
        A framework for writing and running distributed applications that process large amounts of data
        Runs on large clusters of commodity hardware
        A cluster with hundreds of machines is standard
        Inspired by Google's architecture: MapReduce and GFS
        Hadoop
    13. Why Hadoop?
        Robust - Handles failures of individual nodes
        Scales linearly
        Open source
        A top-level Apache project
        Hadoop
    14. Hadoop
    15. Facebook (Again…)
        Facebook holds the largest known Hadoop storage cluster in the world
        2000 machines
        12 TB per machine (some have 24 TB)
        32 GB of RAM per machine
        Total of more than 21 Petabytes (1 Petabyte = 1024 Terabytes)
        Hadoop
    16. History
        2002 – Apache Nutch: open source web search engine founded by Doug Cutting
        2004 – Google's GFS & MapReduce papers published
        2006 – Cutting joins Yahoo!, forms Hadoop
        2008 – Hadoop hits web scale, being used by Yahoo! for web indexing
        2008 – Sorting 1 TB in 62 seconds
        2010 – Creating the longest Pi yet
        Hadoop
    17. Hadoop
    18. IDE Plugin
        Hadoop
    19. Hadoop and MapReduce
    20. Definition
        A programming model for processing and generating large data sets
        Introduced by Google
        Parallel processing of the map/reduce operations
        MapReduce
    21. MapReduce – The Story of Sam
        Sam believed "An apple a day keeps a doctor away"
        Mother / Sam / An Apple
        Source: Saliya Ekanayake, SALSA HPC Group at Community Grids Labs
    22. MapReduce – The Story of Sam
        Sam thought of "drinking" the apple
        He used a [knife] to cut the [apple] and a [juicer] to make juice.
        Source: Saliya Ekanayake, SALSA HPC Group at Community Grids Labs
    23. MapReduce – The Story of Sam
        Sam applied his invention to all the fruits he could find in the fruit basket
        (map ‘( ))
        A list of values mapped into another list of values, which gets reduced into a single value
        (reduce ‘( ))
        Source: Saliya Ekanayake, SALSA HPC Group at Community Grids Labs
    24. MapReduce – The Story of Sam
        Sam got his first job for his talent in making juice
        Fruits
        Now, it's not just one basket but a whole container of fruits (large data, and a list of values for output)
        Also, they produce a list of juice types separately
    25. But, Sam had just ONE [knife] and ONE [juicer]
        Source: Saliya Ekanayake, SALSA HPC Group at Community Grids Labs
    26. MapReduce – The Story of Sam
        Sam implemented a parallel version of his innovation
        Fruits
        Each map input: list of <key, value> pairs, e.g. (<a, > , <o, > , <p, > , …)
        Map
        Each map output: list of <key, value> pairs, e.g. (<a’, > , <o’, v> , <p’, > , …)
        Grouped by key (shuffle)
        Each reduce input: <key, value-list>, e.g. <a’, ( …)>
        Reduce
        Reduced into a list of values
        Source: Saliya Ekanayake, SALSA HPC Group at Community Grids Labs
    27. First Map, Then Reduce
        Mapper - Takes a series of key/value pairs, processes each and generates output key/value pairs
        (k1, v1) → list(k2, v2)
        Reducer - Iterates through the values that are associated with a specific key and generates output
        (k2, list(v2)) → list(k3, v3)
        The Mapper takes the input data, filters and transforms it into something the Reducer can aggregate over
        MapReduce
    28. MapReduce
        Shuffle
        Input
    29. Hadoop Data Types
        Hadoop comes with a number of predefined classes:
        BooleanWritable
        ByteWritable
        LongWritable
        Text, etc…
        Supports pluggable serialization frameworks
        Apache Avro
        MapReduce
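        A minimal sketch of these Writable wrappers in action (the class name and values here are arbitrary; only the standard types are assumed):

        import;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;

        public class WritableDemo {
          public static void main(String[] args) {
            // Writables are Hadoop's serializable, mutable wrappers around primitives
            IntWritable count = new IntWritable(1);
            LongWritable offset = new LongWritable(42L);
            Text word = new Text("hello");

            // Mutability lets a single instance be reused across many records
            count.set(count.get() + 1);
            word.set("world");

            System.out.println(word + " -> " + count + " @ " + offset); // world -> 2 @ 42
          }
        }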
    30. Input / Output Formats
        TextInputFormat / TextOutputFormat
        KeyValueTextInputFormat
        SequenceFile - A Hadoop-specific compressed binary file format. Optimized for passing data between 2 MapReduce jobs
        MapReduce
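        A minimal sketch of writing a SequenceFile with the old ( API of that era; the file name and records are arbitrary:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;
        import;
        import org.apache.hadoop.io.SequenceFile;
        import org.apache.hadoop.io.Text;

        public class SequenceFileWriteDemo {
          public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path path = new Path("between-jobs.seq");  // hypothetical path

            // Binary key/value container - a natural handoff format between jobs
            SequenceFile.Writer writer = SequenceFile.createWriter(
                fs, conf, path, IntWritable.class, Text.class);
            try {
              IntWritable key = new IntWritable();
              Text value = new Text();
              for (int i = 1; i <= 3; i++) {
                key.set(i);
                value.set("record-" + i);
                writer.append(key, value);
              }
            } finally {
              writer.close();
            }
          }
        }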
    31. Word Count – The Mapper

        public static class MapClass extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {

          private Text word = new Text();

          public void map(LongWritable key, Text value,
              OutputCollector<Text, IntWritable> output,
              Reporter reporter) throws IOException {
            String line = value.toString();
            StringTokenizer itr = new StringTokenizer(line);
            while (itr.hasMoreTokens()) {
              word.set(itr.nextToken());
              output.collect(word, new IntWritable(1));
            }
          }
        }

        Input:  <k1, "Hello World Bye World">
        Output: <Hello, 1> <World, 1> <Bye, 1> <World, 1>
    32. Word Count – The Reducer

        public static class ReduceClass extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {

          public void reduce(Text key, Iterator<IntWritable> values,
              OutputCollector<Text, IntWritable> output,
              Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
              sum +=;
            }
            output.collect(key, new IntWritable(sum));
          }
        }

        Input:  <Hello, 1> <World, 1> <Bye, 1> <World, 1>
        Output: <Hello, 1> <World, 2> <Bye, 1>
    33. Word Count – The Driver

        public static void main(String[] args) throws IOException {
          JobConf job = new JobConf(WordCount.class);
          job.setOutputKeyClass(Text.class);
          job.setOutputValueClass(IntWritable.class);
          job.setMapperClass(MapClass.class);
          job.setReducerClass(ReduceClass.class);
          FileInputFormat.addInputPath(job, new Path(args[0]));
          FileOutputFormat.setOutputPath(job, new Path(args[1]));
          // job.setInputFormat(KeyValueTextInputFormat.class);
          JobClient.runJob(job);
        }
    34. Hadoop @ Last.FM
        Music discovery website
        Scrobbling / Streaming via radio
        40M unique visitors per month
        Over 40M scrobbles per day
        Each scrobble creates a log line
        MapReduce
    35.
    36. Goal: Create a "Unique listeners per track" chart
        Sample listening data
        MapReduce
    37. Unique Listens - Mapper

        public void map(LongWritable position, Text rawLine,
            OutputCollector<IntWritable, IntWritable> output,
            Reporter reporter) throws IOException {
          int scrobbles, radioListens;   // assume they are initialized -
          IntWritable trackId, userId;   // parsing omitted for brevity
          // if track somehow is marked with zero plays - ignore
          if (scrobbles <= 0 && radioListens <= 0) {
            return;
          }
          // output user id against track id
          output.collect(trackId, userId);
        }
    38. Unique Listens - Reducer

        public void reduce(IntWritable trackId,
            Iterator<IntWritable> values,
            OutputCollector<IntWritable, IntWritable> output,
            Reporter reporter) throws IOException {
          Set<Integer> usersSet = new HashSet<Integer>();
          // add all userIds to the set, duplicates removed
          while (values.hasNext()) {
            IntWritable userId =;
            usersSet.add(userId.get());
          }
          // output: trackId -> number of unique listeners per track
          output.collect(trackId, new IntWritable(usersSet.size()));
        }
    39. Chaining
        Complex tasks will sometimes need to be broken down into subtasks
        Output of the previous job goes as input to the next job
        job-a | job-b | job-c
        Simply launch the driver of the 2nd job after the 1st, as in the sketch below
        MapReduce
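        A minimal sketch of such chaining with the old JobConf API; FirstPass, SecondPass and the paths are hypothetical. The key point is that JobClient.runJob() blocks until each job finishes:

        // job-a's output directory becomes job-b's input directory
        Path input = new Path("in");
        Path intermediate = new Path("tmp");
        Path output = new Path("out");

        JobConf jobA = new JobConf(FirstPass.class);   // hypothetical driver class
        FileInputFormat.addInputPath(jobA, input);
        FileOutputFormat.setOutputPath(jobA, intermediate);
        JobClient.runJob(jobA);                        // blocks until job-a completes

        JobConf jobB = new JobConf(SecondPass.class);  // hypothetical driver class
        FileInputFormat.addInputPath(jobB, intermediate);  // job-a's output
        FileOutputFormat.setOutputPath(jobB, output);
        JobClient.runJob(jobB);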
    40. Hadoop Streaming
        Hadoop supports other languages via an API called Streaming
        Use UNIX commands as mappers and reducers
        Or use any script that processes a line-oriented data stream from STDIN and outputs to STDOUT
        Python, Perl, etc.
        MapReduce
    41. Hadoop Streaming

        $ hadoop jar hadoop-streaming.jar \
            -input input/myFile.txt \
            -output output.txt \
            -mapper <mapper script> \
            -reducer <reducer script>

        MapReduce
    42. HDFS
        Hadoop Distributed File System
    43. Distributed FileSystem
        A large dataset can and will outgrow the storage capacity of a single physical machine
        Partition it across separate machines – Distributed FileSystems
        Network based - complex
        What happens when a node fails?
        HDFS
    44. HDFS - Hadoop Distributed FileSystem
        Designed for storing very large files running on clusters of commodity hardware
        Highly fault-tolerant (via replication)
        A typical file is gigabytes to terabytes in size
        High throughput
        HDFS
    45. Hadoop's Building Blocks
        Running Hadoop = Running a set of daemons on different servers in your network:
        NameNode
        DataNode
        Secondary NameNode
        JobTracker
        TaskTracker
        HDFS
    46. Topology of a Hadoop Cluster
        (Diagram: a master node running the NameNode, Secondary NameNode and JobTracker; slave nodes each running a DataNode and a TaskTracker)
    47. The NameNode
        HDFS has a master/slave architecture; the NameNode acts as the master
        Single NameNode per HDFS
        Keeps track of:
        How the files are broken into blocks
        Which nodes store those blocks
        The overall health of the filesystem
        Memory and I/O intensive
        HDFS
    48. The DataNode
        Each slave machine will host a DataNode daemon
        Serves read/write requests from clients, and block create/delete requests from the NameNode
        Manages the storage attached to the nodes
        Sends a periodic Heartbeat to the NameNode
        HDFS
    49. Fault Tolerance - Replication
        Failure is the norm rather than the exception
        Detection of faults and quick, automatic recovery
        Each file is stored as a sequence of blocks (default: 64MB each)
        The blocks of a file are replicated for fault tolerance
        Block size and replicas are configurable per file, as in the sketch below
        HDFS
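        A minimal sketch of setting both per file through the HDFS API; the path, block size and replication factor below are arbitrary:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FSDataOutputStream;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class PerFileSettingsDemo {
          public static void main(String[] args) throws Exception {
            FileSystem hdfs = FileSystem.get(new Configuration());
            Path path = new Path("/user/demo/bigfile.dat");  // hypothetical path

            FSDataOutputStream out = hdfs.create(
                path,
                true,                 // overwrite
                4096,                 // I/O buffer size
                (short) 2,            // replication factor for this file only
                128 * 1024 * 1024L);  // block size in bytes (128MB)
            out.write("hello".getBytes());
            out.close();

            // The replication factor can also be changed after the fact
            hdfs.setReplication(path, (short) 3);
          }
        }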
    50. HDFS
    51. Topology of a Hadoop Cluster
        (Diagram repeated: master node with NameNode, Secondary NameNode and JobTracker; slave nodes each with a DataNode and a TaskTracker)
    52. Secondary NameNode
        Assistant daemon that should be on a dedicated node
        Takes snapshots of the HDFS metadata
        Doesn't receive real-time changes
        Helps minimize downtime in case the NameNode crashes
        HDFS
    53. Topology of a Hadoop Cluster
        (Diagram repeated: master node with NameNode, Secondary NameNode and JobTracker; slave nodes each with a DataNode and a TaskTracker)
    54. JobTracker
        One per cluster - on the master node
        Receives job requests submitted by the client
        Schedules and monitors MapReduce jobs on TaskTrackers
        HDFS
    55. TaskTracker
        Runs map and reduce tasks
        Sends progress reports to the JobTracker
        HDFS
    56. Working with HDFS

        Via file commands:
        $ hadoop fs -mkdir /user/chuck
        $ hadoop fs -put hugeFile.txt
        $ hadoop fs -get anotherHugeFile.txt

        Programmatically (HDFS API):
        FileSystem hdfs = FileSystem.get(new Configuration());
        FSDataOutputStream out = hdfs.create(filePath);
        while (...) {
          out.write(buffer, 0, bytesRead);
        }

        HDFS
    57. Tips & Tricks
    58. Tip #1: Hadoop Configuration Types
        Tips & Tricks
    59. Tip #2: JobTracker UI
        Monitoring events in the cluster can prove to be a bit more difficult
        Web interface for our cluster
        Shows a summary of the cluster
        Details about the list of jobs that are currently running, completed and failed
        Tips & Tricks
    60. JobTracker Web UI (screenshot)
        Tips & Tricks
    61. Tip #3: IsolationRunner – Hadoop's Time Machine
        Digging through logs or…
        Running the exact same scenario again, with the same input, on the same node?
        IsolationRunner can rerun the failed task to reproduce the problem
        Attach a debugger
        keep.failed.task.files = true
        Tips & Tricks
    62. Tip #4: Compression
        Output of the map phase (which will be shuffled across the network) can be quite large
        Built-in support for compression
        Different codecs: gzip, bzip2, etc.
        Transparent to the developer

        conf.setCompressMapOutput(true);
        conf.setMapOutputCompressorClass(GzipCodec.class);

        Tips & Tricks
    63. Tip #5: Speculative Execution
        A node can experience a slowdown, thus slowing down the entire job
        If a task is identified as "slow", it will be scheduled to run on another node in parallel
        As soon as one finishes successfully, the others will be killed
        An optimization – not a feature (it can be switched off per job, as in the sketch below)
        Tips & Tricks
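        A minimal sketch of turning speculative execution off for a job with the old JobConf API (reusing the word count driver class from slide 33):

        JobConf job = new JobConf(WordCount.class);
        job.setMapSpeculativeExecution(false);     // no duplicate map attempts
        job.setReduceSpeculativeExecution(false);  // no duplicate reduce attempts

        // Equivalent property form, e.g. in mapred-site.xml:
        // mapred.map.tasks.speculative.execution = false
        // mapred.reduce.tasks.speculative.execution = false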
    64. Tip #6: DataJoin Package
        Input can come from 2 (or more) different sources
        Hadoop has a contrib package called datajoin
        Generic framework for performing reduce-side joins
        MapReduce
    65. Hadoop in the Cloud
        Amazon Web Services
    66. Cloud Computing and AWS
        Cloud computing - Shared resources and information are provided on demand
        Rent a cluster rather than buy it
        The best known infrastructure for cloud computing is Amazon Web Services (AWS)
        Launched in July 2002
        Hadoop in the Cloud
    67. Hadoop in the Cloud – Core Services
        Elastic Compute Cloud (EC2): A large farm of VMs that a user can rent to run a computer application
        Wide range of instance types to choose from (price varies)
        Simple Storage Service (S3) – Online storage for persisting MapReduce data for future use
        Hadoop comes with built-in support for EC2 and S3:
        $ hadoop-ec2 launch-cluster <cluster-name> <num-of-slaves>
    68. EC2 Data Flow
        (Diagram: our data is loaded into HDFS on EC2, where the MapReduce tasks run)
    69. EC2 & S3 Data Flow
        (Diagram: our data is uploaded to S3, then pulled into HDFS on EC2 for the MapReduce tasks)
    70. Hadoop-Related Projects
    71. Pig
        Thinking at the level of Map, Reduce and job chaining instead of simple data flow operations is non-trivial
        Pig simplifies Hadoop programming
        Provides a high-level data processing language: Pig Latin
        Being used by Yahoo! (70% of production jobs), Twitter, LinkedIn, eBay, etc.
        Problem: a Users file and a Pages file. Find the top 5 most visited pages by users aged 18-25
        Hadoop-Related Projects
    72. Pig Latin – Data Flow Language

        Users = LOAD 'users.csv' AS (name, age);
        Fltrd = FILTER Users BY age >= 18 AND age <= 25;
        Pages = LOAD 'pages.csv' AS (user, url);
        Jnd = JOIN Fltrd BY name, Pages BY user;
        Grpd = GROUP Jnd BY url;
        Smmd = FOREACH Grpd GENERATE group, COUNT(Jnd) AS clicks;
        Srtd = ORDER Smmd BY clicks DESC;
        Top5 = LIMIT Srtd 5;
        STORE Top5 INTO 'top5sites.csv';
    73. Hive
        A data warehousing package built on top of Hadoop
        SQL-like queries on large datasets
        Hadoop-Related Projects
    74. HBase
        Hadoop database for random read/write access
        Uses HDFS as the underlying file system
        Supports billions of rows and millions of columns
        Facebook chose HBase as a framework for their new version of "Messages"
        Hadoop-Related Projects
    75. Cloudera
        A distribution of Hadoop that simplifies deployment by providing the most recent stable version of Apache Hadoop with patches and backports
        Hadoop-Related Projects
    76. Mahout
        Machine learning algorithms for Hadoop
        Coming up next.. (-:
        Hadoop-Related Projects
    77. Last Words
        Big Data can and will cause serious scalability problems for your application
        MapReduce for analysis, distributed filesystem for storage
        Hadoop = MapReduce + HDFS, and much more
        AWS integration is easy
        Lots of documentation
        Summary
    78. References
        Hadoop in Action / Chuck Lam
        Hadoop: The Definitive Guide, 2nd Edition / Tom White (O'Reilly)
        Apache Hadoop Documentation
        Hadoop @ Last.FM Presentation
        MapReduce in Simple Terms / Saliya Ekanayake
        Amazon Web Services
    79. Thank you!