Hadoop a Natural Choice for Data Intensive Log Processing


Hadoop architecture, components, and a few design samples of web systems that use them in practice. Enjoy!



  1. Apache Hadoop: A Natural Choice for Data-Intensive Multiformat Log Processing
     Date: 22nd April 2011
     Authored and Compiled By: Hitendra Kumar
  2. Hadoop Framework: A Brief Background
     - A framework that can be installed on a commodity Linux cluster to permit large-scale distributed data analysis.
     - Initial version created in 2004 by Doug Cutting; it has since gained a broad and rapidly growing user community.
     - Hadoop provides the robust, fault-tolerant Hadoop Distributed File System (HDFS), inspired by Google's file system, as well as a Java-based API that allows parallel processing across the nodes of the cluster using the Map-Reduce paradigm, allowing:
       - Distributed processing of large data sets
       - Pluggable user code that runs in a generic framework
     - Code written in other languages, such as Python and C, can be used through Hadoop Streaming, a utility that lets users create and run jobs with any executables as the mapper and/or the reducer.
     - Hadoop comes with Job and Task Trackers that keep track of program execution across the nodes of the cluster.
     - A natural choice for:
       - Data-intensive log processing
       - Web search indexing
       - Ad-hoc queries
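To illustrate the Streaming contract mentioned above, here is a minimal word-count mapper and reducer in Python. This is a local sketch, not a production Streaming job; in a real job, Hadoop would handle the sorting and data movement between the two functions.

```python
from itertools import groupby

def mapper(lines):
    """Streaming-style mapper: read input records, emit (word, 1) pairs.
    In a real Streaming job these would be tab-separated lines on stdout."""
    for line in lines:
        for word in line.strip().split():
            yield word.lower(), 1

def reducer(pairs):
    """Streaming-style reducer: Hadoop hands the reducer its input sorted
    by key, so pairs with the same key are adjacent and can be grouped."""
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

# Local simulation of the pipeline (map, sort, reduce) on a tiny input.
log_lines = ["error disk full", "info ok", "error net down"]
word_counts = dict(reducer(mapper(log_lines)))
```

In an actual Streaming job, the two functions would be shipped as separate executables reading stdin and writing stdout, and the framework would run many copies of each across the cluster.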
  3. Hadoop Framework: Leveraging Hadoop over RDBMS
     - Accelerating nightly batch business processes. Since Hadoop scales linearly, internal or external on-demand cloud farms can dynamically handle shrinking performance windows and take on larger volumes that an RDBMS cannot easily deal with.
     - Storage of extremely high volumes of enterprise data. The Hadoop Distributed File System is a marvel in itself and can safely hold, long term and on commodity hardware, extremely large data sets that otherwise couldn't be stored or handled easily in a relational database.
     - HDFS creates a natural, reliable, and easy-to-use backup environment for almost any amount of data at reasonable prices, considering that it's essentially a high-speed online data storage environment.
     - Improving the scalability of applications. Very low-cost commodity hardware can power Hadoop clusters, since redundancy and fault tolerance are built into the software rather than bought as expensive proprietary enterprise hardware or software.
     - Use of Java for data processing instead of SQL. Hadoop is a Java platform and can be used by just about anyone fluent in the language (other language options are becoming available via APIs).
     - Producing just-in-time feeds for dashboards and business intelligence.
     - Handling urgent, ad hoc requests for data. While expensive enterprise data warehousing software can certainly do this, Hadoop is a strong performer when it comes to quickly asking and answering urgent questions involving extremely large datasets.
     - Turning unstructured data into relational data. While ETL tools and bulk-load applications work well with smaller datasets, few can approach the data volume and performance that Hadoop can.
     - Taking on tasks that require massive parallelism. Hadoop has been known to scale out to thousands of nodes in production environments.
     - Moving existing algorithms, code, frameworks, and components to a highly distributed computing environment.
  4. Hadoop Processing: How It Works
     [Diagram] Enterprise high-volume data in-flow (XML, logs, CSV, SQL, objects, JSONs, binary) lands in the Hadoop Distributed File System (HDFS) on a commodity server cloud (scale-out). A Map-Reduce process (map creation, then reduce) runs over the data, and the results are consumed via RDBMS import, reporting, dashboards, and BI applications.
  5. Hadoop Processing: Map Reduce Algorithm
     - Automatic and efficient parallelization/distribution.
     - Extremely popular for analyzing large datasets in cluster environments. Its success stems from hiding the details of parallelization, fault tolerance, and load balancing behind a simple programming framework.
     - Widely accepted by the community; MapReduce is preferable over a parallel RDBMS for log processing. Examples:
       - Big Web 2.0 companies like Facebook, Yahoo! and Google.
       - Traditional enterprise customers of RDBMSs, such as JP Morgan Chase, VISA, The New York Times and China Mobile, have started investigating and embracing MapReduce.
       - More than 80 companies and organizations are listed as users of Hadoop in data analytic solutions, log event processing, etc.
       - The IT giant IBM has engaged with a number of enterprise customers to prototype novel Hadoop-based solutions over massive amounts of structured and unstructured data for their business analytics applications.
     - China Mobile gathers 5-8 TB of call records per day. Facebook collects almost 6 TB of new log data every day, with 1.7 PB of log data accumulated over time.
     - First, just formatting and loading that much data into a parallel RDBMS in a timely manner is a challenge. Second, the log records do not always follow the same schema; this makes the lack of a rigid schema in MapReduce a feature rather than a shortcoming.
     - Third, all the log records within a time period are typically analyzed together, making simple scans preferable to index scans.
     - Fourth, log processing can be very time consuming, so it is important to keep the analysis job going even in the event of failures.
     - Joining log data with all kinds of reference data in MapReduce has emerged as an important part of analytic operations for enterprise customers as well as Web 2.0 companies.
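The schema-flexibility point above can be sketched in Python: a map-side parser that accepts log records in more than one format and skips unusable ones rather than failing the whole job. Both record formats here are hypothetical, chosen only for illustration.

```python
import json

def parse_level(line):
    """Best-effort parse of one log record. Two hypothetical formats are
    accepted: a JSON object with a "level" field, or a plain line whose
    first token is the level. Unusable records yield None, so one bad
    record is skipped instead of failing the whole analysis job."""
    line = line.strip()
    if not line:
        return None
    if line.startswith("{"):
        try:
            return json.loads(line).get("level")
        except ValueError:
            return None
    return line.split()[0]

def count_levels(lines):
    """Map every record to its level, then reduce to per-level counts."""
    counts = {}
    for line in lines:
        level = parse_level(line)
        if level is not None:
            counts[level] = counts.get(level, 0) + 1
    return counts
```

A rigid-schema loader would reject the whole batch on the first malformed record; here each mapper simply tolerates whatever arrives.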
  6. Hadoop Processing: Map Reduce Algorithm (continued)
  7. Hadoop Processing: Map Reduce Algorithm (continued)
     - There are separate Map and Reduce steps, each done in parallel, each operating on sets of key-value pairs.
     - Program execution is divided into a Map and a Reduce stage, separated by data transfer between nodes in the cluster, giving this workflow: Input -> Map() -> Copy()/Sort() -> Reduce() -> Output. In the first stage, a node executes a Map function on a section of the input data. Map output is a set of records in the form of key-value pairs, stored on that node.
     - The records for any given key, possibly spread across many nodes, are aggregated at the node running the Reducer for that key.
     - This involves data transfer between machines. The Reduce stage is blocked from progressing until all the data from the Map stage has been transferred to the appropriate machine.
     - The Reduce stage produces another set of key-value pairs as final output. This is a simple programming model, restricted to key-value pairs, but a surprising number of tasks and algorithms fit into this framework.
     - Also, while Hadoop is currently used primarily for batch analysis of very large data sets, nothing precludes its use for computationally intensive analyses, e.g., the Mahout machine learning project described below.
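The Input -> Map() -> Copy()/Sort() -> Reduce() -> Output workflow above can be simulated on a single machine. This sketch makes the shuffle step between the two stages explicit; the node splits and the word-count functions are illustrative, not part of any Hadoop API.

```python
from collections import defaultdict

def run_mapreduce(splits, map_fn, reduce_fn):
    """Simulate the Input -> Map() -> Copy()/Sort() -> Reduce() -> Output
    workflow on one machine; each list in `splits` stands for the section
    of input data handled by one node."""
    # Map stage: every node runs the Map function on its own split.
    map_outputs = [list(map_fn(split)) for split in splits]
    # Copy/Sort (shuffle): records for a given key, possibly spread across
    # many nodes, are brought together at the node that reduces that key.
    # Reduce cannot start until all of this transfer has finished.
    grouped = defaultdict(list)
    for node_output in map_outputs:
        for key, value in node_output:
            grouped[key].append(value)
    # Reduce stage: produce the final set of key-value pairs.
    return {key: reduce_fn(key, values) for key, values in sorted(grouped.items())}

# Word count expressed in this model (the canonical example).
def word_map(split):
    for line in split:
        for word in line.split():
            yield word, 1

def word_reduce(word, counts):
    return sum(counts)
```

Note how the word "b", emitted by two different map nodes, is aggregated at a single reducer, exactly as the bullet on cross-node aggregation describes.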
  8. Hadoop Processing: Components
     - HDFS, the Hadoop Distributed File System.
     - HBase, modeled on Google's BigTable database; adds a distributed, fault-tolerant, scalable database built on top of the HDFS file system.
     - Hive, a data-flow language and data warehouse framework on top of Hadoop.
     - Pig, a high-level data-flow language (Pig Latin) and execution framework whose compiler produces sequences of Map/Reduce programs.
     - Zookeeper, a distributed, highly available coordination service. Zookeeper provides primitives such as distributed locks that can be used for building distributed applications.
     - Sqoop, a tool for efficiently moving data between relational databases and HDFS.
  9. Hadoop Processing: HDFS File System
     - There are some drawbacks to HDFS use:
       - HDFS handles continuous updates (write-many workloads) less well than a traditional relational database management system.
       - HDFS cannot be directly mounted onto the existing operating system, so getting data into and out of the HDFS file system can be awkward.
     - In addition to Hadoop itself, there are multiple open source projects built on top of Hadoop. Major projects are described below:
       - Hive
       - Pig
       - Cascading
       - HBase
  10. Hadoop Processing: HIVE Framework and Hive QL
     - Hive is a data warehouse framework built on top of Hadoop.
     - Developed at Facebook; used for ad hoc querying with an SQL-type query language, and also for more complex analysis.
     - Users define tables and columns. Data is loaded into and retrieved through these tables.
     - Hive QL, a SQL-like query language, is used to create summaries, reports, and analyses.
     - Hive queries launch MapReduce jobs.
     - Hive is designed for batch processing, not online transaction processing; unlike HBase (see below), Hive does not offer real-time queries.
  11. Hadoop Processing: Hive, Why?
     - Needed where a multi-petabyte warehouse is required.
     - Files are insufficient data abstractions; tables, schemas, partitions, and indices are needed.
     - SQL is highly popular.
     - Need for an open data format:
       - RDBMSs have a closed data format
       - flexible schema
     - Hive is a Hadoop subproject!
  12. Hadoop Processing: Pig, a High-Level Data-Flow Language
     - Pig is a high-level data-flow language (Pig Latin) and execution framework whose compiler produces sequences of Map/Reduce programs for execution within Hadoop.
     - Pig is designed for batch processing of data.
     - Pig's infrastructure layer consists of a compiler that turns (relatively short) Pig Latin programs into sequences of MapReduce programs.
     - Pig is a Java client-side application that users install locally; nothing is altered on the Hadoop cluster itself. Grunt is the Pig interactive shell.
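To illustrate the dataflow style, here is a short illustrative Pig Latin script (not taken from the deck) together with the equivalent pipeline in plain Python; the field names and input file are hypothetical.

```python
# Pig Latin (illustrative script; Pig's compiler would turn it into
# a sequence of MapReduce jobs):
#   logs   = LOAD 'access.log' AS (url, status);
#   errors = FILTER logs BY status >= 500;
#   byurl  = GROUP errors BY url;
#   counts = FOREACH byurl GENERATE group, COUNT(errors);

def error_counts(records):
    """The same dataflow in plain Python; `records` is an iterable of
    (url, status) tuples."""
    counts = {}
    for url, status in records:                   # LOAD
        if status >= 500:                         # FILTER
            counts[url] = counts.get(url, 0) + 1  # GROUP + COUNT
    return counts
```

The point of Pig is that the four-line script above is all the user writes; the framework generates and schedules the underlying MapReduce programs.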
  13. Hadoop Processing: Mahout, Extensions to Hadoop Programming
     - Hadoop is not just for large-scale data processing.
     - Mahout is an Apache project for building scalable machine learning libraries, with most algorithms built on top of Hadoop.
     - Current algorithm focus areas of Mahout: clustering, classification, data mining (frequent itemsets), and evolutionary programming.
     - Mahout clustering and classifier algorithms have direct relevance in bioinformatics, for example for clustering large gene expression data sets and as classifiers for biomarker identification.
     - For the growing community of Python users in bioinformatics, Pydoop, a Python MapReduce and HDFS API for Hadoop that allows complete MapReduce applications to be written in Python, is available.
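As a flavor of the clustering algorithms Mahout scales out, here is a toy k-means iteration phrased as a map step and a reduce step. This uses 1-D points on a single machine purely for illustration; Mahout's real implementations distribute these steps as MapReduce jobs.

```python
def kmeans_step(points, centroids):
    """One k-means iteration phrased as map + reduce: the map step assigns
    each point to its nearest centroid; the reduce step averages each
    cluster to get new centroids (a centroid that attracts no points is
    simply dropped in this sketch)."""
    # Map: emit (centroid_index, point) pairs.
    clusters = {}
    for p in points:
        idx = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
        clusters.setdefault(idx, []).append(p)
    # Reduce: the new centroid is the mean of the points assigned to it.
    return [sum(pts) / len(pts) for _, pts in sorted(clusters.items())]

def kmeans(points, centroids, iterations=10):
    """Iterate the map/reduce step a fixed number of times."""
    for _ in range(iterations):
        centroids = kmeans_step(points, centroids)
    return centroids
```

Each iteration is naturally parallel: the assignment (map) of every point is independent, and each cluster's mean (reduce) depends only on that cluster's points, which is exactly why this family of algorithms fits Hadoop.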
  14. Hadoop Processing: HBASE, a Distributed, Fault-Tolerant and Scalable DB
     - HBase, modeled on Google's BigTable database, adds a distributed, fault-tolerant, scalable database built on top of the HDFS file system, with random real-time read/write access to data.
     - Each HBase table is stored as a multidimensional sparse map, with rows and columns, each cell having a time stamp. A cell value is uniquely identified by (Table, Row, Column-Family:Column, Timestamp) -> Value. HBase has its own Java client API, and tables in HBase can be used both as an input source and as an output target for MapReduce jobs through TableInput/TableOutputFormat.
     - There is no HBase single point of failure. HBase uses Zookeeper, another Hadoop subproject, to manage partial failures.
     - All table accesses are by the primary key. Secondary indices are possible through additional index tables; programmers need to denormalize and replicate. There is no SQL query language in base HBase. However, a Hive/HBase integration project allows Hive QL statements to access HBase tables for both reading and inserting.
     - A table is made up of regions. Each region is defined by a startKey and endKey, may live on a different node, and is made up of several HDFS files and blocks, each of which is replicated by Hadoop. Columns can be added on the fly to tables, with only the parent column families being fixed in a schema. Each cell is tagged by column family and column name, so programs can always identify what type of data item a given cell contains. In addition to scaling to petabyte-size data sets, HBase makes it easy to integrate disparate data sources into a small number of HBase tables for building a data workspace, with different columns possibly defined (on the fly) for different rows in the same table. Such facility is also important. (See the biological integration discussion below.)
     - In addition to HBase, other scalable random-access databases are now available. HadoopDB is a hybrid of MapReduce and a standard relational DB system. HadoopDB uses PostgreSQL for the DB layer (one PostgreSQL instance per data chunk per node), Hadoop for the communication layer, and an extended version of Hive for the translation layer.
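The (Table, Row, Column-Family:Column, Timestamp) -> Value model above can be sketched as a sparse Python map. This toy class is illustrative only; it is not the HBase client API.

```python
class SparseTable:
    """Toy model of HBase's data model: a sparse map keyed by
    (row, "family:qualifier", timestamp) -> value. Not the HBase API."""

    def __init__(self):
        self.cells = {}

    def put(self, row, column, value, timestamp):
        # Writes never overwrite: each (row, column, timestamp) is a version.
        self.cells[(row, column, timestamp)] = value

    def get(self, row, column):
        """Return the most recent version of a cell, as HBase does by
        default, or None for a cell that was never written (sparseness)."""
        versions = [(ts, v) for (r, c, ts), v in self.cells.items()
                    if r == row and c == column]
        return max(versions)[1] if versions else None
```

Because the map is sparse, two rows of the same table can carry entirely different columns at no storage cost, which is the on-the-fly column flexibility described above.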
  15. Hadoop Processing: HadoopDB Architecture
     - A Database Connector that connects Hadoop with the single-node database systems.
     - A Data Loader that partitions data and manages parallel loading of data into the database systems.
     - A Catalog that tracks the locations of different data chunks, including those replicated across multiple nodes.
     - The SQL-MapReduce-SQL (SMS) planner, which extends Hive to provide a SQL interface to HadoopDB.
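The Data Loader's job, partitioning data across node-local databases, can be sketched as a simple hash partitioner. The function name and chunk layout are assumptions for illustration, not HadoopDB's actual code.

```python
import hashlib

def partition(rows, key_fn, num_nodes):
    """Hash-partition rows into num_nodes chunks, one chunk per node-local
    database instance. A stable digest (md5 here, rather than Python's
    per-process hash()) keeps placement reproducible across runs, so a
    catalog can record and later find each chunk's location."""
    chunks = [[] for _ in range(num_nodes)]
    for row in rows:
        digest = hashlib.md5(str(key_fn(row)).encode()).hexdigest()
        chunks[int(digest, 16) % num_nodes].append(row)
    return chunks
```

Once partitioned this way, each chunk can be bulk-loaded in parallel into its node's PostgreSQL instance, which is what makes node-local SQL processing possible downstream.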
  16. Example System (Web Portal): terabytes of data are populated into centralized storage and processed every weekend!
  17. Web Portal: High-Level Architecture (Using Hadoop, Solr and Lucene for Backend Data Processing)
     - Features:
       - Pluggable portal components (portlets)
       - Functional aggregation and deployment as portlets
       - Exposing portlets as web services
       - Pluggable, interactive, user-facing web services
       - Portlets deployed as independent WAR files
       - Portlet web services can be consumed by other portals
       - Integration UI to provision real-time integration with external systems via the web and other channels
       - Provisioning of admin features based on roles and level of access
     [Diagram] The MyASUP portal sits on a core framework (logging, exceptions, rule engine, analytics, auditing) with security, and comprises an administration module (role management, monitoring control, report configurations), a business intelligence module (reporting, analysis, metrics, trends), application integration services (application integration, portlet integration, rules, data sources), and a real-time integration module (JMS, MQ, JDBC channels) connecting business applications, infrastructure and business services, and external apps (UI adaptation) to the back end.
  18. Web Portal: Deployment Landscape
     [Diagram] HTTP traffic enters through a load balancer to the web portal servers (Apache Web Server with the Tomcat mod_jk plug-in, plus a J2EE application server running the JBoss J2EE application, JBoss Portal, JBoss Portal web service, and JBoss jBPM). The application tier connects over JDBC, through a sharding function, to load-balanced DB servers, with Hadoop processing behind them.
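The sharding function in the landscape above can be sketched as a stable hash from a user key to one of the DB servers. The server names are hypothetical, and this is one possible scheme, not the portal's actual implementation.

```python
import zlib

# Hypothetical names for the two load-balanced DB servers in the diagram.
DB_SERVERS = ["db-server-1", "db-server-2"]

def shard_for(user_id, servers=DB_SERVERS):
    """Map a user key to one DB server with a stable checksum, so the same
    user always lands on the same shard. A simple modulo scheme like this
    remaps most keys when a server is added; consistent hashing reduces
    that churn at the cost of extra complexity."""
    return servers[zlib.crc32(user_id.encode()) % len(servers)]
```

Routing every query for a given user to a fixed shard keeps each user's data on one DB server, which is what lets the JDBC tier scale horizontally.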
  19. Example: AOL Advertising Platform
     Source: http://www.cloudera.com/blog/2011/02/an-emerging-data-management-architectural-pattern-behind-interactive-web-application/
  20. Hadoop Processing: AOL Advertising, Business Case and Solution
     - AOL Advertising runs one of the largest online ad serving operations, serving billions of impressions each month to hundreds of millions of people. AOL faced three major data management challenges in building their ad serving platform:
       - How to analyze billions of user-related events, presented as a mix of structured and unstructured data, to infer demographic, psychographic and behavioral characteristics that are encapsulated into hundreds of millions of "cookie profiles"
       - How to make hundreds of millions of cookie profiles available to their ad targeting platform with sub-millisecond, random-read latency
       - How to keep the user profiles fresh and current
     - The solution was to integrate two data management systems: one optimized for high-throughput data analysis (the "analytics" system), the other for low-latency random access (the "transactional" system). After analyzing alternatives, the final architecture paired Cloudera's Distribution for Apache Hadoop (CDH) with Membase.
  21. Thank You!