
Hadoop For Enterprises


Big Data is a mega trend in IT, and Hadoop is the front runner. This is a short overview of Hadoop, its ecosystem, and reference architectures.

Published in: Technology, Education

  1. Hadoop for Enterprise (rev 7), Rajesh Nadipalli, Mar
  2. Hadoop getting attention
     • Feb 2012: Microsoft and Hortonworks partner to develop an Excel plug-in for Hadoop
     • Jan 2012: Oracle announces Big Data Appliance with Cloudera's Hadoop distribution
     • Dec 2011: EMC releases Unified Analytics Platform, which includes the Greenplum Apache Hadoop distribution
     • Oct 2011: Microsoft plans to add Hadoop support to SQL Server 2012
     • May 2010: IBM introduces the Hadoop-based InfoSphere BigInsights
  3. In this Presentation…
     • Big Data – Big Opportunities
     • Hadoop for Enterprise – Reference Architecture
     • Map Reduce Overview
     • Hive
     • References
  4. BIG DATA – BIG OPPORTUNITIES Rajesh.nadipalli@gmail.c
  5. Big Data – Business Opportunity
     Enterprises today are challenged with:
     • Exponential data growth
     • Complex data needs: structured and unstructured
     • Real-time insights with key indicators
     • Heterogeneous environments: private and public clouds
     • Tighter budgets and the need to do more with less
     Traditional relational databases are not able to scale to meet these challenges.
  6. Data – 4 V’s (Forrester)
  7. Why Hadoop? Hadoop provides…
     • A distributed file system
     • Parallel computing across several nodes
     • Support for structured and unstructured content
     • Fault tolerance and linear scalability
     • Open source under the Apache foundation
     • Increasing support from vendors
     • Key philosophy: "moving compute is cheaper than moving data"
     Forrester regards Hadoop as the nucleus of the next-generation EDW in the cloud.
  8. Some users of Hadoop…
     • Use Hadoop to store copies of internal log and dimension data sources, and use it as a source for reporting/analytics and machine learning. Currently two major clusters: an 1100-machine cluster with 8800 cores and about 12 PB raw storage, and a 300-machine cluster with 2400 cores and about 3 PB raw storage. Each (commodity) node has 8 cores and 12 TB of storage.
     • Hadoop used to analyze search logs and do mining work on a web page database. About 3000 TB handled per week; clusters vary from 10 to 500 nodes.
     • A 532-node cluster (8 × 532 cores, 5.3 PB). Heavy usage of Java MapReduce, Pig, Hive, HBase. Used for search optimization and research.
     • A 5-machine cluster (8 cores/machine, 5 TB/machine storage) plus an existing 19-virtual-machine cluster (2 cores/machine, 30 TB storage). Predominantly Hive and Streaming API based jobs (~20,000 jobs a week): daily batch ETL, log analysis, data mining, machine learning.
  9. HADOOP REFERENCE ARCHITECTURE
  10. Hadoop for Enterprise – Technology Stack (layer diagram)
     • User Experience: ad-hoc queries, notifications/alerts, embedded analytics, search
     • Data Access: Excel, R (Rhipe, RBits), Hive, Pig, Datameer
     • Orchestration: Zookeeper (quorum), Pentaho (scheduling, integrations)
     • Data Processing: MapReduce
     • Data Store: HBase (NoSQL DB), HDFS, Sqoop
     • Data Sources: applications, databases (internal), log files, RSS feeds, cloud, others
  11. Hadoop for BI – Reference Architecture (diagram)
     Data sources (RDBMS, XML, JSON, ERP/enterprise apps, binary, CSV, log files) are imported into the Hadoop Distributed File System (HDFS) as Java Hadoop file objects; an N-node scalable cluster runs MapReduce over them; results feed enterprise apps, dashboards, and Excel.
  12. Oracle's Big Data Solution
     • Oracle sees Hadoop as a good fit for unstructured data sourcing and MapReduce
     • It recommends using the Oracle database for the final analysis stage
     • Oracle Data Integrator can issue Hive queries (ETL)
     • Oracle has a wrapper on top of Sqoop called Oraoop (see references)
  13. DATA PROCESSING
  14. Hadoop MapReduce Overview (diagram)
     Input data from HDFS is split across nodes; each node runs a Map task over its local split, the intermediate results are shuffled by key, and Reduce tasks combine them into the final results.
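The split/map/shuffle/reduce flow in the diagram above can be sketched in plain Python. This is only an in-memory illustration of the programming model (a word count over two "node" splits), not Hadoop code; all function names are mine.

```python
from collections import defaultdict

def map_phase(split):
    # Map: emit a (word, 1) pair for every word in this node's split
    return [(word, 1) for line in split for word in line.split()]

def shuffle(mapped):
    # Shuffle: group the intermediate pairs by key
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine the values collected for each key
    return {key: sum(values) for key, values in groups.items()}

# The input "file" is split across two nodes, as HDFS splits blocks
splits = [["big data big"], ["hadoop data"]]
mapped = [pair for split in splits for pair in map_phase(split)]
result = reduce_phase(shuffle(mapped))
print(result)  # {'big': 2, 'data': 2, 'hadoop': 1}
```

In real Hadoop the splits live on different machines and the map tasks run where the data is stored, which is exactly the "moving compute is cheaper than moving data" philosophy.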
  15. Map Reduce Tips
     • First understand what data you have and how to feed it into the Hadoop distributed computing environment.
     • Use distributed applications to provide analytics over massive data sets while simultaneously surfacing opportunities.
     • Hadoop stores your information for future queries, enhancing the exploratory capabilities (as well as the historical reference) of your data.
  16. DATA STORE
  17. HDFS
     • Distributed file system consisting of:
       ◦ A single "Namenode" that holds the metadata
       ◦ Several "Datanodes"
     • Designed to run on commodity hardware
     • Data is imported as blocks (64 MB)
     • Blocks are replicated (typically 3 copies) to protect against hardware failures
     • Access via Java APIs or the hadoop command line ($hadoop fs…)
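The block and replication numbers above imply some simple sizing arithmetic. A small sketch, assuming the slide's 64 MB default block size and a replication factor of 3 (the helper names are mine, not a Hadoop API):

```python
BLOCK_SIZE_MB = 64   # default HDFS block size from the slide
REPLICATION = 3      # typical replication factor

def hdfs_blocks(file_size_mb, block_size_mb=BLOCK_SIZE_MB):
    # Ceiling division: a 200 MB file needs 4 blocks (3 full + 1 partial)
    return -(-file_size_mb // block_size_mb)

def raw_storage_mb(file_size_mb, replication=REPLICATION):
    # Every block is stored `replication` times across the datanodes
    return file_size_mb * replication

print(hdfs_blocks(200))     # 4
print(raw_storage_mb(200))  # 600
```

This is why the cluster figures on slide 8 quote "raw storage": usable capacity is roughly a third of it at the default replication factor.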
  18. HDFS Architecture: the next Hadoop revision has a failover Namenode called "Avatar"
  19. HBase
     • Distributed, column-oriented database (NoSQL)
     • Failure-tolerant, low latency, HDFS-aware
     • Access via Java APIs or REST APIs
     • It is not a replacement for an RDBMS
     • Recommended to use HBase when:
       ◦ Data is searched by key (or key range)
       ◦ Data does not conform to a schema (for instance, if you have attributes that change by record)
  20. HBase Architecture (diagram)
     Zookeeper, an HBase master (with Avatar failover of the master), and several region servers.
     • Zookeeper maintains the quorum and knows which server is the master
     • The master keeps track of regions and region servers
     • Region servers store table regions
  21. HBase Column Storage
     HBase stores data like tags for a key; for example:

     Row       | Column Family | Column       | Cell
     ----------|---------------|--------------|--------------
     Star Wars | Cast          | Cast:Actor1  | Harrison Ford
     Star Wars | Cast          | Cast:Actor2  | Carrie Fisher
     Star Wars | Reviews       | Review:IMDB  | Review URL
     Star Wars | Reviews       | Review:ET    | Review URL2
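The row / column-family / column / cell layout above can be modeled as nested maps. A toy in-memory sketch using the slide's Star Wars row; this illustrates the data model only and is not the HBase client API:

```python
# row key -> column family -> column qualifier -> cell value
table = {
    "Star Wars": {
        "Cast": {"Actor1": "Harrison Ford", "Actor2": "Carrie Fisher"},
        "Reviews": {"IMDB": "Review URL", "ET": "Review URL2"},
    },
}

def get(row, family, column):
    # Cell lookup by row key, column family, and column qualifier
    return table[row][family][column]

def scan(start, stop):
    # Range scan over sorted row keys, the access pattern HBase is built for
    return [row for row in sorted(table) if start <= row < stop]

print(get("Star Wars", "Cast", "Actor1"))  # Harrison Ford
print(scan("S", "T"))                      # ['Star Wars']
```

Note how new qualifiers (a third actor, another review site) can be added per row without any schema change, which is the "attributes that change by record" case from the previous slide.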
  22. DATA ACCESS
  23. Hive Overview
     • Data warehouse software built on top of Hadoop
     • HiveQL provides a SQL-like interface and runs as a MapReduce job
     • Provides structure to HDFS data, similar to an Oracle external table
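To make the "structure on HDFS data" point concrete, here is a hedged HiveQL sketch of an external table over files already sitting in HDFS; the table name, columns, and path are illustrative, not from the source:

```sql
-- Expose raw tab-delimited log files in HDFS as a queryable table,
-- much like an Oracle external table (names and path are hypothetical)
CREATE EXTERNAL TABLE page_views (
  view_time STRING,
  user_id   STRING,
  page_url  STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/logs/page_views';

-- A HiveQL query; Hive compiles this into a MapReduce job
SELECT page_url, COUNT(*) AS views
FROM page_views
GROUP BY page_url;
```

Dropping an external table removes only the metadata in the Metastore; the underlying HDFS files stay in place.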
  24. Hive Architecture (diagram)
     The Hive CLI supports browse and query operations. HiveQL goes through the Hive parser, consults the Metastore, and is executed as MapReduce jobs (with SerDe handling serialization/deserialization) against HDFS.
  25. Pig Overview
     • Pig is a layer on top of MapReduce for statisticians (and programmers)
     • It provides several standard operators: join, order by, etc.
     • It allows user-defined functions (UDFs) to be included; Java and Python are supported for UDFs
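As a feel for those standard operators, here is a hedged Pig Latin sketch (file paths and field names are hypothetical, not from the source); each step is compiled into MapReduce stages:

```pig
-- Load, group, aggregate, and order access-log records
logs    = LOAD '/data/logs/access.log' AS (user:chararray, bytes:long);
grouped = GROUP logs BY user;
totals  = FOREACH grouped GENERATE group AS user, SUM(logs.bytes) AS total;
sorted  = ORDER totals BY total DESC;
STORE sorted INTO '/data/out/bytes_by_user';
```

The script reads like a data-flow pipeline rather than a single declarative query, which is the main way Pig differs from Hive.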
  26. Datameer Overview
     Key philosophy: business users understand Excel; let them do the grouping, sorting, filtering, and aggregates.
     Key steps:
     • Datameer's source is a MapReduce output.
     • Datameer takes a quick sample of 5000 records.
     • The end user is then presented an Excel-like interface on top of these 5000 records, where they can define filters, formulas, grouping, aggregations, and joins across sheets (even joining Hadoop data with data from a relational database table).
     • Once the end user has defined the desired end result, they can submit a job to run on the complete dataset; Datameer builds the necessary MapReduce jobs and runs them on the complete data set.
     • The user then gets the results and can build charts, tables, etc., all in the browser.
  27. Excel Integration
     Microsoft announced Excel integration with Hadoop (Feb 2012) with Hortonworks.
     Key highlights:
     • Microsoft and Hortonworks will deliver a Hive ODBC driver that will enable integration with Excel
     • Microsoft's PowerPivot in-memory plug-in for Excel will handle larger data sets
     • There is also a plan for a JavaScript framework for Hadoop enabling Ajax-like iterative
  28. INTEGRATION, SCHEDULING
  29. Pentaho Data Integration
     • Pentaho is considered a "strong performer" by Forrester (Feb 2012)
     • It makes building MapReduce jobs easy via its Data Integration IDE
     • It can read/write to HDFS and run MapReduce and Pig scripts
     • The IDE has several standard connectors and transformations, and allows custom Java code
  30. Pentaho Data Integration (screenshots): 1. build the mapper; 2. build the reducer; 3. run MapReduce
  31. Talend – ETL
     • Talend is another ETL development, scheduling, and monitoring tool
     • It supports HDFS, Pig, Hive, and Sqoop
  32. Talend ETL – with Hadoop
     • Can invoke Hadoop calls (generates Hive queries)
     • See "Processing" on the right of the slide
  33. USER EXPERIENCE
  34. User Experience
     This layer of the stack is generally custom development. However, some tools that work with Hadoop are:
     • Tableau for data analysis and visualizations
     • SAS Enterprise Miner
     • IBM BigInsights
  35. REFERENCES
  36. References node-cluster/ appliance-for-ibm.html presentation gates?from=ss_embed _c11-690561.html OraHive.pdf
  38. Hadoop Players
  39. MapR
     • No single point of failure at the Namenode
     • Performance improvements (claimed 5 times faster than HDFS)
     • Snapshots and multi-site copies
     • A separate (extended) MapReduce implementation
     • MapR uses 8 KB blocks instead of HDFS's 64 MB block size
  40. Open Topics – why there is an adoption issue
     • Security – no concept of roles
     • Backup and recovery
     • ACID not supported
  41. Thank You to my viewers
  42. Questions /