Apache Hadoop and Hive
Presented By
 Architecture of Hadoop Distributed File System
 Hadoop usage at Facebook
 Ideas for Hadoop-related research
www.kellytechno.com
 Hadoop Developer
 Core contributor since Hadoop’s infancy
 Project Lead for Hadoop Distributed File System
 Facebook (Hadoop, Hive, Scribe)
 Yahoo! (Hadoop in Yahoo Search)
 Veritas (SANPoint Direct, Veritas File System)
 IBM Transarc (Andrew File System)
 UW Computer Science Alumni (Condor Project)
www.kellytechno.com
 Need to process Multi Petabyte Datasets
 Expensive to build reliability in each application.
 Nodes fail every day
– Failure is expected, rather than exceptional.
– The number of nodes in a cluster is not constant.
 Need common infrastructure
– Efficient, reliable, Open Source Apache License
 The above goals are the same as Condor's, but
 Workloads are IO bound and not CPU bound
www.kellytechno.com
 Need a Multi Petabyte Warehouse
 Files are insufficient data abstractions
 Need tables, schemas, partitions, indices
 SQL is highly popular
 Need for an open data format
– RDBMSs have a closed data format
– Need a flexible schema
 Hive is a Hadoop subproject!
www.kellytechno.com
 Dec 2004 – Google GFS paper published
 July 2005 – Nutch uses MapReduce
 Feb 2006 – Becomes Lucene subproject
 Apr 2007 – Yahoo! on 1000-node cluster
 Jan 2008 – An Apache Top Level Project
 Jul 2008 – A 4000-node test cluster
 Sept 2008 – Hive becomes a Hadoop subproject
www.kellytechno.com
 Amazon/A9
 Facebook
 Google
 IBM
 Joost
 Last.fm
 New York Times
 PowerSet
 Veoh
 Yahoo!
www.kellytechno.com
Typically a 2-level architecture
– Nodes are commodity PCs
– 30-40 nodes/rack
– Uplink from rack is 3-4 gigabit
– Rack-internal is 1 gigabit
www.kellytechno.com
 Very Large Distributed File System
– 10K nodes, 100 million files, 10 PB
 Assumes Commodity Hardware
– Files are replicated to handle hardware failure
– Detects failures and recovers from them
 Optimized for Batch Processing
– Data locations exposed so that computations can
move to where data resides
– Provides very high aggregate bandwidth
 User Space, runs on heterogeneous OS
www.kellytechno.com
HDFS Architecture (diagram)
– The Client sends a filename to the NameNode (1), receives the block IDs and DataNode locations (2), and then reads the data directly from the DataNodes (3)
– DataNodes report cluster membership to the NameNode
NameNode: Maps a file to a file-id and a list of DataNodes
DataNode: Maps a block-id to a physical location on disk
SecondaryNameNode: Periodic merge of the Transaction Log
www.kellytechno.com
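A minimal client-side sketch of the read path above, using the standard Hadoop FileSystem API (the file path is hypothetical): the open() call performs the NameNode lookup, and the returned stream fetches the blocks directly from the DataNodes.

```java
// Sketch: read a file from HDFS as a client would.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsReadExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();            // picks up the cluster settings (core-site.xml)
    FileSystem fs = FileSystem.get(conf);                // HDFS client handle
    FSDataInputStream in = fs.open(new Path("/user/demo/input.txt"));  // NameNode lookup happens here
    try {
      IOUtils.copyBytes(in, System.out, conf, false);    // stream block data from the DataNodes to stdout
    } finally {
      IOUtils.closeStream(in);
    }
  }
}
```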
 Single Namespace for entire cluster
 Data Coherency
– Write-once-read-many access model
– Client can only append to existing files
 Files are broken up into blocks
– Typically 128 MB block size
– Each block replicated on multiple DataNodes
 Intelligent Client
– Client can find location of blocks
– Client accesses data directly from DataNode
www.kellytechno.com
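To illustrate "Client can find location of blocks", a short sketch using the standard getFileBlockLocations() call (file path hypothetical): it prints, for each block, the offset, length, and the hosts holding a replica.

```java
// Sketch: ask HDFS where a file's blocks live.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/user/demo/input.txt");
    FileStatus status = fs.getFileStatus(file);
    BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation b : blocks) {
      System.out.println("offset=" + b.getOffset()
          + " length=" + b.getLength()
          + " hosts=" + String.join(",", b.getHosts()));   // DataNodes holding this block
    }
  }
}
```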
 Meta-data in Memory
– The entire metadata is in main memory
– No demand paging of meta-data
 Types of Metadata
– List of files
– List of Blocks for each file
– List of DataNodes for each block
– File attributes, e.g. creation time, replication factor
 A Transaction Log
– Records file creations, file deletions, etc.
www.kellytechno.com
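A toy model of this metadata, purely illustrative and not the actual NameNode code: two in-memory maps (files to blocks, blocks to DataNodes) plus an append-only transaction log that records each mutation, while the metadata itself stays in memory.

```java
// Hypothetical, simplified namespace model for illustration only.
import java.io.FileWriter;
import java.io.IOException;
import java.util.*;

public class ToyNamespace {
  private final Map<String, List<Long>> fileToBlocks = new HashMap<>();  // list of blocks per file
  private final Map<Long, List<String>> blockToNodes = new HashMap<>();  // DataNodes per block
  private final FileWriter editLog;                                      // transaction log on disk

  public ToyNamespace(String editLogPath) throws IOException {
    this.editLog = new FileWriter(editLogPath, true);                    // append-only
  }

  public synchronized void createFile(String name) throws IOException {
    fileToBlocks.put(name, new ArrayList<>());
    editLog.write("CREATE " + name + "\n");                              // record the mutation
    editLog.flush();
  }

  public synchronized void addBlock(String name, long blockId, List<String> nodes) throws IOException {
    fileToBlocks.get(name).add(blockId);
    blockToNodes.put(blockId, nodes);
    editLog.write("ADD_BLOCK " + name + " " + blockId + "\n");
    editLog.flush();
  }
}
```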
 A Block Server
– Stores data in the local file system (e.g. ext3)
– Stores meta-data of a block (e.g. CRC)
– Serves data and meta-data to Clients
 Block Report
– Periodically sends a report of all existing blocks to the
NameNode
 Facilitates Pipelining of Data
– Forwards data to other specified DataNodes
www.kellytechno.com
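A toy sketch of the block-report idea, not real DataNode code: blocks stored as local files, with a method that lists every block the node currently holds, as the periodic block report would (the blk_&lt;id&gt; naming is assumed for this sketch).

```java
// Hypothetical block-server sketch for illustration only.
import java.io.File;
import java.util.*;

public class ToyDataNode {
  private final File blockDir;

  public ToyDataNode(String dir) { this.blockDir = new File(dir); }

  // Scan the local directory and report every stored block ID.
  public List<Long> buildBlockReport() {
    List<Long> report = new ArrayList<>();
    File[] files = blockDir.listFiles();
    if (files == null) return report;
    for (File f : files) {
      String name = f.getName();
      if (name.startsWith("blk_") && !name.contains(".")) {   // naming convention assumed for the sketch
        report.add(Long.parseLong(name.substring(4)));
      }
    }
    return report;
  }
}
```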
 Current Strategy
– One replica on the local node
– Second replica on a remote rack
– Third replica on the same remote rack
– Additional replicas are randomly placed
 Clients read from nearest replica
 Would like to make this policy pluggable
www.kellytechno.com
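A hypothetical sketch of the placement strategy above, with node and rack types invented for illustration: local node first, then a node on a different rack, then another node on that same remote rack, then random picks for any extra replicas.

```java
// Illustrative replica placement; not the actual HDFS policy code.
import java.util.*;

public class ToyReplicaPlacer {
  static class Node { final String host, rack; Node(String h, String r) { host = h; rack = r; } }

  static List<Node> choose(Node writer, List<Node> cluster, int replicas, Random rnd) {
    List<Node> chosen = new ArrayList<>();
    chosen.add(writer);                                              // replica 1: local node
    Node remote = cluster.stream()
        .filter(n -> !n.rack.equals(writer.rack)).findFirst().orElse(null);
    if (remote != null) {
      chosen.add(remote);                                            // replica 2: remote rack
      cluster.stream()
          .filter(n -> n.rack.equals(remote.rack) && n != remote)
          .findFirst().ifPresent(chosen::add);                       // replica 3: same remote rack
    }
    List<Node> shuffled = new ArrayList<>(cluster);                  // extras: random nodes not yet chosen
    Collections.shuffle(shuffled, rnd);
    for (Node candidate : shuffled) {
      if (chosen.size() >= replicas) break;
      if (!chosen.contains(candidate)) chosen.add(candidate);
    }
    return chosen;
  }
}
```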
 Use Checksums to validate data
– Use CRC32
 File Creation
– Client computes a checksum per 512 bytes
– DataNode stores the checksum
 File access
– Client retrieves the data and checksum from
DataNode
– If validation fails, the Client tries other replicas
www.kellytechno.com
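A small sketch of the checksumming scheme described above, computing a CRC32 per 512-byte chunk with the standard java.util.zip.CRC32 class (buffer and chunk size illustrative):

```java
// One CRC32 checksum per 512-byte chunk of a buffer.
import java.util.ArrayList;
import java.util.List;
import java.util.zip.CRC32;

public class ChunkChecksums {
  static final int CHUNK = 512;

  static List<Long> checksum(byte[] data) {
    List<Long> sums = new ArrayList<>();
    CRC32 crc = new CRC32();
    for (int off = 0; off < data.length; off += CHUNK) {
      int len = Math.min(CHUNK, data.length - off);
      crc.reset();
      crc.update(data, off, len);
      sums.add(crc.getValue());        // checksum for this 512-byte chunk
    }
    return sums;
  }
}
```

On read, the client recomputes the same per-chunk CRC32 values and compares them with the stored ones; a mismatch triggers a retry against another replica.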
 The NameNode is a single point of failure
 Transaction Log stored in multiple directories
– A directory on the local file system
– A directory on a remote file system (NFS/CIFS)
 Need to develop a real HA solution
www.kellytechno.com
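The multiple log directories above are given as a comma-separated list. A minimal sketch of the property and value format (the property name is the one used in Hadoop releases of that era, later renamed dfs.namenode.name.dir; the paths are hypothetical) – in a real deployment this lives in hdfs-site.xml rather than being set in code:

```java
// Sketch only: shows the comma-separated name-directory setting.
import org.apache.hadoop.conf.Configuration;

public class NameDirConfig {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // one local directory plus one on an NFS mount, for redundancy
    conf.set("dfs.name.dir", "/local/hadoop/dfs/name,/mnt/nfs/hadoop/dfs/name");
    System.out.println(conf.get("dfs.name.dir"));
  }
}
```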
 Client retrieves a list of DataNodes on which to place
replicas of a block
 Client writes block to the first DataNode
 The first DataNode forwards the data to the next
DataNode in the Pipeline
 When all replicas are written, the Client moves on to
write the next block in the file
www.kellytechno.com
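A minimal client-side sketch of the write path (path hypothetical): the stream returned by create() buffers the data into blocks, and for each block the client library obtains target DataNodes from the NameNode and pipelines the bytes through them as described above.

```java
// Sketch: write a file to HDFS; the pipeline is handled inside the client library.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FSDataOutputStream out = fs.create(new Path("/user/demo/output.txt"));
    try {
      out.writeBytes("hello, hdfs pipeline\n");   // buffered and pipelined to the chosen DataNodes
    } finally {
      out.close();                                // flushes the last block and waits for the pipeline
    }
  }
}
```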
 Goal: % disk full on DataNodes should be similar
 Usually run when new DataNodes are added
 The cluster stays online while the Rebalancer is active
 Rebalancer is throttled to avoid network congestion
 Command line tool
www.kellytechno.com
 The Map-Reduce programming model
– Framework for distributed processing of large data
sets
– Pluggable user code runs in generic framework
 Common design pattern in data processing
cat * | grep | sort | uniq -c | cat > file
input | map | shuffle | reduce | output
 Natural for:
– Log processing
– Web search indexing
– Ad-hoc queries
www.kellytechno.com
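The canonical word-count job is a compact sketch of this map/shuffle/reduce pattern (it mirrors the grep/sort/uniq pipeline above). This version uses the newer org.apache.hadoop.mapreduce API; input and output paths come from the command line. Packaged into a jar, it would typically be launched with something like hadoop jar wordcount.jar WordCount &lt;input dir&gt; &lt;output dir&gt;.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
  public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();
    public void map(LongWritable key, Text value, Context ctx) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        ctx.write(word, ONE);                 // map: emit (word, 1)
      }
    }
  }

  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterable<IntWritable> values, Context ctx) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) sum += v.get();
      ctx.write(key, new IntWritable(sum));   // reduce: total count per word
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```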
 Production cluster
 4800 cores, 600 machines, 16 GB per machine – April 2009
 8000 cores, 1000 machines, 32 GB per machine – July 2009
 4 SATA disks of 1 TB each per machine
 2 level network hierarchy, 40 machines per rack
 Total cluster size is 2 PB, projected to be 12 PB in Q3 2009
 Test cluster
• 800 cores, 16 GB each
www.kellytechno.com
Data flow: Web Servers → Scribe Servers → Network Storage → Hadoop Cluster → Oracle RAC / MySQL
www.kellytechno.com
 Statistics :
 15 TB uncompressed data ingested per day
 55 TB of compressed data scanned per day
 3200+ jobs on production cluster per day
 80M compute minutes per day
 Barrier to entry is reduced:
 80+ engineers have run jobs on the Hadoop platform
 Analysts (non-engineers) starting to use Hadoop through Hive
www.kellytechno.com
Ideas for Collaboration
www.kellytechno.com
 Run Condor jobs on Hadoop File System
 Create HDFS using local disks on Condor nodes
 Use HDFS API to find data location
 Place computation close to data location
 Support map-reduce data abstraction model
www.kellytechno.com
 Power Management
 Major operating expense
 Power down CPUs when idle
 Block placement based on access pattern
 Move cold data to disks that need less power
 Condor Green
www.kellytechno.com
 Design Quantitative Benchmarks
 Measure Hadoop’s fault tolerance
 Measure Hive’s schema flexibility
 Compare above benchmark results
 with RDBMS
 with other grid computing engines
www.kellytechno.com
 Current state of affairs
 FIFO and Fair Share scheduler
 Checkpointing and parallelism tied together
 Topics for Research
 Cycle scavenging scheduler
 Separate checkpointing and parallelism
 Use resource matchmaking to support
heterogeneous Hadoop compute clusters
 Scheduler and API for MPI workload
www.kellytechno.com
 Machines and software are commodity
 Networking components are not
 High-end costly switches needed
 Hadoop assumes hierarchical topology
 Design new topology based on commodity hardware
www.kellytechno.com
 Hadoop Log Analysis
 Failure prediction and root cause analysis
 Hadoop Data Rebalancing
 Based on access patterns and load
 Best use of flash memory?
www.kellytechno.com
 Lots of synergy between Hadoop and Condor
 Let’s get the best of both worlds
www.kellytechno.com
 HDFS Design:
 http://hadoop.apache.org/core/docs/current/hdfs_design.html
 Hadoop API:
 http://hadoop.apache.org/core/docs/current/api/
 Hive:
 http://hadoop.apache.org/hive/
www.kellytechno.com
Thank you
Presented
By
www.kellytechno.com
Editor's Notes

1. This is the architecture of our backend data warehousing system. This system provides important information on the usage of our website, including but not limited to the number of page views of each page, the number of active users in each country, etc. We generate 3 TB of compressed log data every day. All these data are stored and processed by the Hadoop cluster, which consists of over 600 machines. The summary of the log data is then copied to Oracle and MySQL databases, to make sure it is easy for people to access.