What leads to Big Data…
Solution? Hadoop
Hadoop is an open source framework for writing and
running distributed applications that process large
amounts of data on clusters of commodity hardware
using a simple programming model.
Google – 2004
Apache and Yahoo - 2009
Project creator: Doug Cutting, who named "Hadoop" after his son's yellow toy elephant.
It is written with large clusters of computers in mind and is built around
the following assumptions:
Hardware will fail.
Processing will be run in batches.
Applications that run on HDFS have large data sets. A typical file
in HDFS is gigabytes to terabytes in size.
It should provide high aggregate data bandwidth and scale to
hundreds of nodes in a single cluster. It should support tens of
millions of files in a single instance.
Applications need a write-once-read-many access model.
Moving Computation is Cheaper than Moving Data.
Hadoop Core Components
o Hadoop Distributed File System (HDFS) – storage
o Execution engine (MapReduce) – processing
Hadoop - Master/Slave
Hadoop is designed as a master-slave shared-nothing architecture
Master node (single node): the NameNode
o Master of the system
o Maintains and manages the blocks which are present on the slave nodes (DataNodes)
Many slave nodes: the DataNodes
o Deployed on each machine
o Provide the actual storage
o Responsible for serving read and write requests from clients
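To make the read/write path concrete, here is a minimal client-side sketch using Hadoop's Java FileSystem API (the file path is illustrative, and a configured Hadoop client is assumed): the client obtains metadata and block locations from the NameNode, while the actual bytes flow to and from the DataNodes.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientSketch {
    public static void main(String[] args) throws Exception {
        // Configuration picks up core-site.xml / hdfs-site.xml from the classpath;
        // fs.defaultFS points the client at the NameNode (master).
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/user/demo/colors.txt");  // illustrative path

        // Write: the NameNode picks target DataNodes; the client streams the block data to them.
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.writeBytes("red\nblue\nred\n");
        }

        // Read: the NameNode returns block locations; the bytes are read from DataNodes.
        try (FSDataInputStream in = fs.open(file)) {
            byte[] buf = new byte[(int) fs.getFileStatus(file).getLen()];
            in.readFully(buf);
            System.out.println(new String(buf, "UTF-8"));
        }
    }
}
```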
Main Properties of HDFS
Large: An HDFS instance may consist of thousands of
server machines, each storing part of the file system's data
Replication: Each data block is replicated many times
(default is 3; see the sketch after this list)
Failure: Failure is the norm rather than the exception
Fault Tolerance: Detection of faults and quick, automatic
recovery from them is a core architectural goal of HDFS
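The replication property above can be observed per file. A minimal sketch, assuming a configured Hadoop client and a hypothetical file path passed as the first argument:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Inspect how a file is split into blocks and where the replicas live.
public class BlockReport {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path(args[0]);                 // e.g. /user/demo/colors.txt (illustrative)
        FileStatus status = fs.getFileStatus(file);
        System.out.println("replication factor: " + status.getReplication());
        for (BlockLocation block : fs.getFileBlockLocations(status, 0, status.getLen())) {
            // With dfs.replication = 3, each block is typically stored on 3 DataNodes.
            System.out.println(block.getOffset() + "-" + (block.getOffset() + block.getLength())
                    + " hosts: " + String.join(",", block.getHosts()));
        }
    }
}
```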
MapReduce
A programming model developed at Google
Sort/merge based distributed computing
The underlying system takes care of the partitioning of
the input data, scheduling the program’s execution
across several machines, handling machine failures,
and managing required inter-machine communication.
(This is the key to Hadoop's success.)
MapReduce Components
JobTracker is the master node (runs with the NameNode)
o Receives the user's job
o Decides on how many tasks will run (number of mappers)
o Decides on where to run each mapper (concept of locality)
TaskTracker is the slave node (runs on each DataNode)
o Receives the task from the JobTracker
o Runs the task until completion (either a map or a reduce task)
o Always in communication with the JobTracker, reporting progress
How does MapReduce work?
The runtime partitions the input and provides it to
different Map instances:
Map (key, value) → (key', value')
The run time collects the (key’, value’) pairs and
distributes them to several Reduce functions so that
each Reduce function gets the pairs with the same key’.
Each Reduce produces a single (or zero) file output.
Map and Reduce are user written functions
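Before looking at the Hadoop API, this contract can be illustrated in isolation. The toy Java program below (a word count; all names are hypothetical and no Hadoop code is involved) performs the three steps in one process: map every record to (key', value') pairs, group the pairs by key' the way the shuffle/sort would, then call reduce once per distinct key'.

```java
import java.util.*;
import java.util.AbstractMap.SimpleEntry;

// Toy, single-process illustration of the MapReduce contract (not the Hadoop API).
public class MapReduceToy {
    // map: (key, value) -> list of (key', value'); here (line offset, line) -> (word, 1)
    static List<Map.Entry<String, Integer>> map(long offset, String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (String w : line.split("\\s+")) out.add(new SimpleEntry<>(w, 1));
        return out;
    }

    // reduce: (key', [value', ...]) -> output; here it sums the 1s
    static Map.Entry<String, Integer> reduce(String key, List<Integer> values) {
        int sum = 0;
        for (int v : values) sum += v;
        return new SimpleEntry<>(key, sum);
    }

    public static void main(String[] args) {
        List<String> input = Arrays.asList("red blue red", "blue green");

        // "Map phase": run map over every (offset, line) pair.
        List<Map.Entry<String, Integer>> mapped = new ArrayList<>();
        for (int i = 0; i < input.size(); i++) mapped.addAll(map(i, input.get(i)));

        // "Shuffle & sort": group intermediate pairs by key'.
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (Map.Entry<String, Integer> e : mapped)
            grouped.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(e.getValue());

        // "Reduce phase": one reduce call per distinct key'.
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet())
            System.out.println(reduce(e.getKey(), e.getValue()));
    }
}
```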
MapReduce Phases
o Deciding on what will be the key and what will be the value
Example: Color Count
Job: count the number of occurrences of each color in a data set
o Map produces (color, 1) for every record
o Shuffle & sort groups the pairs by key (the color), producing (color, [1, 1, 1, 1, 1, 1, ...])
o Reduce sums each list, producing (color, count), e.g. (color, 100)
o The output file has 3 parts, one per reducer, probably on 3 different nodes
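A sketch of this Color Count job against the Hadoop MapReduce Java API (following the canonical WordCount pattern) is shown below. It assumes one color name per input line; the class names and the choice of 3 reducers, which yields the 3 output parts, are illustrative.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ColorCount {

    // Map: (line offset, line) -> (color, 1); assumes one color name per line.
    public static class ColorMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text color = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            color.set(value.toString().trim());
            context.write(color, ONE);
        }
    }

    // Reduce: (color, [1, 1, ...]) -> (color, count).
    public static class ColorReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "color count");
        job.setJarByClass(ColorCount.class);
        job.setMapperClass(ColorMapper.class);
        job.setCombinerClass(ColorReducer.class);   // local pre-aggregation on the map side
        job.setReducerClass(ColorReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setNumReduceTasks(3);                   // 3 reducers -> 3 output parts
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

It would be packaged into a jar and submitted with something like `hadoop jar colorcount.jar ColorCount <input dir> <output dir>`, with the JobTracker scheduling the map and reduce tasks across the cluster.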
Hadoop vs. Other Systems
Computing model
o Distributed databases: notion of transactions; a transaction is the unit of work; ACID properties and concurrency control
o Hadoop: notion of jobs; a job is the unit of work; no concurrency control
Data model
o Distributed databases: structured data with a known schema; read/write mode
o Hadoop: any data in any format; read-only mode
Cost model
o Distributed databases: expensive servers
o Hadoop: cheap commodity machines
Fault tolerance
o Distributed databases: failures are rare; recovery mechanisms
o Hadoop: failures are common over thousands of machines; simple yet efficient fault tolerance
Key characteristics
o Distributed databases: efficiency, optimizations, fine-tuning
o Hadoop: scalability, flexibility, fault tolerance
Strengths
o A reliable shared storage: the Distributed File System (HDFS).
o A simple analysis system: MapReduce.
o Tasks are independent.
o Easy to handle partial failures: entire nodes can fail and restart.
Limitations
o Lack of central data.
o A single master node.
o Managing job flow isn't trivial when intermediate data should be kept.