What is Big Data?
Big Data is a collection of large datasets that
cannot be processed using traditional computing techniques.
Big Data involves huge volume, high velocity,
and an extensible variety of data.
Classification of Big Data
Big Data is typically of three types:
Structured data: Relational data.
Semi Structured data: XML data.
Unstructured data: Word documents, PDFs, text, media logs.
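To make the three categories concrete, here is a small illustrative sketch (all names and values are hypothetical, chosen only to show how each kind of data is accessed):

```python
import xml.etree.ElementTree as ET

# Illustrative (hypothetical) samples of the three data types.

# Structured: fits a fixed relational schema (rows and columns).
structured_row = {"id": 1, "name": "Alice", "balance": 2500.00}

# Semi-structured: self-describing tags but no rigid schema (e.g. XML).
semi_structured = "<customer><name>Alice</name><city>Pune</city></customer>"

# Unstructured: free text, documents, media -- no inherent schema.
unstructured = "Alice called support on Monday about a billing issue."

print(structured_row["name"])                             # access by column name
print(ET.fromstring(semi_structured).find("name").text)   # access by tag
print(len(unstructured.split()), "words")                 # only generic text operations
```

The point of the sketch: structured data can be queried by schema, semi-structured data by its own tags, and unstructured data only by generic operations such as text search.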
Big Data Challenges
The major challenges associated with big data include
capturing, curating, storing, searching, sharing,
transferring, analyzing, and presenting it.
MapReduce, discussed below, was designed to meet
these challenges: it is a parallel programming model
for writing distributed applications that can efficiently
process multi-terabyte datasets, and it runs on large
clusters of commodity hardware in a reliable,
fault-tolerant manner.
Introduction to Hadoop
Hadoop was developed by Doug Cutting.
Hadoop is an Apache open-source
framework written in Java.
Hadoop allows distributed storage and
processing of large datasets across clusters of
computers.
Hadoop has two major layers, namely:
Processing/Computation layer (MapReduce)
Storage layer (Hadoop Distributed File System, HDFS)
Other modules of the Hadoop framework include:
Hadoop YARN (Yet Another Resource Negotiator)
Hadoop Common
What is MapReduce?
The MapReduce algorithm contains two
important tasks, namely Map and Reduce.
Map takes a set of data and breaks
individual elements into tuples (key/value pairs).
Reduce takes Map's output as input and
combines those data tuples into a
smaller set of tuples.
Under the MapReduce model, the data
processing primitives are called mappers and
reducers.
Hadoop initiates the Map stage by issuing
mapping tasks to the appropriate servers in the
cluster.
The input file or directory, stored in the HDFS is
passed to the mapper function line by line.
The mapper processes the data and creates
several small chunks of data (key/value pairs).
Hadoop monitors for task completion and then
initiates the shuffle stage.
The framework groups data from all mappers
by the keys and splits them among the
appropriate servers for the reduce stage.
The Reducer processes the data coming from
the mappers, producing a new set of output
that is stored in HDFS.
The framework manages all the details of
data-passing and copying between the
nodes in the cluster.
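The map → shuffle → reduce flow described above can be simulated in a few lines of plain Python. This is a single-process sketch of the model (not Hadoop's actual Java API), using the customary word-count example:

```python
from collections import defaultdict

lines = ["deer bear river", "car car river", "deer car bear"]

# Map stage: each input line is broken into (key, value) tuples.
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle stage: the framework groups all values by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce stage: each group is combined into a smaller set of tuples.
reduced = {key: sum(values) for key, values in groups.items()}
print(reduced)   # {'deer': 2, 'bear': 2, 'river': 2, 'car': 3}
```

In real Hadoop the three stages run in parallel on different cluster nodes, and the framework handles the data-passing between them; the logical flow is the same.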
Hadoop Distributed File System (HDFS)
HDFS is based on the Google File System.
It is highly fault-tolerant and is designed to be
deployed on low-cost hardware.
It is suitable for applications having large
datasets.
Files are stored in a redundant fashion to
rescue the system from possible data loss in
case of failure.
Namenode: Acts as the master server that
manages the file system namespace and
regulates clients' access to files.
Datanodes: These nodes manage the data
storage of their system, performing read-write
operations and block operations (creation,
deletion, replication) as regulated by the namenode.
Block: The minimum amount of data that HDFS
can read or write.
Files are divided into one or more blocks,
which are stored in individual datanodes.
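A quick back-of-the-envelope sketch of how a file is divided into blocks, assuming the common 128 MB HDFS default (the helper function is illustrative, not part of any HDFS API):

```python
BLOCK_SIZE = 128 * 1024 * 1024  # 128 MB, the common HDFS default

def split_into_blocks(file_size):
    """Return the sizes (in bytes) of the blocks a file would occupy."""
    full, remainder = divmod(file_size, BLOCK_SIZE)
    return [BLOCK_SIZE] * full + ([remainder] if remainder else [])

# A 300 MB file occupies two full 128 MB blocks plus one 44 MB block.
sizes = split_into_blocks(300 * 1024 * 1024)
print([s // (1024 * 1024) for s in sizes])   # [128, 128, 44]
```

Only the last block of a file can be smaller than the block size; each block is then stored (and replicated) on individual datanodes.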
Hadoop Common: Provides essential services
and basic processes, such as abstraction of the
underlying operating system and its file system.
It assumes that hardware failures are
common and should be automatically
handled by the framework.
It also contains the necessary Java Archive
(JAR) files and scripts required to start Hadoop.
Hadoop YARN: A clustering platform that helps to
manage and allocate resources to
applications and schedule tasks.
ApplicationMaster: Responsible for negotiating
resources with the ResourceManager and for
working with the NodeManagers to execute and
monitor the tasks.
NodeManager: Takes instructions from the
ResourceManager and manages resources on its
own node.
How Does Hadoop Work?
Data is initially divided into directories and
files. Files are divided into uniform-sized blocks
of 128 MB (64 MB in earlier Hadoop versions).
These files are then distributed across various
cluster nodes for further processing,
supervised by HDFS.
Blocks are replicated for handling hardware
failure.
Hadoop then performs the following tasks:
Checking that the code was executed successfully.
Performing the sort that takes place between
the map and reduce stages.
Sending the sorted data to a certain computer.
Writing the debugging logs for each job.
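The replication step above can be sketched as placing each block on several distinct datanodes. This is a toy round-robin placement for illustration only; real HDFS uses a rack-aware placement policy, and the function and node names here are hypothetical:

```python
def place_replicas(num_blocks, nodes, replication=3):
    """Toy placement: assign each block to `replication` distinct nodes."""
    placement = {}
    for b in range(num_blocks):
        placement[b] = [nodes[(b + r) % len(nodes)] for r in range(replication)]
    return placement

nodes = ["node1", "node2", "node3", "node4"]
plan = place_replicas(num_blocks=3, nodes=nodes)
for block, replicas in plan.items():
    print(block, replicas)
```

Because every block's replicas live on three different nodes, any single-node failure leaves at least two copies of each block readable.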
Applications of Hadoop
Black Box Data
Social Media Data
Stock Exchange Data
Search Engine Data
Prominent Users of Hadoop
The Yahoo! Search Webmap is a Hadoop
application that runs on a large Linux cluster.
In 2010, Facebook claimed that they had the
largest Hadoop cluster in the world.
The New York Times used 100 Amazon EC2
instances and a Hadoop application to
process 4 TB of data into 11 million PDFs in a day,
at a computation cost of about $240.
Advantages of Hadoop
Hadoop is open source and compatible with
all platforms, since it is Java-based.
Hadoop does not rely on hardware to
provide fault tolerance and high availability;
the framework itself handles failures at the
application layer.
Servers can be added or removed from the
cluster dynamically without interruption.
Hadoop efficiently utilizes the underlying
parallelism of the CPU cores in distributed
systems.