Understanding Big Data Technology
Foundations
Module 3
Syllabus
• The MapReduce Framework
• Techniques to Optimize MapReduce Jobs
• Uses of MapReduce
• Role of HBase in Big Data Processing
Understanding Big Data Technology
Foundations
• The advent of Local Area Networks (LANs) and other
networking technologies shifted the focus of the IT industry
toward solving bigger and bigger problems by combining the
computing and storage capacities of systems on the network
• This chapter focuses on explaining the basics and exploring
the relevance and role of various functions that are used in
the MapReduce framework.
Big Data
• The MapReduce Framework: At the start of the 21st century,
the engineers working at Google concluded that, because of
the increasing number of Internet users, the available
resources and solutions would be inadequate to fulfill future
requirements. In preparation for this issue, Google's
engineers established that distributing tasks across
economical (commodity) resources, interconnected as a cluster
over the network, could serve as a solution. Task distribution
alone, however, was not a complete answer: the tasks also
need to be distributed in parallel.
A parallel distribution of tasks
• Helps in automatic expansion and contraction of processes
• Enables continuation of processes without being affected by network
failures or individual system failures
• Empowers developers to reuse, in multiple usage scenarios, the services that other
developers have created
• A generic implementation to the entire concept was, therefore, provided
with the development of the MapReduce programming model
Exploring the Features of MapReduce
• MapReduce keeps all the processing operations separate for parallel execution. Problems that are
extremely large in size are divided into subtasks, which are chunks of data separated into manageable
blocks.
• The principal features of MapReduce include the following:
Synchronization
Co-location of Code/Data (Data Locality)
Handling of Errors/Faults
Scale-Out Architecture
Working of
MapReduce
1. Take a large dataset or set of records.
2. Perform iteration over the data.
3. Extract some interesting patterns to prepare an
output list by using the map function.
4. Arrange the output list properly to enable
optimization for further processing.
5. Compute a set of results by using the reduce
function.
6. Provide the final output.
The MapReduce programming model also works on an
algorithm to execute the map and reduce operations.
This algorithm can be depicted as follows
Working of the MapReduce approach
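The figure itself is not reproduced here. As a minimal, self-contained illustration of the six steps above, the following Java sketch (an assumption for this text, not part of any MapReduce API) simulates the map, shuffle, and reduce phases in memory for a small word-count job:

import java.util.*;

public class MapReduceSketch {
    public static void main(String[] args) {
        // Step 1: take a set of records (illustrative input).
        List<String> records = Arrays.asList("my product broke", "my product");

        // Steps 2-3: iterate over the data and map each record to <word, 1> pairs.
        List<Map.Entry<String, Integer>> mapOutput = new ArrayList<>();
        for (String record : records) {
            for (String word : record.split(" ")) {
                mapOutput.add(new AbstractMap.SimpleEntry<>(word, 1));
            }
        }

        // Step 4: arrange (sort and group) the output list by key.
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (Map.Entry<String, Integer> kvp : mapOutput) {
            grouped.computeIfAbsent(kvp.getKey(), k -> new ArrayList<>()).add(kvp.getValue());
        }

        // Step 5: reduce each key's values to a single result.
        Map<String, Integer> result = new TreeMap<>();
        grouped.forEach((word, counts) ->
                result.put(word, counts.stream().mapToInt(Integer::intValue).sum()));

        // Step 6: provide the final output.
        System.out.println(result); // {broke=1, my=2, product=2}
    }
}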
Working of the MapReduce
approach
• Is a combination of a master and three slaves
• The master monitors the entire job assigned to the MapReduce algorithm
and is given the name of JobTracker
• Slaves, on the other hand, are responsible for keeping track of individual
tasks and are called TaskTrackers
• First, the given job is divided into a number of tasks by the master, i.e., the
JobTracker, which then distributes these tasks to the slaves
• It is the responsibility of the JobTracker to further keep an eye on the
processing activities and the re-execution of the failed tasks
• Slaves coordinate with the master by executing the tasks they are given by
the master.
• The JobTracker receives jobs from client applications that need to process large
amounts of information. These jobs are assigned, in the form of individual tasks (after
a job is divided into smaller parts), to various TaskTrackers
• The task distribution operation is carried out by the JobTracker. The data,
after being processed by the TaskTrackers, is transmitted to the reduce function
so that the final, integrated output, which is an aggregate of the data
processed by the map function, can be provided.
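To make the client's side of this flow concrete, the following is a sketch of a job driver written against the Hadoop MapReduce Java API; in classic Hadoop 1 (MRv1), a job submitted this way is managed by the JobTracker and executed by the TaskTrackers. The class names WordCountDriver, WordCountMapper, and WordCountReducer are assumptions for this example (the mapper and reducer are sketched in the following sections):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The client submits the job; the master (JobTracker in MRv1) splits it into tasks.
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);   // assumed mapper, sketched below
        job.setReducerClass(WordCountReducer.class); // assumed reducer, sketched below
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // The master monitors progress and re-executes failed tasks until completion.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}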
Operations performed in the MapReduce
model
• The input is provided from large data files in the form of
key-value pairs (KVPs), which are the standard input format
in the Hadoop MapReduce programming model
• The input data is divided into small pieces, and master
and slave nodes are created. The master node usually
executes on the machine where the data is present, and
slaves are made to work remotely on the data.
• The map operation is performed simultaneously on all the
data pieces, which are read by the map function. The
map function extracts the relevant data and generates the
KVP for it
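As a concrete illustration, a word-count mapper written against the Hadoop Java API might look as follows; this is a sketch that assumes plain-text input, for which the framework supplies the byte offset of each line as the key and the line itself as the value:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // Input KVP: key = byte offset in the file, value = one line of text.
        for (String token : line.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE); // emit the extracted KVP <word, 1>
            }
        }
    }
}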
The input/output operations of the
map function are shown in Figure
•The output list is generated from the map operation,
and the master instructs the reduce function about
further actions that it needs to take
•The list of KVPs obtained from the map function is
passed on to the reduce function. The reduce function
sorts the data on the basis of the KVP list
•The process of collecting the output list from
the map function and then sorting it by key is
known as shuffling. The reduce function is then called
for every unique key, as required,
to produce the final output, which is sent to a file
The input/output operations of the reduce function are
shown in Figure
The output is finally generated by the reduce function, and the control is handed
over to the user program by the master
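A matching reducer sketch, again using the Hadoop Java API, aggregates the shuffled values for each key; summing the values is the usual word-count aggregate and is an illustrative choice here:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        // After shuffling, all values for one key arrive together, sorted by key.
        int sum = 0;
        for (IntWritable count : counts) {
            sum += count.get();
        }
        context.write(word, new IntWritable(sum)); // e.g., <product, 25>
    }
}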
The entire process of data analysis conducted in the
MapReduce programming model:
• Let’s now try to understand the
working of the MapReduce
programming model with the help of
a few examples
Example 1
• Consider that there is a data analysis project in which 20 terabytes of data needs to be
analyzed on 20 different MapReduce server nodes
• At first, the data distribution process simply copies data to all the nodes before starting
the MapReduce process.
• You need to keep in mind that determining the file format rests with the
user; unlike relational databases, MapReduce specifies no standard file format.
• Next, the scheduler comes into the picture as it receives two programs from the
programmer: the map program and the reduce program. The data is made
available from the disk to the map function, which runs its logic on the data. In our
example, all 20 nodes perform the operation independently.
•The map function passes the results to the reduce function for summarizing and
providing the final output in an aggregate form
Example 1
• The ancient Rome census can help to understand the mapping process of the map and
reduce functions. In the Rome census, volunteers were sent to cover various places
that are situated near the kingdom of Rome. Volunteers had to count the number of
people living in the area assigned to them and send the report of the population to the
organization. The census chief added the count of people recorded from all the areas
to reach an aggregate whole. The map function is analogous to the volunteers
counting, in parallel, the number of people living in each area, and the reduce function
is analogous to the chief combining the entire result.
Example 2
• A data analytics professional parses out every term available in the chat text by creating a map step. He
creates a map function to find every word of the chat. The count for a word is incremented by one each
time the word is parsed from the paragraph.
• The map function provides the output in the form of a list that involves a number of KVPs, for example,
″<my, 1>,″ ″<product, 1>,″ ″<broke, 1>.″
• Once the operations of all map functions are complete, the information is provided to the scheduler by
the map function itself.
• After completing the map operation, the reduce function starts performing the reduce operation. Keeping
in mind the current target of finding the number of times a word appears in the text, shuffling is
performed next
• This process involves distributing the map output through hashing so that the same keywords
are mapped to the respective node of the reduce function. Assuming the simple situation of processing an
English text, for example, we require 26 nodes, each handling the words starting with one letter of the alphabet
• In this case, words starting with A will be handled by one node, words starting with B by another
node, and so on; a sketch of such a partitioner follows this list. Thus, the number of words can easily be counted by the reduce step.
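Hadoop does not ship a first-letter partitioner, so the following is a hypothetical sketch of how this hashing could be expressed as a custom Partitioner, assuming lowercase English words and exactly 26 reduce tasks:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Routes each word to a reducer by its first letter, so one reducer
// handles words starting with A, another handles B, and so on.
public class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text word, IntWritable count, int numPartitions) {
        char first = Character.toLowerCase(word.toString().charAt(0));
        int bucket = (first >= 'a' && first <= 'z') ? first - 'a' : 0; // non-letters fall back to bucket 0
        return bucket % numPartitions;
    }
}

It would be registered on the job with job.setPartitionerClass(FirstLetterPartitioner.class) and job.setNumReduceTasks(26).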
The detailed MapReduce process used in this
example:
•The final output of the process will include ″<my, 10>,″ ″<product, 25>,″ ″<broke, 20>,″ where the first
value of each angular bracket (<>) is the analyzed word, and the second value is the count of the word,
i.e., the number of times the word appears within the entire text
• The result set will include 26 files. Each of these files is produced from an individual node and contains
the count of words in sorted order. Keep in mind that the combining operation will also
require a process to handle all 26 files obtained as a result of the MapReduce operations. After we
obtain the count of words, we can feed the results into any kind of analysis.
Exploring Map and Reduce Functions
•The MapReduce programming model facilitates faster data analysis for which the data is taken in the
form of KVPs.
•MapReduce functions for Hadoop can be written in many languages; however, programmers
generally prefer to write them in Java. The Pipes library allows C++ source code to be used for map
and reduce code
•The generic Application Programming Interface (API) called Streaming allows programs written in
most languages to be used as map and reduce functions in Hadoop
•Consider an example of a program that counts the number of Indian cities having a population above
one lakh. You must note that the following is not programming code but a plain-English
representation of the solution to the problem.
•One way to achieve this task is to determine the input data and generate a list in the following
manner:
mylist = ("all cities in India")
Exploring Map and Reduce Functions
• Use the map function to create a function, howManyPeople, which selects the cities
having a population of more than one lakh:
map howManyPeople (mylist) = [howManyPeople "city 1";howManyPeople"city 2";
howManyPeople "city 3"; howManyPeople "city 4";...]
•Now, generate a new output list of all the cities having a population of more than one lakh:
(no, city 1; yes, city 2; no, city 3; yes, city 4;?, city nnn)
•The preceding function gets executed without making any modifications to the original list.
Moreover, you can notice that each element of the output list gets mapped to a
corresponding element of the input list, having a “yes” or “no” attached.
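A minimal runnable Java translation of this plain-English map step might look as follows; the city names and population figures are made-up values for illustration:

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class HowManyPeople {
    public static void main(String[] args) {
        // Illustrative input: city -> population (made-up values).
        Map<String, Integer> cities = Map.of(
                "city 1", 40_000, "city 2", 250_000,
                "city 3", 90_000, "city 4", 700_000);

        // The map step: each input element maps to one output element
        // with "yes" or "no" attached; the original data is not modified.
        List<String> output = cities.entrySet().stream()
                .map(e -> (e.getValue() > 100_000 ? "yes, " : "no, ") + e.getKey())
                .sorted()
                .collect(Collectors.toList());

        System.out.println(output); // [no, city 1, no, city 3, yes, city 2, yes, city 4]
    }
}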
Example 3: city is the key and temperature is the value
• Assume we have five files of weather data, in which each record is a (city, temperature) pair.
• Out of all the data we have collected, we want to find the maximum temperature for each
city across all of the data files (note that each file might have the same city represented
multiple times). Using the MapReduce framework, we can break this down into five map
tasks, where each mapper works on one of the five files, and the mapper task goes through
the data and returns the maximum temperature for each city. For example, the results
produced from one mapper task for one of the files would look like this:
(Toronto, 20) (Whitby, 25) (New York, 22) (Rome, 33)
Example 3 (continued)
Let’s assume the other four mapper tasks produced the following intermediate results:
(Toronto, 18) (Whitby, 27) (New York, 32) (Rome, 37)
(Toronto, 32) (Whitby, 20) (New York, 33) (Rome, 38)
(Toronto, 22) (Whitby, 19) (New York, 20) (Rome, 31)
(Toronto, 31) (Whitby, 22) (New York, 19) (Rome, 30)
All five of these output streams would be fed into the reduce tasks, which combine the
input results and output a single value for each city, producing the final result set as
follows:
(Toronto, 32) (Whitby, 27) (New York, 33) (Rome, 38)
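The reduce logic of this example can be simulated locally in Java as follows, feeding in the twenty intermediate KVPs listed above and keeping the maximum temperature per city; this is a local sketch, not the distributed implementation:

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class MaxTemperature {
    record Reading(String city, int temp) {}

    public static void main(String[] args) {
        // The intermediate results produced by the five mapper tasks above.
        List<Reading> mapperOutputs = List.of(
                new Reading("Toronto", 20), new Reading("Whitby", 25),
                new Reading("New York", 22), new Reading("Rome", 33),
                new Reading("Toronto", 18), new Reading("Whitby", 27),
                new Reading("New York", 32), new Reading("Rome", 37),
                new Reading("Toronto", 32), new Reading("Whitby", 20),
                new Reading("New York", 33), new Reading("Rome", 38),
                new Reading("Toronto", 22), new Reading("Whitby", 19),
                new Reading("New York", 20), new Reading("Rome", 31),
                new Reading("Toronto", 31), new Reading("Whitby", 22),
                new Reading("New York", 19), new Reading("Rome", 30));

        // The reduce step: combine all readings per city into a single maximum.
        Map<String, Integer> maxByCity = mapperOutputs.stream()
                .collect(Collectors.toMap(Reading::city, Reading::temp, Math::max));

        System.out.println(maxByCity); // {Toronto=32, Whitby=27, New York=33, Rome=38} (order may vary)
    }
}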
Techniques to Optimize MapReduce Jobs
•MapReduce optimization techniques fall into the following categories:
Hardware or network topology
Synchronization
File system
•You need to keep the following points in mind while designing a file system that supports
a MapReduce implementation:
Keep it Warm
The Bigger the Better
The Long View
Right Degree of Security
The fields that benefit from the use of MapReduce are:
1. Web Page Visits—Suppose a researcher wants to know the number of times the website of a particular
newspaper was accessed. The map task would be to read the logs of the Web page requests and make a
complete list. The map outputs may look similar to the following:
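The figure with the sample map output is not reproduced here; an illustrative output consistent with the reduce result below (the URL names are assumptions) would be:
<newspaperURL, 1>
<newspaperURL, 1>
<someOtherURL, 1>
<newspaperURL, 1>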
The reduce function would collect the results for the newspaper's URL and add them.
The output of the preceding step is:
<newspaperURL, 3>
The fields that benefit from the use of
MapReduce are:
2. Web Page Visitor Paths—Consider a situation in which an advocacy group wishes to
know how visitors get to know about its website. To determine this, the page containing a
link is designated the “source,” and the Web page to which the link transfers the visitor is
designated the “target.” The map function scans the Web links and returns results of the
type <target, source>. The reduce function scans this list to determine the results
in which the “target” is the group's Web page. The reduce function output, which is the final
output, will be of the form <advocacy group page, list(source)>.
The fields that benefit from the use of
MapReduce are:
3. Word Frequency—A researcher wishes to read articles about floods, but he
does not want those articles in which floods are discussed as a minor topic.
Therefore, he decides that an article dealing substantively with earthquakes and
floods should contain the phrase “tectonic plate” more than 10 times. The map
function will count the number of times the specified phrase occurs in each
document and provide the result as <document, frequency>. The reduce
function will select only the results that have a frequency of more
than 10.
The fields that benefit from the use of MapReduce are:
4. Word Count—Suppose a researcher wishes to determine the number of times celebrities talk about the present
bestseller. The data to be analyzed comprises written blogs, posts, and tweets of the celebrities. The map function
will make a list of all the words. This list will be in the KVP format, in which the key is each word, and the value is
1 for every appearance of that word. The output from the map function would be obtained somewhat as follows:
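The figure is not reproduced here; an illustrative map output (the words are assumptions) would be:
″<bestseller, 1>,″ ″<my, 1>,″ ″<bestseller, 1>,″ ″<bestseller, 1>″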
The preceding output will be converted in the following form by the reduce function:
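Again, the figure is not reproduced here; for the illustrative map output above, the reduce function would produce:
″<bestseller, 3>,″ ″<my, 1>″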
HBase
• The MapReduce programming model can utilize other components of the Hadoop ecosystem to
perform its operations better. One such component is HBase
• Role of HBase in Big Data Processing—HBase is an open-source, non-relational, distributed,
column-oriented database developed as part of the Apache Software Foundation's Hadoop project.
• While MapReduce enhances Big Data processing, HBase takes care of its storage and access
requirements. Characteristics of HBase—HBase helps programmers store large quantities
of data in such a way that it can be accessed easily and quickly, as and when required
• It stores data in a compressed format and thus occupies less memory space. HBase has low
latency and is, therefore, beneficial for lookups and scans of large amounts of data.
HBase saves cell data in descending timestamp order; therefore, a read
will always find the most recent values first. Columns in HBase belong to a column family.
• The column family name is used as a prefix for identifying the members of the family; for
instance, Cars:WagonR and Cars:i10 are members of the Cars column family. A key is
associated with each row in an HBase table
HBase
• The structure of the key is very flexible. It can be a calculated value, a string, or
any other data structure. The key is used for controlling the retrieval of data to the
cells in the row. All these characteristics help build the schema of the HBase data
structure before the storage of any data. Moreover, tables can be modified and new
column families can be added once the database is up and running.
• The columns can be added very easily and are added row-by-row, providing great
flexibility, performance, and scalability. If you have a large volume and
variety of data, you can use a columnar database. HBase is suitable in conditions
where the data changes gradually rather than rapidly. In addition, HBase can save data
that has a slow-changing rate and ensure its availability for Hadoop tasks.
• HBase is a framework written in Java for supporting applications that are used to
process Big Data. HBase is a non-relational Hadoop database that provides fault
tolerance for huge amounts of data.
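To make these characteristics concrete, the following is a sketch of a basic write and read with the HBase Java client API; it assumes a table named cars with a column family named Cars already exists, and the row key and values are illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("cars"))) {
            // Write one cell: row key "row1", column family "Cars", qualifier "WagonR".
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("Cars"), Bytes.toBytes("WagonR"), Bytes.toBytes("available"));
            table.put(put);

            // Read it back; HBase returns the most recent version (highest timestamp) first.
            Result result = table.get(new Get(Bytes.toBytes("row1")));
            byte[] value = result.getValue(Bytes.toBytes("Cars"), Bytes.toBytes("WagonR"));
            System.out.println(Bytes.toString(value)); // available
        }
    }
}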
HBase Installation
• Before starting the installation of HBase, you need to install the Java Software
Development Kit (SDK). The installation of HBase requires the following
operations to be performed in a stepwise manner. In the terminal, install
the dependencies:
$ sudo apt-get install ntp libopts25
Figure 5.7 shows the installation of the dependencies for HBase:
HBase Installation
• The HBase file can be customized per the user's needs by exporting JAVA_HOME
and HBASE_OPTS in hbase-env.sh. To customize the HBase file, type the following
code:
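The figure with the exact file contents is not reproduced here; a typical hbase-env.sh customization might look like the following, where the JAVA_HOME path is an assumption that depends on the local Java installation:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"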
HBase Installation
• ZooKeeper, the coordination service of the Hadoop ecosystem, manages the state
that HBase uses now and plans to use in the future. Therefore, to have HBase manage
ZooKeeper and ensure that it is enabled, use the following command:
export HBASE_MANAGES_ZK=true
• Figure 5.9 shows ZooKeeper enabled in HBase:
HBase Installation
• Site-specific customizations are done in hbase-site.xml (HBASE_HOME/conf). Figure
5.10 shows customized hbase-site.xml (HBASE_HOME/conf):
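The figure is not reproduced here; a minimal standalone-mode hbase-site.xml might look like the following, where both directory paths are assumptions:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///home/hduser/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hduser/zookeeper</value>
  </property>
</configuration>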
HBase Installation
• To enable connection with a remote HBase server, edit /etc/hosts. Figure
5.11 shows the edited /etc/hosts:
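The figure is not reproduced here; an edited /etc/hosts might look like the following, where the host name and IP address are assumptions:
127.0.0.1      localhost
192.168.1.10   hbase-master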
HBase Installation
• Start HBase by using the following command: $ bin/start-hbase.sh
Figure 5.12 shows the initiation process of the HBase daemons:
HBase Installation
• Check all the HBase daemons by using the following command: $ jps
Figure 5.13 shows the output of the $ jps command:
HBase Installation
• Paste the following link in your browser to access the Web interface, which lists
the tables created, along with their definitions: http://localhost:60010
Figure 5.14 shows the Web interface for HBase:
HBase Installation
• Check the region server for HBase by pasting the following link in your Web browser:
http://localhost:60030
• Source: DT Editorial Services, Big Data, Black Book: Covers Hadoop 2, MapReduce, Hive,
YARN, Pig, R and Data Visualization, Wiley India, p. 138.
HBase Installation
• Start the HBase shell by using the following command: $ bin/hbase shell
Figure 5.16 shows the $ bin/hbase shell running in a terminal:
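The figure is not reproduced here; a typical first session in the shell might look like the following, where the table name, column family, and values are illustrative:
create 'cars', 'Cars'
put 'cars', 'row1', 'Cars:WagonR', 'available'
get 'cars', 'row1'
scan 'cars'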
Thank You