Understanding Big Data Technology
Foundations
Module 3
Syllabus
• The MapReduce Framework
• Techniques to Optimize MapReduce Jobs
• Uses of MapReduce
• Role of HBase in Big Data Processing
Understanding Big Data Technology
Foundations
• The advent of Local Area Networks (LANs) and other networking technologies shifted the focus of the IT industry toward solving bigger and bigger problems by combining the computing and storage capacities of systems on the network
• This chapter focuses on explaining the basics and exploring the relevance and role of the various functions used in the MapReduce framework.
Big Data
• The MapReduce Framework: At the start of the 21st century, a team of engineers at Google concluded that, because of the increasing number of Internet users, the available resources and solutions would be inadequate to fulfill future requirements. In preparation for this issue, Google's engineers established that distributing tasks across economical resources, interconnected as a cluster over the network, could serve as a solution. Task distribution alone, though, could not be a complete answer to the issue, which also requires the tasks to be distributed in parallel.
A parallel distribution of tasks
• Helps in the automatic expansion and contraction of processes
• Enables processes to continue without being affected by network failures or individual system failures
• Empowers developers to use services that other developers have created in the context of multiple usage scenarios
• A generic implementation of the entire concept was, therefore, provided with the development of the MapReduce programming model
Exploring the Features of MapReduce
• MapReduce keeps all processing operations separate for parallel execution. Problems that are extremely large in size are divided into subtasks, which operate on chunks of data separated into manageable blocks.
• The principal features of MapReduce include the following:
Synchronization
Co-location of Code/Data (Data Locality)
Handling of Errors/Faults
Scale-Out Architecture
Working of
MapReduce
1. Take a large dataset or set of records.
2. Perform iteration over the data.
3. Extract some interesting patterns to prepare an
output list by using the map function.
4. Arrange the output list properly to enable
optimization for further processing.
5. Compute a set of results by using the reduce
function.
6. Provide the final output.
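A minimal Python sketch of these six steps on a toy in-memory dataset (the records and names are invented for illustration; real MapReduce runs this across a cluster):

from itertools import groupby

# 1. Take a (toy) set of records
records = ["apple", "banana", "apple", "cherry", "banana", "apple"]

# 2-3. Iterate over the data; the map function emits (key, value) pairs
mapped = [(word, 1) for word in records]

# 4. Arrange the output list (sort by key) to enable optimized further processing
mapped.sort(key=lambda kv: kv[0])

# 5. Compute results with the reduce function: aggregate the values of each key
results = {key: sum(v for _, v in group)
           for key, group in groupby(mapped, key=lambda kv: kv[0])}

# 6. Provide the final output
print(results)  # {'apple': 3, 'banana': 2, 'cherry': 1}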
The MapReduce programming model also works on an
algorithm to execute the map and reduce operations.
This algorithm can be depicted as follows
Working of the MapReduce approach
Working of the MapReduce
approach
• The approach shown is a combination of one master and several slaves (three in the figure)
• The master monitors the entire job assigned to the MapReduce algorithm and is given the name JobTracker
• Slaves, on the other hand, are responsible for keeping track of individual tasks and are called TaskTrackers
• First, the given job is divided into a number of tasks by the master, i.e., the JobTracker, which then distributes these tasks among the slaves
• It is the responsibility of the JobTracker to further keep an eye on the processing activities and to re-execute failed tasks
• Slaves coordinate with the master by executing the tasks they are given by the master.
• The JobTracker receives jobs from client applications for processing large information. These jobs are assigned as individual tasks (after a job is divided into smaller parts) to various TaskTrackers
• The task distribution operation is handled by the JobTracker. The data, after being processed by the TaskTrackers, is transmitted to the reduce function so that the final, integrated output, which is an aggregate of the data processed by the map function, can be provided.
Operations performed in the MapReduce
model
• The input is provided from large data files in the form of key-value pairs (KVPs), which is the standard input format in the Hadoop MapReduce programming model
• The input data is divided into small pieces, and master and slave nodes are created. The master node usually executes on the machine where the data is present, and the slaves work remotely on the data.
• The map operation is performed simultaneously on all the data pieces, which are read by the map function. The map function extracts the relevant data and generates KVPs for it
The input/output operations of the map function are shown in the figure
•The output list is generated from the map operation, and the master instructs the reduce function about the further actions it needs to take
•The list of KVPs obtained from the map function is passed on to the reduce function, which sorts the data on the basis of the KVP list
•The process of collecting the map output list from the map function and then sorting it by key is known as shuffling. The reduce function is then invoked for every unique key, as required, to produce the final output, which is sent to a file
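A small Python sketch of this shuffle step, assuming the map output is a flat list of KVPs (the sample words echo the chat example later in this module):

from collections import defaultdict

map_output = [("my", 1), ("product", 1), ("broke", 1), ("my", 1)]

# Shuffling: collect the map output and group the values under each unique key
shuffled = defaultdict(list)
for key, value in map_output:
    shuffled[key].append(value)

# The reduce function is then called once per unique key, in sorted order
for key in sorted(shuffled):
    print(key, sum(shuffled[key]))  # broke 1 / my 2 / product 1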
The input/output operations of the reduce function are shown in the figure
The output is finally generated by the reduce function, and the control is handed
over to the user program by the master
The entire process of data analysis conducted in the
MapReduce programming model:
• Let’s now try to understand the
working of the MapReduce
programming model with the help of
a few examples
Example 1
• Consider that there is a data analysis project in which 20 terabytes of data needs to be
analyzed on 20 different MapReduce server nodes
• At first, the data distribution process simply copies data to all the nodes before starting
the MapReduce process.
• You need to keep in mind that the file format is determined by the user; MapReduce specifies no standard file format the way relational databases do.
• Next, the scheduler comes into the picture as it receives two programs from the programmer: the map program and the reduce program. The data is made available from the disk to the map function, which runs its logic on the data. In our example, all 20 nodes perform the operation independently.
•The map function passes the results to the reduce function, which summarizes them and provides the final output in an aggregate form
Example 1
• The ancient Roman census helps in understanding the working of the map and reduce functions. In the Roman census, volunteers were sent to cover various places situated near the kingdom of Rome. Each volunteer counted the number of people living in the area assigned to him and sent the report of the population to the organization. The census chief added the counts of people recorded from all the areas to reach an aggregate whole. The map function corresponds to counting the people living in each area in parallel, and the reduce function corresponds to combining the results.
Example 2
• A data analytics professional parses out every term available in the chat text by creating a map step. He creates a map function to find every word of the chat. The count is incremented by one each time a word is parsed from the paragraph.
• The map function provides the output in the form of a list that involves a number of KVPs, for example,
″<my, 1>,″ ″<product, 1>,″ ″<broke, 1>.″
• Once the operations of all map functions are complete, the information is provided to the scheduler by the map function itself.
• After completing the map operation, the reduce function starts performing the reduce operation. Keeping in view the current target of finding the number of times a word appears in the text, shuffling is performed next
• This process involves distributing the map output through hashing so that identical keywords reach the same node of the reduce function. Assuming a simple situation of processing an English text, for example, we require 26 nodes, each handling the words starting with one letter of the alphabet (see the sketch below)
• In this case, words starting with A will be handled by one node, words starting with B by another node, and so on. Thus, the number of words can easily be counted by the reduce step.
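A toy Python sketch of this letter-based partitioning (the node numbering and sample KVPs are invented; a real Hadoop partitioner is a Java class):

import string

def partition(word, num_nodes=26):
    # Route a word to a reduce node by its first letter: 'a' -> 0, 'b' -> 1, ...
    # (assumes every word starts with an English letter)
    return string.ascii_lowercase.index(word[0].lower()) % num_nodes

map_output = [("my", 1), ("product", 1), ("my", 1), ("broke", 1)]

nodes = {}
for word, count in map_output:
    nodes.setdefault(partition(word), []).append((word, count))

# Each reduce node now counts only the words routed to it
for node_id, pairs in sorted(nodes.items()):
    print(node_id, pairs)  # 1 [('broke', 1)] / 12 [('my', 1), ('my', 1)] / 15 [('product', 1)]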
The detailed MapReduce process used in this
example:
•The final output of the process will include ″<my, 10>,″ ″<product, 25>,″ ″<broke, 20>,″ where the first
value of each angular bracket (<>) is the analyzed word, and the second value is the count of the word,
i.e., the number of times the word appears within the entire text
• The result set will include 26 files. Each of these files is produced by an individual node and contains the word counts in sorted order. You need to keep in mind that the combining operation will also require a process to handle all 26 files obtained as a result of the MapReduce operations. After we obtain the word counts, we can feed the results into any kind of analysis.
Exploring Map and Reduce Functions
•The MapReduce programming model facilitates faster data analysis, for which the data is taken in the form of KVPs.
•MapReduce functions for Hadoop can be written in many languages; however, programmers generally prefer to write them in Java. The Pipes library allows C++ source code to be utilized for map and reduce code
•The generic Application Programming Interface (API) called Streaming allows programs written in most languages to be utilized as map and reduce functions in Hadoop
•Consider an example of a program that counts the number of Indian cities having a population above one lakh. Note that the following is not programming code but a plain-English representation of the solution to the problem.
•One way to achieve this task is to determine the input data and generate a list in the following manner:
mylist = ("all cities in India")
Exploring Map and Reduce Functions
• Use the map function to create a function, howManyPeople, which selects the cities having a population of more than one lakh:
map howManyPeople (mylist) = [howManyPeople "city 1"; howManyPeople "city 2"; howManyPeople "city 3"; howManyPeople "city 4"; ...]
•Now, generate a new output list of all the cities having a population of more than one lakh:
(no, city 1; yes, city 2; no, city 3; yes, city 4; ?, city nnn)
•The preceding function executes without making any modifications to the original list. Moreover, you can notice that each element of the output list maps to a corresponding element of the input list, with a “yes” or “no” attached.
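The same logic in runnable Python (the city names echo the pseudocode above, and the population figures are invented):

# Hypothetical input: (city, population) pairs; one lakh = 100,000
cities = [("city 1", 40_000), ("city 2", 250_000),
          ("city 3", 90_000), ("city 4", 130_000)]

def how_many_people(record):
    # Tag each city with "yes"/"no" depending on the one-lakh threshold
    city, population = record
    return ("yes" if population > 100_000 else "no", city)

output_list = list(map(how_many_people, cities))
print(output_list)
# [('no', 'city 1'), ('yes', 'city 2'), ('no', 'city 3'), ('yes', 'city 4')]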
Example: city is the key and temperature is the value
• Suppose we have collected weather data in five files, where each record holds a city name (the key) and a recorded temperature (the value). Out of all the data we have collected, we want to find the maximum temperature for each city across all of the data files (note that each file might have the same city represented multiple times). Using the MapReduce framework, we can break this down into five map tasks, where each mapper works on one of the five files; the mapper task goes through the data and returns the maximum temperature for each city. For example, the results produced from one mapper task would look like this:
(Toronto, 20) (Whitby, 25) (New York, 22) (Rome, 33)
Example: city is the key and temperature is the value
Let's assume the other four mapper tasks produced the following intermediate results:
(Toronto, 18) (Whitby, 27) (New York, 32) (Rome, 37)
(Toronto, 32) (Whitby, 20) (New York, 33) (Rome, 38)
(Toronto, 22) (Whitby, 19) (New York, 20) (Rome, 31)
(Toronto, 31) (Whitby, 22) (New York, 19) (Rome, 30)
All five of these output streams would be fed into the reduce tasks, which combine the
input results and output a single value for each city, producing the final result set as
follows:
(Toronto, 32) (Whitby, 27) (New York, 33) (Rome, 38)
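A compact Python sketch of this reduce step, feeding in the five intermediate result lists shown above:

# All (city, temperature) pairs emitted by the five mappers
intermediate = [
    ("Toronto", 20), ("Whitby", 25), ("New York", 22), ("Rome", 33),
    ("Toronto", 18), ("Whitby", 27), ("New York", 32), ("Rome", 37),
    ("Toronto", 32), ("Whitby", 20), ("New York", 33), ("Rome", 38),
    ("Toronto", 22), ("Whitby", 19), ("New York", 20), ("Rome", 31),
    ("Toronto", 31), ("Whitby", 22), ("New York", 19), ("Rome", 30),
]

# Reduce: keep the maximum temperature seen for each city
max_temps = {}
for city, temp in intermediate:
    max_temps[city] = max(temp, max_temps.get(city, temp))

print(max_temps)  # {'Toronto': 32, 'Whitby': 27, 'New York': 33, 'Rome': 38}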
Techniques to Optimize MapReduce Jobs
•MapReduce optimization techniques fall into the following categories:
Hardware or network topology
 Synchronization
 File system
•You need to keep the following points in mind while designing a file system that supports MapReduce implementation:
Keep it Warm
The Bigger the Better
The Long View
Right Degree of Security
The fields benefitted by the use of MapReduce are:
1. Web Page Visits—Suppose a researcher wants to know the number of times the website of a particular newspaper was accessed. The map task would read the logs of the Web page requests and make a complete list. The map outputs may look similar to the following illustrative sample:
<newspaperURL, 1> <newspaperURL, 1> <anotherURL, 1> <newspaperURL, 1>
The reduce function would find the results for the newspaper URL and add them. The output of the preceding step is:
<newspaperURL, 3>
The fields benefitted by the use of
MapReduce are:
2. Web Page Visitor Paths—Consider a situation in which an advocacy group wishes to know how visitors get to its website. To determine this, the page that carries a link is designated the “source,” and the Web page to which the link transfers the visitor is known as the “target.” The map function scans the Web links and returns results of the type <target, source>. The reduce function scans this list to determine the results where the “target” is the group's Web page. The reduce function output, which is the final output, will be of the form <advocacy group page, list (source)>.
The fields benefitted by the use of
MapReduce are:
3. Word Frequency—A researcher wishes to read articles about earthquakes, but he does not want those articles in which earthquakes are discussed as a minor topic. Therefore, he decides that an article basically dealing with earthquakes should have the phrase “tectonic plate” in it more than 10 times. The map function will count the number of times the specified phrase occurs in each document and provide the result as <document, frequency>. The reduce function will select only the results that have a frequency of more than 10.
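A toy Python sketch of this filtering reduce step (document names and frequencies are invented):

# Map output: (document, frequency of "tectonic plate") pairs
map_output = [("doc1", 14), ("doc2", 3), ("doc3", 11), ("doc4", 7)]

# Reduce: keep only the documents whose frequency exceeds 10
relevant = [(doc, freq) for doc, freq in map_output if freq > 10]
print(relevant)  # [('doc1', 14), ('doc3', 11)]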
The fields benefitted by the use of MapReduce are:
4. Word Count—Suppose a researcher wishes to determine the number of times celebrities talk about the present bestseller. The data to be analyzed comprises the written blogs, posts, and tweets of the celebrities. The map function will make a list of all the words. This list will be in the KVP format, in which the key is each word, and the value is 1 for every appearance of that word. The output from the map function would look somewhat as follows (an illustrative sample):
<bestseller, 1> <loved, 1> <the, 1> <bestseller, 1>
The preceding output will be converted into the following form by the reduce function:
<bestseller, 2> <loved, 1> <the, 1>
HBase
• The MapReduce programming model can utilize other components of the Hadoop ecosystem to perform its operations better. One such component is HBase
• Role of HBase in Big Data Processing: HBase is an open-source, non-relational, distributed, column-oriented database developed as a part of the Apache Software Foundation's Hadoop project.
• While MapReduce enhances Big Data processing, HBase takes care of its storage and access requirements. Characteristics of HBase: HBase helps programmers to store large quantities of data in such a way that it can be accessed easily and quickly, as and when required
• It stores data in a compressed format and thus occupies less memory space. HBase has low latency and is, therefore, beneficial for lookups and scanning of large amounts of data. HBase saves data in cells in descending order of timestamp; therefore, a read will always find the most recent values first. Columns in HBase belong to a column family.
• The column family name is utilized as a prefix for determining the members of the family; for instance, Cars:WagonR and Cars:i10 are members of the Cars column family. A key is associated with each row in an HBase table
HBase
• The structure of the key is very flexible. It can be a calculated value, a string, or any other data structure. The key is used for controlling the retrieval of data from the cells in a row. All these characteristics help build the schema of the HBase data structure before any data is stored. Moreover, tables can be modified and new column families can be added once the database is up and running.
• Columns can be added very easily and are added row by row, providing great flexibility, performance, and scalability. In case you have a large volume and variety of data, you can use a columnar database. HBase is suitable in conditions where the data changes gradually as well as rapidly. In addition, HBase can save data that has a slow-changing rate and ensure its availability for Hadoop tasks.
• HBase is a framework written in Java for supporting applications that process Big Data. HBase is a non-relational Hadoop database that provides fault tolerance for huge amounts of data.
HBase Installation
• Before starting the installation of HBase, you need to install the Java Software Development Kit (SDK). The installation of HBase requires the following operations to be performed stepwise. In the terminal window, install the dependencies: $ sudo apt-get install ntp libopts25. Figure 5.7 shows the installation of the dependencies for HBase:
HBase Installation
• The HBase installation can be customized per the user's needs by exporting JAVA_HOME and HBASE_OPTS in hbase-env.sh. To customize the file, type code along the following lines:
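The exact values depend on the system; as a sketch, with the JDK path being an assumption:

# hbase-env.sh: point HBase at the installed JDK and set JVM options
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"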
HBase Installation
• ZooKeeper, the coordination and file management engine of the Hadoop ecosystem, manages the files that HBase plans to use currently and in the future. Therefore, to have HBase manage ZooKeeper and ensure that it is enabled, use the following command:
export HBASE_MANAGES_ZK=true
• Figure 5.9 shows ZooKeeper enabled in HBase:
HBase Installation
• Site-specific customizations are done in hbase-site.xml (HBASE_HOME/conf). Figure 5.10 shows a customized hbase-site.xml (HBASE_HOME/conf):
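A minimal customization might look like the following sketch; hbase.rootdir is a standard property, while the directory path is an assumption that varies by setup:

<configuration>
  <!-- Where HBase stores its data in standalone mode (path is illustrative) -->
  <property>
    <name>hbase.rootdir</name>
    <value>file:///home/hduser/hbase</value>
  </property>
</configuration>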
HBase Installation
• To enable a connection with a remote HBase server, edit /etc/hosts. Figure 5.11 shows the edited /etc/hosts:
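A typical entry maps the server's address to a hostname; both values below are hypothetical:

# /etc/hosts entry for a remote HBase server (IP and hostname are illustrative)
192.168.1.25    hbase-server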
HBase Installation
• Start HBase by using the following command: $ bin/start-hbase.sh. Figure 5.12 shows the initiation process of the HBase daemons:
HBase Installation
• Check all HBase daemons by using the following command: $ jps. Figure 5.13 shows the output of the jps command:
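In a pseudo-distributed setup where HBase manages ZooKeeper, jps would typically list daemons such as the following (the process IDs are illustrative):

$ jps
2351 HMaster
2468 HQuorumPeer
2590 HRegionServer
2703 Jps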
HBase Installation
• Paste the following link in your Web browser to access the Web interface, which lists the tables created along with their definitions: http://localhost:60010. Figure 5.14 shows the Web interface for HBase:
HBase Installation
• Check the region server for HBase by pasting the following link in your Web browser:
http://localhost:60030
• DT Editorial Services. Big Data, Black Book: Covers Hadoop 2, MapReduce, Hive,
YARN, Pig, R and Data Visualization (p. 138). Wiley India. Kindle Edition.
HBase Installation
• Start the HBase shell by using the following command: $ bin/hbase shell. Figure 5.16 shows the HBase shell running in a terminal:
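Inside the shell, the column-family ideas discussed earlier can be tried out with commands such as the following; the table name, family, and values are illustrative:

hbase> create 'vehicles', 'Cars'                        # table with the Cars column family
hbase> put 'vehicles', 'row1', 'Cars:WagonR', 'in stock'
hbase> put 'vehicles', 'row1', 'Cars:i10', 'sold'
hbase> get 'vehicles', 'row1'                           # returns the most recent cell values
hbase> scan 'vehicles'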
Thank You