A MapReduce job usually splits the input data-set into independent chunks which are processed by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce tasks. Typically both the input and the output of the job are stored in a file-system.
This presentation discusses the following topics:
What is Hadoop?
Need for Hadoop
History of Hadoop
Hadoop Overview
Advantages and Disadvantages of Hadoop
Hadoop Distributed File System
Comparing: RDBMS vs. Hadoop
Advantages and Disadvantages of HDFS
Hadoop frameworks
Modules of Hadoop frameworks
Features of Hadoop
Hadoop Analytics Tools
HDFS is a Java-based file system that provides scalable and reliable data storage, and it was designed to span large clusters of commodity servers. HDFS has demonstrated production scalability of up to 200 PB of storage and a single cluster of 4500 servers, supporting close to a billion files and blocks.
Hadoop is an open-source software framework for storing data and running applications on clusters of commodity hardware.
It provides massive storage for any kind of data, enormous processing power and the ability to handle virtually limitless concurrent tasks or jobs. The core of Apache Hadoop consists of a storage part (HDFS) and a processing part (MapReduce).
Hadoop became the most common system for storing big data.
With Hadoop, many supporting systems emerged to complete the aspects that are missing in Hadoop itself.
Together they form a big ecosystem.
This presentation covers some of those systems.
Since it is not possible to cover all of them in one presentation, the focus is on the most popular and most interesting ones.
Apache Pig is a high-level platform for creating programs that run on Apache Hadoop. The language for this platform is called Pig Latin. Pig can execute its Hadoop jobs in MapReduce, Apache Tez, or Apache Spark.
Apache Sqoop efficiently transfers bulk data between Apache Hadoop and structured datastores such as relational databases. Sqoop helps offload certain tasks (such as ETL processing) from the EDW to Hadoop for efficient execution at a much lower cost. Sqoop can also be used to extract data from Hadoop and export it into external structured datastores. Sqoop works with relational databases such as Teradata, Netezza, Oracle, MySQL, Postgres, and HSQLDB.
Apache Hive is a data warehouse infrastructure built on top of Hadoop for providing data summarization, query, and analysis. It was initially developed by Facebook.
Apache HBase is the Hadoop database: a distributed, scalable, big data store. It is a column-oriented database management system that runs on top of HDFS.
Apache HBase is an open source NoSQL database that provides real-time read/write access to large data sets. HBase is natively integrated with Hadoop and works seamlessly alongside other data access engines through YARN.
MongoDB is an open-source document database and the leading NoSQL database, written in C++.
MongoDB has official drivers for a variety of popular programming languages and development environments. There are also a large number of unofficial or community-supported drivers for other programming languages and frameworks.
1. What is Hadoop?
• The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing.
In a nutshell
• Hadoop provides a reliable shared storage and analysis system.
• The storage is provided by HDFS.
• The analysis is done by MapReduce.
2. Map Reduce
• HDFS handles the Distributed Filesystem layer
• MapReduce is a programming model for data processing.
• MapReduce
– Framework for parallel computing
– Programmers get simple API
– Don’t have to worry about handling
• parallelization
• data distribution
• load balancing
• fault tolerance
• Allows one to process huge amounts of data (terabytes and petabytes) on thousands of processors
3. Map Reduce Concepts (Hadoop 1.0)
[Diagram: an HDFS cluster of Data Nodes, each running a Task Tracker; the MapReduce engine's Job Tracker and the Name Node run on the admin/master node.]
4. Map Reduce Concepts
Job Tracker
The Job Tracker is responsible for accepting jobs from clients, dividing those jobs into tasks, and assigning those tasks to be executed by worker nodes.
Task Tracker
A Task Tracker is a process that manages the execution of the tasks currently assigned to that node. Each Task Tracker has a fixed number of slots for executing tasks (two maps and two reduces by default).
5-6. Job Tracker / Submit Job
[Diagram: job submission flow between the User/Client, the DFS, and the Job Tracker, with the input files and the job files (Job.xml, Job.jar) stored in the DFS.]
1. Copy input files
2. Submit job
3. Get input files' info
4. Create splits
5. Upload job information
8. Understanding Data Transformations
• In order to write MapReduce applications you need to understand how data is transformed as it moves through the MapReduce framework.
From start to finish, there are four fundamental transformations. Data is:
• Transformed from the input files and fed into the mappers
• Transformed by the mappers
• Sorted, merged, and presented to the reducer
• Transformed by reducers and written to output files
9. Solving a Programming Problem using MapReduce
There are a total of 10 fields of information in each line. Our programming objective uses only the first and fourth fields, which are arbitrarily called "year" and "delta" respectively. We will ignore all the other fields of data.
18. Introduction to the MapReduce Framework
A programming model for parallel data processing.
Hadoop can run MapReduce programs written in multiple languages, such as Java, Python, and Ruby.
Map function:
Operates on a set of key/value pairs.
Map is applied in parallel to the input data set.
This produces output keys and a list of values for each key, depending on the functionality.
Mapper output is partitioned per reducer.
Reduce function:
Operates on a set of key/value pairs.
Reduce is then applied in parallel to each group, again producing a collection of key/value pairs.
The total number of reducers can be set by the user.
19. Skeleton of a MapReduce Program
public class WordCount {
public static class TokenizerMapper
extends Mapper<Object, Text, Text, IntWritable>{
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
public void map(Object key, Text value, Context context)
throws IOException, InterruptedException {
StringTokenizer itr = new StringTokenizer(value.toString());
while (itr.hasMoreTokens()) {
word.set(itr.nextToken());
context.write(word, one);
}
}
}
20. Skeleton of a MapReduce Program
public static class IntSumReducer
extends Reducer<Text,IntWritable,Text,IntWritable> {
private IntWritable result = new IntWritable();
public void reduce(Text key, Iterable<IntWritable> values,Context context)
throws IOException, InterruptedException {
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
result.set(sum);
context.write(key, result);
}
}
21. Skeleton of a MapReduce program
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "word count");
job.setJarByClass(WordCount.class);
job.setMapperClass(TokenizerMapper.class);
job.setReducerClass(IntSumReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
FileSystem.get(conf).delete(new Path(args[1]), true);
job.waitForCompletion(true);
}
22. Executing an MR Job in Java
1) Compile all 3 Java files, which will create 3 .class files.
2) Package all 3 .class files into a single jar file with this command:
jar -cvf file_name.jar *.class
3) Execute the jar file with this command:
bin/hadoop jar file_name.jar Basic input_file_name output_file_name
24. Understanding processing in a MapReduce framework
The user runs a program on the client computer.
The program submits a job to HDFS.
A job contains: input data, the Map/Reduce program, and configuration information.
Two types of daemons control job execution: the Job Tracker (master node) and Task Trackers (slave nodes).
25. Understanding processing in a MapReduce framework
The job is sent to the JobTracker.
The JobTracker communicates with the NameNode and assigns parts of the job to TaskTrackers.
A task is a single MAP or REDUCE operation over a piece of data.
The JobTracker knows (from the NameNode) which node contains the data, and which other machines are nearby.
Task processes send heartbeats to the TaskTracker.
TaskTrackers send heartbeats to the JobTracker.
26. Understanding processing in a MapReduce framework
Any task that does not report within a certain time (default is 10 min) is assumed to have failed; its JVM will be killed by the TaskTracker and the failure reported to the JobTracker.
The JobTracker will reschedule any failed tasks (on a different TaskTracker).
If the same task fails 4 times, the whole job fails.
Any TaskTracker reporting a high number of failed tasks on a particular node will cause that node to be blacklisted (its metadata is removed from the NameNode).
The JobTracker maintains and manages the status of each job. Results from failed tasks are ignored.
27-32. MapReduce Job Submission Flow
[Diagram, built up over slides 27-32: input data spread across Node 1 and Node 2, each running a map task and a reduce task.]
Input data is distributed to nodes.
Each map task works on a "split" of data.
The mapper outputs intermediate data.
Data is exchanged between nodes in a "shuffle" process.
Intermediate data with the same key goes to the same reducer.
The reducer output is stored.
39. MapReduce - Input Format
How the input files are split up and read is defined by the InputFormat.
InputFormat is a class that does the following:
Selects the files that should be used for input
Defines the InputSplits that break a file
Provides a factory for RecordReader objects that read the file
41. Input Splits
An input split describes the unit of work that comprises a single map task in a MapReduce program.
By default, the InputFormat breaks a file into 64 MB splits.
By dividing the file into splits, we allow several map tasks to operate on a single file in parallel.
If the file is very large, this can improve performance significantly through parallelism.
Each map task corresponds to a single input split.
42. RecordReader
The input split defines a slice of work but does not describe how to access it.
The RecordReader class actually loads data from its source and converts it into (K, V) pairs suitable for reading by Mappers.
The RecordReader is invoked repeatedly on the input until the entire split is consumed.
Each invocation of the RecordReader leads to another call of the map function defined by the programmer.
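The relationship between the RecordReader and the map calls can be seen in the run() method of the new-API Mapper class; the following is a simplified sketch of that loop, not the complete Hadoop source.
public void run(Context context) throws IOException, InterruptedException {
  setup(context);
  // Each nextKeyValue() call asks the RecordReader for the next (K, V) pair;
  // every record read results in one call to the user's map() function.
  while (context.nextKeyValue()) {
    map(context.getCurrentKey(), context.getCurrentValue(), context);
  }
  cleanup(context);
}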
43. Mapper and Reducer
The Mapper performs the user-defined work of the first phase of the MapReduce program.
A new instance of Mapper is created for each split.
The Reducer performs the user-defined work of the second phase of the MapReduce program.
A new instance of Reducer is created for each partition.
For each key in the partition assigned to a Reducer, the Reducer is called once.
44. Combiner
• Apply reduce function to map output before it is sent to reducer
• Reduces the number of records output by the mapper!
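In the WordCount skeleton above, the reducer can double as the combiner because integer summation is associative and commutative; a one-line driver addition (using the IntSumReducer class from slide 20) is enough.
// Run the reduce logic on each mapper's local output before the shuffle.
job.setCombinerClass(IntSumReducer.class);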
45. Partitioner
Each mapper may produce (K, V) pairs for any partition.
Therefore, the map nodes must all agree on where to send different pieces of intermediate data.
The Partitioner class determines which partition a given (K, V) pair will go to.
The default partitioner computes a hash value for a given key and assigns it to a partition based on this result.
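As an illustration (the class name is hypothetical), a custom partitioner that mimics the default hash-based behaviour for the WordCount types could look like this:
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class WordPartitioner extends Partitioner<Text, IntWritable> {
  @Override
  public int getPartition(Text key, IntWritable value, int numPartitions) {
    // Mask off the sign bit so the result is always a valid partition index;
    // identical keys always land in the same partition (and the same reducer).
    return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
  }
}
It would be registered in the driver with job.setPartitionerClass(WordPartitioner.class).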
49. Shuffle and Sort
[Diagram: map output is collected in an in-memory circular buffer, spilled to disk, and the spills are merged into partitioned intermediate files on disk (the Combiner runs during spilling and merging); the intermediate files are then copied to the reducers, which also merge inputs arriving from other mappers.]
50. Shuffle and Sort
• Probably the most complex aspect of MapReduce and the heart of it!
• Map side
Map outputs are buffered in memory in a circular buffer.
When the buffer reaches a threshold, its contents are "spilled" to disk.
Spills are merged into a single, partitioned file (sorted within each partition); the combiner runs here first.
• Reduce side
First, map outputs are copied over to the reducer machine.
"Sort" is a multi-pass merge of map outputs (happens in memory and on disk); the combiner runs here again.
The final merge pass goes directly into the reducer.
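The buffer size and spill threshold mentioned above are configurable; a minimal sketch using the Hadoop 2.x property names (the values shown are illustrative, not recommendations):
Configuration conf = new Configuration();
conf.setInt("mapreduce.task.io.sort.mb", 256);            // size of the in-memory circular buffer, in MB
conf.setFloat("mapreduce.map.sort.spill.percent", 0.80f); // fill level at which a spill to disk starts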
51. Output Format
The OutputFormat class defines the way (K, V) pairs produced by Reducers are written to output files.
The instances of OutputFormat provided by Hadoop write to files on the local disk or in HDFS.
Several OutputFormats are provided by Hadoop:
TextOutputFormat - Default; writes lines in "key \t value" format
SequenceFileOutputFormat - Writes binary files suitable for reading into subsequent MR jobs
NullOutputFormat - Generates no output files
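The output format is chosen in the driver; TextOutputFormat is used when nothing is set, so a call like the following is only needed to switch formats (a minimal sketch):
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

// Write binary sequence files instead of the default "key \t value" text lines.
job.setOutputFormatClass(SequenceFileOutputFormat.class);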
57. Fault Tolerance
MapReduce can guide jobs toward a successful completion even when jobs are run on a large cluster where the probability of failures increases.
The primary way that MapReduce achieves fault tolerance is through restarting tasks.
If a TT fails to communicate with the Application Manager for a period of time (by default, 1 minute in Hadoop), the JT will assume that the TT in question has crashed.
If the job is still in the map phase, the JT asks another TT to re-execute all Mappers that previously ran on the failed TT.
If the job is in the reduce phase, the Application Manager asks another TT to re-execute all Reducers that were in progress on the failed TT.
58. Speculative Execution
A MapReduce job is dominated by the slowest task.
MapReduce attempts to locate slow tasks (stragglers) and runs redundant (speculative) tasks that will optimistically commit before the corresponding stragglers.
This process is known as speculative execution.
Only one copy of a straggler is allowed to be speculated.
Whichever copy (among the two copies) of a task commits first becomes the definitive copy, and the other copy is killed by the JT.
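Speculative execution can be switched on or off per job; a sketch using the Hadoop 2.x property names (both default to true):
conf.setBoolean("mapreduce.map.speculative", true);     // allow speculative map tasks
conf.setBoolean("mapreduce.reduce.speculative", false); // disable speculative reduce tasks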
59. Locating Stragglers
How does Hadoop locate stragglers?
Hadoop monitors each task's progress using a progress score between 0 and 1.
If a task's progress score is less than (average - 0.2), and the task has run for at least 1 minute, it is marked as a straggler.
[Diagram: task T1 with progress score 2/3 is not a straggler; task T2 with progress score 1/12 is a straggler.]
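The rule on this slide can be written as a small check; this is illustrative logic only, not the actual Hadoop scheduler code.
// A task is a straggler when it lags the average progress score by more than 0.2
// and has already run for at least one minute.
static boolean isStraggler(double progressScore, double averageScore, long runtimeMillis) {
  return progressScore < averageScore - 0.2 && runtimeMillis >= 60_000L;
}
With the values from the slide, T1 (progress 2/3) stays above the threshold while T2 (progress 1/12) falls below it and is marked as a straggler.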
61. Data Flow in a MapReduce Program
• InputFormat
• Map function
• Partitioner
• Sorting & Merging
• Combiner
• Shuffling
• Merging
• Reduce function
• OutputFormat
62. Counters
There are often things that you would like to know about the data you are analyzing but that are peripheral to the analysis you are performing. Counters are a useful channel for gathering statistics about the job: for quality control or for application-level statistics.
Built-in Counters
Hadoop maintains some built-in counters for every job, and these report various metrics. For example, there are counters for the number of bytes and records processed, which allow you to confirm that the expected amount of input was consumed and the expected amount of output was produced.
Counters are divided into groups, and there are several groups for the built-in counters:
• MapReduce task counters
• Filesystem counters
• FileInputFormat counters
• FileOutputFormat counters
Each group either contains task counters (which are updated as a task progresses) or job counters (which are updated as a job progresses).
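As a small sketch, built-in task counters can be read from the driver once the job has finished, for example to check how many input records the map tasks consumed (using the TaskCounter enum from the Hadoop 2.x API):
// Placed in the driver, after job.waitForCompletion(true) has returned.
long mapInputRecords = job.getCounters()
    .findCounter(TaskCounter.MAP_INPUT_RECORDS)
    .getValue();
System.out.println("Map input records: " + mapInputRecords);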
63. Counters
Task counters
Task counters gather information about tasks over the course of their execution, and the results are aggregated over all the tasks in a job.
The MAP_INPUT_RECORDS counter, for example, counts the input records read by each map task and aggregates over all map tasks in a job, so that the final figure is the total number of input records for the whole job.
Task counters are maintained by each task attempt, and periodically sent to the application master so they can be globally aggregated.
Job counters
Job counters are maintained by the application master, so they don't need to be sent across the network, unlike all other counters, including user-defined ones. They measure job-level statistics, not values that change while a task is running. For example, TOTAL_LAUNCHED_MAPS counts the number of map tasks that were launched over the course of a job (including tasks that failed).
User-Defined Java Counters
MapReduce allows user code to define a set of counters, which are then incremented as desired in the mapper or reducer. Counters are defined by a Java enum.
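A minimal sketch of a user-defined counter (the enum and mapper names are illustrative, not from the presented example):
public static class ParsingMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
  // Counters are defined by a Java enum; each constant becomes one counter.
  enum RecordQuality { MALFORMED }

  @Override
  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    if (value.toString().trim().isEmpty()) {
      // Incremented inside the tasks; the framework aggregates the values per job.
      context.getCounter(RecordQuality.MALFORMED).increment(1);
      return;
    }
    context.write(new Text(value.toString()), new IntWritable(1));
  }
}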
64. JOINS
MAP-JOIN
• A map-side join between large inputs works by performing the join before the data reaches the map function. For this to work, though, the inputs to each map must be partitioned and sorted in a particular way. Each input dataset must be divided into the same number of partitions, and it must be sorted by the same key (the join key) in each source. All the records for a particular key must reside in the same partition. This may sound like a strict requirement (and it is), but it actually fits the description of the output of a MapReduce job.
Distributed Cache
• It is preferable to distribute datasets using Hadoop's distributed cache mechanism, which provides a service for copying files to the task nodes for the tasks to use when they run. To save network bandwidth, files are normally copied to any particular node once per job.
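A sketch of the distributed-cache pattern using the Hadoop 2 Job API (the file path and field handling are hypothetical):
// In the driver: ship a small lookup file to every task node.
job.addCacheFile(new URI("/lookup/products.txt"));

// In the mapper: read the localized copy once, before any records are processed.
@Override
protected void setup(Context context) throws IOException, InterruptedException {
  URI[] cacheFiles = context.getCacheFiles();
  // ... load the small dataset into an in-memory map and use it for a map-side join
}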
65. JOINS
Reduce-side JOIN
• A reduce-side join is more general than a map-side join, in that the input datasets don't have to be structured in any particular way, but it is less efficient because both datasets have to go through the MapReduce shuffle. The basic idea is that the mapper tags each record with its source and uses the join key as the map output key, so that the records with the same key are brought together in the reducer.
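A sketch of the tagging idea for one of the two inputs (field layout, tag strings, and class name are assumptions for illustration):
public static class CustomerTaggingMapper extends Mapper<LongWritable, Text, Text, Text> {
  @Override
  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    String[] fields = value.toString().split(",");
    String joinKey = fields[0];                        // e.g. the customer id
    // Prefix the value with a source tag so the reducer can tell the datasets apart.
    context.write(new Text(joinKey), new Text("CUST\t" + value));
  }
}
// A second mapper tags the other dataset (e.g. "ORDER\t" + value); the reducer
// receives both tagged record sets grouped under the same join key.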
66. Secondary Sorting
• The MapReduce framework sorts the records by key before they reach the reducers. For any particular key, however, the values are not sorted. The order in which the values appear is not even stable from one run to the next, because they come from different map tasks, which may finish at different times from run to run. Generally, most MapReduce programs are written so as not to depend on the order in which the values appear to the reduce function. However, it is possible to impose an order on the values by sorting and grouping the keys in a particular way.
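In the driver, secondary sorting is typically wired up with three classes; the class names below are placeholders for user-written comparators over a composite (natural key plus value part) key:
job.setPartitionerClass(NaturalKeyPartitioner.class);               // partition on the natural key only
job.setSortComparatorClass(CompositeKeyComparator.class);           // sort by natural key, then by the value part
job.setGroupingComparatorClass(NaturalKeyGroupingComparator.class); // group reducer input by the natural key alone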
67. ToolRunner
public int run(String[] args) throws Exception
{
Configuration conf = new Configuration();
Job job = new Job(conf);
job.setJarByClass(multiInputFile.class);
…..
……
……
FileOutputFormat.setOutputPath(job, new Path(args[2]));
FileSystem.get(conf).delete(new Path(args[2]), true);
return (job.waitForCompletion(true) ? 0 : 1);
}
public static void main(String[] args) throws Exception
{
int ecode = ToolRunner.run(new multiInputFile(), args);
System.exit(ecode);
}
The key of the first record is the byte offset to the line in the input file (the 0th byte). The value of the first record includes the year, number of receipts, outlays, and the delta (receipts – outlays).
Remember – we are interested only in the first and fourth fields of the record value. Since the record value is in Text format, we will use a StringTokenizer to break up the Text string into individual fields.
Here we construct the StringTokenizer using white space as the delimiter.
Since we hard-coded the key to always be the string “summary,” there will be only one partition (and therefore only one reducer) when this mapreduce program is launched.
We determine if we’ve found a global minimum delta, and if so, assign the min and minYear accordingly.
When we pop out of the loop, we have the global min delta and the year associated with the min. We emit the year and min delta.
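A minimal sketch of a reducer along these lines, assuming the mapper emitted each value as a "year delta" Text string under the single "summary" key (this is an illustration, not the presenter's exact code):
public static class MinDeltaReducer extends Reducer<Text, Text, Text, FloatWritable> {
  public void reduce(Text key, Iterable<Text> values, Context context)
      throws IOException, InterruptedException {
    float min = Float.MAX_VALUE;
    String minYear = "";
    for (Text val : values) {
      String[] parts = val.toString().split("\\s+");  // "year delta"
      float delta = Float.parseFloat(parts[1]);
      if (delta < min) {                              // new global minimum found
        min = delta;
        minYear = parts[0];
      }
    }
    // Emit the year with the smallest delta and the delta itself.
    context.write(new Text(minYear), new FloatWritable(min));
  }
}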
In the Driver class, we also define the types for output key and value in the job as Text and FloatWritable respectively. If the mapper and reducer classes do NOT use the same output key and value types, we must specify for the mapper. In this case, the output value type of the mapper is Text, while the output value type of the reducer is FloatWritable.
There are 2 ways to launch the job: synchronously and asynchronously. job.waitForCompletion() launches the job synchronously; the driver code blocks at this line, waiting for the job to complete. The true argument tells the framework to write verbose output to the controlling terminal of the job.
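A minimal sketch of the two launch styles (the polling interval is arbitrary):
// Synchronous: blocks until the job finishes; "true" prints progress to the terminal.
boolean ok = job.waitForCompletion(true);

// Asynchronous: returns immediately; the driver polls for completion itself.
job.submit();
while (!job.isComplete()) {
  Thread.sleep(5000);
}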
The main() method is the entry point for the driver. In this method, we instantiate a new Configuration object for the job. We then call the ToolRunner static run() method.