This presentation will give you information about:
1. Map/Reduce Overview, Architecture, and Installation
2. Developing Map/Reduce Jobs: Input and Output Formats
3. Job Configuration and Job Submission
4. Practicing MapReduce Programs (at least 10 MapReduce Algorithms)
5. Data Flow Sources and Destinations
6. Data Flow Transformations and Data Flow Paths
7. Custom Data Types
8. Input Formats
9. Output Formats
10. Partitioning Data
11. Reporting Custom Metrics
12. Distributing Auxiliary Job Data
MapReduce examples, starting from the basic WordCount and moving up to a more complex K-means algorithm. The code contained in these slides is available at https://github.com/andreaiacono/MapReduce
4. Some MapReduce Terminology
Job – A "full program": an execution of a Mapper and Reducer across a data set
Task – An execution of a Mapper or a Reducer on a slice of data; a.k.a. a Task-In-Progress (TIP)
Task Attempt – A particular instance of an attempt to execute a task on a machine
5. Task Attempts
A particular task will be attempted at least once, possibly more times if it crashes
If the same input causes crashes over and over, that input will eventually be abandoned
Multiple attempts at one task may occur in parallel with speculative execution turned on
The task ID from TaskInProgress is not a unique identifier; don't use it that way
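Speculative execution is configured per job. A minimal sketch of turning it off, assuming the classic (0.20-era) JobConf API and its standard property names:

// disable speculative execution for both the map and reduce phases
conf.setBoolean("mapred.map.tasks.speculative.execution", false);
conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);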
7. Nodes, Trackers, Tasks
The master node runs the JobTracker instance, which accepts job requests from clients
TaskTracker instances run on slave nodes
A TaskTracker forks a separate Java process for each task instance
8. Job Distribution
MapReduce programs are contained in a Java "jar" file plus an XML file containing serialized program configuration options
Running a MapReduce job places these files into HDFS and notifies TaskTrackers where to retrieve the relevant program code
… Where's the data distribution?
9. Data Distribution
Implicit in the design of MapReduce!
All mappers are equivalent, so map whatever data is local to a particular node in HDFS
If lots of data does happen to pile up on the same node, nearby nodes will map instead
Data transfer is handled implicitly by HDFS
11. Job Launch Process: Client
The client program creates a JobConf
Identifies the classes implementing the Mapper and Reducer interfaces: JobConf.setMapperClass(), setReducerClass()
Specifies inputs and outputs: FileInputFormat.setInputPath(), FileOutputFormat.setOutputPath()
Optionally sets other options too: JobConf.setNumReduceTasks(), JobConf.setOutputFormat()…
12. Job Launch Process: JobClient
Pass the JobConf to JobClient.runJob() or submitJob()
runJob() blocks; submitJob() does not
JobClient:
Determines the proper division of the input into InputSplits
Sends the job data to the master JobTracker server
13. Job Launch Process: JobTracker
JobTracker:
Inserts the jar and the JobConf (serialized to XML) into a shared location
Posts a JobInProgress to its run queue
14. Job Launch Process: TaskTracker
TaskTrackers running on slave nodes periodically query the JobTracker for work
Retrieve the job-specific jar and config
Launch the task in a separate instance of Java; main() is provided by Hadoop
15. Job Launch Process: Task
TaskTracker.Child.main():
Sets up the child TaskInProgress attempt
Reads the XML configuration
Connects back to the necessary MapReduce components via RPC
Uses TaskRunner to launch the user process
16. Job Launch Process: TaskRunner
TaskRunner, MapTaskRunner, and MapRunner work in a daisy-chain to launch your Mapper
The task knows ahead of time which InputSplits it should be mapping
It calls the Mapper once for each record retrieved from the InputSplit
Running the Reducer is much the same
17. Creating the Mapper
You provide the instance of Mapper
It should extend MapReduceBase
One instance of your Mapper is initialized by the MapTaskRunner for a TaskInProgress
It exists in a separate process from all other instances of Mapper – no data sharing!
19. What is Writable?
Hadoop defines its own "box" classes for strings (Text), integers (IntWritable), etc.
All values are instances of Writable
All keys are instances of WritableComparable
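Custom data types plug into the same machinery by implementing the Writable contract. A minimal sketch of a hypothetical custom value type, assuming the standard org.apache.hadoop.io.Writable interface (write/readFields):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// hypothetical example: a 2D point usable as a MapReduce value
public class PointWritable implements Writable {
  private double x;
  private double y;

  public void write(DataOutput out) throws IOException {
    out.writeDouble(x);   // serialize fields in a fixed order
    out.writeDouble(y);
  }

  public void readFields(DataInput in) throws IOException {
    x = in.readDouble();  // deserialize in the same order
    y = in.readDouble();
  }
}

To serve as a key, such a type would additionally implement WritableComparable and define compareTo().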
21. Reading Data
Data sets are specified by InputFormats
An InputFormat defines the input data (e.g., a directory)
Identifies the partitions of the data that form an InputSplit
Acts as a factory for RecordReader objects that extract (k, v) records from the input source
22. FileInputFormat and Friends
TextInputFormat – Treats each '\n'-terminated line of a file as a value
KeyValueTextInputFormat – Maps '\n'-terminated text lines of "k SEP v"
SequenceFileInputFormat – Binary file of (k, v) pairs with some additional metadata
SequenceFileAsTextInputFormat – Same, but maps (k.toString(), v.toString())
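Picking one of these is a single JobConf call. A sketch, assuming the classic mapred API; the separator property name key.value.separator.in.input.line is the 0.20-era setting and defaults to a tab:

// read "key SEP value" lines, splitting on ',' instead of the default tab
conf.setInputFormat(KeyValueTextInputFormat.class);
conf.set("key.value.separator.in.input.line", ",");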
23. Filtering File Inputs
FileInputFormat will read all files out of a specified directory and send them to the mapper
It delegates filtering of this file list to a method that subclasses may override
e.g., create your own "XyzFileInputFormat" to read *.xyz from the directory list
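Besides overriding the listing method, the classic API also exposes a path-filter hook. A minimal sketch, assuming FileInputFormat.setInputPathFilter() from the 0.20-era mapred API (XyzPathFilter is a hypothetical name):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

// accept only inputs whose file name ends in .xyz
public class XyzPathFilter implements PathFilter {
  public boolean accept(Path path) {
    return path.getName().endsWith(".xyz");
  }
}

// during job setup:
// FileInputFormat.setInputPathFilter(conf, XyzPathFilter.class);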
24. Record Readers
Each InputFormat provides its own RecordReader implementation
This provides a (largely unused) capability for multiplexing
LineRecordReader – Reads a line from a text file
KeyValueRecordReader – Used by KeyValueTextInputFormat
25. Input Split Size
FileInputFormat will divide large files into chunks
The exact size is controlled by mapred.min.split.size
RecordReaders receive the file, the offset, and the length of the chunk
Custom InputFormat implementations may override the split size – e.g., a "NeverChunkFile" format
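A sketch of that "NeverChunkFile" idea, assuming the classic mapred API, where FileInputFormat subclasses may override isSplitable():

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.TextInputFormat;

// a format whose files are never split: one mapper per file, regardless of size
public class NeverChunkFileInputFormat extends TextInputFormat {
  @Override
  protected boolean isSplitable(FileSystem fs, Path file) {
    return false;
  }
}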
26. Sending Data To Reducers
The map function receives an OutputCollector object
OutputCollector.collect() takes (k, v) elements
Any (WritableComparable, Writable) pair can be used
By default, the mapper output type is assumed to be the same as the reducer output type
28. Sending Data To The Client
The Reporter object sent to the Mapper allows simple asynchronous feedback
incrCounter(Enum key, long amount)
setStatus(String msg)
It also allows self-identification of input: InputSplit getInputSplit()
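This is the hook for reporting custom metrics. A minimal sketch, assuming the classic Reporter API; the enum and its values are hypothetical:

// a job-specific counter group, defined anywhere on the classpath
public enum RecordCounters { GOOD, MALFORMED }

// inside map() or reduce():
// reporter.incrCounter(RecordCounters.MALFORMED, 1);
// reporter.setStatus("skipped a malformed record");

Counters show up in the JobTracker web UI and in the job's final counter summary.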
30. Partitioner
int getPartition(key, val, numPartitions)
Outputs the partition number for a given key
One partition == values sent to one Reduce task
HashPartitioner is used by default
It uses key.hashCode() to return the partition number
JobConf sets the Partitioner implementation
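A minimal sketch of a custom Partitioner under the classic mapred API (the interface also requires configure() via JobConfigurable); FirstLetterPartitioner is a hypothetical name:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

// route each word to a partition chosen by its first character
public class FirstLetterPartitioner implements Partitioner<Text, IntWritable> {
  public void configure(JobConf job) { }  // no setup needed

  public int getPartition(Text key, IntWritable value, int numPartitions) {
    String s = key.toString();
    char first = s.isEmpty() ? ' ' : s.charAt(0);
    return first % numPartitions;  // char promotes to a non-negative int
  }
}

// conf.setPartitionerClass(FirstLetterPartitioner.class);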
31. Reduction
reduce(K2 key, Iterator<V2> values, OutputCollector<K3, V3> output, Reporter reporter)
Keys and values sent to one partition all go to the same reduce task
Calls are sorted by key – "earlier" keys are reduced and output before "later" keys
33. OutputFormat
Analogous to InputFormat
TextOutputFormat – Writes "key val\n" strings to the output file
SequenceFileOutputFormat – Uses a binary format to pack (k, v) pairs
NullOutputFormat – Discards output; only useful if defining your own output methods within reduce()
34. Example Program - Wordcount
map()
Receives a chunk of text
Outputs a set of word/count pairs
reduce()
Receives a key and all its associated values
Outputs the key and the sum of the values
package org.myorg;
import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;
public class WordCount {
35. Wordcount – main( )
public static void main(String[] args) throws Exception {
JobConf conf = new JobConf(WordCount.class);
conf.setJobName("wordcount");
conf.setOutputKeyClass(Text.class);
conf.setOutputValueClass(IntWritable.class);
conf.setMapperClass(Map.class);
conf.setReducerClass(Reduce.class);
conf.setInputFormat(TextInputFormat.class);
conf.setOutputFormat(TextOutputFormat.class);
FileInputFormat.setInputPaths(conf, new Path(args[0]));
FileOutputFormat.setOutputPath(conf, new Path(args[1]));
JobClient.runJob(conf);
}
36. Wordcount – map( )
public static class Map extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {
  private final static IntWritable one = new IntWritable(1);
  private Text word = new Text();
  public void map(LongWritable key, Text value,
      OutputCollector<Text, IntWritable> output,
      Reporter reporter) throws IOException {
    String line = value.toString();
    StringTokenizer tokenizer = new StringTokenizer(line);
    while (tokenizer.hasMoreTokens()) {
      word.set(tokenizer.nextToken());
      output.collect(word, one);
    }
  }
}
37. Wordcount – reduce( )
public static class Reduce extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {
  public void reduce(Text key, Iterator<IntWritable> values,
      OutputCollector<Text, IntWritable> output,
      Reporter reporter) throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    output.collect(key, new IntWritable(sum));
  }
}
}
38. Hadoop Streaming
Allows you to create and run map/reduce jobs with any executable
Similar to Unix pipes, e.g.:
format is: Input | Mapper | Reducer
echo "this sentence has five lines" | cat | wc
39. Hadoop Streaming
The Mapper and Reducer receive data from stdin and write output to stdout
Hadoop takes care of the transmission of data between the map/reduce tasks
It is still the programmer's responsibility to emit the correct key/value
Default format: "key\tvalue\n"
Let's look at a Python example of a MapReduce word count program…
40. Streaming_Mapper.py
import sys

# read in one line of input at a time from stdin
for line in sys.stdin:
    line = line.strip()    # string
    words = line.split()   # list of strings
    # write data on stdout
    for word in words:
        print '%s\t%i' % (word, 1)
41. Hadoop Streaming
What are we outputting?
Example output: "the 1"
By default, "the" is the key, and "1" is the value
Hadoop Streaming handles delivering this key/value pair to a Reducer
It is able to send similar keys to the same Reducer, or to an intermediary Combiner
42. Streaming_Reducer.py
import sys

wordcount = {}  # empty dictionary
# read in one line of input at a time from stdin
for line in sys.stdin:
    line = line.strip()               # string
    key, value = line.split('\t', 1)  # split on the tab emitted by the mapper
    wordcount[key] = wordcount.get(key, 0) + int(value)
# write data on stdout
for word, count in sorted(wordcount.items()):
    print '%s\t%i' % (word, count)
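Because both scripts speak plain stdin/stdout, the pipeline can be smoke-tested locally before submitting to the cluster. A sketch, assuming input.txt is a local text file (sort stands in for Hadoop's shuffle, though this particular reducer aggregates in a dictionary and does not strictly need sorted input):

cat input.txt | python Streaming_Mapper.py | sort | python Streaming_Reducer.py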
43. Hadoop Streaming Gotcha
The Streaming Reducer receives single lines (which are key/value pairs) from stdin
A regular Reducer receives a collection of all the values for a particular key
It is still the case that all the values for a particular key will go to a single Reducer
44. Using the Hadoop Distributed File System (HDFS)
You can access HDFS through various shell commands (see the Further Resources slide for a link to the documentation)
hadoop fs -put <localsrc> … <dst>
hadoop fs -get <src> <localdst>
hadoop fs -ls
hadoop fs -rm <file>
45. Configuring Number of Tasks
Normal method:
jobConf.setNumMapTasks(400)
jobConf.setNumReduceTasks(4)
Hadoop Streaming method:
-jobconf mapred.map.tasks=400
-jobconf mapred.reduce.tasks=4
Note: the number of map tasks is only a hint to the framework; the actual number depends on the number of InputSplits generated
46. Running a Hadoop Job
Place the input file into HDFS:
hadoop fs -put ./input-file input-file
Run either the normal or the streaming version:
hadoop jar Wordcount.jar org.myorg.WordCount input-file output-file
hadoop jar hadoop-streaming.jar \
  -input input-file \
  -output output-file \
  -file Streaming_Mapper.py \
  -mapper "python Streaming_Mapper.py" \
  -file Streaming_Reducer.py \
  -reducer "python Streaming_Reducer.py"
47. Submitting to RC's GridEngine
Add the appropriate modules:
module add apps/jdk/1.6.0_22.x86_64 apps/hadoop/0.20.2
Use the submit script posted on the Further Resources slide
The script calls internal functions hadoop_start and hadoop_end
Adjust the lines for transferring the input file to HDFS and starting the Hadoop job, using the commands on the previous slide
Adjust the expected runtime (it is generally good practice to overshoot your estimate):
#$ -l h_rt=02:00:00
NOTICE: "All jobs are required to have a hard run-time specification. Jobs that do not have this specification will have a default run-time of 10 minutes and will be stopped at that point."
48. Output Parsing
The output of the reduce tasks must be retrieved:
hadoop fs -get output-file hadoop-output
This creates a directory of output files, one per reduce task
Output files are numbered part-00000, part-00001, etc.
Sample output of Wordcount:
head -n5 part-00000
“’tis 1
“come 2
“coming 1
“edwin 1
“found 1
49. Extra Output
The stdout/stderr streams of Hadoop itself will be stored in an output file (whichever one is named in the startup script)
#$ -o output.$job_id
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = svc-3024-8-10.rc.usf.edu/10.250.4.205
…
11/03/02 18:28:47 INFO mapred.FileInputFormat: Total input paths to process : 1
11/03/02 18:28:47 INFO mapred.JobClient: Running job: job_local_0001
…
11/03/02 18:28:48 INFO mapred.MapTask: numReduceTasks: 1
…
11/03/02 18:28:48 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000000_0' done.
11/03/02 18:28:48 INFO mapred.Merger: Merging 1 sorted segments
11/03/02 18:28:48 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 43927 bytes
11/03/02 18:28:48 INFO mapred.JobClient: map 100% reduce 0%
…
50. Thank You !!!
For more information, follow the link below:
http://vibranttechnologies.co.in/hadoop-classes-in-mumbai.html