Hadoop is a software framework for distributed processing of large data sets across clusters of computers. It includes MapReduce for distributed computation and HDFS for storage, and it runs efficiently on large clusters by spreading both data and processing across nodes. Example applications include log analysis, machine learning, and sorting 1 TB of data in under a minute. Hadoop is fault-tolerant, scalable, and designed to process vast amounts of data reliably and cost-effectively.
1. Hadoop: A Software Framework for Data
Intensive Computing Applications
Rajinder Sandhu
Researcher in Cloud Computing and Virtualization
Technologies
errajindersandhu.blogspot.com
2. What is Hadoop?
Software platform that lets one easily write and run applications that
process vast amounts of data. It includes:
– MapReduce – offline computing engine
– HDFS – Hadoop distributed file system
– HBase (pre-alpha) – online data access
Yahoo! is the biggest contributor
Here's what makes it especially useful:
Scalable: It can reliably store and process petabytes.
Economical: It distributes the data and processing across clusters of
commonly available computers (in thousands).
Efficient: By distributing the data, it can process it in parallel on the
nodes where the data is located.
Reliable: It automatically maintains multiple copies of data and
automatically redeploys computing tasks based on failures.
3. Hadoop: Assumptions
It is written with large clusters of computers in mind and is built around the
following assumptions:
Hardware will fail.
Processing will be run in batches. Thus there is an emphasis on high
throughput as opposed to low latency.
Applications that run on HDFS have large data sets. A typical file in HDFS is
gigabytes to terabytes in size.
It should provide high aggregate data bandwidth and scale to hundreds of
nodes in a single cluster. It should support tens of millions of files in a single
instance.
Applications need a write-once-read-many access model.
Moving Computation is Cheaper than Moving Data.
Portability is important.
4. Apache Hadoop Wins Terabyte Sort Benchmark (July
2013)
One of Yahoo's Hadoop clusters sorted 1 terabyte of data in 42 seconds, which
beat the previous record of 80 seconds in 2011 for the annual general purpose
(Daytona) terabyte sort benchmark. The sort benchmark specifies the input data (10
billion 100 byte records), which must be completely sorted and written to disk.
5. Example Applications and Organizations using
Hadoop
A9.com – Amazon: To build Amazon's product search indices; process millions of
sessions daily for analytics, using both the Java and streaming APIs; clusters vary
from 1 to 100 nodes.
Yahoo! : More than 100,000 CPUs in ~20,000 computers running Hadoop; biggest
cluster: 2000 nodes (2*4cpu boxes with 4TB disk each); used to support research for
Ad Systems and Web Search
AOL : Used for a variety of things ranging from statistics generation to running
advanced algorithms for doing behavioral analysis and targeting; cluster size is 50
machines, Intel Xeon, dual processors, dual core, each with 16 GB RAM and 800 GB
hard disk, giving a total of 37 TB HDFS capacity.
Facebook: To store copies of internal log and dimension data sources and use it as a
source for reporting/analytics and machine learning; 320 machine cluster with 2,560
cores and about 1.3 PB raw storage;
FOX Interactive Media : 3 X 20 machine cluster (8 cores/machine, 2TB/machine
storage) ; 10 machine cluster (8 cores/machine, 1TB/machine storage); Used for log
analysis, data mining and machine learning
University of Nebraska Lincoln: one medium-sized Hadoop cluster (200TB) to store
and serve physics data;
6. More Hadoop Applications
Adknowledge - to build the recommender system for behavioral targeting, plus
other clickstream analytics; clusters vary from 50 to 200 nodes, mostly on EC2.
Contextweb - to store ad serving log and use it as a source for Ad optimizations/
Analytics/reporting/machine learning; 23 machine cluster with 184 cores and about
35TB raw storage. Each (commodity) node has 8 cores, 8GB RAM and 1.7 TB of
storage.
Cornell University Web Lab: Generating web graphs on 100 nodes (dual 2.4GHz
Xeon Processor, 2 GB RAM, 72GB Hard Drive)
NetSeer - Up to 1000 instances on Amazon EC2 ; Data storage in Amazon S3; Used
for crawling, processing, serving and log analysis
The New York Times : Large scale image conversions ; EC2 to run hadoop on a
large virtual cluster
Powerset / Microsoft - Natural Language Search; up to 400 instances on Amazon
EC2 ; data storage in Amazon S3
7. MapReduce Paradigm
Programming model developed at Google
Sort/merge based distributed computing
Initially, it was intended for their internal search/indexing
application, but now used extensively by more organizations (e.g.,
Yahoo, Amazon.com, IBM, etc.)
It is a functional style of programming that is naturally parallelizable across
a large cluster of workstations or PCs.
The underlying system takes care of the partitioning of the
input data, scheduling the program’s execution across
several machines, handling machine failures, and managing
required inter-machine communication. (This is the key for
Hadoop’s success)
8. What does it do?
Hadoop implements Google’s MapReduce, using HDFS
MapReduce divides applications into many small blocks of work.
HDFS creates multiple replicas of data blocks for reliability, placing them on
compute nodes around the cluster.
MapReduce can then process the data where it is located.
Hadoop’s target is to run on clusters on the order of 10,000 nodes.
9. How does MapReduce work?
The run time partitions the input and provides it to different Map
instances;
Map: (key, value) → (key’, value’)
The run time collects the (key’, value’) pairs and distributes them
to several Reduce functions so that each Reduce function gets the
pairs with the same key’.
Each Reduce produces a single (or zero) file output.
Map and Reduce are user written functions
10. Example MapReduce: To count the occurrences
of words in the given set of documents
map(String key, String value):
  // key: document name; value: document contents; map: (k1, v1) → list(k2, v2)
  for each word w in value:
    EmitIntermediate(w, "1");
(Example: if the input string is “God is God. I am I”, Map produces
{<“God”, 1>, <“is”, 1>, <“God”, 1>, <“I”, 1>, <“am”, 1>, <“I”, 1>})
reduce(String key, Iterator values):
  // key: a word; values: a list of counts; reduce: (k2, list(v2)) → list(v2)
  int result = 0;
  for each v in values:
    result += ParseInt(v);
  Emit(AsString(result));
(Example: reduce(“I”, <1, 1>) → 2)
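The same computation as a complete, runnable Hadoop job, written against the org.apache.hadoop.mapreduce Java API. This is a minimal sketch in the spirit of the stock Apache WordCount example rather than code from this deck; input and output paths come from the command line:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map: (offset, line) -> (word, 1) for every token in the line.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one); // EmitIntermediate(w, "1") from the pseudocode
      }
    }
  }

  // Reduce: (word, [1, 1, ...]) -> (word, sum).
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}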
11. Word Count
The input is two lines: “see bob throw” and “see spot run”.
Map tokenizes each line and emits a (word, 1) pair per token:
(see, 1), (bob, 1), (throw, 1), (see, 1), (spot, 1), (run, 1)
After the pairs are shuffled and sorted by key, Reduce sums the counts per word:
(bob, 1), (run, 1), (see, 2), (spot, 1), (throw, 1)
Can we do word count in parallel? Yes: different Map instances can tokenize
different input splits, and different Reduce instances can each sum a disjoint
subset of the words.
12. Automatic Parallel Execution in
MapReduce
Handles failures automatically, e.g., restarts tasks if a
node fails; runs multiple copies of the same task to
avoid a slow task slowing down the whole job (speculative execution).
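In Hadoop this backup-task behavior can be toggled per job. A minimal sketch, assuming the Hadoop 2.x property names (older releases used the mapred.map.tasks.speculative.execution style names); these lines would slot into the word-count driver above, before the Job is created:

// Sketch: enable speculative execution for both phases.
// Property names assume Hadoop 2.x or later.
conf.setBoolean("mapreduce.map.speculative", true);    // allow backup copies of slow map tasks
conf.setBoolean("mapreduce.reduce.speculative", true); // likewise for slow reduce tasks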
14. Data Flow in a MapReduce
Program in Hadoop
InputFormat
Map function
Partitioner
Sorting & Merging
Combiner
Shuffling
Merging
Reduce function
OutputFormat
1:many
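To make the Partitioner stage concrete: by default Hadoop hash-partitions intermediate keys across the R reduce tasks (HashPartitioner), but a job can supply its own. A minimal sketch; FirstLetterPartitioner is a hypothetical name, not part of Hadoop, and the key/value types match the word-count job above:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Hypothetical example: route every word that starts with the same letter to the
// same reduce task, so each of the R output files holds an alphabetical slice.
public class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {
  @Override
  public int getPartition(Text key, IntWritable value, int numReduceTasks) {
    String word = key.toString();
    int first = word.isEmpty() ? 0 : Character.toLowerCase(word.charAt(0));
    return (first & Integer.MAX_VALUE) % numReduceTasks; // non-negative bucket index
  }
}

A job opts in with job.setPartitionerClass(FirstLetterPartitioner.class); the surrounding stages (sorting, merging, shuffling) are unchanged.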
15. MapReduce-Fault tolerance
Worker failure: The master pings every worker periodically. If no response
is received from a worker in a certain amount of time, the master marks the
worker as failed. Any map tasks completed by the worker are reset back to
their initial idle state, and therefore become eligible for scheduling on other workers.
Similarly, any map task or reduce task in progress on a failed worker is also
reset to idle and becomes eligible for rescheduling.
Master Failure: It is easy to make the master write periodic
checkpoints of the master data structures described above. If the master
task dies, a new copy can be started from the last checkpointed state.
However, in most cases, the user restarts the job.
16. Mapping workers to Processors
The input data (on HDFS) is stored on the local disks of the machines
in the cluster. HDFS divides each file into 64 MB blocks, and stores
several copies of each block (typically 3 copies) on different machines.
The MapReduce master takes the location information of the input files
into account and attempts to schedule a map task on a machine that
contains a replica of the corresponding input data. Failing that, it
attempts to schedule a map task near a replica of that task's input data.
When running large MapReduce operations on a significant fraction of
the workers in a cluster, most input data is read locally and consumes
no network bandwidth.
17. Task Granularity
The map phase has M pieces and the reduce phase has R pieces.
M and R should be much larger than the number of worker machines.
Having each worker perform many different tasks improves dynamic
load balancing, and also speeds up recovery when a worker fails.
The larger M and R are, the more decisions the master must make.
R is often constrained by users because the output of each reduce task
ends up in a separate output file.
Typically (at Google), M = 200,000 and R = 5,000, using 2,000 worker
machines, i.e., roughly 100 map tasks and two to three reduce tasks per
machine over the life of a job.
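In the Hadoop Java API, R is set explicitly on the job, while M falls out of the number of input splits. A one-line sketch against the word-count driver above; 5,000 is chosen only to mirror the Google figure:

// In the driver, before submitting the job:
job.setNumReduceTasks(5000); // R = 5,000: output files part-r-00000 through part-r-04999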
18. HDFS
The Hadoop Distributed File System (HDFS) is a distributed file system
designed to run on commodity hardware. It has many similarities with existing
distributed file systems. However, the differences from other distributed file
systems are significant.
highly fault-tolerant and is designed to be deployed on low-cost hardware.
provides high throughput access to application data and is suitable for
applications that have large data sets.
relaxes a few POSIX requirements to enable streaming access to file system
data.
part of the Apache Hadoop Core project. The project URL is
http://hadoop.apache.org/core/.
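As a small illustration of the HDFS Java API, a sketch that prints a file's contents; the path is hypothetical, and new Configuration() picks up the cluster's core-site.xml and hdfs-site.xml from the classpath:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCat {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();  // cluster settings from the classpath
    FileSystem fs = FileSystem.get(conf);      // HDFS client (local FS if unconfigured)
    Path path = new Path(args[0]);             // e.g. /user/demo/input.txt (hypothetical)
    try (BufferedReader in =
        new BufferedReader(new InputStreamReader(fs.open(path)))) {
      String line;
      while ((line = in.readLine()) != null) { // streams the file's blocks from the datanodes
        System.out.println(line);
      }
    }
  }
}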
22. Execution overview
1. The MapReduce library in the user program first splits input files into M
pieces of typically 16 MB to 64 MB/piece. It then starts up many copies of
the program on a cluster of machines.
2. One of the copies of the program is the master. The rest are workers that are
assigned work by the master. There are M map tasks and R reduce tasks to
assign. The master picks idle workers and assigns each one a map task or a
reduce task.
3. A worker who is assigned a map task reads the contents of the assigned
input split. It parses key/value pairs out of the input data and passes each
pair to the user-defined Map function. The intermediate key/value pairs
produced by the Map function are buffered in memory.
4. Periodically, the buffered pairs are written to local disk. The locations of
these buffered pairs on the local disk are passed back to the
master, who forwards these locations to the reduce workers.
23. Execution overview (cont.)
5. When a reduce worker is notified by the master about these locations, it
uses remote procedure calls (RPCs) to read the buffered data from the
local disks of the map workers. When a reduce worker has read all
intermediate data, it sorts it by the intermediate keys so that all
occurrences of the same key are grouped together.
6. The reduce worker iterates over the sorted intermediate data and for
each unique intermediate key encountered, it passes the key and the
corresponding set of intermediate values to the user's Reduce function.
The output of the Reduce function is appended to a final output file for
this reduce partition.
7. When all map tasks and reduce tasks have been completed, the master
wakes up the user program: the MapReduce call in the user program
returns to the user code. The output of the MapReduce execution is
available in the R output files (one per reduce task).
24. References
[1] J. Dean and S. Ghemawat, “MapReduce: Simplified Data Processing
on Large Clusters,” OSDI 2004. (Google)
[2] D. Cutting and E. Baldeschwieler, “Meet Hadoop,” OSCON,
Portland, OR, USA, 25 July 2007. (Yahoo!)
[3] R. E. Bryant, “Data Intensive Scalable Computing: The Case for
DISC,” Tech Report CMU-CS-07-128,
http://www.cs.cmu.edu/~bryant