This document provides an overview of Hadoop, including its architecture, installation, configuration, and commands. It describes the challenges of large-scale data that Hadoop addresses through distributed processing and storage across clusters. The key components of Hadoop are HDFS for storage and MapReduce for distributed processing. HDFS stores data across clusters and provides fault tolerance through replication, while MapReduce allows parallel processing of large datasets through a map and reduce programming model. The document also outlines how to install and configure Hadoop in pseudo-distributed and fully distributed modes.
More about Hadoop
www.beinghadoop.com
https://www.facebook.com/hadoopinfo
This presentation covers the complete Hadoop architecture, including how a user request is processed in Hadoop, the roles of the NameNode, DataNode, JobTracker, and TaskTracker, and post-installation configuration of Hadoop.
4. Challenges at Large Scale
● A single node cannot cope, because its resources are limited
○ Processor time, memory, hard-drive space, network bandwidth
○ An individual hard drive can only sustain read speeds of roughly 60–100 MB/s, so adding cores does not help much
● Multiple nodes are needed, but the probability of failure increases
○ Network failures, data-transfer failures, node failures
○ Desynchronized clocks, locking issues
○ Partial failures in distributed atomic transactions
5. Hadoop Approach (1/4)
● Data distribution
○ Data is distributed to all the nodes in the cluster
○ Data is replicated across several nodes
6. Hadoop Approach (2/4)
● Move computation to the data
○ Whenever possible, computation is moved to the node that contains the data, rather than moving the data for processing
○ Most data is read from local disk straight into the CPU, alleviating strain on network bandwidth and preventing unnecessary network transfers
○ This data locality results in high performance
8. Hadoop Approach (4/4)
● Isolated execution
○ Communication between nodes is limited and done implicitly
○ Individual node failures can be worked around by restarting tasks on other nodes
■ No message exchange is needed by the user task
■ No rollback to pre-arranged checkpoints to partially restart the computation
■ Other workers continue to operate as though nothing went wrong
11. HDFS (1/2)
● Storage component of Hadoop
● Distributed file system modeled after GFS
● Optimized for high throughput
● Works best when reading and writing large files (gigabytes and larger)
● To support this throughput, HDFS uses unusually large (for a filesystem) block sizes and data-locality optimizations to reduce network input/output (I/O)
12. HDFS (2/2)
● Scalability and availability are also key traits of HDFS, achieved in part through data replication and fault tolerance
● HDFS replicates each file a configured number of times, tolerates both software and hardware failure, and automatically re-replicates data blocks that were stored on failed nodes
14. MapReduce (1/2)
● MapReduce is a batch-based, distributed computing framework modeled after Google's MapReduce
● It simplifies parallel processing by abstracting away the complexities involved in working with distributed systems
○ computational parallelization
○ work distribution
○ dealing with unreliable hardware and software
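The map and reduce model described above can be illustrated with a toy, single-process word count. This mimics only the data flow (map, shuffle, reduce); it is not Hadoop's Java API.

```python
# Single-process sketch of the MapReduce data flow for word count:
# map emits (word, 1) pairs; a sort stands in for the shuffle phase,
# which brings equal keys together; reduce sums each group.
from itertools import groupby

def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    pairs = sorted(pairs)  # "shuffle": group equal keys together
    return {word: sum(count for _, count in group)
            for word, group in groupby(pairs, key=lambda kv: kv[0])}

counts = reduce_phase(map_phase(["hello hadoop", "hello hdfs"]))
print(counts)  # {'hadoop': 1, 'hdfs': 1, 'hello': 2}
```

In real Hadoop, the map tasks run in parallel on the nodes holding each input block, and the framework performs the sort/shuffle between the map and reduce phases.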
17. Hadoop Installation
● Local mode
○ No need to communicate with other nodes, so it does not use HDFS, nor does it launch any of the Hadoop daemons
○ Used for developing and debugging the application logic of a MapReduce program
● Pseudo-distributed mode
○ All daemons run on a single machine
○ Helps to examine memory usage, HDFS input/output issues, and other daemon interactions
● Fully distributed mode
○ Daemons run across a cluster of machines
18. Hadoop Configuration
File name: hadoop-env.sh
● Environment-specific settings go here
● If a current JDK is not in the system path, this is where you configure JAVA_HOME
File name: core-site.xml
● Contains system-level Hadoop configuration items
○ HDFS URL
○ Hadoop temporary directory
○ Script locations for rack-aware Hadoop clusters
● Overrides settings in core-default.xml: http://hadoop.apache.org/common/docs/r1.0.0/core-default.html
File name: hdfs-site.xml
● Contains HDFS settings
○ Default file replication count
○ Block size
○ Whether permissions are enforced
● Overrides settings in hdfs-default.xml: http://hadoop.apache.org/common/docs/r1.0.0/hdfs-default.html
File name: mapred-site.xml
● Contains MapReduce settings
○ Default number of reduce tasks
○ Default min/max task memory sizes
○ Speculative execution
● Overrides settings in mapred-default.xml: http://hadoop.apache.org/common/docs/r1.0.0/mapred-default.html
19. Installation
Pseudo Distributed Mode
● Set up public-key-based SSH login
○ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
○ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
● Update the following configuration
○ hadoop.tmp.dir and fs.default.name in core-site.xml
○ dfs.replication in hdfs-site.xml
○ mapred.job.tracker in mapred-site.xml
● Format the NameNode
○ bin/hadoop namenode -format
● Start all daemons
○ bin/start-all.sh
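Tying these steps together, a minimal pseudo-distributed configuration might look like the following. The localhost host/port values and the temporary directory are illustrative conventions, not mandated values; adjust them for your machine.

```xml
<!-- core-site.xml: illustrative values for a single-machine setup -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-${user.name}</value>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

<!-- hdfs-site.xml: a single node cannot hold more than one replica -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

<!-- mapred-site.xml: JobTracker address -->
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>
```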
21. Hadoop FileSystem
File System | URI Scheme | Java Impl. (all under org.apache.hadoop) | Description
Local | file | fs.LocalFileSystem | Filesystem for a locally connected disk with client-side checksums
HDFS | hdfs | hdfs.DistributedFileSystem | Hadoop’s distributed filesystem
WebHDFS | webhdfs | hdfs.web.WebHdfsFileSystem | Filesystem providing secure read-write access to HDFS over HTTP
S3 (native) | s3n | fs.s3native.NativeS3FileSystem | Filesystem backed by Amazon S3
S3 (block based) | s3 | fs.s3.S3FileSystem | Filesystem backed by Amazon S3, which stores files in blocks (much like HDFS) to overcome S3’s 5 GB file size limit
GlusterFS | glusterfs | fs.glusterfs.GlusterFileSystem | Still in beta: https://github.com/gluster/glusterfs/tree/master/glusterfs-hadoop
22. Installation
Fully Distributed Mode
Three different kinds of hosts:
● master
○ master node of the cluster
○ hosts the NameNode and JobTracker daemons
● backup
○ hosts the Secondary NameNode daemon
● slave1, slave2, ...
○ slave boxes running both the DataNode and TaskTracker daemons
23. Hadoop Configuration
File name: masters
● The name is misleading; it should have been called secondary-masters
● When you start Hadoop, it launches the NameNode and JobTracker on the local host from which you issued the start command, and then SSHes to all the nodes in this file to launch the SecondaryNameNode
File name: slaves
● Contains a list of hosts that are Hadoop slaves
● When you start Hadoop, it SSHes to each host in this file and launches the DataNode and TaskTracker daemons
24. Recipes
● S3 Configuration
● Using multiple disks/volumes and limiting
HDFS disk usage
● Setting HDFS block size
● Setting the file replication factor
25. Recipes:
S3 Configuration
● Config file: conf/hadoop-site.xml
● To access S3 data using the DFS commands:
<property>
  <name>fs.s3.awsAccessKeyId</name>
  <value>ID</value>
</property>
<property>
  <name>fs.s3.awsSecretAccessKey</name>
  <value>SECRET</value>
</property>
● To use S3 as a replacement for HDFS:
<property>
  <name>fs.default.name</name>
  <value>s3://BUCKET</value>
</property>
26. Recipes:
Disk Configuration
● Config file: $HADOOP_HOME/conf/hdfs-site.xml
● For multiple storage locations:
<property>
  <name>dfs.data.dir</name>
  <value>/u1/hadoop/data,/u2/hadoop/data</value>
</property>
● To limit HDFS disk usage, specify the space reserved for non-DFS use (in bytes per volume):
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>6000000000</value>
</property>
27. Recipes:
HDFS Block Size (1/3)
● HDFS stores files across the cluster by breaking them down into coarse-grained, fixed-size blocks
● The default HDFS block size is 64 MB
● Block size affects the performance of
○ filesystem operations, where larger block sizes are more effective if you are storing and processing very large files
○ MapReduce computations, as the default behavior of Hadoop is to create one map task for each data block of the input files
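As a back-of-the-envelope illustration of that last point, the number of blocks (and hence the default number of map tasks) for a file follows directly from the block size. This is a simple arithmetic sketch, not a Hadoop API.

```python
# How block size determines the number of HDFS blocks, and therefore the
# default number of map tasks, for a file of a given size.
import math

def num_blocks(file_bytes, block_bytes=64 * 1024 * 1024):  # 64 MB default
    return math.ceil(file_bytes / block_bytes)

one_gib = 1 << 30  # 1 GiB file
print(num_blocks(one_gib))                     # 16 blocks -> 16 map tasks
print(num_blocks(one_gib, 128 * 1024 * 1024))  # 8 blocks -> 8 map tasks
```

Doubling the block size halves the number of map tasks, which reduces per-task overhead when processing very large files.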
28. Recipes:
HDFS Block Size (2/3)
● Option 1: NameNode configuration
○ Add/modify the dfs.block.size parameter in conf/hdfs-site.xml
○ Block size is given in bytes
○ Only files copied after the change will have the new block size
○ Existing files in HDFS are not affected
<property>
  <name>dfs.block.size</name>
  <value>134217728</value>
</property>
29. Recipes:
HDFS Block Size (3/3)
● Option 2: During file upload
○ Applies only to the specific file paths
> bin/hadoop fs -Ddfs.blocksize=134217728 -put data.in /user/foo
● Use the fsck command to verify
> bin/hadoop fsck /user/foo/data.in -blocks -files -locations
/user/foo/data.in 215227246 bytes, 2 block(s): ....
0. blk_6981535920477261584_1059 len=134217728 repl=1 [hostname:50010]
1. blk_-8238102374790373371_1059 len=81009518 repl=1 [hostname:50010]
30. Recipes:
File Replication Factor (1/3)
● Replication is done for fault tolerance
○ Pros: improves data locality and data-access bandwidth
○ Cons: needs more storage
● The HDFS replication factor is a file-level property that can be set on a per-file basis
31. Recipes:
File Replication Factor (2/3)
● Set the default replication factor
○ Add/modify the dfs.replication property in conf/hdfs-site.xml
○ Old files are unaffected
○ Only files copied after the change will have the new replication factor
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
32. Recipes:
File Replication Factor (3/3)
● Set the replication factor during file upload
> bin/hadoop fs -D dfs.replication=1 -copyFromLocal non-critical-file.txt /user/foo
● Change the replication factor of files or file paths that are already in HDFS
○ Use the setrep command
○ Syntax: hadoop fs -setrep [-R] <replication> <path>
> bin/hadoop fs -setrep 2 non-critical-file.txt
Replication 2 set: hdfs://myhost:9000/user/foo/non-critical-file.txt
33. Recipes:
Merging Files in HDFS
● Use the HDFS getmerge command
● Syntax: hadoop fs -getmerge <src> <localdst> [addnl]
● Copies the files in a given HDFS path into a single concatenated file in the local filesystem
> bin/hadoop fs -getmerge /user/foo/demofiles merged.txt
35. Example:
Advanced Operations
● HDFS
○ Adding new data node
○ Decommissioning data node
○ Checking FileSystem Integrity with fsck
○ Balancing HDFS Block Data
○ Dealing with a Failed Disk
● MapReduce
○ Adding a Tasktracker
○ Decommissioning a Tasktracker
○ Killing a MapReduce Job
○ Killing a MapReduce Task
○ Dealing with a Blacklisted Tasktracker