The document summarizes the Hadoop Distributed File System (HDFS), which is designed to reliably store and stream very large datasets at high bandwidth. It describes the key components of HDFS, including the NameNode which manages the file system metadata and mapping of blocks to DataNodes, and DataNodes which store block replicas. HDFS allows scaling storage and computation across thousands of servers by distributing data storage and processing tasks.
Best Hadoop institutes: Kelly Technologies is the best Hadoop training institute in Bangalore, providing Hadoop courses taught by real-time faculty.
This document provides an overview of big data concepts and Hadoop. It discusses the four V's of big data - volume, velocity, variety, and veracity. It then describes how Hadoop uses MapReduce and HDFS to process and store large datasets in a distributed, fault-tolerant and scalable manner across commodity hardware. Key components of Hadoop include the HDFS file system and MapReduce framework for distributed processing of large datasets in parallel.
Dynamic Namespace Partitioning with Giraffa File System (DataWorks Summit)
Giraffa is a distributed file system that utilizes features of HDFS and HBase. It stores file and directory metadata in an HBase table to allow for dynamic namespace partitioning across region servers. File data continues to be stored in HDFS data nodes to leverage HDFS's efficient data streaming. The goal is to build upon existing Hadoop components like HDFS and HBase to create a scalable file system without introducing single points of failure, while minimizing changes to existing systems.
Hadoop is a distributed processing framework for large data sets across clusters of commodity hardware. It has two main components: HDFS for reliable data storage, and MapReduce for distributed processing of large data sets. Hadoop can scale from single servers to thousands of machines, handling data measuring petabytes with very high throughput. It provides reliability even if individual machines fail, and is easy to set up and manage.
A simple replication-based mechanism has been used to achieve high data reliability in the Hadoop Distributed File System (HDFS). However, replication-based mechanisms impose a high disk storage requirement, since they copy full blocks without regard to storage size. Studies have shown that erasure coding can provide more usable storage space when used as an alternative to replication, and that it can also increase write throughput compared to replication. To improve both the space efficiency and I/O performance of HDFS while preserving the same level of data reliability, we propose HDFS+, an erasure-coding-based Hadoop Distributed File System. The proposed scheme writes a full block on the primary DataNode and then performs erasure coding with a Vandermonde-based Reed-Solomon algorithm that divides data into m fragments and encodes them into n fragments (n > m), which are saved on n distinct DataNodes such that the original object can be reconstructed from any m fragments. The experimental results show that our scheme can save up to 33% of storage space while outperforming the original scheme in write performance by 1.4 times. Our scheme provides the same read performance as the original scheme as long as data can be read from the primary DataNode, even under single-node or double-node failure. Otherwise, the read performance of HDFS+ decreases to some extent. However, as the number of fragments increases, we show that the performance degradation becomes negligible.
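The trade-off behind HDFS+ can be illustrated with a toy erasure code. The sketch below is an assumption-laden simplification: it uses a single XOR parity fragment (which tolerates only one lost fragment), not the paper's Vandermonde-based Reed-Solomon code, which tolerates n − m losses out of n. The function names are invented for illustration.

```python
# Toy single-parity erasure code: split a block into m data fragments plus
# one XOR parity fragment; any ONE lost fragment can be rebuilt from the rest.
# Real HDFS+ uses Reed-Solomon, which survives n - m losses, not just one.

def encode(block: bytes, m: int) -> list:
    """Split a block into m equal data fragments plus one XOR parity fragment."""
    assert len(block) % m == 0, "pad the block to a multiple of m first"
    size = len(block) // m
    frags = [block[i * size:(i + 1) * size] for i in range(m)]
    parity = bytearray(size)
    for frag in frags:
        for i, b in enumerate(frag):
            parity[i] ^= b
    return frags + [bytes(parity)]

def reconstruct(frags: list, lost: int) -> bytes:
    """Recover the fragment at index `lost` by XOR-ing all survivors.
    This works because each byte position XORs to zero across all fragments."""
    survivors = [f for i, f in enumerate(frags) if i != lost]
    out = bytearray(len(survivors[0]))
    for frag in survivors:
        for i, b in enumerate(frag):
            out[i] ^= b
    return bytes(out)

# Storage arithmetic: 3-way replication stores 3.0x the data, while an
# (m data + k parity) code stores (m + k) / m -- e.g. 1.25x for m=4, k=1.
```

The storage-overhead comment shows where the abstract's space savings come from: parity fragments are a fraction of a block, whereas a replica is a whole one.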
The document summarizes and compares several distributed file systems, including Google File System (GFS), Kosmos File System (KFS), Hadoop Distributed File System (HDFS), GlusterFS, and Red Hat Global File System (GFS). GFS, KFS and HDFS are based on the GFS architecture of a single metadata server and multiple chunkservers. GlusterFS uses a decentralized architecture without a metadata server. Red Hat GFS requires a SAN for high performance and scalability. Each system has advantages and limitations for different use cases.
This document outlines the course objectives and modules for a Big Data Analytics course. The course focuses on Hadoop Distributed File System concepts, Hadoop tools, business intelligence applications, and data mining techniques including decision trees, text mining, and social network analysis. Key topics covered include HDFS architecture, MapReduce programming, YARN, and ensuring high availability of HDFS through techniques like rack awareness and NameNode replication.
The document discusses HDFS architecture and components. It describes how HDFS uses NameNodes and DataNodes to store and retrieve file data in a distributed manner across clusters. The NameNode manages the file system namespace and regulates access to files by clients. DataNodes store file data in blocks and replicate them for fault tolerance. The document outlines the write and read workflows in HDFS and how NameNodes and DataNodes work together to manage data storage and access.
The document discusses the Hadoop Distributed File System (HDFS), which was created by Doug Cutting to address the need for large-scale data processing. HDFS is designed for streaming data across commodity hardware and uses a master/slave architecture with one NameNode master and multiple DataNodes. The NameNode manages the file system namespace and regulates access to files by clients via the DataNodes, which store data blocks and ensure replication for fault tolerance.
This document outlines and compares two NameNode high availability (HA) solutions for HDFS: AvatarNode used by Facebook and BackupNode used by Yahoo. AvatarNode provides a complete hot standby with fast failover times of seconds by using an active-passive pair and ZooKeeper for coordination. BackupNode has limitations including slower restart times of 25+ minutes and supporting only two-machine failures. While it provides hot standby for the namespace, block reports are sent only to the active NameNode, making it a semi-hot standby solution. The document also briefly mentions other experimental HA solutions for HDFS.
The document provides an overview of the Hadoop Distributed File System (HDFS). It describes HDFS design goals of handling hardware failures, large data sets, and streaming data access. It explains key HDFS concepts like blocks, replication, rack awareness, the read/write process, and the roles of the NameNode and DataNodes. It also covers topics like permissions, safe mode, quotas, and commands for interacting with HDFS.
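The block and replication concepts mentioned above reduce to simple arithmetic. The sketch below assumes the common defaults of 128 MB blocks and a replication factor of 3; both are configurable per file in HDFS, and the function name is invented for illustration.

```python
import math

def block_layout(file_size: int, block_size: int = 128 * 1024**2,
                 replication: int = 3):
    """Return (block count, raw bytes consumed cluster-wide) for one file.
    The last block may be partial; HDFS stores only the actual bytes, but
    every stored byte exists `replication` times across DataNodes."""
    blocks = max(1, math.ceil(file_size / block_size))
    raw_bytes = file_size * replication
    return blocks, raw_bytes

# e.g. a 1 GiB file with defaults: 8 blocks, 3 GiB of raw cluster storage.
```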
Big data refers to large and complex datasets that are difficult to process using traditional methods. Key challenges include capturing, storing, searching, sharing, and analyzing large datasets in domains like meteorology, physics simulations, biology, and the internet. Hadoop is an open-source software framework for distributed storage and processing of big data across clusters of computers. It allows for the distributed processing of large data sets in a reliable, fault-tolerant and scalable manner.
Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It is built for very large namespaces (over 100 million files) and is optimized for batch processing of huge datasets across large clusters (over 10,000 nodes). HDFS stores multiple replicas of data blocks on different nodes to handle failures. It provides high aggregate bandwidth and allows computation to move to where the data resides.
A server is a computer that manages network resources and makes them available to authorized users. Kerberos is an authentication protocol that provides encryption during authentication. Active Directory installation can be verified by checking SRV records in DNS, verifying the SYSVOL folder, and database and log files, and using commands like Dcdiag and Net share.
The document provides an overview of the Hadoop Distributed File System (HDFS). It describes HDFS's master-slave architecture with a single NameNode master and multiple DataNode slaves. The NameNode manages filesystem metadata and data placement, while DataNodes store data blocks. The document outlines HDFS components like the SecondaryNameNode, DataNodes, and how files are written and read. It also discusses high availability solutions, operational tools, and the future of HDFS.
Active Directory is a hierarchical directory service for Windows domain networks that stores information about objects on the network such as user accounts, groups, computers, printers, and other network resources. It provides a centralized system for managing these resources. A domain controller is a server that contains the Active Directory database and controls access to network resources. A domain is a collection of computers, users, and groups that share a common directory database and security policies.
DFS allows administrators to consolidate file shares across multiple servers so users can access files from a single location. It provides benefits like centralized resources management, high accessibility regardless of physical location, fault tolerance through replication, and optimized workload distribution. HDFS is the distributed file system of Hadoop. It has a master/slave architecture with a single NameNode managing the file system namespace and DataNodes storing and retrieving blocks. The NameNode maintains metadata and regulates client access while DataNodes store and serve blocks upon instruction from the NameNode, providing streaming data access at large scale for batch processing workloads.
Distributed file systems (DFS) allow users to store and access files remotely as if they were local by providing transparency of file location, access, and migration. DFS uses file replication for efficiency and fault tolerance. The key goals of DFS are network transparency to access remote files and high availability to access files anywhere. DFS provides organization, storage, retrieval, naming, sharing, and protection of files through modules like directory, file, access control, block, and device modules.
The document describes the key features and architecture of the Hadoop Distributed File System (HDFS). HDFS is designed to reliably store very large data sets across clusters of commodity hardware. It uses a master/slave architecture with a single NameNode that manages file system metadata and regulates client access. Numerous DataNodes store file system blocks and replicate data for fault tolerance. The document outlines HDFS properties like high fault tolerance, data replication, and APIs.
The document starts with an introduction to Hadoop and covers the Hadoop 1.x / 2.x services (HDFS / MapReduce / YARN).
It also explains the architecture of Hadoop, the working of the Hadoop Distributed File System, and the MapReduce programming model.
This document discusses distributed file systems (DFS), including their structure, naming and transparency, remote file access, caching, consistency issues, stateful vs stateless service, file replication, and examples like the Andrew distributed computing environment. Key points are that a DFS manages dispersed storage across a network, provides location and naming transparency, uses caching to improve performance of remote access, and must address consistency between cached and master copies of files.
The document discusses various aspects of file systems and storage management in operating systems. It covers topics like file attributes, operations, structures, access methods, directory structures, file sharing, consistency semantics, and protection. File attributes include the file name, size, type and protection attributes. Common file operations are creating, reading, writing and deleting files. Files can have sequential, direct or indexed access methods. Directory structures can be single-level, two-level, tree-structured or graph-based. File sharing requires consistency models like Unix, session or immutable semantics. Protection controls access via access matrices, access control lists or capability lists.
The slides were created for a university program on an Apache Hadoop + Apache Apex workshop.
They explain almost all of the HDFS-related commands in detail, along with examples.
The Hadoop Distributed File System (HDFS) has a master/slave architecture with a single NameNode that manages the file system namespace and regulates client access, and multiple DataNodes that store and retrieve blocks of data files. The NameNode maintains metadata and a map of blocks to files, while DataNodes store blocks and report their locations. Blocks are replicated across DataNodes for fault tolerance following a configurable replication factor. The system uses rack awareness and preferential selection of local replicas to optimize performance and bandwidth utilization.
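The rack-aware placement described above can be sketched as follows. This is a toy model of HDFS's default policy for a replication factor of 3 (first replica on the writer's DataNode, second on a node in a different rack, third on a different node in the second replica's rack); the `{rack: [nodes]}` topology shape and function name are assumptions for illustration, and each rack is assumed to hold at least two nodes.

```python
import random

def place_replicas(writer_node: str, topology: dict) -> list:
    """Toy HDFS default placement: local node, then a remote rack,
    then a second node inside that same remote rack (saves one
    cross-rack transfer while still surviving a whole-rack failure)."""
    node_rack = {n: r for r, nodes in topology.items() for n in nodes}
    first = writer_node
    remote_racks = [r for r in topology if r != node_rack[first]]
    second_rack = random.choice(remote_racks)
    second = random.choice(topology[second_rack])
    third = random.choice([n for n in topology[second_rack] if n != second])
    return [first, second, third]
```

Keeping the second and third replicas in one remote rack, rather than three racks, is the bandwidth-versus-availability compromise the policy makes.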
The Google File System (GFS) was designed by Google to store massive amounts of data across cheap, unreliable hardware. It uses a single master to coordinate metadata and mutations across multiple chunkservers. Files are divided into fixed-size chunks which are replicated for reliability. The design focuses on supporting huge files that are written once and read through streaming, while tolerating high failure rates through replication and relaxed consistency. GFS has proven successful at meeting Google's storage needs at massive scale.
The Hadoop Distributed File System (HDFS) is the primary data storage system used by Hadoop applications. It employs a Master and Slave architecture with a NameNode that manages metadata and DataNodes that store data blocks. The NameNode tracks locations of data blocks and regulates access to files, while DataNodes store file blocks and manage read/write operations as directed by the NameNode. HDFS provides high-performance, scalable access to data across large Hadoop clusters.
HDFS is Hadoop's implementation of a distributed file system designed to store large amounts of data across clusters of machines. It is based on Google's GFS and addresses limitations of other distributed file systems like NFS. HDFS uses a master/slave architecture with a NameNode master storing metadata and DataNodes storing data blocks. Data is replicated across multiple DataNodes for reliability. The file system is optimized for large, sequential reads and writes of entire files rather than random access or updates.
HDFS is a distributed file system designed for storing very large data files across commodity servers or clusters. It works on a master-slave architecture with one namenode (master) and multiple datanodes (slaves). The namenode manages the file system metadata and regulates client access, while datanodes store and retrieve block data from their local file systems. Files are divided into large blocks which are replicated across datanodes for fault tolerance. The namenode monitors datanodes and replicates blocks if their replication drops below a threshold.
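The re-replication check mentioned above can be sketched in a few lines. The data shapes here (a block-to-holders map and a set of live DataNodes) and the function name are hypothetical, chosen only to make the NameNode's bookkeeping concrete.

```python
def under_replicated(block_replicas: dict, live_nodes: set, target: int = 3):
    """Given block id -> set of DataNodes holding a replica, and the set of
    DataNodes currently alive, return {block: replicas_needed} for every
    block whose live replica count fell below the target factor."""
    todo = {}
    for block, holders in block_replicas.items():
        live = len(holders & live_nodes)
        if live < target:
            todo[block] = target - live
    return todo
```

In real HDFS the NameNode derives liveness from DataNode heartbeats and queues the deficient blocks for copying; this sketch only shows the detection step.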
Hadoop Distributed File System (HDFS) is modeled after Google File System and optimized for large data sets and high throughput. HDFS uses a master/slave architecture with a NameNode that manages file system metadata and DataNodes that store data blocks. The NameNode tracks locations of data blocks and the DataNodes provide streaming access to blocks for batch processing applications. HDFS replicates data blocks across multiple DataNodes for fault tolerance.
HDFS is a distributed file system that stores large data across multiple nodes in a Hadoop cluster. It divides files into blocks and replicates them across nodes for reliability. The NameNode manages the file system namespace and regulates client access, while DataNodes store data blocks. HDFS provides interfaces for applications to access data blocks efficiently and is highly fault tolerant due to replication.
Hadoop is an open-source framework for distributed storage and processing of large datasets across clusters of computers. The document provides an overview of the Hadoop architecture, including HDFS, MapReduce, and key components like the NameNode, DataNode, JobTracker, and TaskTracker. It also discusses Hadoop's history, features, use cases, and configuration.
Big data interview questions and answers (Kalyan Hadoop)
This document provides an overview of the Hadoop Distributed File System (HDFS), including its goals, design, daemons, and processes for reading and writing files. HDFS is designed for storing very large files across commodity servers, and provides high throughput and reliability through replication. The key components are the NameNode, which manages metadata, and DataNodes, which store data blocks. The Secondary NameNode assists the NameNode in checkpointing filesystem state periodically.
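The checkpointing role described above can be modelled in miniature: the Secondary NameNode replays the edit log against the last fsimage to produce a fresh image, letting the NameNode truncate its log. The operation names and the path-to-metadata dict below are invented simplifications, not the real fsimage/edits formats.

```python
def checkpoint(fsimage: dict, edit_log: list) -> dict:
    """Toy Secondary NameNode checkpoint: replay logged namespace edits
    against the previous image (path -> metadata) to build a new image."""
    image = dict(fsimage)
    for op, path, *args in edit_log:
        if op == "create":
            image[path] = {"replication": args[0]}
        elif op == "delete":
            image.pop(path, None)
        elif op == "set_replication":
            image[path]["replication"] = args[0]
    return image
```

The point of the merge is startup time: replaying a short log against a recent image is far faster than replaying every edit since the cluster was formatted.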
HDFS is a distributed file system designed for storing large files across clusters of commodity hardware. It provides high-throughput access to application data reliably, even in the event of hardware failures, through data replication across multiple nodes. The master node (namenode) manages metadata like file paths and block locations, while slave nodes (datanodes) store file blocks. Files are split into large blocks which are replicated to provide fault tolerance.
We have entered an era of big data. Big data generally refers to collections of data sets so large and complex that they are very hard to handle with conventional database management tools. The principal challenges with big data include creation, curation, storage, sharing, querying, analysis, and visualization. Handling such data therefore requires highly parallel software. First, data is acquired from diverse sources such as social media, traditional enterprise data, or sensor data; Flume, for example, can be used to acquire data from social networks such as Twitter. This data can then be organized using distributed file systems such as the Hadoop Distributed File System. These file systems are very efficient when reads greatly outnumber writes.
Hadoop-professional-software-development-course-in-mumbaiUnmesh Baile
The document provides an overview of the Hadoop Distributed File System (HDFS). It describes HDFS's key features like fault tolerance, high throughput, and support for large datasets. The core architecture uses a master/slave model with a Namenode managing metadata and Datanodes storing file data blocks. HDFS employs replication for reliability and data is placed across racks for high availability. The document also discusses HDFS protocols, APIs, and how it ensures data integrity and handles failures.
Hadoop professional-software-development-course-in-mumbaiUnmesh Baile
Vibrant Technologies is headquarted in Mumbai,India.We are the best Hadoop training provider in Navi Mumbai who provides Live Projects to students.We provide Corporate Training also.We are Best Hadoop classes in Mumbai according to our students and corporates
The document summarizes Hadoop Distributed File System (HDFS). HDFS is the primary data storage system used by Hadoop applications to provide scalable and reliable access to data across large clusters. It uses a master-slave architecture with a NameNode that manages file metadata and DataNodes that store file data blocks. HDFS supports big data analytics applications by enabling distributed processing of large datasets in a fault-tolerant manner.
This document discusses Hadoop Distributed File System (HDFS) and MapReduce. It begins by explaining HDFS architecture, including the NameNode and DataNodes. It then discusses how HDFS is used to store large files reliably across commodity hardware. The document also provides steps to install Hadoop in single node cluster and describes core Hadoop services like JobTracker and TaskTracker. It concludes by discussing HDFS commands and a quiz about Hadoop components.
HDFS is a distributed file system designed for large data sets and high throughput access. It uses a master/slave architecture with a Namenode managing the file system namespace and Datanodes storing file data blocks. Blocks are replicated across Datanodes for fault tolerance. The system is highly scalable, handling large clusters and files sizes ranging from gigabytes to terabytes.
This document summarizes key aspects of the Hadoop Distributed File System (HDFS). HDFS is designed for storing very large files across commodity hardware. It uses a master/slave architecture with a single NameNode that manages file system metadata and multiple DataNodes that store application data. HDFS allows for streaming access to this distributed data and can provide higher throughput than a single high-end server by parallelizing reads across nodes.
Design and Research of Hadoop Distributed Cluster Based on RaspberryIJRESJOURNAL
ABSTRACT : Based on the cost saving, this Hadoop distributed cluster based on raspberry is designed for the storage and processing of massive data. This paper expounds the two core technologies in the Hadoop software framework - HDFS distributed file system architecture and MapReduce distributed processing mechanism. The construction method of the cluster is described in detail, and the Hadoop distributed cluster platform is successfully constructed based on the two raspberry factions. The technical knowledge about Hadoop is well understood in theory and practice.
The data management industry has matured over the last three decades, primarily based on relational database management system(RDBMS) technology. Since the amount of data collected, and analyzed in enterprises has increased several folds in volume, variety and velocityof generation and consumption, organisations have started struggling with architectural limitations of traditional RDBMS architecture. As a result a new class of systems had to be designed and implemented, giving rise to the new phenomenon of “Big Data”. In this paper we will trace the origin of new class of system called Hadoop to handle Big data.
Hadoop Distributed Filesystem (HDFS) is a distributed filesystem designed for storing very large files across commodity hardware. It is optimized for streaming data access and is a good fit for large files, terabytes or petabytes in size, with streaming write-once and read-many access patterns. HDFS uses a master-slave architecture with a Namenode managing the filesystem metadata and Datanodes storing and retrieving block data. Blocks are replicated across Datanodes for reliability. The Namenode tracks block locations and clients read/write data by communicating with the Namenode and Datanodes in a pipeline.
The document provides an introduction to the key concepts of Big Data including Hadoop, HDFS, and MapReduce. It defines big data as large volumes of data that are difficult to process using traditional methods. Hadoop is introduced as an open-source framework for distributed storage and processing of large datasets across clusters of computers. HDFS is described as Hadoop's distributed file system that stores data across clusters and replicates files for reliability. MapReduce is a programming model where data is processed in parallel across clusters using mapping and reducing functions.
This document presents Xen, a virtual machine monitor (VMM) that allows multiple commodity operating systems to run concurrently on a physical machine. Xen achieves good performance and isolation between virtual machines through a technique called paravirtualization, where guest operating systems are modified to interface directly with the VMM rather than attempting to virtualize all hardware. This enables Xen to multiplex physical resources efficiently at the granularity of an entire operating system.
This document provides an overview of virtualization including:
1) It describes virtualization as separating a resource or request from its physical delivery through abstraction, allowing more flexible management of infrastructure.
2) It discusses different virtualization approaches including hardware-level, operating system-level, and application-level virtualization as well as hosted and hypervisor architectures.
3) It explains how virtualization can help with server consolidation and containment, test/development optimization, business continuity, and enterprise desktop management by increasing flexibility and utilization of resources.
1) The document discusses three main techniques for virtualizing the x86 CPU: full virtualization using binary translation, OS-assisted virtualization (paravirtualization), and hardware-assisted virtualization.
2) Full virtualization using binary translation allows any x86 OS to run virtualized without modification but has more overhead than other techniques. Paravirtualization requires OS modifications to replace privileged instructions but has lower overhead. Hardware-assisted virtualization uses new CPU features to trap privileged instructions.
3) Each technique has strengths and weaknesses in terms of performance, compatibility, and maintenance requirements. Currently, binary translation performs best overall but hardware assistance will improve over time. VMware uses multiple techniques to deliver the best balance of
This document discusses security concerns regarding cloud computing and proposes solutions to address those concerns. The key concerns discussed are traditional security issues like vulnerabilities, availability issues from outages, and third-party control issues regarding data ownership and compliance. The document argues that many of these issues are not new problems but rather existing problems in a new setting. It proposes that with continued research in areas like trusted computing and encryption techniques that support computation on encrypted data, these concerns can be alleviated to allow for greater adoption and realization of cloud computing's potential benefits while still maintaining appropriate control and security of data.
The document defines cloud computing according to the National Institute of Standards and Technology (NIST). It identifies five essential characteristics of cloud computing (on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service). It also outlines three service models (Software as a Service, Platform as a Service, and Infrastructure as a Service) and four deployment models (private cloud, community cloud, public cloud, and hybrid cloud). The purpose is to provide a baseline definition and taxonomy to facilitate comparisons of cloud services and deployment strategies.
The document describes MapReduce, a programming model and associated implementation for processing large datasets across distributed systems. MapReduce allows users to specify map and reduce functions to process key-value pairs. The runtime system automatically parallelizes and distributes the computation across clusters, handling failures and communication. Hundreds of programs have been implemented using MapReduce at Google to process terabytes of data on thousands of machines.
This document discusses live migration of virtual machines. It describes using pre-copy migration, which iteratively copies memory pages from the source machine to the destination while the virtual machine continues running. This allows for very short downtimes of 60ms or more. It implemented this approach for Xen virtual machines and was able to migrate virtual machines running servers with minimal disruption to clients.
This document provides an overview of Google's Bigtable distributed storage system. It describes Bigtable's data model as a sparse, multidimensional sorted map indexed by row, column, and timestamp. Bigtable stores data across many tablet servers, with a single master server coordinating metadata operations like tablet assignment and load balancing. The master uses Chubby, a distributed lock service, to track which tablet servers are available and reassign tablets if servers become unreachable.
Virtualization allows multiple virtual machines to run on a single physical machine. It relies on hardware advances like multi-core CPUs and networking improvements. Virtualization works by either emulating hardware, trapping privileged instructions and emulating them, dynamic binary translation, or paravirtualization where the guest OS is aware it is virtualized. I/O virtualization can emulate devices, use paravirtualized drivers, or directly assign devices to VMs. This enables server consolidation and efficient utilization of resources in cloud computing.
This document provides an overview and outline for the course INSE 6620 (Cloud Computing Security and Privacy). It discusses prerequisites, course administration details, exam and grading policies, and projected topics. The course will require strong problem solving and research paper comprehension skills. Exams will focus on applying concepts from lectures and readings. The grading will be based on two exams and a project involving a proposal, report, and presentation. Academic integrity is strictly enforced. Late submissions are allowed with penalties.
The document describes the Google File System (GFS), a scalable distributed file system designed and implemented by Google to meet its rapidly growing data storage needs. Key aspects of the GFS design include supporting large files and high throughput appending workloads on inexpensive commodity hardware in the face of frequent component failures. The GFS architecture uses a single master to manage metadata and multiple chunkservers to store and retrieve file chunks, providing fault tolerance through replication.
This document presents Xen, a virtual machine monitor (VMM) that allows multiple commodity operating systems to safely share hardware resources with high performance and minimal overhead. Xen uses a technique called paravirtualization where it presents a virtualized interface to guest operating systems that is similar but not identical to the underlying hardware. This requires some modifications to port guest operating systems but allows them to run with performance close to running directly on the hardware. Xen is targeted at hosting up to 100 virtual machines simultaneously on modern servers.
This document provides an introduction to warehouse-scale computers (WSCs), which are large datacenters designed to power Internet services. WSCs differ from traditional datacenters in that they belong to a single organization, use a homogeneous hardware and software platform, and share a common management layer to run a small number of very large applications. The scale of WSCs requires new approaches to construction and operation with an emphasis on cost efficiency. Key aspects of WSC design include storage systems, high-bandwidth networking fabrics, hierarchical storage architectures, high availability despite failures, and energy efficiency given the massive power usage of large datacenters.
This document proposes a novel random ceiling padding approach to preserve user privacy in web-based applications by resisting background knowledge attacks. It introduces models for traffic padding and privacy properties like indistinguishability and uncertainty. It then presents a generic random ceiling padding scheme that introduces randomness into forming padding groups to increase uncertainty for adversaries with background knowledge. The document confirms the correctness and performance of this approach through theoretical analysis and experiments on real applications.
This document describes Bigtable, a distributed storage system designed by Google to manage large amounts of structured data across thousands of servers. Bigtable provides a simple data model with dynamic control over data layout and format. It scales to petabytes of data and is used by many Google products and projects. The document discusses Bigtable's data model, client API, implementation details including its use of other Google infrastructure like GFS and Chubby, and performance measurements.
This document provides an overview of cloud computing from researchers at UC Berkeley. It defines cloud computing as both software delivered as a service over the internet (SaaS) and the hardware/software in datacenters providing those services (clouds). The researchers argue that large, low-cost datacenters enabled cloud computing by lowering costs through economies of scale and statistical multiplexing. They classify current cloud offerings and discuss when utility computing is preferable to private clouds. The document identifies top obstacles to cloud computing growth and opportunities to overcome them.
This document discusses side-channel attacks on encrypted cloud traffic. It begins with an overview of cloud applications and side-channel attacks. It then describes how side-channel attacks can reveal users' private information like search queries and health records by observing packet sizes and directions. Several real-world web applications are shown to be vulnerable, leaking details on users' tax filings, investments, and medical histories. Mitigating these attacks is challenging due to conflicting goals of privacy protection and low overhead. The document proposes a solution called "ceiling padding" which groups similar traffic patterns and pads all packets in a group to the maximum size, drawing inspiration from existing privacy-preserving data publishing techniques. Key challenges in applying these techniques to traffic padding
This document discusses side-channel attacks on encrypted cloud traffic and challenges in mitigating these attacks. It presents research on how patterns in encrypted traffic sizes and directions can leak users' private information when entering inputs into web applications. Even with encryption, traffic analysis can reveal search queries, medical records, tax details, and other data. The root causes are fundamental characteristics of web apps like low entropy inputs and stateful communications. Effective solutions require understanding each application, as padding policies must be application-specific. The document also discusses using techniques from privacy-preserving data publishing to achieve "ceiling padding", but there are challenges around cost and sequential inputs that require new techniques.
The document discusses two types of attacks on cloud computing infrastructure: co-residence attacks and power attacks. Co-residence attacks involve an attacker attempting to launch virtual machines on the same physical server as a target in order to exploit side channels and gather sensitive information. Power attacks involve launching workloads that cause power spikes high enough to trip circuit breakers and cause denial of service by overloading the power infrastructure of data centers. The document outlines the techniques used in each type of attack and discusses potential mitigations.
This document discusses power attacks on cloud computing infrastructure. It describes how oversubscription of power capacity leaves data centers vulnerable to attacks that generate power spikes. The attacks could be launched by malicious users running intensive workloads on public servers. Experiments show how workloads can be tuned to significantly increase power consumption and potentially trip circuit breakers. Various attack vectors are explored targeting infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). Simulations demonstrate the attacks could cause outages and damage at the data center level if launched at large scale. Mitigations are difficult due to the challenges of predicting and limiting peak power usage.
This slide is special for master students (MIBS & MIFB) in UUM. Also useful for readers who are interested in the topic of contemporary Islamic banking.
Main Java[All of the Base Concepts}.docxadhitya5119
This is part 1 of my Java Learning Journey. This Contains Custom methods, classes, constructors, packages, multithreading , try- catch block, finally block and more.
This presentation includes basic of PCOS their pathology and treatment and also Ayurveda correlation of PCOS and Ayurvedic line of treatment mentioned in classics.
Strategies for Effective Upskilling is a presentation by Chinwendu Peace in a Your Skill Boost Masterclass organisation by the Excellence Foundation for South Sudan on 08th and 09th June 2024 from 1 PM to 3 PM on each day.
How to Add Chatter in the odoo 17 ERP ModuleCeline George
In Odoo, the chatter is like a chat tool that helps you work together on records. You can leave notes and track things, making it easier to talk with your team and partners. Inside chatter, all communication history, activity, and changes will be displayed.
বাংলাদেশের অর্থনৈতিক সমীক্ষা ২০২৪ [Bangladesh Economic Review 2024 Bangla.pdf] কম্পিউটার , ট্যাব ও স্মার্ট ফোন ভার্সন সহ সম্পূর্ণ বাংলা ই-বুক বা pdf বই " সুচিপত্র ...বুকমার্ক মেনু 🔖 ও হাইপার লিংক মেনু 📝👆 যুক্ত ..
আমাদের সবার জন্য খুব খুব গুরুত্বপূর্ণ একটি বই ..বিসিএস, ব্যাংক, ইউনিভার্সিটি ভর্তি ও যে কোন প্রতিযোগিতা মূলক পরীক্ষার জন্য এর খুব ইম্পরট্যান্ট একটি বিষয় ...তাছাড়া বাংলাদেশের সাম্প্রতিক যে কোন ডাটা বা তথ্য এই বইতে পাবেন ...
তাই একজন নাগরিক হিসাবে এই তথ্য গুলো আপনার জানা প্রয়োজন ...।
বিসিএস ও ব্যাংক এর লিখিত পরীক্ষা ...+এছাড়া মাধ্যমিক ও উচ্চমাধ্যমিকের স্টুডেন্টদের জন্য অনেক কাজে আসবে ...
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
The simplified electron and muon model, Oscillating Spacetime: The Foundation...RitikBhardwaj56
Discover the Simplified Electron and Muon Model: A New Wave-Based Approach to Understanding Particles delves into a groundbreaking theory that presents electrons and muons as rotating soliton waves within oscillating spacetime. Geared towards students, researchers, and science buffs, this book breaks down complex ideas into simple explanations. It covers topics such as electron waves, temporal dynamics, and the implications of this model on particle physics. With clear illustrations and easy-to-follow explanations, readers will gain a new outlook on the universe's fundamental nature.
Thinking of getting a dog? Be aware that breeds like Pit Bulls, Rottweilers, and German Shepherds can be loyal and dangerous. Proper training and socialization are crucial to preventing aggressive behaviors. Ensure safety by understanding their needs and always supervising interactions. Stay safe, and enjoy your furry friends!
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
(the physical location of file data). An HDFS client wanting to
read a file first contacts the NameNode for the locations of data
blocks comprising the file and then reads block contents from
the DataNode closest to the client. When writing data, the cli-
ent requests the NameNode to nominate a suite of three
DataNodes to host the block replicas. The client then writes
data to the DataNodes in a pipeline fashion. The current design
has a single NameNode per cluster. A cluster can have
thousands of DataNodes and tens of thousands of HDFS
clients, as each DataNode may execute multiple application
tasks concurrently.
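The read path just described can be sketched as a toy in-memory simulation; all class and method names below are illustrative, not the actual HDFS client API:

```python
# Toy model of the HDFS read path: the client asks the NameNode for
# the DataNodes hosting each block, then reads every block from the
# replica it considers closest. All names are hypothetical.

class NameNode:
    def __init__(self):
        self.blocks = {}      # file path -> ordered list of block ids
        self.locations = {}   # block id -> list of DataNode ids

    def get_block_locations(self, path):
        return [(b, self.locations[b]) for b in self.blocks[path]]

class DataNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.store = {}       # block id -> bytes

def read_file(path, namenode, datanodes, closest):
    """Read all blocks of `path`, choosing for each block the replica
    that the `closest` ranking function (e.g. network distance) prefers."""
    data = b""
    for block_id, replica_nodes in namenode.get_block_locations(path):
        node_id = min(replica_nodes, key=closest)
        data += datanodes[node_id].store[block_id]
    return data
```

With two blocks spread over three DataNodes, `read_file` fetches each block from the preferred replica and concatenates the results.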
HDFS keeps the entire namespace in RAM. The inode data
and the list of blocks belonging to each file comprise the meta-
data of the name system called the image. The persistent record
of the image stored in the local host’s native file system is
called a checkpoint. The NameNode also stores the modifica-
tion log of the image called the journal in the local host’s na-
tive file system. For improved durability, redundant copies of
the checkpoint and journal can be made at other servers. Dur-
ing restarts the NameNode restores the namespace by reading
the namespace and replaying the journal. The locations of
block replicas may change over time and are not part of the
persistent checkpoint.
B. DataNodes
Each block replica on a DataNode is represented by two
files in the local host’s native file system. The first file contains
the data itself, and the second file holds the block’s metadata, including
checksums for the block data and the block’s generation stamp.
The size of the data file equals the actual length of the block
and does not require extra space to round it up to the nominal
block size as in traditional file systems. Thus, if a block is half
full it needs only half of the space of the full block on the local
drive.
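The two-file replica layout can be illustrated with a small sketch. The on-disk format shown here (a text metadata file, CRC32 per 512-byte chunk) is an assumption for illustration, not the exact HDFS format:

```python
import os
import zlib

CHUNK = 512  # bytes covered by each checksum (illustrative value)

def write_replica(directory, block_id, data, generation_stamp):
    """Store a block replica as two files: the data itself and a
    metadata file holding the generation stamp plus per-chunk CRC32
    checksums. The data file is exactly len(data) bytes long; it is
    never padded up to the nominal block size."""
    data_path = os.path.join(directory, f"blk_{block_id}")
    meta_path = data_path + ".meta"
    with open(data_path, "wb") as f:
        f.write(data)
    with open(meta_path, "w") as f:
        f.write(f"{generation_stamp}\n")
        for i in range(0, len(data), CHUNK):
            f.write(f"{zlib.crc32(data[i:i+CHUNK])}\n")
    return data_path, meta_path

def verify_replica(data_path, meta_path):
    """Recompute checksums and compare them to the metadata file."""
    with open(data_path, "rb") as f:
        data = f.read()
    with open(meta_path) as f:
        lines = f.read().split()
    stored = [int(x) for x in lines[1:]]  # skip the generation stamp
    fresh = [zlib.crc32(data[i:i+CHUNK]) for i in range(0, len(data), CHUNK)]
    return stored == fresh
```

Note that a 1000-byte block occupies exactly 1000 bytes in the data file, matching the space-saving behavior described above.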
During startup each DataNode connects to the NameNode
and performs a handshake. The purpose of the handshake is to
verify the namespace ID and the software version of the
DataNode. If either does not match that of the NameNode the
DataNode automatically shuts down.
The namespace ID is assigned to the file system instance
when it is formatted. The namespace ID is persistently stored
on all nodes of the cluster. Nodes with a different namespace
ID will not be able to join the cluster, thus preserving the integ-
rity of the file system.
The consistency of software versions is important because
an incompatible version may cause data corruption or loss, and on
large clusters of thousands of machines it is easy to overlook
nodes that did not shut down properly prior to the software
upgrade or were not available during the upgrade.
A DataNode that is newly initialized and without any
namespace ID is permitted to join the cluster and receive the
cluster’s namespace ID.
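The handshake logic of the last three paragraphs amounts to a simple check. The function below is an illustrative sketch, with a missing namespace ID modeled as None:

```python
def handshake(dn_namespace_id, dn_version, nn_namespace_id, nn_version):
    """Return the namespace ID the DataNode should use, or raise if
    the node must shut down. A freshly formatted DataNode with no
    namespace ID yet (None here) may join and adopts the cluster's ID."""
    if dn_namespace_id is None:
        return nn_namespace_id  # newly initialized node joins the cluster
    if dn_namespace_id != nn_namespace_id:
        raise RuntimeError("namespace ID mismatch: shutting down")
    if dn_version != nn_version:
        raise RuntimeError("software version mismatch: shutting down")
    return dn_namespace_id
```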
After the handshake the DataNode registers with the
NameNode. DataNodes persistently store their unique storage
IDs. The storage ID is an internal identifier of the DataNode,
which makes it recognizable even if it is restarted with a differ-
ent IP address or port. The storage ID is assigned to the
DataNode when it registers with the NameNode for the first
time and never changes after that.
A DataNode identifies block replicas in its possession to the
NameNode by sending a block report. A block report contains
the block id, the generation stamp and the length for each block
replica the server hosts. The first block report is sent immedi-
ately after the DataNode registration. Subsequent block reports
are sent every hour and provide the NameNode with an up-to-
date view of where block replicas are located on the cluster.
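A block report can be modeled as a full list of (block id, generation stamp, length) records that replaces the NameNode's previous knowledge of that DataNode. The structure below is a sketch, not the wire format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BlockRecord:
    """One entry of a block report (illustrative shape)."""
    block_id: int
    generation_stamp: int
    length: int

def apply_block_report(location_map, storage_id, report):
    """Refresh the NameNode's block -> DataNode map from one node's
    block report. The report is authoritative for that node, so any
    stale entries for it are dropped first."""
    for nodes in location_map.values():
        nodes.discard(storage_id)
    for rec in report:
        location_map.setdefault(rec.block_id, set()).add(storage_id)
    return location_map
```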
During normal operation DataNodes send heartbeats to the
NameNode to confirm that the DataNode is operating and the
block replicas it hosts are available. The default heartbeat in-
terval is three seconds. If the NameNode does not receive a
heartbeat from a DataNode in ten minutes the NameNode con-
siders the DataNode to be out of service and the block replicas
hosted by that DataNode to be unavailable. The NameNode
then schedules creation of new replicas of those blocks on other
DataNodes.
Heartbeats from a DataNode also carry information about
total storage capacity, fraction of storage in use, and the num-
ber of data transfers currently in progress. These statistics are
used for the NameNode’s space allocation and load balancing
decisions.
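The heartbeat bookkeeping can be sketched as follows, using the default three-second interval and ten-minute dead-node timeout from the text (class and method names are hypothetical):

```python
HEARTBEAT_INTERVAL = 3.0    # seconds, the HDFS default
DEAD_NODE_TIMEOUT = 600.0   # ten minutes without a heartbeat

class HeartbeatMonitor:
    """Track when each DataNode was last heard from, along with the
    statistics its heartbeats carry. Illustrative sketch."""
    def __init__(self):
        self.last_heard = {}  # storage id -> timestamp of last heartbeat
        self.stats = {}       # storage id -> (capacity, used_fraction, transfers)

    def record(self, storage_id, now, capacity, used_fraction, transfers):
        self.last_heard[storage_id] = now
        self.stats[storage_id] = (capacity, used_fraction, transfers)

    def dead_nodes(self, now):
        """DataNodes considered out of service; the NameNode would then
        schedule new replicas of their blocks on other nodes."""
        return [sid for sid, t in self.last_heard.items()
                if now - t > DEAD_NODE_TIMEOUT]
```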
The NameNode does not directly call DataNodes. It uses
replies to heartbeats to send instructions to the DataNodes. The
instructions include commands to:
• replicate blocks to other nodes;
• remove local block replicas;
• re-register or to shut down the node;
• send an immediate block report.
These commands are important for maintaining the overall
system integrity and therefore it is critical to keep heartbeats
frequent even on big clusters. The NameNode can process
thousands of heartbeats per second without affecting other
NameNode operations.
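Because the NameNode never calls DataNodes directly, commands are queued and piggybacked on heartbeat replies; a minimal sketch:

```python
from collections import deque

class CommandQueue:
    """The NameNode queues commands per DataNode and returns them in
    the reply to that node's next heartbeat. Names are illustrative."""
    def __init__(self):
        self.pending = {}  # storage id -> deque of commands

    def enqueue(self, storage_id, command):
        self.pending.setdefault(storage_id, deque()).append(command)

    def heartbeat_reply(self, storage_id):
        """Drain and return every command queued for this DataNode."""
        return list(self.pending.pop(storage_id, deque()))
```

A command here could be any of the four kinds listed above, e.g. `("replicate", block_id, targets)` or `("block_report",)`.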
C. HDFS Client
User applications access the file system using the HDFS
client, a code library that exports the HDFS file system inter-
face.
Similar to most conventional file systems, HDFS supports
operations to read, write and delete files, and operations to cre-
ate and delete directories. The user references files and directo-
ries by paths in the namespace. The user application generally
does not need to know that file system metadata and storage are
on different servers, or that blocks have multiple replicas.
When an application reads a file, the HDFS client first asks
the NameNode for the list of DataNodes that host replicas of
the blocks of the file. It then contacts a DataNode directly and
requests the transfer of the desired block. When a client writes,
it first asks the NameNode to choose DataNodes to host repli-
cas of the first block of the file. The client organizes a pipeline
from node-to-node and sends the data. When the first block is
filled, the client requests new DataNodes to be chosen to host
replicas of the next block. A new pipeline is organized, and the
client sends the remaining bytes of the file. Each choice of
DataNodes is likely to be different. The interactions among the
client, the NameNode and the DataNodes are illustrated in
Fig. 1.

Figure 1. An HDFS client creates a new file by giving its path to the NameNode. For each block of the file, the NameNode returns
a list of DataNodes to host its replicas. The client then pipelines data to the chosen DataNodes, which eventually confirm the
creation of the block replicas to the NameNode.
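The node-to-node pipeline can be modeled as each DataNode storing the bytes and forwarding them downstream. The sketch below is illustrative and ignores packets, acknowledgments, and failure recovery:

```python
class DataNode:
    """A DataNode in a toy write pipeline (hypothetical model)."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.store = {}  # block id -> bytes

    def receive(self, block_id, data, downstream):
        """Store the data locally, then forward it to the next node in
        the pipeline, if any."""
        self.store[block_id] = data
        if downstream:
            downstream[0].receive(block_id, data, downstream[1:])

def write_block(block_id, data, pipeline):
    """The client sends the block to the first DataNode only; the
    pipeline propagates it to the remaining replicas."""
    pipeline[0].receive(block_id, data, pipeline[1:])
```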
Unlike conventional file systems, HDFS provides an API
that exposes the locations of a file’s blocks. This allows
applications like the MapReduce framework to schedule a task to
where the data are located, thus improving the read perform-
ance. It also allows an application to set the replication factor
of a file. By default a file’s replication factor is three. For criti-
cal files or files which are accessed very often, having a higher
replication factor improves their tolerance against faults and
increases their read bandwidth.
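A scheduler can exploit the exposed block locations to place tasks where the data live. The following sketch (hypothetical names, one task per block) falls back to a non-local node only when no hosting node has a free task slot:

```python
def schedule_tasks(block_locations, node_slots):
    """Assign one task per block to a node hosting a replica when
    possible (data-local), otherwise to any node with a free slot.
    Returns block id -> (chosen node, was the placement data-local)."""
    assignment = {}
    for block_id, hosts in block_locations.items():
        local = [n for n in hosts if node_slots.get(n, 0) > 0]
        candidates = local or [n for n, s in node_slots.items() if s > 0]
        node = candidates[0]
        node_slots[node] -= 1
        assignment[block_id] = (node, node in hosts)
    return assignment
```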
D. Image and Journal
The namespace image is the file system metadata that de-
scribes the organization of application data as directories and
files. A persistent record of the image written to disk is called a
checkpoint. The journal is a write-ahead commit log for
changes to the file system that must be persistent. For each
client-initiated transaction, the change is recorded in the jour-
nal, and the journal file is flushed and synced before the
change is committed to the HDFS client. The checkpoint file is
never changed by the NameNode; it is replaced in its entirety
when a new checkpoint is created during restart, when re-
quested by the administrator, or by the CheckpointNode de-
scribed in the next section. During startup the NameNode ini-
tializes the namespace image from the checkpoint, and then
replays changes from the journal until the image is up-to-date
with the last state of the file system. A new checkpoint and
empty journal are written back to the storage directories before
the NameNode starts serving clients.
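The startup sequence (load the checkpoint, replay the journal, write a fresh checkpoint and empty journal) can be sketched with a toy namespace of paths; real journal records are of course richer than the (op, path) pairs used here:

```python
def restart_namespace(checkpoint, journal):
    """Rebuild the namespace at NameNode startup: initialize the image
    from the checkpoint, then replay journal records in order. An
    illustrative model; the real image holds inodes and block lists."""
    image = dict(checkpoint)  # path -> metadata
    for op, path in journal:
        if op == "create":
            image[path] = {}
        elif op == "delete":
            image.pop(path, None)
    # Before serving clients, a new checkpoint and an empty journal
    # are written back to the storage directories.
    return image, dict(image), []
```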
If either the checkpoint or the journal is missing, or be-
comes corrupt, the namespace information will be lost partly or
entirely. In order to preserve this critical information HDFS can
be configured to store the checkpoint and journal in multiple
storage directories. Recommended practice is to place the di-
rectories on different volumes, and for one storage directory to
be on a remote NFS server. The first choice prevents loss from
single volume failures, and the second choice protects against
failure of the entire node. If the NameNode encounters an error
writing the journal to one of the storage directories it automati-
cally excludes that directory from the list of storage directories.
The NameNode automatically shuts itself down if no storage
directory is available.
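Storing the journal in several directories and excluding a directory on its first write error can be sketched as follows; failures are injected explicitly since this is an in-memory model:

```python
class JournalWriter:
    """Write each journal record to every configured storage directory,
    permanently excluding a directory on a write error and shutting
    down when none remain. Directories are modeled as in-memory logs."""
    def __init__(self, directories, failing=()):
        self.logs = {d: [] for d in directories}
        self.failing = set(failing)  # injected write failures

    def append(self, record):
        for d in list(self.logs):
            if d in self.failing:
                del self.logs[d]  # exclude the failed directory
            else:
                self.logs[d].append(record)
        if not self.logs:
            raise SystemExit("no storage directory available")
```

Following the recommended practice above, the configured directories would span different volumes plus one remote NFS location.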
The NameNode is a multithreaded system and processes
requests simultaneously from multiple clients. Saving a trans-
action to disk becomes a bottleneck since all other threads need
to wait until the synchronous flush-and-sync procedure initi-
ated by one of them is complete. In order to optimize this
process the NameNode batches multiple transactions initiated
by different clients. When one of the NameNode’s threads ini-
tiates a flush-and-sync operation, all transactions batched at
that time are committed together. Remaining threads only need
to check that their transactions have been saved and do not
need to initiate a flush-and-sync operation.
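The batched flush-and-sync (group commit) described above can be sketched like this; `flush_and_sync` stands in for the synchronous disk write, and a thread whose transaction was already covered by another thread's sync returns immediately:

```python
import threading

class BatchedJournal:
    """Group commit sketch: threads append transactions under a lock;
    whichever thread syncs first commits every transaction batched so
    far, so the others only verify theirs was saved."""
    def __init__(self):
        self.lock = threading.Lock()
        self.pending = []
        self.synced = 0  # count of transactions durably saved

    def log_transaction(self, record):
        with self.lock:
            self.pending.append(record)
            txid = self.synced + len(self.pending)
        self.flush_and_sync(txid)
        return txid

    def flush_and_sync(self, txid):
        with self.lock:
            if txid <= self.synced:
                return  # already committed by another thread's sync
            self.synced += len(self.pending)
            self.pending.clear()
```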
E. CheckpointNode
The NameNode in HDFS, in addition to its primary role of
serving client requests, can alternatively execute one of two
other roles: a CheckpointNode or a BackupNode. The
role is specified at node startup.
The CheckpointNode periodically combines the existing
checkpoint and journal to create a new checkpoint and an
empty journal. The CheckpointNode usually runs on a different
host from the NameNode since it has the same memory re-
quirements as the NameNode. It downloads the current check-
point and journal files from the NameNode, merges them lo-
cally, and returns the new checkpoint back to the NameNode.
Creating periodic checkpoints is one way to protect the file
system metadata. The system can start from the most recent
checkpoint if all other persistent copies of the namespace im-
age or journal are unavailable.
Creating a checkpoint lets the NameNode truncate the tail
of the journal when the new checkpoint is uploaded to the
NameNode. HDFS clusters run for prolonged periods of time
without restarts during which the journal constantly grows. If
the journal grows very large, the probability of loss or corrup-
tion of the journal file increases. Also, a very large journal ex-
tends the time required to restart the NameNode. For a large
cluster, it takes an hour to process a week-long journal. Good
practice is to create a daily checkpoint.
F. BackupNode
A recently introduced feature of HDFS is the BackupNode.
Like a CheckpointNode, the BackupNode is capable of creating
periodic checkpoints, but in addition it maintains an in-
memory, up-to-date image of the file system namespace that is
always synchronized with the state of the NameNode.
The BackupNode accepts the journal stream of namespace
transactions from the active NameNode, saves them to its own
storage directories, and applies these transactions to its own
namespace image in memory. The NameNode treats the
BackupNode as a journal store the same as it treats journal files
in its storage directories. If the NameNode fails, the
BackupNode's image in memory and the checkpoint on disk
together provide a record of the latest namespace state.
The BackupNode can create a checkpoint without down-
loading checkpoint and journal files from the active
NameNode, since it already has an up-to-date namespace im-
age in its memory. This makes the checkpoint process on the
BackupNode more efficient as it only needs to save the name-
space into its local storage directories.
The BackupNode can be viewed as a read-only NameNode.
It contains all file system metadata information except for
block locations. It can perform all operations of the regular
NameNode that do not involve modification of the namespace
or knowledge of block locations. Use of a BackupNode provides
the option of running the NameNode without persistent storage,
delegating responsibility for persisting the namespace state to
the BackupNode.
G. Upgrades, File System Snapshots
During software upgrades the possibility of corrupting the
system due to software bugs or human mistakes increases. The
purpose of creating snapshots in HDFS is to minimize potential
damage to the data stored in the system during upgrades.
The snapshot mechanism lets administrators persistently
save the current state of the file system, so that if the upgrade
results in data loss or corruption it is possible to rollback the
upgrade and return HDFS to the namespace and storage state as
they were at the time of the snapshot.
Only one snapshot can exist at a time. It is created at the
cluster administrator's option whenever the system is started. If a
snapshot is requested, the NameNode first reads the checkpoint
and journal files and merges them in memory. Then it writes
the new checkpoint and the empty journal to a new location, so
that the old checkpoint and journal remain unchanged.
During handshake the NameNode instructs DataNodes
whether to create a local snapshot. The local snapshot on the
DataNode cannot be created by replicating the data file
directories, as this would require doubling the storage capacity
of every DataNode on the cluster. Instead each DataNode creates a copy
of the storage directory and hard links existing block files into
it. When the DataNode removes a block it removes only the
hard link, and block modifications during appends use the
copy-on-write technique. Thus old block replicas remain un-
touched in their old directories.
The cluster administrator can choose to roll back HDFS to
the snapshot state when restarting the system. The NameNode
recovers the checkpoint saved when the snapshot was created.
DataNodes restore the previously renamed directories and initi-
ate a background process to delete block replicas created after
the snapshot was made. Once the choice to roll back has been
made, there is no provision to roll forward. The cluster
administrator can recover the storage occupied by the snapshot
by commanding the system to abandon the snapshot, thus
finalizing the software upgrade.
System evolution may lead to a change in the format of the
NameNode’s checkpoint and journal files, or in the data repre-
sentation of block replica files on DataNodes. The layout ver-
sion identifies the data representation formats, and is persis-
tently stored in the NameNode’s and the DataNodes’ storage
directories. During startup each node compares the layout ver-
sion of the current software with the version stored in its stor-
age directories and automatically converts data from older for-
mats to the newer ones. The conversion requires the mandatory
creation of a snapshot when the system restarts with the new
software layout version.
HDFS does not separate layout versions for the NameNode
and DataNodes because snapshot creation must be an all-
cluster effort rather than a node-selective event. If an upgraded
NameNode purges its image due to a software bug, then backing
up only the namespace state still results in total data loss: the
NameNode will not recognize the blocks reported by DataNodes,
and will order their deletion. Rolling back in this case will
recover the metadata, but the data itself will be lost. A
coordinated snapshot is required to avoid such cataclysmic
destruction.
III. FILE I/O OPERATIONS AND REPLICA MANAGEMENT
A. File Read and Write
An application adds data to HDFS by creating a new file
and writing the data to it. After the file is closed, the bytes writ-
ten cannot be altered or removed except that new data can be
added to the file by reopening the file for append. HDFS im-
plements a single-writer, multiple-reader model.
The HDFS client that opens a file for writing is granted a
lease for the file; no other client can write to the file. The writ-
ing client periodically renews the lease by sending a heartbeat
to the NameNode. When the file is closed, the lease is revoked.
The lease duration is bound by a soft limit and a hard limit.
Until the soft limit expires, the writer is certain of exclusive
access to the file. If the soft limit expires and the client fails to
close the file or renew the lease, another client can preempt the
lease. If the hard limit (one hour) expires and the client has
failed to renew the lease, HDFS assumes that the client has
quit, automatically closes the file on behalf of the writer,
and recovers the lease. The writer's lease does not prevent other
clients from reading the file; a file may have many concurrent
readers.
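The soft/hard-limit decision described above can be sketched as a small function. The hard limit of one hour comes from the text; the soft-limit value below is an assumption chosen only for illustration, and the function name and signature are hypothetical.

```python
SOFT_LIMIT = 60.0     # assumed value for illustration only
HARD_LIMIT = 3600.0   # one hour, as stated in the text

def lease_action(now, last_renewal, requester_is_holder):
    """Decide what may happen to a file's write lease (sketch).

    Returns one of:
      "write"              - the holder retains exclusive write access
      "denied"             - another client cannot yet touch the lease
      "may_preempt"        - soft limit expired; another client can preempt
      "namenode_recovers"  - hard limit expired; HDFS closes the file
                             on behalf of the writer and recovers the lease
    """
    age = now - last_renewal
    if requester_is_holder:
        return "write"
    if age < SOFT_LIMIT:
        return "denied"
    if age < HARD_LIMIT:
        return "may_preempt"
    return "namenode_recovers"
```

Note that none of these states affect readers: the lease governs writers only.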
An HDFS file consists of blocks. When there is a need for a
new block, the NameNode allocates a block with a unique
block ID and determines a list of DataNodes to host replicas of
the block. The DataNodes form a pipeline, the order of which
minimizes the total network distance from the client to the last
DataNode. Bytes are pushed to the pipeline as a sequence of
packets. The bytes that an application writes are first buffered at the
client side. After a packet buffer is filled (typically 64 KB), the
data are pushed to the pipeline. The next packet can be pushed
to the pipeline before receiving the acknowledgement for the
previous packets. The number of outstanding packets is limited
by the outstanding packets window size of the client.
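The client-side packetizing and windowing described above can be sketched as follows. The 64 KB packet size is from the text; the function names and the ack callback are hypothetical stand-ins for the real pipeline protocol.

```python
from collections import deque

PACKET_SIZE = 64 * 1024   # typical packet buffer size from the text

def packetize(data, packet_size=PACKET_SIZE):
    """Split a byte stream into pipeline packets (sketch)."""
    return [data[i:i + packet_size] for i in range(0, len(data), packet_size)]

def stream_block(packets, window_size, ack_packet):
    """Push packets, keeping at most `window_size` unacknowledged.

    `ack_packet` stands in for receiving an acknowledgement from the
    last DataNode in the pipeline.
    """
    outstanding = deque()
    sent = 0
    for p in packets:
        if len(outstanding) == window_size:   # window full: wait for oldest ack
            ack_packet(outstanding.popleft())
        outstanding.append(p)
        sent += 1
    while outstanding:                        # drain remaining acks at block close
        ack_packet(outstanding.popleft())
    return sent
```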
After data are written to an HDFS file, HDFS does not pro-
vide any guarantee that data are visible to a new reader until the
file is closed. If a user application needs the visibility guaran-
tee, it can explicitly call the hflush operation. Then the current
packet is immediately pushed to the pipeline, and the hflush
operation will wait until all DataNodes in the pipeline ac-
knowledge the successful transmission of the packet. All data
written before the hflush operation are then certain to be visible
to readers.
Figure 2. Data pipeline during block construction
If no error occurs, block construction goes through three
stages as shown in Fig. 2 illustrating a pipeline of three
DataNodes (DN) and a block of five packets. In the picture,
bold lines represent data packets, dashed lines represent ac-
knowledgment messages, and thin lines represent control mes-
sages to setup and close the pipeline. Vertical lines represent
activity at the client and the three DataNodes where time pro-
ceeds from top to bottom. From t0 to t1 is the pipeline setup
stage. The interval t1 to t2 is the data streaming stage, where t1
is the time when the first data packet gets sent and t2 is the time
that the acknowledgment to the last packet gets received. Here
an hflush operation transmits the second packet. The hflush
indication travels with the packet data and is not a separate
operation. The final interval t2 to t3 is the pipeline close stage
for this block.
In a cluster of thousands of nodes, failures of a node (most
commonly storage faults) are daily occurrences. A replica
stored on a DataNode may become corrupted because of faults
in memory, disk, or network. HDFS generates and stores
checksums for each data block of an HDFS file. Checksums are
verified by the HDFS client while reading to help detect any
corruption caused either by client, DataNodes, or network.
When a client creates an HDFS file, it computes the checksum
sequence for each block and sends it to a DataNode along with
the data. A DataNode stores checksums in a metadata file sepa-
rate from the block’s data file. When HDFS reads a file, each
block’s data and checksums are shipped to the client. The client
computes the checksums for the received data and verifies that
the newly computed checksums match the checksums it
received. If not, the client notifies the NameNode of the corrupt
replica and then fetches a different replica of the block from
another DataNode.
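The per-block checksum scheme can be sketched as below. CRC32 over fixed-size chunks is used here as a stand-in for HDFS's CRC-based checksums; the 512-byte chunk size and the function names are illustrative assumptions.

```python
import zlib

CHUNK = 512   # illustrative chunk size; HDFS checksums data in fixed-size chunks

def checksum_sequence(block_data, chunk=CHUNK):
    """Compute the per-chunk checksum sequence for a block (CRC32 stand-in)."""
    return [zlib.crc32(block_data[i:i + chunk])
            for i in range(0, len(block_data), chunk)]

def verify_on_read(block_data, received_checksums, chunk=CHUNK):
    """Client-side verification: recompute and compare. On a mismatch the
    client would notify the NameNode of the corrupt replica and fetch a
    different replica of the block from another DataNode."""
    return checksum_sequence(block_data, chunk) == received_checksums
```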
When a client opens a file to read, it fetches the list of
blocks and the locations of each block replica from the
NameNode. The locations of each block are ordered by their
distance from the reader. When reading the content of a block,
the client tries the closest replica first. If the read attempt fails,
the client tries the next replica in sequence. A read may fail if
the target DataNode is unavailable, the node no longer hosts a
replica of the block, or the replica is found to be corrupt when
checksums are tested.
HDFS permits a client to read a file that is open for writing.
When reading a file open for writing, the length of the last
block still being written is unknown to the NameNode. In this
case, the client asks one of the replicas for the latest length be-
fore starting to read its content.
The design of HDFS I/O is particularly optimized for batch
processing systems, like MapReduce, which require high
throughput for sequential reads and writes. However, much
effort has been put into improving its read/write response time in
order to support applications like Scribe that provide real-time
data streaming to HDFS, or HBase that provides random, real-
time access to large tables.
B. Block Placement
For a large cluster, it may not be practical to connect all
nodes in a flat topology. A common practice is to spread the
nodes across multiple racks. Nodes of a rack share a switch,
and rack switches are connected by one or more core switches.
Communication between two nodes in different racks has to go
through multiple switches. In most cases, network bandwidth
between nodes in the same rack is greater than network band-
width between nodes in different racks. Fig. 3 describes a clus-
ter with two racks, each of which contains three nodes.
Figure 3. Cluster topology example
HDFS estimates the network bandwidth between two nodes
by their distance. The distance from a node to its parent node is
assumed to be one. A distance between two nodes can be cal-
culated by summing up their distances to their closest common
ancestor. The shorter the distance between two nodes, the
greater the bandwidth they can utilize to transfer data.
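The distance calculation described above can be written directly. Each node is represented here (as an illustrative assumption) by its path from the root of the topology tree, matching the two-rack example of Fig. 3.

```python
def distance(path_a, path_b):
    """Network distance between two nodes, each given as its path from
    the root of the topology tree, e.g. ("/", "rack0", "dn01").

    Each parent-child hop counts as one; the distance is the sum of each
    node's distance to their closest common ancestor."""
    common = 0
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        common += 1
    return (len(path_a) - common) + (len(path_b) - common)
```

Under this model, two nodes on the same rack are at distance 2, and two nodes on different racks are at distance 4.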
HDFS allows an administrator to configure a script that re-
turns a node’s rack identification given a node’s address. The
NameNode is the central place that resolves the rack location of
each DataNode. When a DataNode registers with the
NameNode, the NameNode runs a configured script to decide
which rack the node belongs to. If no such script is configured,
the NameNode assumes that all the nodes belong to a single
default rack.
The placement of replicas is critical to HDFS data reliabil-
ity and read/write performance. A good replica placement pol-
icy should improve data reliability, availability, and network
bandwidth utilization. Currently HDFS provides a configurable
block placement policy interface so that users and researchers
can experiment with and test any policy that is optimal for their
applications.
The default HDFS block placement policy provides a
tradeoff between minimizing the write cost, and maximizing
data reliability, availability and aggregate read bandwidth.
When a new block is created, HDFS places the first replica on
the node where the writer is located, the second and the third
replicas on two different nodes in a different rack, and the rest
are placed on random nodes with restrictions that no more than
one replica is placed at one node and no more than two replicas
are placed in the same rack when the number of replicas is less
than twice the number of racks. The choice to place the second
and third replicas on a different rack better distributes the block
replicas for a single file across the cluster. If the first two repli-
cas were placed on the same rack, for any file, two-thirds of its
block replicas would be on the same rack.
After all target nodes are selected, nodes are organized as a
pipeline in the order of their proximity to the first replica. Data
are pushed to nodes in this order. For reading, the NameNode
first checks if the client’s host is located in the cluster. If yes,
block locations are returned to the client in the order of its
closeness to the reader. The block is read from DataNodes in
this preference order. (It is usual for MapReduce applications
to run on cluster nodes, but as long as a host can connect to the
NameNode and DataNodes, it can execute the HDFS client.)
This policy reduces the inter-rack and inter-node write traf-
fic and generally improves write performance. Because the
chance of a rack failure is far less than that of a node failure,
this policy does not impact data reliability and availability
guarantees. In the usual case of three replicas, it can reduce the
aggregate network bandwidth used when reading data since a
block is placed in only two unique racks rather than three.
The default HDFS replica placement policy can be summa-
rized as follows:
1. No Datanode contains more than one replica of
any block.
2. No rack contains more than two replicas of the
same block, provided there are sufficient racks on
the cluster.
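The two invariants just listed can be checked mechanically. The sketch below is an illustrative validator, not the placement algorithm itself; the data-structure shapes (a node-to-rack map) are assumptions.

```python
from collections import Counter

def placement_ok(replica_nodes, node_rack, num_racks):
    """Check the two invariants of the default placement policy:
    (1) at most one replica of a block per DataNode, and
    (2) at most two replicas per rack, applied when the number of
        replicas is less than twice the number of racks."""
    node_counts = Counter(replica_nodes)
    if any(c > 1 for c in node_counts.values()):
        return False
    if len(replica_nodes) < 2 * num_racks:
        rack_counts = Counter(node_rack[n] for n in replica_nodes)
        if any(c > 2 for c in rack_counts.values()):
            return False
    return True
```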
C. Replication management
The NameNode endeavors to ensure that each block always
has the intended number of replicas. The NameNode detects
that a block has become under- or over-replicated when a block
report from a DataNode arrives. When a block becomes over
replicated, the NameNode chooses a replica to remove. The
NameNode will prefer not to reduce the number of racks that
host replicas, and secondly prefer to remove a replica from the
DataNode with the least amount of available disk space. The
goal is to balance storage utilization across DataNodes without
reducing the block’s availability.
When a block becomes under-replicated, it is put in the rep-
lication priority queue. A block with only one replica has the
highest priority, while a block with a number of replicas that is
greater than two thirds of its replication factor has the lowest
priority. A background thread periodically scans the head of the
replication queue to decide where to place new replicas. Block
replication follows a similar policy as that of the new block
placement. If the number of existing replicas is one, HDFS
places the next replica on a different rack. In case that the block
has two existing replicas, if the two existing replicas are on the
same rack, the third replica is placed on a different rack; other-
wise, the third replica is placed on a different node in the same
rack as an existing replica. Here the goal is to reduce the cost of
creating new replicas.
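The priority ordering described above can be expressed as a small function; the integer priority levels below are an illustrative encoding (0 highest, 2 lowest), and the two-thirds comparison is written in integer form.

```python
def replication_priority(live_replicas, replication_factor):
    """Priority of an under-replicated block, per the rules in the text:
    a block with a single replica gets the highest priority, and a block
    with more than two thirds of its target replicas gets the lowest."""
    if live_replicas == 1:
        return 0                                   # highest priority
    if live_replicas * 3 > 2 * replication_factor:
        return 2                                   # lowest priority
    return 1
```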
The NameNode also makes sure that not all replicas of a
block are located on one rack. If the NameNode detects that a
block’s replicas end up at one rack, the NameNode treats the
block as under-replicated and replicates the block to a different
rack using the same block placement policy described above.
After the NameNode receives notification that the new replica
has been created, the block becomes over-replicated. The
NameNode then decides to remove an old replica, since the
over-replication policy prefers not to reduce the number of racks.
D. Balancer
HDFS block placement strategy does not take into account
DataNode disk space utilization. This is to avoid placing
new—more likely to be referenced—data at a small subset of
the DataNodes. Therefore data might not always be placed uni-
formly across DataNodes. Imbalance also occurs when new
nodes are added to the cluster.
The balancer is a tool that balances disk space usage on an
HDFS cluster. It takes a threshold value as an input parameter,
which is a fraction in the range of (0, 1). A cluster is balanced
if for each DataNode, the utilization of the node (ratio of used
space at the node to total capacity of the node) differs from the
utilization of the whole cluster (ratio of used space in the clus-
ter to total capacity of the cluster) by no more than the thresh-
old value.
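The balanced-cluster criterion just stated can be checked directly. The sketch below is illustrative; the input shapes (node-to-bytes maps) are assumptions.

```python
def cluster_balanced(used, capacity, threshold):
    """A cluster is balanced if every node's utilization (used/capacity)
    differs from the whole-cluster utilization by no more than
    `threshold`, a fraction in the range (0, 1)."""
    cluster_util = sum(used.values()) / sum(capacity.values())
    return all(abs(used[n] / capacity[n] - cluster_util) <= threshold
               for n in capacity)
```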
The tool is deployed as an application program that can be
run by the cluster administrator. It iteratively moves replicas
from DataNodes with higher utilization to DataNodes with
lower utilization. One key requirement for the balancer is to
maintain data availability. When choosing a replica to move
and deciding its destination, the balancer guarantees that the
decision does not reduce either the number of replicas or the
number of racks.
The balancer optimizes the balancing process by minimiz-
ing the inter-rack data copying. If the balancer decides that a
replica A needs to be moved to a different rack and the destina-
tion rack happens to have a replica B of the same block, the
data will be copied from replica B instead of replica A.
A second configuration parameter limits the bandwidth
consumed by rebalancing operations. The higher the allowed
bandwidth, the faster a cluster can reach the balanced state, but
with greater competition with application processes.
E. Block Scanner
Each DataNode runs a block scanner that periodically scans
its block replicas and verifies that stored checksums match the
block data. In each scan period, the block scanner adjusts the
read bandwidth in order to complete the verification in a con-
figurable period. If a client reads a complete block and check-
sum verification succeeds, it informs the DataNode. The
DataNode treats it as a verification of the replica.
The verification time of each block is stored in a human-readable
log file. At any time there are up to two such files in the top-level
DataNode directory, the current and previous logs. New
verification times are appended to the current file. Correspondingly,
each DataNode has an in-memory scanning list ordered by the
replica's verification time.
Whenever a read client or a block scanner detects a corrupt
block, it notifies the NameNode. The NameNode marks the
replica as corrupt, but does not schedule deletion of the replica
immediately. Instead, it starts to replicate a good copy of the
block. Only when the count of good replicas reaches the replication
factor of the block is the corrupt replica scheduled for removal.
This policy aims to preserve data as long as possible, so even if
all replicas of a block are corrupt, the policy allows the user to
retrieve its data from the corrupt replicas.
F. Decommissioning
The cluster administrator specifies which nodes can join the
cluster by listing the host addresses of nodes that are permitted
to register and the host addresses of nodes that are not permit-
ted to register. The administrator can command the system to
re-evaluate these include and exclude lists. A present member
of the cluster that becomes excluded is marked for decommis-
sioning. Once a DataNode is marked as decommissioning, it
will not be selected as the target of replica placement, but it
will continue to serve read requests. The NameNode starts to
schedule replication of its blocks to other DataNodes. Once the
NameNode detects that all blocks on the decommissioning
DataNode are replicated, the node enters the decommissioned
state. Then it can be safely removed from the cluster without
jeopardizing any data availability.
G. Inter-Cluster Data Copy
When working with large datasets, copying data into and
out of an HDFS cluster is daunting. HDFS provides a tool called
DistCp for large inter/intra-cluster parallel copying. It is a
MapReduce job; each of the map tasks copies a portion of the
source data into the destination file system. The MapReduce
framework automatically handles parallel task scheduling, error
detection and recovery.
IV. PRACTICE AT YAHOO!
Large HDFS clusters at Yahoo! include about 3500 nodes.
A typical cluster node has:
· 2 quad-core Xeon processors @ 2.5 GHz
· Red Hat Enterprise Linux Server Release 5.1
· Sun Java JDK 1.6.0_13-b03
· 4 directly attached SATA drives (one terabyte each)
· 16 GB RAM
· 1-gigabit Ethernet
Seventy percent of the disk space is allocated to HDFS. The
remainder is reserved for the operating system (Red Hat
Linux), logs, and space to spill the output of map tasks.
(MapReduce intermediate data are not stored in HDFS.) Forty
nodes in a single rack share an IP switch. The rack switches are
connected to each of eight core switches. The core switches
provide connectivity between racks and to out-of-cluster re-
sources. For each cluster, the NameNode and the BackupNode
hosts are specially provisioned with up to 64GB RAM; applica-
tion tasks are never assigned to those hosts. In total, a cluster of
3500 nodes has 9.8 PB of storage available as blocks that are
replicated three times yielding a net 3.3 PB of storage for user
applications. As a convenient approximation, one thousand
nodes represent one PB of application storage. Over the years
that HDFS has been in use (and into the future), the hosts se-
lected as cluster nodes benefit from improved technologies.
New cluster nodes always have faster processors, bigger disks
and larger RAM. Slower, smaller nodes are retired or relegated
to clusters reserved for development and testing of Hadoop.
The choice of how to provision a cluster node is largely an is-
sue of economically purchasing computation and storage.
HDFS does not compel a particular ratio of computation to
storage, or set a limit on the amount of storage attached to a
cluster node.
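The capacity arithmetic in the paragraph above works out as follows (all figures taken from the text; the script merely reproduces the calculation):

```python
# Reproducing the cluster capacity arithmetic from the text.
nodes = 3500
drives_per_node = 4
drive_tb = 1.0
hdfs_fraction = 0.70   # seventy percent of disk space is allocated to HDFS
replication = 3

raw_pb = nodes * drives_per_node * drive_tb / 1000   # 14 PB of raw disk
hdfs_pb = raw_pb * hdfs_fraction                     # 9.8 PB available as blocks
net_pb = hdfs_pb / replication                       # ~3.3 PB for user data
per_1000_nodes = net_pb / nodes * 1000               # ~1 PB per thousand nodes
```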
On an example large cluster (3500 nodes), there are about
60 million files. Those files have 63 million blocks. As each
block typically is replicated three times, every data node hosts
54 000 block replicas. Each day user applications will create
two million new files on the cluster. The 25 000 nodes in
Hadoop clusters at Yahoo! provide 25 PB of on-line data stor-
age. At the start of 2010, this is a modest—but growing—
fraction of the data processing infrastructure at Yahoo!. Yahoo!
began to investigate MapReduce programming with a distrib-
uted file system in 2004. The Apache Hadoop project was
founded in 2006. By the end of that year, Yahoo! had adopted
Hadoop for internal use and had a 300-node cluster for devel-
opment. Since then HDFS has become integral to the back of-
fice at Yahoo!. The flagship application for HDFS has been the
production of the Web Map, an index of the World Wide Web
that is a critical component of search (75 hours elapsed time,
500 terabytes of MapReduce intermediate data, 300 terabytes
total output). More applications are moving to Hadoop, espe-
cially those that analyze and model user behavior.
Becoming a key component of Yahoo!’s technology suite
meant tackling technical problems that are the difference be-
tween being a research project and being the custodian of many
petabytes of corporate data. Foremost are issues of robustness
and durability of data. But also important are economical per-
formance, provisions for resource sharing among members of
the user community, and ease of administration by the system
operators.
A. Durability of Data
Replication of data three times is a robust guard against loss
of data due to uncorrelated node failures. It is unlikely that
Yahoo! has ever lost a block in this way; for a large cluster, the
probability of losing a block during one year is less than 0.005. The
key understanding is that about 0.8 percent of nodes fail each
month. (Even if the node is eventually recovered, no effort is
taken to recover data it may have hosted.) So for the sample
large cluster as described above, a node or two is lost each day.
That same cluster will re-create the 54 000 block replicas
hosted on a failed node in about two minutes. (Re-replication is
fast because it is a parallel problem that scales with the size of
the cluster.) The probability of several nodes failing within two
minutes such that all replicas of some block are lost is indeed
small.
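The failure-rate arithmetic behind "a node or two is lost each day" follows directly from the figures in the text:

```python
# Reproducing the node-failure arithmetic from the text.
nodes = 3500
monthly_failure_rate = 0.008            # about 0.8 percent of nodes per month

failures_per_month = nodes * monthly_failure_rate   # 28 nodes per month
failures_per_day = failures_per_month / 30          # roughly one node per day
```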
Correlated failure of nodes is a different threat. The most
commonly observed fault in this regard is the failure of a rack
or core switch. HDFS can tolerate losing a rack switch (each
block has a replica on some other rack). Some failures of a core
switch can effectively disconnect a slice of the cluster from
multiple racks, in which case it is probable that some blocks
will become unavailable. In either case, repairing the switch
restores unavailable replicas to the cluster. Another kind of
correlated failure is the accidental or deliberate loss of electri-
cal power to the cluster. If the loss of power spans racks, it is
likely that some blocks will become unavailable. But restoring
power may not be a remedy because one-half to one percent of
the nodes will not survive a full power-on restart. Statistically,
and in practice, a large cluster will lose a handful of blocks
during a power-on restart. (The strategy of deliberately restart-
ing one node at a time over a period of weeks to identify nodes
that will not survive a restart has not been tested.)
In addition to total failures of nodes, stored data can be
corrupted or lost. The block scanner scans all blocks in a large
cluster each fortnight and finds about 20 bad replicas in the
process.
B. Caring for the Commons
As the use of HDFS has grown, the file system itself has
had to introduce means to share the resource within a large and
diverse user community. The first such feature was a permis-
sions framework closely modeled on the Unix permissions
scheme for file and directories. In this framework, files and
directories have separate access permissions for the owner, for
other members of the user group associated with the file or
directory, and for all other users. The principal differences
between Unix (POSIX) and HDFS are that ordinary files in
HDFS have neither “execute” permissions nor “sticky” bits.
In the present permissions framework, user identity is
weak: you are who your host says you are. When accessing
HDFS, the application client simply queries the local operating
system for user identity and group membership. A stronger
identity model is under development. In the new framework,
the application client must present to the name system creden-
tials obtained from a trusted source. Different credential ad-
ministrations are possible; the initial implementation will use
Kerberos. The user application can use the same framework to
confirm that the name system also has a trustworthy identity.
And the name system also can demand credentials from each of
the data nodes participating in the cluster.
The total space available for data storage is set by the num-
ber of data nodes and the storage provisioned for each node.
Early experience with HDFS demonstrated a need for some
means to enforce the resource allocation policy across user
communities. Not only must fairness of sharing be enforced,
but when a user application might involve thousands of hosts
writing data, protection against an application inadvertently
exhausting resources is also important. For HDFS, because the
system metadata are always in RAM, the size of the namespace
(number of files and directories) is also a finite resource. To
manage storage and namespace resources, each directory may
be assigned a quota for the total space occupied by files in the
sub-tree of the namespace beginning at that directory. A sepa-
rate quota may also be set for the total number of files and di-
rectories in the sub-tree.
While the architecture of HDFS presumes most applications
will stream large data sets as input, the MapReduce program-
ming framework can have a tendency to generate many small
output files (one from each reduce task) further stressing the
namespace resource. As a convenience, a directory sub-tree can
be collapsed into a single Hadoop Archive file. A HAR file is
similar to a familiar tar, JAR, or Zip file, but file system
operations can address the individual files in the archive, and a HAR
file can be used transparently as the input to a MapReduce job.
C. Benchmarks
A design goal of HDFS is to provide very high I/O band-
width for large data sets. There are three kinds of measure-
ments that test that goal.
• What is bandwidth observed from a contrived bench-
mark?
• What bandwidth is observed in a production cluster
with a mix of user jobs?
• What bandwidth can be obtained by the most carefully
constructed large-scale user application?
The statistics reported here were obtained from clusters of
at least 3500 nodes. At this scale, total bandwidth is linear with
the number of nodes, and so the interesting statistic is the
bandwidth per node. These benchmarks are available as part of
the Hadoop codebase.
The DFSIO benchmark measures average throughput for
read, write and append operations. DFSIO is an application
available as part of the Hadoop distribution. This MapReduce
program reads/writes/appends random data from/to large files.
Each map task within the job executes the same operation on a
distinct file, transfers the same amount of data, and reports its
transfer rate to the single reduce task. The reduce task then
summarizes the measurements. The test is run without conten-
tion from other applications, and the number of map tasks is
chosen to be proportional to the cluster size. It is designed to
measure performance only during data transfer, and excludes
the overheads of task scheduling, startup, and the reduce task.
• DFSIO Read: 66 MB /s per node
• DFSIO Write: 40 MB /s per node
For a production cluster, the number of bytes read and writ-
ten is reported to a metrics collection system. These averages
are taken over a few weeks and represent the utilization of the
cluster by jobs from hundreds of individual users. On average
each node was occupied by one or two application tasks at any
moment (fewer than the number of processor cores available).
• Busy Cluster Read: 1.02 MB/s per node
• Busy Cluster Write: 1.09 MB/s per node
Table 2. Sort benchmark for one terabyte and one petabyte of
data. Each data record is 100 bytes with a 10-byte key. The
test program is a general sorting procedure that is not special-
ized for the record size. In the terabyte sort, the block replica-
tion factor was set to one, a modest advantage for a short test.
In the petabyte sort, the replication factor was set to two so
that the test would confidently complete in case of a (not un-
expected) node failure.
At the beginning of 2009, Yahoo! participated in the Gray
Sort competition [9]. The nature of this task stresses the sys-
tem’s ability to move data from and to the file system (it really
isn't about sorting). The competitive aspect means that the re-
sults in Table 2 are about the best a user application can
achieve with the current design and hardware. The I/O rate in
the last column is the combination of reading the input and
writing the output from and to HDFS. In the second row, while
the rate for HDFS is reduced, the total I/O per node will be
about double because for the larger (petabyte!) data set, the
MapReduce intermediates must also be written to and read
from disk. In the smaller test, there is no need to spill the
MapReduce intermediates to disk; they are buffered in the
memory of the tasks.
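The per-node rates in Table 2 follow directly from the run parameters. A quick sanity check, assuming decimal units (1 TB = 10^6 MB) and counting both the input read and the output write:

```python
# Reproduce the Table 2 I/O columns from data size, node count, and elapsed
# time. The factor of 2 reflects reading the input from HDFS plus writing
# the output back to HDFS.
def hdfs_io_rates(data_tb, nodes, time_s):
    total_mb = 2 * data_tb * 1_000_000
    aggregate_gb_s = total_mb / 1000 / time_s
    per_node_mb_s = total_mb / time_s / nodes
    return round(aggregate_gb_s, 1), round(per_node_mb_s, 2)

print(hdfs_io_rates(1, 1460, 62))        # terabyte sort: ~32 GB/s aggregate, ~22.1 MB/s per node
print(hdfs_io_rates(1000, 3658, 58500))  # petabyte sort: ~34.2 GB/s aggregate, ~9.35 MB/s per node
```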
Large clusters require that the HDFS NameNode sustain
the rate of client operations that such clusters generate. The
NNThroughput benchmark is a single node process which
starts the NameNode application and runs a series of client
threads on the same node. Each client thread performs the same
NameNode operation repeatedly by directly calling the Name-
Node method implementing this operation. The benchmark
measures the number of operations per second performed by
the NameNode. The benchmark is designed to avoid communi-
cation overhead caused by RPC connections and serialization,
and therefore runs clients locally rather than remotely from
different nodes. This provides the upper bound of pure
NameNode performance.
Operation                  Throughput (ops/s)
Open file for read         126 100
Create file                  5 600
Rename file                  8 300
Delete file                 20 700
DataNode heartbeat         300 000
Block report (blocks/s)    639 700
Table 3. NNThroughput benchmark
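The measurement pattern is simple to reproduce. A minimal sketch of the idea follows; a stand-in dictionary plays the role of the namespace, since the real benchmark drives NameNode methods directly:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def measure_ops_per_sec(operation, clients=4, ops_per_client=10_000):
    """NNThroughput-style loop: client threads invoke the target operation
    directly (no RPC, no serialization); throughput is total ops / wall time."""
    def worker():
        for _ in range(ops_per_client):
            operation()
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=clients) as pool:
        futures = [pool.submit(worker) for _ in range(clients)]
        for f in futures:
            f.result()
    return clients * ops_per_client / (time.perf_counter() - start)

# Hypothetical stand-in for a NameNode metadata operation:
namespace = {}
def create_file():
    namespace[len(namespace)] = None

rate = measure_ops_per_sec(create_file)
```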
V. FUTURE WORK
This section presents some of the future work that the
Hadoop team at Yahoo! is considering; because Hadoop is an
open source project, new features and changes are decided by
the Hadoop development community at large.
The Hadoop cluster is effectively unavailable when its
NameNode is down. Given that Hadoop is used primarily as a
batch system, restarting the NameNode has been a satisfactory
means of recovery. However, we have taken steps towards
automated failover. Currently a BackupNode receives all
transactions from the primary NameNode. This will allow a failover to
a warm or even a hot BackupNode if we send block reports to
both the primary NameNode and BackupNode. A few Hadoop
users outside Yahoo! have experimented with manual failover.
Our plan is to use ZooKeeper [6], Yahoo!'s distributed consensus
technology, to build an automated failover solution.
Scalability of the NameNode [13] has been a key struggle.
Because the NameNode keeps all the namespace and block
locations in memory, the size of the NameNode heap has lim-
ited the number of files and also the number of blocks address-
able. The main challenge with the NameNode has been that
when its memory usage is close to the maximum the
NameNode becomes unresponsive due to Java garbage
collection and sometimes requires a restart.

Table 2:
Bytes (TB)  Nodes  Maps    Reduces  Time      Aggregate HDFS I/O (GB/s)  Per-node HDFS I/O (MB/s)
1           1460   8000    2700     62 s      32                         22.1
1000        3658   80 000  20 000   58 500 s  34.2                       9.35

While we have encouraged
our users to create larger files, this has not happened since
it would require changes in application behavior. We have
added quotas to manage the usage and have provided an ar-
chive tool. However these do not fundamentally address the
scalability problem.
Our near-term solution to scalability is to allow multiple
namespaces (and NameNodes) to share the physical storage
within a cluster. We are extending our block IDs to be prefixed
by block pool identifiers. Block pools are analogous to LUNs
in a SAN storage system, and a namespace with its pool of
blocks is analogous to a file system volume.
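The prefixing scheme can be illustrated as follows; the identifier format here is invented for illustration and is not HDFS's actual encoding:

```python
from dataclasses import dataclass

# Federated block IDs: each namespace owns a block pool, and a block is
# globally identified by the (pool, numeric id) pair. Pool names are invented.
@dataclass(frozen=True)
class BlockId:
    pool: str    # block pool identifier, one pool per namespace/NameNode
    block: int   # unique only within its own pool

# Two NameNodes may allocate the same numeric block ID; on the shared
# DataNode storage the pool prefix keeps the replicas distinct:
a = BlockId("BP-namespace1", 1073741825)
b = BlockId("BP-namespace2", 1073741825)
assert a != b and a.block == b.block
```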
This approach is fairly simple and requires minimal
changes to the system. It offers a number of advantages besides
scalability: it isolates namespaces of different sets of applica-
tions and improves the overall availability of the cluster. It also
generalizes the block storage abstraction to allow other services
to use the block storage service with perhaps a different
namespace structure. In the future we plan to explore other
approaches to scaling, such as storing only a partial namespace
in memory and a truly distributed implementation of the NameNode. In
particular, our assumption that applications will create a small
number of large files was flawed. As noted earlier, changing
application behavior is hard. Furthermore, we are seeing new
classes of applications for HDFS that need to store a large
number of smaller files.
The main drawback of multiple independent namespaces is
the cost of managing them, especially if the number of name-
spaces is large. We are also planning to use application or job
centric namespaces rather than cluster centric namespaces—
this is analogous to the per-process namespaces that are used to
deal with remote execution in distributed systems in the late
80s and early 90s [10][11][12].
Currently our clusters contain fewer than 4000 nodes. We believe
we can scale to much larger clusters with the solutions outlined
above. However, we believe it is prudent to have multiple clusters
rather than a single large cluster (say, three 6000-node clusters
rather than a single 18 000-node cluster), as this allows much
improved availability and isolation. To that end we are planning
to provide greater cooperation between clusters: for example,
caching remotely accessed files, or reducing the replication
factor of blocks when file sets are replicated across clusters.
VI. ACKNOWLEDGMENT
We would like to thank all members of the HDFS team at
Yahoo! present and past for their hard work building the file
system. We would like to thank all Hadoop committers and
collaborators for their valuable contributions. Corinne Chandel
drew illustrations for this paper.
REFERENCES
[1] Apache Hadoop. http://hadoop.apache.org/
[2] P. H. Carns, W. B. Ligon III, R. B. Ross, and R. Thakur. “PVFS: A
parallel file system for Linux clusters,” in Proc. of 4th Annual Linux
Showcase and Conference, 2000, pp. 317–327.
[3] J. Dean, S. Ghemawat, “MapReduce: Simplified Data Processing on
Large Clusters,” In Proc. of the 6th Symposium on Operating Systems
Design and Implementation, San Francisco CA, Dec. 2004.
[4] A. Gates, O. Natkovich, S. Chopra, P. Kamath, S. Narayanam, C.
Olston, B. Reed, S. Srinivasan, U. Srivastava. “Building a High-Level
Dataflow System on top of MapReduce: The Pig Experience,” In Proc.
of Very Large Data Bases, vol 2 no. 2, 2009, pp. 1414–1425
[5] S. Ghemawat, H. Gobioff, S. Leung. “The Google file system,” In Proc.
of ACM Symposium on Operating Systems Principles, Lake George,
NY, Oct 2003, pp 29–43.
[6] F. P. Junqueira, B. C. Reed. “The life and times of a zookeeper,” In
Proc. of the 28th ACM Symposium on Principles of Distributed
Computing, Calgary, AB, Canada, August 10–12, 2009.
[7] Lustre File System. http://www.lustre.org
[8] M. K. McKusick, S. Quinlan. “GFS: Evolution on Fast-forward,” ACM
Queue, vol. 7, no. 7, New York, NY. August 2009.
[9] O. O'Malley, A. C. Murthy. Hadoop Sorts a Petabyte in 16.25 Hours and
a Terabyte in 62 Seconds. May 2009.
http://developer.yahoo.net/blogs/hadoop/2009/05/hadoop_sorts_a_petab
yte_in_162.html
[10] R. Pike, D. Presotto, K. Thompson, H. Trickey, P. Winterbottom, “Use
of Name Spaces in Plan9,” Operating Systems Review, 27(2), April
1993, pages 72–76.
[11] S. Radia, “Naming Policies in the Spring System,” In Proc. of 1st IEEE
Workshop on Services in Distributed and Networked Environments,
June 1994, pp. 164–171.
[12] S. Radia, J. Pachl, “The Per-Process View of Naming and Remote
Execution,” IEEE Parallel and Distributed Technology, vol. 1, no. 3,
August 1993, pp. 71–80.
[13] K. V. Shvachko, “HDFS Scalability: The limits to growth,” ;login:.
April 2010, pp. 6–16.
[14] W. Tantisiriroj, S. Patil, G. Gibson. “Data-intensive file systems for
Internet services: A rose by any other name ...” Technical Report CMU-
PDL-08-114, Parallel Data Laboratory, Carnegie Mellon University,
Pittsburgh, PA, October 2008.
[15] A. Thusoo, J. S. Sarma, N. Jain, Z. Shao, P. Chakka, S. Anthony, H. Liu,
P. Wyckoff, R. Murthy, “Hive – A Warehousing Solution Over a Map-
Reduce Framework,” In Proc. of Very Large Data Bases, vol. 2 no. 2,
August 2009, pp. 1626–1629.
[16] J. Venner, Pro Hadoop. Apress, June 22, 2009.
[17] S. Weil, S. Brandt, E. Miller, D. Long, C. Maltzahn, “Ceph: A Scalable,
High-Performance Distributed File System,” In Proc. of the 7th
Symposium on Operating Systems Design and Implementation, Seattle,
WA, November 2006.
[18] B. Welch, M. Unangst, Z. Abbasi, G. Gibson, B. Mueller, J. Small, J.
Zelenka, B. Zhou, “Scalable Performance of the Panasas Parallel File
System,” In Proc. of the 6th USENIX Conference on File and Storage
Technologies, San Jose, CA, February 2008.
[19] T. White, Hadoop: The Definitive Guide. O'Reilly Media, Yahoo! Press,
June 5, 2009.