Hadoop architecture (Delhi Hadoop User Group Meetup 10 Sep 2011)

These slides cover the very basics of Hadoop architecture, in particular HDFS. This was my presentation at the first Delhi Hadoop User Group (DHUG) meetup, held in Gurgaon on 10th September 2011. Loved the positive feedback. I'll soon upload a more elaborate version that covers the Hadoop MapReduce architecture as well. Most of the material in these slides can also be found in Tom White's book (see the last slide).

1. Hadoop architecture: An overview
Hari Shankar Sreekumar, Software Engineer @Clickable
2. Ideas
- Store and process large amounts of data (petabytes)
- Scale horizontally
- Failure is normal
- Distributed computing (MapReduce)
- Moving computation is cheaper than moving data
3-4. What is Hadoop?
HDFS, Hadoop Common, MapReduce, Pig, Hive, HBase, ZooKeeper, Avro, Cassandra, Mahout, ...
5. Hadoop Distributed File System
A distributed filesystem designed for storing very large files with streaming data access, running on clusters of commodity hardware. HDFS was designed with MapReduce in mind.
A cluster consists of machines, each performing one or more of the following roles:
- Namenode (only one per cluster)
- Secondary namenode / checkpoint node (only one per cluster)
- Datanodes (many per cluster)
6. HDFS Blocks
- Disk blocks: the minimum amount of data that can be read or written (~512 bytes).
- Filesystem blocks: an abstraction over disk blocks (~a few kilobytes).
- HDFS blocks: an abstraction over filesystem blocks that facilitates distribution over the network and other requirements of Hadoop. Usually 64 MB or 128 MB.
- The block abstraction keeps the design simple; e.g., replication is at the block level rather than the file level.
- A file is split into blocks for storage in HDFS, and blocks of the same file can reside on multiple machines in the cluster.
- Each block is stored as a file in the local filesystem of the datanode.
- Block size does not mean size on disk: a 1 MB file will not take up 64 MB on disk.
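As a minimal illustrative sketch (not from the slides) of how block placement shows up at the client, the snippet below lists the datanodes holding each block of a file via the standard FileSystem API; the path /user/demo/data.txt is a made-up example.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListBlocks {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();    // reads core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);        // the configured default FS (HDFS here)
    Path file = new Path("/user/demo/data.txt"); // hypothetical example path
    FileStatus status = fs.getFileStatus(file);
    // One BlockLocation per HDFS block of the file.
    BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation b : blocks) {
      System.out.printf("offset=%d length=%d hosts=%s%n",
          b.getOffset(), b.getLength(), String.join(",", b.getHosts()));
    }
  }
}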
7. Namenode
- The "master" node.
- Maintains the HDFS namespace: the filesystem tree and its metadata.
- Maintains the mapping from each file to its list of block IDs.
- This metadata is kept in memory and is also persisted on disk.
- Keeps the location of each block (the block-to-datanode mapping) in memory.
- Memory requirement: ~150 bytes per file (see the rough calculation below).
- Issues instructions to datanodes to create/replicate/delete blocks.
- Single point of failure.
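To make the ~150 bytes figure concrete, a rough back-of-envelope example (illustrative, not from the slides): a namespace of 10 million files needs on the order of 10,000,000 x 150 B, roughly 1.5 GB of namenode heap, just for file metadata. This is why HDFS favors a modest number of large files over a huge number of small ones.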
8. Datanodes
- The "slaves".
- Store the data blocks; hold no filesystem metadata.
- Report all of their blocks to the namenode at startup (the block report).
- Send periodic "heartbeats" to the namenode.
- Serve read and write requests; create, delete, and replicate blocks on instruction from the namenode.
- User data never flows through the namenode.
9. Secondary namenode / checkpoint node
- Reduces the risk of data loss if the namenode fails.
- The namenode's persistent state lives in two files: the fsimage and the edit log.
- Changes to file metadata are appended to the edit log.
- The secondary namenode periodically merges the edit log into the fsimage.
- Data loss can still occur if the namenode fails between checkpoints.
- Configure Hadoop to write the edit log to a remote NFS mount as well; on failure, copy the metadata files from NFS to the secondary namenode and run it as the namenode (see the sketch below).
- The NFS approach has a (very low) performance impact.
- Failover is NOT automatic.
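A minimal sketch of the NFS recipe above, assuming the 0.20-era property name dfs.name.dir (later releases call it dfs.namenode.name.dir). In a real deployment this is set in hdfs-site.xml on the namenode; the Java form below only demonstrates the property and an example value.

import org.apache.hadoop.conf.Configuration;

public class NamenodeDirsExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Comma-separated list: the namenode writes its fsimage and edit log to
    // every listed directory, so a remote NFS mount gives an off-machine copy.
    // Paths are made-up examples; in practice this goes in hdfs-site.xml.
    conf.set("dfs.name.dir", "/data/hadoop/name,/mnt/nfs/hadoop/name");
    System.out.println("dfs.name.dir = " + conf.get("dfs.name.dir"));
  }
}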
10. (Image: Hadoop: The Definitive Guide, Tom White)
11. Replication and rack-awareness
- Replication in Hadoop is at the block level.
- Replication is "rack-aware".
- Three levels of replica-placement preference: same machine > same rack > different rack.
- The replication factor can be configured per file, and also from the application.
- A MapReduce job's selection of which blocks to process takes advantage of rack-awareness.
- Reading from and writing to HDFS also make use of rack-awareness.
- Rack-awareness is NOT automatic and must be configured; by default, all nodes are assumed to be in the same rack.
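A short sketch of per-file replication control through the client API; the path and factor below are made-up examples.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Cluster-wide default for new files (normally set in hdfs-site.xml).
    conf.setInt("dfs.replication", 3);
    FileSystem fs = FileSystem.get(conf);
    // Change the replication factor of one existing file to 2.
    boolean ok = fs.setReplication(new Path("/user/demo/data.txt"), (short) 2);
    System.out.println("replication change scheduled: " + ok);
  }
}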
12. Reading from HDFS
(Image: Hadoop: The Definitive Guide, Tom White)
- On failure, the client moves to the next-'closest' node that holds the block.
- Data flows over a direct connection between the client and the datanode.
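A minimal read sketch (example path, not from the slides): the namenode is contacted only for block locations, and the data itself streams straight from the datanodes.

import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class ReadExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // open() asks the namenode for block locations; the bytes themselves
    // stream directly from datanodes, never through the namenode.
    InputStream in = fs.open(new Path("/user/demo/data.txt")); // example path
    try {
      IOUtils.copyBytes(in, System.out, 4096, false); // 4 KB buffer, keep stdout open
    } finally {
      IOUtils.closeStream(in);
    }
  }
}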
13. Writing to HDFS
- Minimum replication for a successful write: dfs.replication.min.
- Files in HDFS are write-once and have strictly one writer at any time.
(Image: Hadoop: The Definitive Guide, Tom White)
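And a matching write sketch (example path): close() does not return successfully until the blocks have reached the minimum replication set by dfs.replication.min.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/user/demo/out.txt"); // example path
    // overwrite=false: create() fails if the file exists, since files are write-once.
    FSDataOutputStream out = fs.create(file, false);
    try {
      out.writeBytes("hello hdfs\n");
    } finally {
      out.close(); // blocks until dfs.replication.min replicas are written
    }
  }
}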
14. Hadoop Common
- Filesystem abstraction: the File System (FS) shell provides shell-like commands that interact directly with HDFS as well as the other filesystems Hadoop supports, such as the local FS, HFTP FS, and S3 FS.
- Service-level authorization: the initial authorization mechanism, ensuring that clients connecting to a particular Hadoop service have the necessary, pre-configured permissions and are authorized to access it. For example, a MapReduce cluster can use this mechanism to restrict job submission to a configured list of users/groups.
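Programmatically, the same abstraction appears as the FileSystem class: the implementation you get back depends on the URI scheme. A small sketch (the namenode host and port are examples):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class FsAbstractionExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Same client API, different backing filesystems, selected by URI scheme.
    FileSystem local = FileSystem.get(URI.create("file:///"), conf);
    FileSystem hdfs  = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);
    System.out.println(local.getClass().getSimpleName()); // e.g. LocalFileSystem
    System.out.println(hdfs.getClass().getSimpleName());  // e.g. DistributedFileSystem
  }
}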
15. Data Integrity
- A separate 32-bit checksum is created for every io.bytes.per.checksum bytes (default 512 bytes; overhead < 1%).
- Checksums are stored with each data block.
- Checksums are verified after each operation that might result in data corruption, and are also checked periodically.
- The mechanism can be used on non-HDFS filesystems too.
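The checksum machinery is mostly invisible to clients, but one knob is exposed: verification can be disabled per FileSystem instance, e.g. to salvage the raw bytes of a file with a corrupt block. A small sketch (example path):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChecksumExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Disable client-side checksum verification for reads through this instance.
    fs.setVerifyChecksum(false);
    fs.open(new Path("/user/demo/data.txt")).close(); // reads would skip verification
  }
}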
16. Compression utilities
- Reduce space usage.
- Reduce bandwidth usage.
(Ref: Hadoop: The Definitive Guide, Tom White)
Splittable LZO is available separately and is a good trade-off between compression speed and compressed size.
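A small decompression sketch using Hadoop's codec factory, which picks a codec from the file extension; the .gz path is a made-up example.

import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

public class DecompressExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/user/demo/data.txt.gz"); // example path
    // Choose the codec by file extension (.gz -> GzipCodec, etc.).
    CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(file);
    InputStream in = codec.createInputStream(fs.open(file));
    try {
      IOUtils.copyBytes(in, System.out, 4096, false);
    } finally {
      IOUtils.closeStream(in);
    }
  }
}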
17. Serialization utilities
- Extremely important for Hadoop: a good serialization format is compact, fast, extensible, and interoperable.
- Java serialization is too cumbersome and heavyweight for Hadoop, so Hadoop uses its own serialization, based on the Writable interface.
- Other frameworks such as Avro, Thrift, and Protocol Buffers are also used.
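A minimal sketch of the Writable contract: a made-up pair type that serializes to eight raw bytes. (A real MapReduce key would also implement WritableComparable so it can be sorted.)

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// A custom value type: a pair of ints serialized as two fixed 4-byte fields.
public class IntPairWritable implements Writable {
  private int first;
  private int second;

  public void set(int first, int second) {
    this.first = first;
    this.second = second;
  }

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeInt(first);   // fixed 4-byte big-endian encoding
    out.writeInt(second);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    first = in.readInt();  // must read exactly what write() wrote
    second = in.readInt();
  }
}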
18. MapReduce Framework
- The jobtracker receives MapReduce job execution requests from clients.
- The client side (JobClient) sanity-checks that the job is configured properly, computes the input splits, and copies the resources the job needs into HDFS.
- The jobtracker assigns splits to tasktrackers for the map and reduce phases.
- Map split assignment is data-locality-aware.
- The jobtracker is a single point of failure.
- Each tasktracker creates a new process for a task and executes it.
- Tasktrackers send periodic heartbeats to the jobtracker, along with other information about their tasks.
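An era-appropriate sketch of job submission with the old (org.apache.hadoop.mapred) API, using the stock identity mapper and reducer so the example stays self-contained; input and output paths come from the command line.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;

public class SubmitJobExample {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(SubmitJobExample.class);
    conf.setJobName("identity-demo");
    // Identity map/reduce: records pass through unchanged, just to exercise the flow.
    conf.setMapperClass(IdentityMapper.class);
    conf.setReducerClass(IdentityReducer.class);
    conf.setOutputKeyClass(LongWritable.class); // TextInputFormat keys are byte offsets
    conf.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    // The client stages resources and splits, then submits to the jobtracker and polls.
    JobClient.runJob(conf);
  }
}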
19. (Image: Hadoop: The Definitive Guide, Tom White)
20. References
- http://hadoop.apache.org/common/docs/current/hdfs_design.html
- Hadoop: The Definitive Guide, by Tom White. Copyright 2009 Tom White, ISBN 978-0-596-52197-4.