
A day in the life of Hadoop Administrator!

Published in: Technology
  1. A day in the life of Hadoop Administrator!
  2. Agenda: at the end of this webinar you will know about:
     - The daily tasks a Hadoop Admin performs
     - Cluster monitoring tools
     - How fault tolerance is maintained in a cluster
     - Demo on Hadoop High Availability
     - Demo on YARN High Availability
  3. Coming to the office
  4. Cluster Monitoring: the first thing in the morning is checking the monitoring console (Cloudera Manager, Nagios, Ganglia, etc.) and the JobTracker UI.
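A morning health check like the one described above can be sketched with a few standard Hadoop CLI commands. This is an illustrative runbook, not the presenter's actual script, and it assumes the `hdfs`, `yarn`, and `mapred` clients are on the PATH of a user with access to a live cluster:

```shell
# Morning cluster health check (requires a running Hadoop cluster)
hdfs dfsadmin -report            # capacity, live/dead DataNodes, remaining space
hdfs fsck / | tail -n 20         # filesystem summary: corrupt or missing blocks
yarn node -list -all             # NodeManager states (RUNNING, LOST, UNHEALTHY)
mapred job -list                 # MapReduce jobs currently running
```

On a monitored cluster these checks duplicate what Cloudera Manager or Nagios alerts on, but running them by hand is a quick way to confirm an alert or its absence.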
  5. A Few Cluster Monitoring Tools
  6. Cluster Plan: planning the day and reviewing past tasks in a meeting
  7. Cluster Plan: typical slave node hardware configurations.
     Midline configuration (all-around, deep storage, 1 Gb Ethernet):
     - CPU: 2 × 6-core 2.9 GHz / 15 MB cache
     - Memory: 64 GB DDR3-1600 ECC
     - Disk controller: SAS 6 Gb/s
     - Disks: 12 × 3 TB LFF SATA II 7200 RPM
     - Network controller: 2 × 1 Gb Ethernet
     - Notes: CPU features such as Intel’s Hyper-Threading and QPI are desirable. Allocate memory to take advantage of triple- or quad-channel memory configurations.
  8. Cluster Plan: high-end configuration (high memory, spindle dense, 10 Gb Ethernet):
     - CPU: 2 × 6-core 2.9 GHz / 15 MB cache
     - Memory: 96 GB DDR3-1600 ECC
     - Disk controller: 2 × SAS 6 Gb/s
     - Disks: 24 × 1 TB SFF Nearline/MDL SAS 7200 RPM
     - Network controller: 1 × 10 Gb Ethernet
     - Notes: same as the midline configuration
  9. Execute a Few Regular Utility Tasks: developing and running a file merger so that the many small files and directories our data suppliers create become fewer, larger files.
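The slide does not show the merger itself, but two common ways to compact small files use standard Hadoop commands. The paths below (`/data/incoming`, `/data/merged`, `/data/archived`) are placeholders for illustration, and the commands need a live cluster:

```shell
# Concatenate many small HDFS files into one, via the local filesystem
hadoop fs -getmerge /data/incoming /tmp/merged.txt   # pull and concatenate
hadoop fs -put /tmp/merged.txt /data/merged/         # push back as one file

# Alternatively, pack the originals into a Hadoop archive (HAR)
hadoop archive -archiveName incoming.har -p /data/incoming /data/archived
```

Either approach reduces pressure on the NameNode, which keeps an in-memory record for every file and block; millions of tiny files are a classic HDFS anti-pattern.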
  10. Backup and Recovery Tasks
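Typical building blocks for HDFS backup are snapshots and `distcp`. The directory, snapshot name, and cluster host names below are hypothetical, and the commands assume a running cluster with admin rights:

```shell
# Point-in-time protection for a directory
hdfs dfsadmin -allowSnapshot /data/critical         # enable snapshots (admin)
hdfs dfs -createSnapshot /data/critical nightly     # create snapshot "nightly"

# Off-cluster copy with DistCp (runs as a MapReduce job)
hadoop distcp hdfs://prod-nn:8020/data/critical hdfs://backup-nn:8020/backups/critical
```

Snapshots protect against accidental deletes on the same cluster; `distcp` protects against losing the cluster itself.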
  11. Demo: Achieving Hadoop High Availability
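The demo itself is not reproduced in the deck; the core of HDFS NameNode HA is a pair of NameNodes behind one logical nameservice. A minimal hdfs-site.xml sketch, with the nameservice name and host names as placeholders:

```xml
<!-- hdfs-site.xml sketch: "mycluster" and the hosts are placeholders -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>master1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>master2.example.com:8020</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
```

Automatic failover additionally requires a ZooKeeper quorum and shared edits (e.g. a JournalNode quorum), which a full setup configures alongside the properties above.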
  12. Job Scheduling and Configuration: keeping the farm working. We build monitoring, manage resources between our users and our tools, and tune configurations for the farm stack, for MapReduce and Spark jobs, and of course for the servers.
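Dividing resources between users and tools is usually done with YARN queues. A minimal Capacity Scheduler sketch; the queue names and percentage shares here are invented for illustration:

```xml
<!-- capacity-scheduler.xml sketch: queues and shares are made up -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>etl,analytics</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.etl.capacity</name>
  <value>60</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.analytics.capacity</name>
  <value>40</value>
</property>
```

Capacities per parent queue must sum to 100; jobs are then submitted to a queue, and the scheduler enforces each queue's share of the cluster.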
  13. Analyzing Failed Tasks: analyzing overly heavy or failed jobs and fixing the problems
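On a YARN cluster, a failed-job post-mortem typically starts from the aggregated logs. The application ID below is a placeholder, and the commands need a live cluster with log aggregation enabled:

```shell
# Find recently failed or killed applications
yarn application -list -appStates FAILED,KILLED

# Pull the aggregated container logs for one of them
yarn logs -applicationId application_1400000000000_0001 | less
```

The container logs usually point at the real cause: an out-of-memory kill, a bad input split, or a node that went down mid-job.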
  14. Demo: Achieving YARN High Availability
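As with the HDFS demo, only the topic survives in the deck. YARN HA runs an active/standby pair of ResourceManagers coordinated through ZooKeeper. A minimal yarn-site.xml sketch, with all host names as placeholders:

```xml
<!-- yarn-site.xml sketch: hosts and ZooKeeper quorum are placeholders -->
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>master1.example.com</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>master2.example.com</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>
```

Clients fail over between the configured `rm-ids` automatically; `yarn rmadmin -getServiceState rm1` reports which ResourceManager is currently active.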
  15. Evaluating New Host Requests: collecting and defining requirements for new hosts
  16. Updates and Upgrades: upgrading and updating the farm from time to time
  17. Try and Finalize New Solutions: testing and benchmarking new projects
  18. Be in Touch with New Configuration Tools: setting up a configuration management tool for our test and production environments
  19. Execute a Few DWH Responsibilities: developing an easy infrastructure to insert data into the cluster and into Hive and HBase
  20. Assisting Hadoop Developers: daily support for developers who use the Hadoop stack
  21. Checking Resource Usage and User Permissions: managing users, permissions, quotas, etc.
  22. Demo: User Permissions and Quotas
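A per-user setup of the kind this demo likely covered can be sketched with standard HDFS commands. The user, group, and limits below are placeholders, and the quota commands need HDFS admin rights on a live cluster:

```shell
# Home directory, ownership, and permissions for a new user
hdfs dfs -mkdir -p /user/alice
hdfs dfs -chown alice:analysts /user/alice
hdfs dfs -chmod 750 /user/alice

# Quotas: name quota caps files+directories, space quota caps raw bytes
hdfs dfsadmin -setQuota 100000 /user/alice
hdfs dfsadmin -setSpaceQuota 1t /user/alice

# Verify
hdfs dfs -count -q /user/alice
```

Note that the space quota counts replicated bytes: with the default replication factor of 3, a 1 TB space quota holds roughly 333 GB of user data.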
  23. Troubleshooting
  24. Common Error Messages:
     - NameNode startup fails
     - Exception when initializing the filesystem
     - Could only be replicated to 0 nodes instead of 1
     - Server not available
     - Could not obtain block blk_-4157273618194597760_1160 from any node
     - Could not get block locations. Aborting...
  25. Questions
  26. Survey: your feedback is vital for us, be it a compliment, a suggestion, or a complaint. It helps us make your experience better! Please spare a few minutes to take the survey after the webinar.