Apache Hadoop YARN: best practices


Transcript

  • 1. Apache Hadoop YARN Best Practices
    © Hortonworks Inc. 2014
    – Zhijie Shen – zshen [at] hortonworks.com
    – Varun Vasudev – vvasudev [at] hortonworks.com
  • 2. Who we are
    – Zhijie Shen – software engineer at Hortonworks; Apache Hadoop Committer; Apache Samza Committer and PPMC member; PhD from the National University of Singapore
    – Varun Vasudev – software engineer at Hortonworks, working on YARN; previously worked on image and web search at Yahoo!
  • 3. Agenda
    – What we have learned from our experience working with YARN users
    – Best practices for:
      – Administrators
      – Application developers
  • 4. For Administrators
  • 5. Sub-Agenda
    – Overview of YARN configuration
    – ResourceManager
    – Schedulers
    – NodeManagers
    – Others: log aggregation, metrics
  • 6. Overview of YARN configuration
    – Almost everything YARN-related is in yarn-site.xml
    – Granular – individual variables are documented
    – Nearly 150 configuration properties:
      – Required: a very small set – hostnames, etc.
      – Common: client and server
      – Advanced: RPC retries, etc.
    – yarn.resourcemanager.* and yarn.nodemanager.* are usually server configs; admins can mark them 'final' to make clear to users that they cannot be overridden (see the sketch below)
    – yarn.client.* are client configs
    – Security, ResourceManager, NodeManager, TimelineServer, and scheduler settings – all in one file
    – Topology scripts go on the RM, the NMs, and all nodes
      – BUG: the MR AM has to read the same script; work is in progress to send it from the RM to AMs
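    A minimal yarn-site.xml sketch of the idea above (the hostname is a placeholder): one required server config, plus a server-side property an admin has marked 'final' so client submissions cannot override it:

      <configuration>
        <!-- Required: where the ResourceManager runs (placeholder hostname) -->
        <property>
          <name>yarn.resourcemanager.hostname</name>
          <value>rm.example.com</value>
        </property>
        <!-- Server-side config marked 'final': clients cannot override it -->
        <property>
          <name>yarn.nodemanager.aux-services</name>
          <value>mapreduce_shuffle</value>
          <final>true</final>
        </property>
      </configuration>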
  • 7. ResourceManager
    – Hardware requirements:
      – The ResourceManager needs CPU
      – It doesn't require as much memory as the JobTracker did; 4 to 8 GB should be fine
    – JobHistoryServer:
      – Needs memory – at least 8 GB
  • 8. Enable RM HA
    – Enable RM HA for availability
    – Only supported using ZooKeeper:
      – Leader election is used
      – Fencing support
    – Automatic failover is enabled by default:
      – Uses ZooKeeper again
      – Embedded zkfc – no need to explicitly start a separate process
    – You can start multiple ResourceManagers
    – Specify rm-ids using yarn.resourcemanager.ha.rm-ids – e.g., yarn.resourcemanager.ha.rm-ids = rm1,rm2
    – Associate hostnames with rm-ids using yarn.resourcemanager.hostname.rm1, yarn.resourcemanager.hostname.rm2
      – No need to change any other configs – scheduler and resource-tracker addresses are taken care of automatically
    – Web UIs automatically get redirected to the active RM
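    One plausible yarn-site.xml sketch of this HA setup; the cluster ID, hostnames, and ZooKeeper quorum are placeholders:

      <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
      </property>
      <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yarn-cluster</value>
      </property>
      <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
      </property>
      <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>master1.example.com</value>
      </property>
      <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>master2.example.com</value>
      </property>
      <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
      </property>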
  • 9. YARN schedulers
    – Two main schedulers: capacity and fair
    – The Capacity Scheduler lets you set up queues to split resources – useful for multi-tenant clusters where you want to guarantee resources
    – The Fair Scheduler lets you split resources 'fairly' across applications
    – Both have admin files which can be used to dynamically change the setup
    – If you have enabled HA, queue configuration files live on local disk
      – Make sure the queue files are consistent across nodes
      – A feature to centralize these configs is in progress
  • 10. Capacity Scheduler
    [Diagram: three queues – queue-1, queue-2, and queue-3 – each running apps, with guaranteed resources of 50%, 30%, and 20% respectively]
  • 11. YARN Capacity Scheduler
    – Configuration lives in capacity-scheduler.xml
    – Take some time to set up your queues!
    – Queues have per-queue ACLs to restrict queue access; access can be changed dynamically
    – Elasticity can be limited on a per-queue basis – use yarn.scheduler.capacity.<queue-path>.maximum-capacity
    – Use yarn.scheduler.capacity.<queue-path>.state to drain queues – 'decommissioning' a queue
    – Run yarn rmadmin -refreshQueues to make runtime changes (see the sketch below)
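    A capacity-scheduler.xml sketch under an assumed two-queue layout; the queue names 'prod' and 'dev' and the percentages are hypothetical:

      <property>
        <name>yarn.scheduler.capacity.root.queues</name>
        <value>prod,dev</value>
      </property>
      <property>
        <name>yarn.scheduler.capacity.root.prod.capacity</name>
        <value>70</value>
      </property>
      <property>
        <name>yarn.scheduler.capacity.root.dev.capacity</name>
        <value>30</value>
      </property>
      <!-- Cap elasticity: dev can grow to at most 50% of the cluster -->
      <property>
        <name>yarn.scheduler.capacity.root.dev.maximum-capacity</name>
        <value>50</value>
      </property>
      <!-- Set to STOPPED to drain ('decommission') the queue -->
      <property>
        <name>yarn.scheduler.capacity.root.dev.state</name>
        <value>RUNNING</value>
      </property>

    After editing the file, run yarn rmadmin -refreshQueues to apply the changes at runtime.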
  • 12. YARN Fair Scheduler
    – Apps get an equal share of resources, on average, over time
    – No need to worry about starvation
    – Support for queues – meant to be used so that you can prevent users from flooding the system with apps
    – Supports a fairness policy which can be modified at runtime
    – Good if you have lots of small jobs
  • 13. Size your containers
    – Memory and cores – the minimum and maximum allocations affect the number of containers per node
      – yarn.scheduler.*-allocation-*
      – Defaults: 1 GB minimum and 8 GB maximum for memory; 1 core minimum and 32 cores maximum for CPU
    – CPU scheduling needs a bit more stabilization – historically, CPU limits were translated into memory calculations
    – Similarly for disk scheduling – translate disk limits into memory/CPU
    [Chart: number of containers per node (0 to 70) as a function of NodeManager memory (4 to 64 GB)]
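    The four allocation knobs, shown in yarn-site.xml with the default values quoted above:

      <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>1024</value>
      </property>
      <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>8192</value>
      </property>
      <property>
        <name>yarn.scheduler.minimum-allocation-vcores</name>
        <value>1</value>
      </property>
      <property>
        <name>yarn.scheduler.maximum-allocation-vcores</name>
        <value>32</value>
      </property>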
  • 14. NodeManagers
    – Set the resource memory – the variable is yarn.nodemanager.resource.memory-mb
      – Sets how much memory YARN can use for containers; the default is 8 GB
    – Set up a health-checker script!
      – Check disk, network, and any external resources required for job completion
      – Test it on your OS
      – Weed out bad nodes automatically!
    – Figure out whether the physical and virtual memory monitors make sense; both are enabled by default
      – The default virtual-to-physical ratio is 2.1
    – Use multiple disks for containers on NodeManagers
      – HDFS accesses them too
      – If you are bottlenecked on disks, separate them – though we haven't seen this in the wild
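    A yarn-site.xml sketch for the NodeManager settings above; the memory size, script path, and disk paths are placeholders you would tune to your hardware:

      <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>40960</value> <!-- e.g., leave 40 GB of a 48 GB node for containers -->
      </property>
      <property>
        <name>yarn.nodemanager.health-checker.script.path</name>
        <value>/etc/hadoop/yarn-health-check.sh</value>
      </property>
      <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>2.1</value>
      </property>
      <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/data1/yarn/local,/data2/yarn/local</value>
      </property>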
  • 15. YARN log aggregation
    – Log aggregation can be enabled using yarn.log-aggregation-enable
    – You can control how long logs are kept by setting the purging parameters
    – App logs can be obtained using the "yarn logs" command
    – Creates lots of small files, which can affect HDFS performance
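    A sketch of these settings in yarn-site.xml; the seven-day retention is an example value:

      <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
      </property>
      <!-- Purge aggregated logs after 7 days -->
      <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
      </property>

    Once an application finishes, fetch its logs with: yarn logs -applicationId <application-id>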
  • 16. YARN Metrics
    – JMX: http://<rm address>:<port>/jmx, http://<nm address>:<port>/jmx
      – Cluster metrics – apps running, successful, failed, etc.
      – Scheduler metrics – queue usage
      – RPC metrics
    – Web UI: http://<rm address>:<port>/cluster
      – Cluster metrics
      – Scheduler metrics – easier to digest, especially queue usage
      – Healthy and failed nodes
    – Metrics can be emitted to Ganglia directly using the metrics sink – see the metrics configuration file
  • 17. For Application Developers
  • 18. Sub-Agenda
    – Framework or a native application?
    – Understanding YARN basics
    – Writing a YARN client
    – Writing an ApplicationMaster
    – Misc lessons
  • 19. Framework or a native app?
    – Two choices:
      – Write applications on top of existing frameworks – battle-tested, they already work, and they have established APIs
      – Roll your own native YARN application
    – Existing frameworks:
      – Scalable batch processing: MapReduce
      – Stream processing: Storm/Samza
      – Interactive processing, iterations: Tez/Spark
      – SQL: Hive
      – Data pipelines: Pig
      – Graph processing: Giraph
      – Existing apps: Slider
    – Apache: your app store
  • 20. Ease of development
    – Check out the other development and deployment tools
    [Diagram: complexity spectrum – existing frameworks are the simplest option, followed by Slider, then Twill/REEF, with a fully native application the most complex]
  • 21. Understanding YARN Components
    – ResourceManager – master of the cluster
    – NodeManager – slave that takes care of one host
    – ApplicationMaster – master of an application
    – Container – resource abstraction; a process that completes a task
  • 22. User code: Client and AM
    – Client – talks to the ResourceManager
    – ApplicationMaster
      – Talks to the scheduler to allocate resources
      – Talks to the NodeManagers to manage containers
  • 23. Client: Rule of Thumb
    – Use the client libraries:
      – YarnClient – submit an application
      – AMRMClient(Async) – negotiate resources
      – NMClient(Async) – manage containers
      – TimelineClient – monitor an application
  • 24. Writing the Client
    1. Get an application ID from the RM
    2. Construct the ApplicationSubmissionContext:
       – the shell command to run the AM
       – the environment (classpath, env variables)
       – LocalResources (job jars downloaded from HDFS)
    3. Submit the request to the RM (submitApplication)
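    A minimal client sketch of this flow using YarnClient, assuming a vanilla Hadoop 2.x install; the application name, AM command, and resource sizes are placeholders:

      import java.util.Collections;
      import org.apache.hadoop.yarn.api.records.*;
      import org.apache.hadoop.yarn.client.api.YarnClient;
      import org.apache.hadoop.yarn.client.api.YarnClientApplication;
      import org.apache.hadoop.yarn.conf.YarnConfiguration;
      import org.apache.hadoop.yarn.util.Records;

      public class MyClient {
        public static void main(String[] args) throws Exception {
          YarnClient yarnClient = YarnClient.createYarnClient();
          yarnClient.init(new YarnConfiguration());
          yarnClient.start();

          // 1. Get an application ID from the RM
          YarnClientApplication app = yarnClient.createApplication();
          ApplicationSubmissionContext ctx = app.getApplicationSubmissionContext();
          ApplicationId appId = ctx.getApplicationId();

          // 2. Construct the ApplicationSubmissionContext: command, env, resources
          ContainerLaunchContext amSpec = Records.newRecord(ContainerLaunchContext.class);
          amSpec.setCommands(Collections.singletonList(
              "$JAVA_HOME/bin/java MyAppMaster")); // placeholder AM command
          ctx.setApplicationName("my-app");        // placeholder name
          ctx.setAMContainerSpec(amSpec);
          Resource amResource = Records.newRecord(Resource.class);
          amResource.setMemory(1024); // 1 GB for the AM container
          amResource.setVirtualCores(1);
          ctx.setResource(amResource);

          // 3. Submit the request to the RM
          yarnClient.submitApplication(ctx);
          System.out.println("Submitted " + appId);
        }
      }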
  • 25. Tips for Writing the Client
    – Cluster dependencies: try to make zero assumptions about the cluster
      – Neither its location nor its size
      – The same goes for the ApplicationMaster
    – Your application bundle should deploy everything it requires using YARN's local resources
  • 26. Writing the ApplicationMaster
    1. The AM registers with the RM (registerApplicationMaster)
    2. It heartbeats (allocate) with the RM, asynchronously:
       – Sends the request: request new containers, release containers
       – Receives containers and sends a request to the NM to start each container
         – Constructs a ContainerLaunchContext: commands, env, jars
    3. It unregisters with the RM (finishApplicationMaster)
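    A sketch of this sequence with the synchronous AMRMClient and NMClient (the async variants follow the same shape); the container size, priority, and command are placeholders:

      import java.util.Collections;
      import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
      import org.apache.hadoop.yarn.api.records.*;
      import org.apache.hadoop.yarn.client.api.AMRMClient;
      import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
      import org.apache.hadoop.yarn.client.api.NMClient;
      import org.apache.hadoop.yarn.conf.YarnConfiguration;
      import org.apache.hadoop.yarn.util.Records;

      public class MyAppMaster {
        public static void main(String[] args) throws Exception {
          YarnConfiguration conf = new YarnConfiguration();

          // 1. Register with the RM
          AMRMClient<ContainerRequest> rm = AMRMClient.createAMRMClient();
          rm.init(conf);
          rm.start();
          rm.registerApplicationMaster("", 0, ""); // host, RPC port, tracking URL

          NMClient nm = NMClient.createNMClient();
          nm.init(conf);
          nm.start();

          // 2. Ask for one container (sizes are placeholders)
          Resource capability = Records.newRecord(Resource.class);
          capability.setMemory(256);
          capability.setVirtualCores(1);
          Priority priority = Records.newRecord(Priority.class);
          priority.setPriority(0);
          rm.addContainerRequest(new ContainerRequest(capability, null, null, priority));

          // Heartbeat with (possibly empty) allocate calls until containers arrive
          int launched = 0;
          while (launched < 1) {
            AllocateResponse response = rm.allocate(0.0f);
            for (Container container : response.getAllocatedContainers()) {
              ContainerLaunchContext ctx = Records.newRecord(ContainerLaunchContext.class);
              ctx.setCommands(Collections.singletonList("sleep 10")); // placeholder work
              nm.startContainer(container, ctx);
              launched++;
            }
            Thread.sleep(1000);
          }

          // 3. Unregister with the RM
          rm.unregisterApplicationMaster(FinalApplicationStatus.SUCCEEDED, "done", null);
        }
      }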
  • 27. Tips for writing the ApplicationMaster
    – The RM assigns containers asynchronously:
      – Containers are likely not returned by the same allocate call that requested them
      – Keep heartbeating with empty requests until you get the containers you asked for
      – ResourceRequests are incremental
    – Locality requests may not always be met – relaxed locality
    – AMs can fail:
      – They run on cluster nodes, which can fail
      – The RM restarts AMs automatically
      – Write AMs to handle failure by recovering on restart – ideally, continue your work when the AM restarts
    – Optionally, talk to your containers directly from the AM – to get progress, hand out work, kill them, etc.; YARN doesn't do any of this for you
  • 28. Using the Timeline Service
    – Stores metadata and metrics
    – Put application-specific information:
      – TimelineClient
      – POJO objects
    – Query the information:
      – Get all entities of an entity type
      – Get one specific entity
      – Get all events of an entity type
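    A sketch of putting a POJO-style entity with TimelineClient, assuming the Timeline Server is running; the entity type, ID, filter, and event names are placeholders:

      import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
      import org.apache.hadoop.yarn.api.records.timeline.TimelineEvent;
      import org.apache.hadoop.yarn.client.api.TimelineClient;
      import org.apache.hadoop.yarn.conf.YarnConfiguration;

      public class TimelineExample {
        public static void main(String[] args) throws Exception {
          TimelineClient client = TimelineClient.createTimelineClient();
          client.init(new YarnConfiguration());
          client.start();

          // A POJO-style entity describing one piece of the application
          TimelineEntity entity = new TimelineEntity();
          entity.setEntityType("MY_APP_WORKER"); // placeholder type
          entity.setEntityId("worker_001");      // placeholder ID
          entity.setStartTime(System.currentTimeMillis());
          entity.addPrimaryFilter("user", "alice");

          // Attach an event to the entity
          TimelineEvent event = new TimelineEvent();
          event.setEventType("WORKER_STARTED");
          event.setTimestamp(System.currentTimeMillis());
          entity.addEvent(event);

          // Push the entity (and its event) to the Timeline Server
          client.putEntities(entity);
          client.stop();
        }
      }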
  • 29. Summary: Application Workflow
    – Execution sequence:
      1. The client submits an application
      2. The RM allocates a container to start the AM
      3. The AM registers with the RM
      4. The AM asks the RM for containers
      5. The AM notifies the NMs to launch containers
      6. Application code executes in the containers
      7. The client contacts the RM/AM to monitor the application's status
      8. The AM unregisters with the RM
    [Diagram: the eight steps above, shown as arrows between the Client, RM, NM, and AM]
  • 30. Misc Lessons: Taking What YARN Offers
    – Monitor your application via:
      – The RM
      – The NMs
      – The Timeline Server
  • 31. Misc Lessons: Debugging/Testing
    – MiniYARNCluster
      – An in-JVM YARN cluster!
      – Use it for regression tests of your applications (see the sketch below)
    – Unmanaged AM
      – Support for running the AM outside of a YARN cluster, for development and testing
      – AM logs land on your console!
    – Logs
      – RM/NM logs
      – App log aggregation
      – Accessible via the CLI and the web UI
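    A minimal MiniYARNCluster sketch, assuming the hadoop-yarn-server-tests test jar is on the classpath; the cluster name and sizes are placeholders:

      import org.apache.hadoop.yarn.conf.YarnConfiguration;
      import org.apache.hadoop.yarn.server.MiniYARNCluster;

      public class MiniClusterTest {
        public static void main(String[] args) throws Exception {
          // Placeholder sizes: 1 NodeManager, 1 local dir, 1 log dir
          MiniYARNCluster cluster = new MiniYARNCluster("test-cluster", 1, 1, 1);
          cluster.init(new YarnConfiguration());
          cluster.start();

          // Point your client/AM tests at the mini cluster's configuration
          YarnConfiguration conf = new YarnConfiguration(cluster.getConfig());
          // ... submit your application against conf and assert on the outcome ...

          cluster.stop();
        }
      }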
  • 32. Thank you! Questions?