Writing YARN Applications Hadoop Summit 2012
 

Hitesh Shah, Talk at Hadoop Summit 2012.

Hadoop YARN is the next-generation computing platform in Apache Hadoop, with support for programming paradigms besides MapReduce. In the world of Big Data, not every problem can be solved with the MapReduce programming model alone. Typical installations run separate programming models such as MapReduce, MPI, and graph-processing frameworks on individual clusters. Running fewer, larger clusters is cheaper than running many small clusters, so leveraging YARN to run both MR and non-MR applications on a common cluster becomes important from an economic and operational point of view. This talk covers the different APIs and RPC protocols available to developers for implementing new application frameworks on top of YARN. It also walks through a simple application that demonstrates how to implement your own Application Master, schedule requests to the YARN ResourceManager, and then use the allocated resources to run user code on the NodeManagers.

Usage Rights

CC Attribution License

    Presentation Transcript

    • Writing Application Frameworks on Apache Hadoop YARN
      Hitesh Shah, hitesh@hortonworks.com
      © Hortonworks Inc. 2011
    • Hitesh Shah - Background
      • Member of Technical Staff at Hortonworks Inc.
      • Committer for Apache MapReduce and Ambari
      • Earlier, spent 8+ years at Yahoo! building various infrastructure pieces, all the way from data storage platforms to high-throughput online ad-serving systems.
      Architecting the Future of Big Data
    • Agenda
      • YARN Architecture and Concepts
      • Writing a New Framework
    • YARN Architecture
      • Resource Manager
        – Global resource scheduler
        – Hierarchical queues
      • Node Manager
        – Per-machine agent
        – Manages the life-cycle of containers
        – Container resource monitoring
      • Application Master
        – Per-application
        – Manages application scheduling and task execution
        – E.g. MapReduce Application Master
    • YARN Architecture
      (diagram: Clients submit jobs to the Resource Manager; per-application App Masters and Containers run on Node Managers; arrows show job submission, node status, resource requests, and MapReduce status flows)
    • YARN Concepts
      • Application ID
        – Application Attempt IDs
      • Container
        – ContainerLaunchContext
      • ResourceRequest
        – Host/Rack/Any match
        – Priority
        – Resource constraints
      • Local Resource
        – File/Archive
        – Visibility: public/private/application
    • What you need for a new Framework
      • Application Submission Client
        – For example, the MR Job Client
      • Application Master
        – The core framework library
      • Application History (optional)
        – History of all previously run instances
      • Auxiliary Services (optional)
        – Long-running application-specific services running on the NodeManager
    • Use Case: Distributed Shell
      • Take a user-provided script or application and run it on a set of nodes in the cluster
      • Input:
        – User script to execute
        – Number of containers to run on
        – Variable arguments for each different container
        – Memory requirements for the shell script
        – Output location/dir
      (diagram: the DS AppMaster launches the shell script on several Node Managers)
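    The inputs listed above map naturally onto client command-line options. Below is a minimal, hypothetical sketch of how a distributed-shell client might collect them; the DSOptions class and option names are illustrative, not the actual distributed-shell client from the Hadoop sources.

    ```java
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical holder for the distributed-shell inputs listed on this slide.
    public class DSOptions {
        public String script;              // user script to execute
        public int numContainers = 1;      // number of containers to run on
        public List<String> shellArgs = new ArrayList<>();
        public int containerMemory = 128;  // MB required for the shell script
        public String outputDir;           // output location/dir

        // Parse flags of the form: --script s --num_containers n --shell_args a
        //                          --container_memory m --output o
        public static DSOptions parse(String[] args) {
            DSOptions opts = new DSOptions();
            for (int i = 0; i < args.length - 1; i++) {
                switch (args[i]) {
                    case "--script":           opts.script = args[++i]; break;
                    case "--num_containers":   opts.numContainers = Integer.parseInt(args[++i]); break;
                    case "--shell_args":       opts.shellArgs.add(args[++i]); break;
                    case "--container_memory": opts.containerMemory = Integer.parseInt(args[++i]); break;
                    case "--output":           opts.outputDir = args[++i]; break;
                }
            }
            return opts;
        }
    }
    ```

    The parsed values would then feed directly into the ResourceRequest (memory, container count) and ContainerLaunchContext (script, arguments) shown later in the talk.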
    • Client: RPC calls
      • Uses ClientRMProtocol
      • Get a new Application ID from the RM: ClientRMProtocol#getNewApplication
      • Application submission: ClientRMProtocol#submitApplication
      • Application monitoring: ClientRMProtocol#getApplicationReport
      • Kill the application: ClientRMProtocol#killApplication
    • Client
      • Registration with the RM
        – New Application ID
      • Application submission
        – User information
        – Scheduler queue
        – Define the container for the Distributed Shell App Master via the ContainerLaunchContext
      • Application monitoring
        – AppMaster host details with tokens if needed, tracking URL
        – Application status (submitted/running/finished)
    • Defining a Container
      • ContainerLaunchContext class
        – Can run a shell script, a Java process, or launch a VM
      • Command(s) to run
      • Local resources needed for the process to run
        – Dependent jars, native libs, data files/archives
      • Environment to set up
        – Java classpath
      • Security-related data
        – Container tokens
    • Application Master: RPC calls
      • AMRM and CM protocols
      • Register AM with RM: AMRM.registerAM
      • Ask RM to allocate resources: AMRM.allocate
      • Launch tasks on allocated containers: CM.startContainer
      • Manage tasks to final completion: app-specific RPC
      • Inform RM of completion: AMRM.finishAM
      (diagram: Client, AM, RM, and NMs exchanging these calls)
    • Application Master
      • Set up RPC to handle requests from the Client and/or tasks launched on containers
      • Register and send regular heartbeats to the RM
      • Request resources from the RM
      • Launch the user shell script on containers as and when they are allocated
      • Monitor the status of the user script on remote containers and manage failures by retrying if needed
      • Inform the RM of completion when the application is done
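    The retry-on-failure step above can be sketched independently of the YARN RPCs. The following is a minimal, hypothetical helper; the ContainerTask interface is an illustrative stand-in for "run the user script on a container", not a YARN API.

    ```java
    // Hypothetical sketch of the AM's retry policy for failed container tasks.
    public class RetryRunner {
        // Illustrative stand-in for launching the user script; not a YARN API.
        public interface ContainerTask {
            boolean run(); // true on success (exit status 0), false on failure
        }

        // Re-run a failed task, allowing up to maxRetries extra attempts,
        // as the slide's "manage failures by retrying" step suggests.
        public static int runWithRetries(ContainerTask task, int maxRetries) {
            int attempts = 0;
            while (true) {
                attempts++;
                if (task.run()) {
                    return attempts; // number of attempts used
                }
                if (attempts > maxRetries) {
                    throw new RuntimeException("task failed after " + attempts + " attempts");
                }
            }
        }
    }
    ```

    In a real AM, the failed task would be re-requested through AMRM.allocate and relaunched on a newly granted container rather than re-run in place.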
    • AMRM#allocate
      • Request:
        – Containers needed
          – Not a delta protocol
        – Locality constraints: Host/Rack/Any
        – Resource constraints: memory
        – Priority-based assignments
        – Containers to release (extra/unwanted)
          – Only non-launched containers
      • Response:
        – Allocated containers: launch or release
        – Completed containers: status of completion
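    Because the ask is not a delta, the AM itself must track how many containers are still outstanding and send the full remaining ask on every heartbeat. A small, hypothetical bookkeeping sketch (the AskTracker class is illustrative, not part of the YARN API):

    ```java
    // Hypothetical bookkeeping for a non-delta allocate protocol: each heartbeat
    // carries the full remaining ask, so allocations must decrement the count.
    public class AskTracker {
        private int wanted;     // total containers the application needs
        private int allocated;  // containers granted so far

        public AskTracker(int wanted) { this.wanted = wanted; }

        // What to put in the next AllocateRequest: everything still outstanding.
        public int nextAsk() { return Math.max(0, wanted - allocated); }

        // Record containers granted in an AllocateResponse.
        public void onAllocated(int n) { allocated += n; }

        // Record failed containers that must be asked for again.
        public void onFailed(int n) { allocated -= n; }
    }
    ```

    For example, an app that wants 10 containers and has been granted 4 would send an ask of 6 on its next heartbeat, not an ask of 10 and not a delta of -4.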
    • YARN Applications
      • Data processing:
        – OpenMPI on Hadoop
        – Spark (UC Berkeley)
        – Shark (Hive-on-Spark)
        – Real-time data processing: Storm (Twitter), Apache S4
        – Graph processing: Apache Giraph
      • Beyond data:
        – Deploying Apache HBase via YARN (HBASE-4329)
        – HBase co-processors via YARN (HBASE-4047)
    • References
      • Doc on writing new applications:
        – WritingYarnApplications.html (available at http://hadoop.apache.org/common/docs/r2.0.0-alpha/)
    • Questions? Thank You!
      Hitesh Shah, hitesh@hortonworks.com
    • Appendix: Code Examples
    • Client: Registration

      ClientRMProtocol applicationsManager;
      YarnConfiguration yarnConf = new YarnConfiguration(conf);
      InetSocketAddress rmAddress = NetUtils.createSocketAddr(
          yarnConf.get(YarnConfiguration.RM_ADDRESS));
      applicationsManager = ((ClientRMProtocol) rpc.getProxy(
          ClientRMProtocol.class, rmAddress, appsManagerServerConf));
      GetNewApplicationRequest request =
          Records.newRecord(GetNewApplicationRequest.class);
      GetNewApplicationResponse response =
          applicationsManager.getNewApplication(request);
    • Client: App Submission

      ApplicationSubmissionContext appContext;
      ContainerLaunchContext amContainer;
      amContainer.setLocalResources(localResources); // Map<String, LocalResource>
      amContainer.setEnvironment(env);               // Map<String, String>
      String command = "${JAVA_HOME}" + "/bin/java" + " MyAppMaster" + " arg1 arg2";
      amContainer.setCommands(commands);             // List<String>, e.g. the command above
      Resource capability;
      capability.setMemory(amMemory);
      amContainer.setResource(capability);
      appContext.setAMContainerSpec(amContainer);
      SubmitApplicationRequest appRequest;
      appRequest.setApplicationSubmissionContext(appContext);
      applicationsManager.submitApplication(appRequest);
    • Client: App Monitoring

      • Get application status

      GetApplicationReportRequest reportRequest =
          Records.newRecord(GetApplicationReportRequest.class);
      reportRequest.setApplicationId(appId);
      GetApplicationReportResponse reportResponse =
          applicationsManager.getApplicationReport(reportRequest);
      ApplicationReport report = reportResponse.getApplicationReport();

      • Kill the application

      KillApplicationRequest killRequest =
          Records.newRecord(KillApplicationRequest.class);
      killRequest.setApplicationId(appId);
      applicationsManager.forceKillApplication(killRequest);
    • AM: Ask RM for Containers

      ResourceRequest rsrcRequest;
      rsrcRequest.setHostName("*"); // hostname, rack, or wildcard
      rsrcRequest.setPriority(pri);
      Resource capability;
      capability.setMemory(containerMemory);
      rsrcRequest.setCapability(capability);
      rsrcRequest.setNumContainers(numContainers);

      List<ResourceRequest> requestedContainers;
      List<ContainerId> releasedContainers;
      AllocateRequest req;
      req.setResponseId(rmRequestID);
      req.addAllAsks(requestedContainers);
      req.addAllReleases(releasedContainers);
      req.setProgress(currentProgress);
      AllocateResponse allocateResponse = resourceManager.allocate(req);
    • AM: Launch Containers

      AMResponse amResp = allocateResponse.getAMResponse();
      ContainerManager cm = (ContainerManager) rpc.getProxy(
          ContainerManager.class, cmAddress, conf);
      List<Container> allocatedContainers = amResp.getAllocatedContainers();
      for (Container allocatedContainer : allocatedContainers) {
        ContainerLaunchContext ctx;
        ctx.setContainerId(allocatedContainer.getId());
        ctx.setResource(allocatedContainer.getResource());
        // set env, command, local resources, ...
        StartContainerRequest startReq;
        startReq.setContainerLaunchContext(ctx);
        cm.startContainer(startReq);
      }
    • AM: Monitoring Containers

      • Running containers

      GetContainerStatusRequest statusReq;
      statusReq.setContainerId(containerId);
      GetContainerStatusResponse statusResp = cm.getContainerStatus(statusReq);

      • Completed containers

      AMResponse amResp = allocateResponse.getAMResponse();
      List<ContainerStatus> completedContainers =
          amResp.getCompletedContainerStatuses();
      for (ContainerStatus containerStatus : completedContainers) {
        // containerStatus.getContainerId()
        // containerStatus.getExitStatus()
        // containerStatus.getDiagnostics()
      }
    • AM: I am done

      FinishApplicationMasterRequest finishReq;
      finishReq.setAppAttemptId(appAttemptID);
      finishReq.setFinishApplicationStatus(FinalApplicationStatus.SUCCEEDED); // or FAILED
      finishReq.setDiagnostics(diagnostics);
      resourceManager.finishApplicationMaster(finishReq);