Managing 2000 Node Cluster with Ambari

  • Welcome to the Apache Ambari talk. Your speakers today are Siddharth (myself) and my colleague Srimanth, from Hortonworks.
  • With increased adoption of Ambari throughout the enterprise, the focus at the moment is scaling out to thousands of nodes.
    With that in mind, the focus of this talk is to demonstrate operations on a 2K node cluster, with a glimpse at future goals.
    We will look at the awesome features that are part of 1.6.0,
    along with the things that truly identify Ambari as a platform, namely Views and extensibility.
    If you attend the Birds of a Feather session tomorrow, we can do a further deep dive into these new developments.
  • This slide represents Ambari's position in the Hadoop technology stack and highlights key integration points with services that are either cloud compute providers or big data analytics platforms.
    By the end of the talk you should have a fairly good idea of how Ambari enables the integration of these providers with the Hadoop ecosystem.
  • Orchestrator: the Ambari state machine combined with the action scheduler and the heartbeat handler.
    Request dispatcher: the service provider interface and the resource provider layer.
    Clusters, stacks, etc. are all resources from the Ambari API standpoint.
    The monitoring subsystem comprises Ganglia as the metrics system and Nagios for alerts.
  • Host component isolation for the Ambari server, Ganglia, Nagios, and the masters.
    All testing was done on VMs in the cloud.
  • So now we are going to look at a video. The story here is:
    Let's say you have a sizeable cluster that needs additional compute capacity, and the new hardware you intend to add needs to be configured differently from the existing cluster configuration.
    We begin by looking at the dashboard, which shows the 2000 slave nodes and the rest of the nice, customizable Ambari widgets.
    The next step is to choose the groups of hosts that you want to customize.
    What we are doing here is grouping hosts together using config groups, and we give the group a name.
    Let's select a few DataNodes to demonstrate this.
    Note: since this is a paid cluster and it is expensive to keep running, we are showing you a video.
    The config group manager allows you to filter by component and by regular expression; we make sure DataNode hosts are the only ones in the filter.
    Next, use an expression to choose the hosts you want; here I just chose them at random.
    Now on to actually making the config changes.
    Restart All will restart everything in one shot and apply the config.
    The other option allows you to do a rolling restart.
  • When the rubber meets the road, what do we see as the performance bottleneck?
    The monitoring and alerting subsystems on large clusters are bogged down by the number of I/O operations needed to write a relatively small amount of data at high frequency to permanent storage.
    These iostat numbers are from around the time we began optimizing performance; as you can see, we were writing at 1 GB/min.
  • The most significant metric I would like to present is the load average improvement achieved through the performance tuning effort.
    It involved tuning the rrdcached daemon used to write Ganglia data, and also reading the data back using the Ambari API as well as Nagios.
    The objective of this exercise is to certify Ambari with 2K nodes on run-of-the-mill VMs, with little to no optimization below the application stack, and to achieve acceptable performance for all management and monitoring operations.
    In theory it is possible to go above and beyond this magic number.
    The goal is to eventually scale to 10K+ nodes managed by a single Ambari instance.
  • This is still a conceptual architecture, and you can follow the discussion on the Apache JIRA that is listed.
    A quick word on the architecture: it involves scaling out the collector daemon in proportion to cluster size.
    The Views you see in this picture are covered later in the deck; here they represent the capability to extend Ambari to provide a user interface of your choice for visualizing data in a Hadoop cluster.
  • Integrated with the open source Quartz scheduler.
    The API lets you schedule a batch of requests to be executed on a schedule.
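As a rough illustration of the request-scheduling API just described, the sketch below builds a batched rolling-restart schedule as a Python dict. The endpoint paths, field names (`RequestSchedule`, `batch_settings`, etc.), and the cluster/service names are approximations for illustration; check your Ambari version's REST documentation before relying on the exact shape.

```python
import json

# Sketch of a request-schedule payload for Ambari's batch scheduling API.
# All field names and URIs below are illustrative, not authoritative.
def rolling_restart_schedule(component, hosts, batch_size=10, interval_s=120):
    """Build a schedule that restarts `component` on `hosts` in batches."""
    batches = [hosts[i:i + batch_size] for i in range(0, len(hosts), batch_size)]
    requests = []
    for order, batch in enumerate(batches, start=1):
        requests.append({
            "order_id": order,
            "type": "POST",
            "uri": "/api/v1/clusters/c1/requests",   # hypothetical cluster "c1"
            "RequestBodyInfo": {
                "RequestInfo": {
                    "context": f"Rolling restart batch {order}",
                    "command": "RESTART",
                },
                "Requests/resource_filters": [{
                    "service_name": "HDFS",
                    "component_name": component,
                    "hosts": ",".join(batch),
                }],
            },
        })
    return {"RequestSchedule": {
        "batch": [
            {"requests": requests},
            {"batch_settings": {
                "batch_separation_in_seconds": interval_s,  # pause between batches
                "task_failure_tolerance": 1,
            }},
        ],
    }}

# 25 hosts with a batch size of 10 yields three batches.
payload = rolling_restart_schedule("DATANODE", [f"host{n}" for n in range(25)])
print(json.dumps(payload, indent=2)[:200])
```

A payload like this would be POSTed to the cluster's request_schedules resource; the "schedule and go home" workflow on the next slide is exactly this mechanism.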
  • Rolling restart is the first use case for request scheduling.
    Schedule it and go home.
  • A host configuration group is a way of associating a set of configurations with a group of hosts, per service.
    This feature is supported with Blueprints as well, so a touch-less install can still incorporate heterogeneous target hosts.
  • Additionally, any custom property can be added to an existing configuration.
  • Selective application of changed configs: you know exactly when and where they apply.
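The config-group idea above can be sketched as the payload a client might send to create one. The resource shape (`ConfigGroup`, `desired_configs`), the cluster name `c1`, and the host names are all illustrative assumptions; verify the exact fields against your Ambari version's REST docs.

```python
import json

# Sketch of a host config group create payload: a named set of config
# overrides tied to a subset of hosts for one service. Field names are
# approximate, not authoritative.
def make_config_group(group_name, service_tag, hosts, overrides):
    """Associate override properties (here hdfs-site) with a set of hosts."""
    return [{
        "ConfigGroup": {
            "cluster_name": "c1",                    # hypothetical cluster
            "group_name": group_name,
            "tag": service_tag,                      # owning service, e.g. HDFS
            "description": f"Overrides for {group_name}",
            "hosts": [{"host_name": h} for h in hosts],
            "desired_configs": [{
                "type": "hdfs-site",
                "tag": f"version_{group_name}",
                "properties": overrides,
            }],
        }
    }]

# New hardware with a different disk layout than the rest of the cluster.
group = make_config_group(
    "new-hardware",
    "HDFS",
    ["host2001.example.com", "host2002.example.com"],
    {"dfs.datanode.data.dir": "/grid0/hdfs/data,/grid1/hdfs/data"},
)
print(json.dumps(group, indent=2)[:160])
```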
  • A Blueprint, as the name suggests, is a declarative definition of the cluster, which can be exported as a document from a live cluster or imported to create a new cluster from an existing blueprint.
    Real-world use cases: the Savanna project, and Launchpad on Microsoft Azure.
  • A quick look at how to create a cluster using a blueprint.
    Define host groups: these can be thought of as the unique sets of components and configurations that represent the hosts in your cluster, with cardinality from 1 to N.
    Capture non-default configuration overrides.
    Point to the stack name and version to use.
    When you POST to create a cluster, you get back a request id that can be used to track the progress of the deployment.
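The steps above can be sketched as two documents: the blueprint itself (host groups, overrides, stack pointer) and the cluster-creation template that maps real hosts onto host groups. Host group names, property values, and host FQDNs here are made up for illustration.

```python
import json

# Minimal blueprint sketch: host groups + a config override + stack pointer.
# Everything here is illustrative; consult the Ambari Blueprints docs for
# the full schema.
blueprint = {
    "Blueprints": {"blueprint_name": "two-k-cluster",
                   "stack_name": "HDP", "stack_version": "2.1"},
    "configurations": [
        {"hdfs-site": {"dfs.replication": "3"}},   # non-default override
    ],
    "host_groups": [
        {"name": "masters", "cardinality": "1",
         "components": [{"name": "NAMENODE"}, {"name": "RESOURCEMANAGER"}]},
        {"name": "workers", "cardinality": "2000",
         "components": [{"name": "DATANODE"}, {"name": "NODEMANAGER"}]},
    ],
}

# The cluster-creation template maps concrete hosts onto the host groups;
# POSTing it returns a request id you can poll for deployment progress.
cluster_template = {
    "blueprint": "two-k-cluster",
    "host_groups": [
        {"name": "masters", "hosts": [{"fqdn": "master1.example.com"}]},
        {"name": "workers", "hosts": [{"fqdn": f"worker{n}.example.com"}
                                      for n in range(1, 4)]},  # truncated list
    ],
}
print(json.dumps(blueprint)[:80])
```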
  • A real-world use case of blueprints.
    HDP Launchpad for Azure (Linux) lets you spin up HDP clusters super easily: no need for you to spin up VMs, create images, set up SSH, etc. All you need to get started is your Azure account (with a credit card in good standing).
    Once you get the Launchpad going, it will do *everything* for you and publish the Ambari URL as the control entry point.
    Under the hood, after running some Azure provisioning and setup scripts, all the goodness comes from Ambari Blueprints.
  • When you manage a cluster of 2000 nodes, you need the ability to perform operations in bulk.
    Bulk host operations are now available on the Hosts page.
    Basically, you identify which hosts (all, filtered, or selected),
    then you perform operations: either host-level or component-level operations.
    Components generally tend to be slaves/workers, which are larger in number.
  • Component operations tend to be performed in batches.
    For clusters with 2000 nodes you need good filters to easily find the appropriate hosts. Ambari provides 13 filters on its Hosts page to help you.
  • So let's say you need:
    a hardware change or replacement on some nodes,
    to experiment with service configurations,
    to turn off a service completely,
    or to delete cluster nodes.
    Maintenance Mode silences alerts and skips operations.
  • Inheritance cannot be turned off at lower levels.
  • We support safely moving the following master components from one host to another.
    Even the two NameNodes in HDFS HA.
  • Hadoop is an ecosystem with many services, many users, and many, many use cases.
    Even with all the functionality provided in Ambari, there will always be a different way to use and view your cluster.
    To allow users and admins to extend and contribute their own ‘view’ of the cluster, Ambari is providing the ‘Ambari Views’ framework.
    Developers can now create their ‘view’ using this framework.
    It gives users and administrators a single entry point into the cluster and allows for very interesting possibilities.
    Views also nicely complement stack extensibility on the backend, by providing appropriate views for those stacks in the front end.
    Question: what is the admin functionality of Views?
  • This is a Tech Preview being shown.
  • view.xml – the view descriptor.
    WEB-INF/lib – third-party libraries.
    WEB-INF/web.xml – defines custom servlets (non-REST).
    classes – application logic.
    index.html, JavaScript, … – the UI.
  • The view descriptor is the central entry point.
    Here you can see the view id, the display label you see in the menu, and the version of the view.
    Each JAR is for one version of the view, and a view version can have many instances of the view.
    Each view can also define the parameters it needs to work; here you see the list of cities this weather view needs.
    You also see a REST resource defined: all you need to implement is the Java bean and a JAX-RS annotated class.
    Each view can optionally define instances by default; here you see Europe. The HDFS view does not have any instances because the location of the NameNode is a runtime value, not known at packaging time.
  • Once the view JAR is placed into Ambari, you can see the views, versions, and instances.
    You can create/update/delete view instances via REST calls.
    So if a third-party tool wants a view into HDFS, it can create an instance and send the user the link.
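A sketch of the instance-creation call mentioned above, using the view/version/instance hierarchy from the previous slide. The URL pattern and the `ViewInstanceInfo` body shape are approximations, and the `cities` parameter is the hypothetical weather-view parameter from the descriptor example.

```python
# Sketch of creating a view instance via Ambari's views REST API.
# The URL layout mirrors the views -> versions -> instances hierarchy;
# exact field names should be checked against the API docs.
def view_instance_request(view, version, instance, params):
    """Return the (url, body) pair a client would PUT/POST to create an instance."""
    url = f"/api/v1/views/{view}/versions/{version}/instances/{instance}"
    body = {"ViewInstanceInfo": {"properties": params}}
    return url, body

url, body = view_instance_request(
    "WEATHER", "1.0.0", "Europe",
    {"cities": "London,Paris,Berlin"},   # hypothetical view parameter
)
print(url)
```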
  • Something that is being worked on is administration capability for Views: admins can configure views, provide entitlements for users, etc.
  • So admins can control the cluster, and users can view the cluster and use it.
  • In Hadoop 1.0 we visualized MapReduce jobs, their dependencies, and how the map and reduce tasks performed.
  • In Hadoop 2.0, MapReduce has been made more generic in Apache Tez.
    Apache™ Tez generalizes the MapReduce paradigm to a more powerful framework for executing a complex DAG (directed acyclic graph) of tasks.
    As you can see, Hive, Pig, and other data processing services are being ported on top of Tez.
    For Hadoop 2.0, Ambari visualizes Hive queries that use the Tez engine.
  • Each Hive + Tez query is shown in the jobs table. Going to an individual job shows the Tez DAG mixed in with Hive information.
  • HDFS_-prefixed counters come from HDFS. They generally tend to appear on the first and last vertices of the DAG, because that is where data is read from and written to HDFS.
    FILE_-prefixed counters are local disk accesses for the vertex; they represent data read and written during spilling. They do not represent data transferred between vertices.
    SPILLED_RECORDS – in Tez, spilling of records can happen not only during vertex output (as in MapReduce) but also at vertex input. For a vertex this number covers both.
    Counter notes:
    FILE_BYTES_READ / FILE_BYTES_WRITTEN – spill bytes, local disk only; does not include data transported across tasks; also includes reading configs.
    HDFS_BYTES_READ / HDFS_BYTES_WRITTEN – generally on the first and last vertices, where HDFS is accessed.
    HDFS_READ_OPS – listing directories (direct HDFS counters).
    HDFS_WRITE_OPS – filesystem changes (direct HDFS counters): create folder, concat file, mkdir, etc.
    SPILLED_RECORDS – occurs in output (when spilling locally because data exceeds memory) and in input (when collecting from multiple inputs); if a vertex has both input and output, this is the sum of both.
  • Summary metrics are shown for all vertices, so that you can compare the relative performance of vertices.
  • Hive and Tez have hooks to push notifications to ATS. Ambari pulls/GETs information from ATS.
    Other components plan to use ATS more, so Ambari should be able to show other types of jobs.
  • To enable Hive + Tez, admins should go to the Hive configurations and set “hive.execution.engine” to “tez”. The default is “mr”.
    Other important Tez configs are shown, like the YARN container size for Hive + Tez queries.
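The `hive.execution.engine` property itself is exactly as quoted above; what follows is a sketch of the desired-config payload an Ambari client might send to change it. The `desired_config` wrapper and the tag naming are illustrative assumptions about the API shape.

```python
import json

# Sketch of a desired-config update that switches Hive's execution engine.
# "hive.execution.engine" and the "tez"/"mr" values are the real property
# and values; the payload wrapper is illustrative.
def set_hive_engine(engine="tez", tag="version2"):
    assert engine in ("tez", "mr")          # the two engines discussed above
    return {"Clusters": {"desired_config": {
        "type": "hive-site",
        "tag": tag,                          # hypothetical config tag
        "properties": {"hive.execution.engine": engine},
    }}}

payload = set_hive_engine("tez")
print(json.dumps(payload))
```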
  • The Jobs viewer can handle large queries, like this one with approximately 70 Tez vertices and 12 reduce vertices.
    The graph is more readable than the text above it for analyzing issues.
  • What truly identifies Ambari as a platform: the ability to add new services and to manage and monitor a custom stack of components.
  • A stack is an all-inclusive and self-contained definition of all services and their life cycles within Ambari.
    Let's start by encapsulating components and configuration in a stack definition.
    Next, allow a developer to define the component life cycle by declaring relationships between the different states of a component.
    The REST API allows you to discover what is available.
    Last, plug it into Ambari to bring it all together.
  • Command scripts are the way to tell Ambari what needs to be executed in order to achieve a state change; for example, going from INSTALLED to STARTED entails executing a user-defined start script of a component in the desired stack.
    Custom commands and custom actions are similar to command scripts but are independent of a state change and can be executed on demand using the Ambari API. Examples: decommission a DataNode, run the rebalancer, verify Kerberos settings.
    Extension points make it easy to add new stacks.
  • Command scripts are bundled with the server and downloaded to the agents.
    At registration time, agents check that the MD5 checksum of the downloaded script archive matches the one on the server; if not, the agent downloads the new definitions from the server.
    This makes on-demand / on-site modifications easy to make and verify.
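The checksum comparison just described is simple to sketch. This is a minimal stand-alone illustration of the idea, not Ambari's actual agent code; the archive bytes are fabricated.

```python
import hashlib

# Minimal sketch of the agent-side cache check: compare the MD5 of the
# cached script archive against the server-advertised hash and flag a
# re-download on mismatch.
def md5_of(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def needs_refresh(cached_archive: bytes, server_md5: str) -> bool:
    """True when the agent's cached archive no longer matches the server's."""
    return md5_of(cached_archive) != server_md5

archive = b"fake-command-scripts-archive"      # stand-in for the tarball bytes
server_hash = md5_of(archive)
assert not needs_refresh(archive, server_hash)  # cache is current
assert needs_refresh(b"stale-bytes", server_hash)  # would trigger re-download
```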
  • The HBASE service definition in the stack:
    The metrics.json file defines all metrics emitted by HBase, as well as how these metrics show up in the Ambari API.
    The definition contains configuration, a package of command scripts, and the definition of the service in metainfo.xml.
    metainfo.xml links the HBASE_MASTER component to the script that defines the life cycle commands (start, stop, install, configure) and any custom commands.
    The package holds the actual command scripts that will be executed on the agents.
    An example of a command script: importantly, the Python resource management framework of Ambari allows a developer to extend a base class called Script and define resources, similar to tools like Puppet.
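To make the command-script pattern concrete, here is a self-contained sketch. Real scripts extend the `Script` base class from Ambari's resource management framework and declare resources (Package, XmlConfig, Execute); since that library is not available here, a tiny stand-in base class is defined, and the methods record their actions instead of running them.

```python
# Sketch of a stack command script. The real base class lives in Ambari's
# Python resource management framework; this stand-in only mimics the
# dispatch pattern (command name -> method of the same name).
class Script:
    def __init__(self):
        self.actions = []          # recorded for illustration only

    def execute(self, command):
        getattr(self, command)()   # e.g. "start" -> self.start()

class HbaseMaster(Script):
    """Hypothetical life cycle script for the HBASE_MASTER component."""
    def install(self):
        self.actions.append("install")    # real script would declare Package(...)
    def configure(self):
        self.actions.append("configure")  # real script would declare XmlConfig(...)
    def start(self):
        self.configure()                  # configure before starting
        self.actions.append("start")      # real script would declare Execute(...)
    def stop(self):
        self.actions.append("stop")

master = HbaseMaster()
master.execute("start")
print(master.actions)   # ['configure', 'start']
```

Going from INSTALLED to STARTED, as described two slides back, maps onto the agent invoking `start` on exactly this kind of script.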
