1. How YARN Timeline Service v.2 Unlocks 360-Degree Platform Insights at Scale
Sangjin Lee @sjlee (Twitter)
Joep Rottinghuis @joep (Twitter)
2. Outline
• Why v.2?
• Highlights
• Developing for Timeline Service v.2
• Setting up Timeline Service v.2
• Milestones
• Demo
3. Why v.2?
• YARN Timeline Service v.1.x
• Gained good adoption: Tez, Hive, Pig, etc.
• Keeps improving with the v.1.5 APIs and storage implementation
• Still facing some fundamental challenges...
4. Why v.2?
• Scalability and reliability challenges
• Single instance of Timeline Server
• Storage (single local LevelDB instance)
• Usability
• Flows as a first-class concept
• Metrics and configuration as first-class citizens
• Metrics aggregation up the entity hierarchy
5. Highlights
v.1                                  | v.2
Single writer/reader Timeline Server | Distributed writer/collector architecture
Single local LevelDB storage*        | Scalable storage (HBase)
v.1 entity model                     | New v.2 entity model
No aggregation                       | Metrics aggregation
REST API                             | Richer query REST API
6. Architecture
• Separation of writers (“collectors”) and readers
• Distributed collectors: one collector for each app (see the client sketch after this slide)
• Dedicated RM collector for RM-generated data
• Collector discovery via RM
• Pluggable storage with HBase as default storage
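As a concrete illustration of the write path, here is a minimal sketch of how an application master could push entities to its per-app collector using Hadoop's TimelineV2Client (a real class in org.apache.hadoop.yarn.client.api; the entity type and id below are made up, and error handling is omitted):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.api.records.ApplicationId;
    import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
    import org.apache.hadoop.yarn.client.api.TimelineV2Client;

    public class CollectorWriteSketch {
      public static void main(String[] args) throws Exception {
        // The client is bound to one application; in a real AM the collector
        // address is discovered via the RM through allocate responses.
        ApplicationId appId = ApplicationId.newInstance(System.currentTimeMillis(), 1);
        TimelineV2Client client = TimelineV2Client.createTimelineClient(appId);
        client.init(new Configuration());
        client.start();

        TimelineEntity entity = new TimelineEntity();
        entity.setType("MY_APP_ENTITY");  // hypothetical entity type
        entity.setId("entity_1");

        client.putEntitiesAsync(entity);  // buffered, non-blocking write
        client.stop();
      }
    }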
8. What is a flow?
• A flow is a group of YARN applications that are launched as parts of a logical app (see the tagging sketch after this slide)
• Oozie, Scalding, Pig, etc.
• name: “frequent_visitor_stat”
• run id: 1466097809000
• version: “b9b9068”
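A minimal sketch of how a launcher can attach this flow context when submitting a YARN application: the context travels as application tags (tag prefixes as defined in Hadoop's TimelineUtils; the values are the example from this slide):

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;

    // appContext: the ApplicationSubmissionContext being prepared for submission.
    static void tagWithFlowContext(ApplicationSubmissionContext appContext) {
      Set<String> tags = new HashSet<>();
      tags.add("TIMELINE_FLOW_NAME_TAG:frequent_visitor_stat"); // logical flow name
      tags.add("TIMELINE_FLOW_RUN_ID_TAG:1466097809000");       // one run of the flow
      tags.add("TIMELINE_FLOW_VERSION_TAG:b9b9068");            // code version, e.g. a git sha
      appContext.setApplicationTags(tags);
    }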
9. Configuration and metrics
• Now explicit top-level attributes of entities
• Fine-grained updates and queries made possible (see the sketch after this slide)
• “update metric A to value x”
• “query entities where config A = B”
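A short sketch of what "first-class" means in the v.2 entity API: configs and metrics are typed fields on TimelineEntity rather than opaque blobs (classes from org.apache.hadoop.yarn.api.records.timelineservice; the ids and values are made up):

    import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
    import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric;

    static TimelineEntity buildEntity() {
      TimelineEntity entity = new TimelineEntity();
      entity.setType("MY_TASK");  // hypothetical entity type
      entity.setId("task_001");

      // Configs are an explicit map on the entity:
      // enables "query entities where config A = B".
      entity.addConfig("queue.name", "analytics");

      // Metrics are explicit time series:
      // enables "update metric A to value x".
      TimelineMetric memory = new TimelineMetric();
      memory.setId("MEMORY_MB");
      memory.addValue(System.currentTimeMillis(), 2048L);
      entity.addMetric(memory);
      return entity;
    }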
11. HBase Storage
• Scalable backend
• Row Key structure
• efficient range scans
• KeyPrefixRegionSplitPolicy
• Filter pushdown
• Coprocessors for flow aggregation (“readless” aggregation)
• Cell tags for metadata (application id, aggregation operation)
• Cell timestamps generated during put
• left-shifted, with the app id added, to avoid overwrites (see the sketch after this slide)
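A sketch of the timestamp trick in the last bullet: several apps can write to the same flow-level cell in the same millisecond, so the wall-clock time is left-shifted (multiplied) and an app-derived suffix is folded into the low digits so puts do not overwrite each other (the multiplier here is illustrative; Hadoop's TimestampGenerator uses the same idea):

    // Illustrative reimplementation of the "left shift + app id" scheme.
    static final long TS_MULTIPLIER = 1_000_000L;  // assumed value, may differ

    static long supplementedTimestamp(long wallClockMs, long appIdSuffix) {
      // High digits keep real-time ordering; low digits disambiguate writers.
      return wallClockMs * TS_MULTIPLIER + (appIdSuffix % TS_MULTIPLIER);
    }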
12. Tables in HBase
• flow run
• application
• entity
• flow activity
• app to flow
13. table: flow run
Row key:
clusterId!userName!flowName!inverted(flowRunId)
• most recent flow run stored first (inversion sketch after this slide)
• coprocessor enabled
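A one-liner shows why the inversion puts the most recent run first: HBase sorts row keys lexicographically, so storing Long.MAX_VALUE minus the run id reverses the scan order (key layout from this slide; the byte encoding is elided, and the example key values are made up):

    // inverted(flowRunId): the newest run id produces the smallest key, so a
    // range scan returns the most recent flow run first.
    static long inverted(long flowRunId) {
      return Long.MAX_VALUE - flowRunId;
    }
    // e.g. row key: "cluster1!joep!frequent_visitor_stat!" + inverted(1466097809000L)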
16. table: flow activity
Row key:
clusterId!inverted(TopOfTheDay)!userName!flowName
• shows the flows that ran on that day (see the sketch after this slide)
• stores per-flow information such as the number of runs, the run ids, and versions
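The day bucket works the same way; a sketch of inverted(TopOfTheDay), assuming UTC day boundaries:

    // Truncate an event time to the top of its (UTC) day, then invert it so
    // the most recent day sorts first in the flow activity table.
    static long invertedTopOfDay(long eventTimeMs) {
      final long DAY_MS = 24L * 60 * 60 * 1000;
      long topOfDay = (eventTimeMs / DAY_MS) * DAY_MS;
      return Long.MAX_VALUE - topOfDay;
    }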
18. Metrics aggregation
• Application level
• Rolls up sub-application metrics
• Performed in real time, in memory, in the collectors (toy sketch after this slide)
• Flow run level
• Rolls up app level metrics
• Performed in HBase region servers via coprocessors
• Offline aggregation (TBD)
• Periodically rolls up to the user, queue, and flow levels offline
• Stored in Phoenix tables
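A toy sketch of the app-level rollup a collector performs in memory: sum each metric id across the app's sub-application entities (the real aggregation supports more operations and full time-series handling; this sketch keeps only the latest data point per metric):

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
    import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric;

    static Map<String, Long> rollUp(Iterable<TimelineEntity> subAppEntities) {
      Map<String, Long> appLevel = new HashMap<>();
      for (TimelineEntity e : subAppEntities) {
        for (TimelineMetric m : e.getMetrics()) {
          // take the newest data point of each metric's time series
          long latest = m.getValues().entrySet().stream()
              .max(Map.Entry.comparingByKey())
              .map(en -> en.getValue().longValue())
              .orElse(0L);
          appLevel.merge(m.getId(), latest, Long::sum);  // SUM aggregation
        }
      }
      return appLevel;
    }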
21. Reader REST API: paths
• URLs under /ws/v2/timeline
• Canonical REST style URLs (examples after this slide):
/ws/v2/timeline/clusters/cluster_name/users/user_name/flows/flow_name/runs/run_id
• Path elements may be omitted if they can be inferred
• flow context can be inferred from the app id
• default cluster is assumed if cluster is omitted
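A couple of illustrative requests against a timeline reader (the host, port, and names are made up; the path shapes follow this slide):

    GET http://reader.example.com:8188/ws/v2/timeline/clusters/yarn-cluster/users/joep/flows/frequent_visitor_stat/runs/1466097809000
    GET http://reader.example.com:8188/ws/v2/timeline/apps/application_1465246237936_0001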
22. Setting up Timeline Service v.2
• Set up the HBase cluster (1.1.x)
• Add the timeline service jar to HBase
• Install the flow run coprocessor
• Create tables via TimelineSchemaCreator utility
• Configure the YARN cluster
• Enable Timeline Service v.2 (see the yarn-site.xml sketch after this slide)
• Add hbase-site.xml for the timeline collector and readers
• Start the timeline reader daemon
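A minimal yarn-site.xml sketch for the "Configure the YARN cluster" step (property names follow the Hadoop Timeline Service v.2 documentation; adjust values to your cluster):

    <property>
      <name>yarn.timeline-service.version</name>
      <value>2.0f</value>
    </property>
    <property>
      <name>yarn.timeline-service.enabled</name>
      <value>true</value>
    </property>
    <property>
      <name>yarn.system-metrics-publisher.enabled</name>
      <value>true</value>
    </property>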
23. Milestone 1 ("Alpha 1")
• Merge discussion (YARN-2928) in progress as we speak!
✓ Complete end-to-end read/write flow
✓ Real-time application and flow aggregation
✓ New entity model
✓ HBase storage
✓ Rich REST API
✓ Integration with Distributed Shell and MapReduce
✓ YARN generic events and system metrics
24. Milestones - Future
• Milestone 2 (“Alpha 2”)
• Integration with the new YARN UI
• Integration with more frameworks
• Beta
• Freeze API and storage schema
• Security
• Collectors as containers
• Storage fault tolerance
• Production-ready
• Migration-ready
25. Contributors
• Li Lu, Junping Du, Vinod Kumar Vavilapalli (Hortonworks)
• Varun Saxena, Naganarasimha G. R. (Huawei)
• Sangjin Lee, Vrushali Channapattan, Joep Rottinghuis (Twitter)
• Zhijie Shen (now at Facebook)
• The HBase and Phoenix community!