YARN and NTT's contribution @ Cloudera World Tokyo 2013

Presentation @ Cloudera World Tokyo 2013

Transcript

  • 1. NTT meets Hadoop - Our contribution to Hadoop and YARN - @Cloudera World Tokyo 2013, 2013/11/7, NTT, Tsuyoshi Ozawa
  • 2. About me: Tsuyoshi Ozawa, Researcher & Engineer @ NTT (Twitter: @oza_x86_64); a Hadoop contributor; author of Chapter 22 (YARN) of "Hadoop 徹底入門", 2nd Edition.
  • 3. Agenda: NTT and Hadoop (why Hadoop?; our Hadoop usage; our contributions to Apache Hadoop); technical hot topics in the Hadoop community (hot topics in each Hadoop component; YARN: what is YARN?, plus works in progress: ResourceManager HA, Llama (long-lived application master), and long-running services); summary.
  • 4. Why Hadoop? Deep and wide experience introducing open-source software technologies: for data management, 11 years with PostgreSQL, including mission-critical cases; leading Hadoop deployments in Japan for large-scale, high-volume data processing, which naturally fits "Big Data" and "Enterprise Batch". [Figure: latency (sec to day) vs. data size (GB to PB): RDBMSs cover online and enterprise batch processing at smaller sizes, Hadoop covers big-data batch processing at TB-PB scale, DWHs and search engines cover query & search processing, and low-latency serving systems cover online processing]
  • 5. Why is an RDBMS better for smaller data? Schema on write, the traditional distributed-database approach. Pros: minimal overhead at query time. Cons: a fixed schema and workload, and high overhead at load time. [Figure: e.g. a column store: data is loaded into a schema- and workload-aware layout, so a query joining Column1 and Column3 on id reads only those columns]
  • 6. Why is Hadoop better for bigger data? Schema on read, Hadoop's approach: relational mapping happens at read (processing) time. Pros: a flexible schema and workload, and minimal overhead at load time. Cons: high overhead at query time. [Figure: e.g. HDFS + MapReduce: raw rows (Col1-1 ... Col4-2) are loaded scalably, and a query reads all the data and filters at runtime] A minimal mapper sketch follows.
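
To make schema on read concrete, here is a minimal sketch (not from the slides) of a MapReduce mapper that applies the schema only at read time; the comma-delimited layout (id in column 0, status in column 2) is a made-up example:

    // Hypothetical schema-on-read mapper: the raw CSV sits on HDFS with no
    // load-time schema; column positions and filtering are applied per record.
    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class SchemaOnReadMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {

      private static final IntWritable ONE = new IntWritable(1);

      @Override
      protected void map(LongWritable offset, Text line, Context context)
          throws IOException, InterruptedException {
        // The "schema" is applied here, at read time.
        String[] cols = line.toString().split(",");
        if (cols.length < 3) {
          return; // malformed rows are skipped; nothing was validated at load
        }
        if ("ERROR".equals(cols[2])) { // runtime filtering instead of an index
          context.write(new Text(cols[0]), ONE);
        }
      }
    }

Loading such data is just an HDFS file copy; all parsing, validation, and filtering cost is paid here, at query time.
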
  • 7. We're Hadoop users! (Services) Mobile Spatial Statistics (NTT Docomo), http://www.nttdocomo.co.jp/english/corporate/technology/rd/technical_journal/bn/vol14_3/index.html ; historical search over Twitter data (NTT DATA); Buzz Finder (NTT Communications): Twitter analytics, presented at Hadoop Summit 2011, http://www.slideshare.net/cloudera/hadoop-world-2011-large-scale-log-data-analysis-for-marketing-in-ntt-communications
  • 8. We're Hadoop users! (System integration) NTT DATA has over 6 years of experience and over 30 production cases on Hadoop, helping enterprise customers design, integrate, deploy, and run large clusters in the range of 20 to 1200+ nodes, up to 4 PB. [Figure: the latency-vs-size chart again, with customer deployments (financial, media, public sector, telecom) plotted across online batch, enterprise batch, big-data, and query & search processing]
  • 9. Why are we contributing to Apache Hadoop? There is an impedance mismatch between the Hadoop community and our needs. Examples of our needs: easier operations (HA features), more metrics, more documentation, and more understandable logging; some of these matter to both the community and us, while others matter to us but are not a high priority for the community. Our answer: *writing code* is the fastest way to get our needs reflected: it bridges the gap, accelerates development of new features, and builds know-how about those features.
  • 10. Examples: easier operations: enhancing HA features and rethinking the state machines of HA components from the operators' point of view; wider use cases: optimizing MapReduce with a node-level combiner (MAPREDUCE-4502), presented at the pre-Strata/Hadoop meetup in New York, and container reuse (MAPREDUCE-3902).
  • 11. Example: starting up RM HA... Before: if the configuration is invalid, it just returns IllegalArgumentException, which makes the configuration difficult to debug!
  • 12. Example: starting up RM HA... After: users can see why startup failed.
  • 13. My current activities. Contributions: ResourceManager HA, stabilization of MapReduce, and a YARN article for a Hadoop book in Japanese. Problems: time lag (most Hadoop development happens on Silicon Valley time, and PST and JST are 17 hours apart) and offline meetups (several are held there every week). Solution: visiting Silicon Valley and developing Hadoop there.
  • 14. NTT DATA members' contributions. NTT DATA members are contributing to Hadoop: Akira Ajisaka / @ajis_ka, Kousuke Saruta, Masatake Iwasaki, and Shinichi Yamashita. They contribute based on Hadoop integration experience with enterprise customers: enhancing metrics, logging, and monitoring for robust design and rock-solid operations; adding and improving documentation; various bug fixes (HADOOP-9909, HIVE-5296, etc.); and, because integration is our business, the "Direct Connector for PostgreSQL" (SQOOP-390, SQOOP-999).
  • 15. TECHNICAL PART
  • 16. Hot topics in each Hadoop component. MapReduce: shuffle plugin (MAPREDUCE-4049); JobTracker (AppMaster) HA (MAPREDUCE-2708); MapReduce itself is in a stable phase, and optimization is being done in the Apache Tez, Spark, and Impala projects. HDFS: cache management (HDFS-4949), snapshots (HDFS-2802), and symbolic links (HADOOP-6421, etc.). YARN (new!): a new component in Hadoop 2.x.
  • 17. What's YARN? Yet Another Resource Negotiator, proposed by Arun C Murthy in 2011. It separates the JobTracker's roles: resource management/isolation and task scheduling. MapReduce v2 is the name for MapReduce running over YARN. [Figure: the MRv1 architecture (MapReduce alone) vs. the YARN architecture (MRv2, Impala, and Spark as frameworks on top of YARN)]
  • 18. Why YARN? (Use case) Running various processing frameworks on the same cluster: batch processing with MapReduce, interactive queries with Impala, and interactive deep analytics (e.g. machine learning) with Spark. [Figure: periodic long batch queries go to MRv2, interactive aggregation queries to Impala, and interactive machine-learning queries to Spark, all running on YARN over HDFS]
  • 19. Why YARN? (Technical reason) More effective resource management for multiple processing frameworks: without it, it is difficult to use the cluster's entire resources without thrashing, and you cannot move *real* big data out of HDFS/S3 to a separate cluster. [Figure: each framework has its own scheduler (a master for Impala, a master for MapReduce); slaves co-hosting Impala daemons, MapReduce map/reduce slots, and HDFS daemons thrash when Job1 and Job2 compete]
  • 20. MRv1 architecture. Resources are managed by the JobTracker, which does both task scheduling and resource management; each framework's scheduler only knows its own resource usage. [Figure: a master for MapReduce and a master for Impala each manage their own slaves (map/reduce slots) with no visibility into each other's usage]
  • 21. YARN architecture. The idea: one global resource manager (the ResourceManager), a common resource pool for all frameworks (NodeManagers and containers), and a scheduler per framework (the AppMaster). [Figure: 1. the client submits jobs to the ResourceManager; 2. the ResourceManager launches a master in a container; 3. the master launches slaves in further containers across the NodeManagers] A client-side sketch of step 1 follows.
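
As a rough illustration of step 1 (not shown on the slide itself), the sketch below submits a minimal application through the Hadoop 2.x YarnClient API; the application name and the AppMaster command "./my-app-master.sh" are placeholders:

    // Hypothetical minimal YARN submission, assuming a Hadoop 2.x classpath.
    import java.util.Collections;
    import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
    import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.client.api.YarnClientApplication;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;
    import org.apache.hadoop.yarn.util.Records;

    public class MinimalYarnSubmit {
      public static void main(String[] args) throws Exception {
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new YarnConfiguration());
        yarnClient.start();

        // Step 1: ask the ResourceManager for a new application.
        YarnClientApplication app = yarnClient.createApplication();
        ApplicationSubmissionContext appContext =
            app.getApplicationSubmissionContext();
        appContext.setApplicationName("demo-app"); // placeholder name

        // Describe the container that will run the AppMaster; the
        // ResourceManager then performs step 2 by launching it.
        ContainerLaunchContext amContainer =
            Records.newRecord(ContainerLaunchContext.class);
        amContainer.setCommands(Collections.singletonList("./my-app-master.sh"));
        appContext.setAMContainerSpec(amContainer);
        appContext.setResource(Resource.newInstance(1024, 1)); // 1 GB, 1 vcore

        yarnClient.submitApplication(appContext);
        yarnClient.stop();
      }
    }

Step 3 is then up to the AppMaster itself, which negotiates containers for its slaves with the ResourceManager.
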
  • 22. YARN and Mesos: their policies/philosophies differ. YARN: an AppMaster is launched per job; more scalable, but higher latency; one container per request; one master per job. Mesos: a master is launched per app (framework); less scalable, but lower latency; a bundle of containers per request; one master per framework. [Figure: in YARN, the ResourceManager places Master1 and Master2 on NodeManagers; in Mesos, the ResourceMaster mediates between per-framework masters and the slaves] A sketch of YARN's per-container request model follows.
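
To make "one container per request" concrete, here is a hedged sketch of the AppMaster side using the Hadoop 2.x AMRMClient API; the registration host and the three-container loop are illustrative only:

    // Hypothetical AppMaster-side allocation loop, assuming Hadoop 2.x.
    import java.util.List;
    import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
    import org.apache.hadoop.yarn.api.records.Container;
    import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class OneContainerPerRequest {
      public static void main(String[] args) throws Exception {
        AMRMClient<ContainerRequest> rm = AMRMClient.createAMRMClient();
        rm.init(new YarnConfiguration());
        rm.start();
        rm.registerApplicationMaster("appmaster-host", 0, ""); // placeholders

        // YARN's model: each ContainerRequest asks for exactly one container
        // of a given size, unlike Mesos's bundled resource offers.
        for (int i = 0; i < 3; i++) {
          rm.addContainerRequest(new ContainerRequest(
              Resource.newInstance(1024, 1), // 1 GB, 1 vcore
              null, null,                    // no node/rack preference
              Priority.newInstance(0)));
        }

        // Heartbeat until the ResourceManager has granted all three.
        int allocated = 0;
        while (allocated < 3) {
          AllocateResponse response = rm.allocate(0.1f);
          List<Container> granted = response.getAllocatedContainers();
          allocated += granted.size(); // launch work on each granted container
          Thread.sleep(1000);
        }
        rm.unregisterApplicationMaster(FinalApplicationStatus.SUCCEEDED, "", "");
        rm.stop();
      }
    }
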
  • 23. Applications? From Hadoop World 2013: YARN is becoming a real open kernel! https://twitter.com/ajis_ka/status/395572875400605696/photo/1/large
  • 24. Hot topics in YARN: ResourceManager high availability (YARN-149, YARN-128); Llama (Long-lived Application MAster); long-lived services in YARN (YARN-896).
  • 25. ResourceManager high availability. What happens when the ResourceManager fails? New jobs cannot be submitted. Note: applications that are already launched continue to run, and AppMaster recovery is handled by each framework (e.g. MRv2). [Figure: the client can no longer submit jobs to the failed ResourceManager, but the masters and containers already running on the NodeManagers keep executing their jobs]
  • 26. ResourceManager high availability: the approach. Store RM state in ZooKeeper; automatic failover by the ZKFC (ZooKeeper Failover Controller); manual failover via RMHAUtils; NodeManagers use a local RMProxy to reach whichever RM is active. [Figure: 1. the active node stores all state in the ZooKeeper RMState store; 2. the active ResourceManager fails; 3. its ZKFC detects the failure; 4. failover: the standby node becomes active] A configuration sketch follows.
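
For reference, a sketch of how an RM HA pair might be configured. The property keys come from the yarn.resourcemanager.ha.* namespace introduced with YARN-149, but exact names and supported values vary by Hadoop release; all hostnames below are placeholders, and in practice these settings live in yarn-site.xml:

    // Hypothetical RM HA settings, expressed programmatically for brevity.
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class RmHaConf {
      public static YarnConfiguration create() {
        YarnConfiguration conf = new YarnConfiguration();
        conf.setBoolean("yarn.resourcemanager.ha.enabled", true);
        // Logical IDs for the two ResourceManagers and their hosts.
        conf.set("yarn.resourcemanager.ha.rm-ids", "rm1,rm2");
        conf.set("yarn.resourcemanager.hostname.rm1", "rm1.example.com");
        conf.set("yarn.resourcemanager.hostname.rm2", "rm2.example.com");
        // ZooKeeper quorum backing the RMState store and failover election.
        conf.set("yarn.resourcemanager.zk-address", "zk1:2181,zk2:2181,zk3:2181");
        return conf;
      }
    }

With this in place, clients and NodeManagers retry against rm1 and rm2 through their RM proxies until they find the active node.
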
  • 27. Llama (Long-lived Application MAster). YARN's resource allocation is optimized for long batch systems (like Hadoop!), while Impala and Spark are low-latency query systems: an impedance mismatch between YARN and Impala/Spark. http://cloudera.github.io/llama/ The idea: a special AppMaster for low-latency apps, with AppMaster pooling (acting as a proxy to the resource negotiator) and gang scheduling (getting multiple containers at the same time). [Figure: 1. the Impala server launches Llama, the master (first time only); 2. it submits jobs through Llama; 3. Llama handles resource allocation with the ResourceManager and the NodeManagers; source: http://cdn.oreillystatic.com/en/assets/1/event/100/From%20Promise%20to%20a%20Platform_%20Next%20Steps%20in%20Bringing%20Workload%20Diversity%20to%20Hadoop%20Presentation.pdf]
  • 28. Long-lived services on YARN: YARN as a multitenant platform (like CloudFoundry or Mesosphere); Hoya: HBase on YARN, http://hortonworks.com/blog/hoya-hbase-on-yarn-application-architecture/
  • 29. Summary: we use Hadoop; we contribute to Hadoop; we described Hadoop's hot topics based on our experience in Apache Hadoop; YARN is now a real open kernel for big data, not only for Hadoop: ResourceManager HA, Llama, and long-running services.
  • 30. Links: "Hadoop - Lessons Learned from Deploying Enterprise Clusters" (Hadoop World NYC 2010) http://www.slideshare.net/cloudera/hadoop-world-2010-nycv12recruitclean ; "Hadoop's Life in Enterprise Systems" (Hadoop World NYC 2011) http://www.slideshare.net/cloudera/hadoop-world-2011-hadoops-life-in-enterprise-systems-y-masatani-ntt-data ; NTT Docomo Technical Journal, Mobile Spatial Statistics http://www.nttdocomo.co.jp/english/corporate/technology/rd/technical_journal/bn/vol14_3/index.html ; Large Scale Log Data Analysis for Marketing in NTT Communications (Hadoop World 2011) http://www.slideshare.net/cloudera/hadoop-world-2011-large-scale-log-data-analysis-for-marketing-in-ntt-communications ; "NTT DATA's Hadoop Solutions" (in Japanese) http://oss.nttdata.co.jp/hadoop/ ; "Hadoop's Application Areas and Future Prospects from the SI Business Perspective" (in Japanese, Hadoop Conference Japan 2009) http://www.slideshare.net/hadoopxnttdata/20091113-hadoopconf-japan2009-v1a-clean
  • 31. Links (YARN): The Next Generation of Apache Hadoop MapReduce http://developer.yahoo.com/blogs/hadoop/next-generation-apache-hadoop-mapreduce-3061.html ; Llama: Low Latency Application Master http://cloudera.github.io/llama ; Resource Management with YARN and Impala http://goo.gl/Rwq2aW ; Hoya: HBase on YARN http://hortonworks.com/blog/hoya-hbase-on-yarn-application-architecture/ ; Mesos: A Platform for Fine-Grained Resource Sharing in the Data Center (NSDI 2011) http://www.cs.berkeley.edu/~matei/papers/2011/nsdi_mesos.pdf ; Apache Hadoop YARN: Yet Another Resource Negotiator (SOCC 2013) http://goo.gl/Gnl9ZU ; Apache Tez http://hortonworks.com/hadoop/tez/ http://incubator.apache.org/projects/tez.html
