This document discusses Apache Hadoop 0.23, the first stable release of Hadoop in over 30 months. It introduces Arun Murthy, who is the release manager for Hadoop 0.23 and VP of the Apache Hadoop project. The release includes significant new features for HDFS federation, a new YARN framework for data processing, and over 2x performance improvements. Extensive testing is required to support very large clusters of tens of thousands of nodes processing hundreds of petabytes of data.
Hadoop World 2011: Apache Hadoop 0.23 - Arun Murthy, Hortonworks (Cloudera, Inc.)
The Apache Hadoop community is gearing up for the upcoming release of Apache Hadoop 0.23. This release has major enhancements to Hadoop such as HDFS Federation for hyper-scale and a Next Generation MapReduce framework. Arun, the Apache Hadoop Release Master for 0.23, will briefly cover the highlights of the release and pay particular attention to the plans and efforts undertaken to test, stabilize and release Hadoop.next. The talk covers some of the timelines for the release, as well as our plans for compatibility and upgrade paths for existing users of Hadoop.
Apache Hadoop 0.23 at Hadoop World 2011 (Hortonworks)
Presentation by Arun C Murthy (Founder/Architect, Hortonworks) on Apache Hadoop 0.23 (What it is and what it takes) at Hadoop World 2011 NYC.
Arun is the Founder/Architect at Hortonworks and is the VP, Apache Hadoop, ASF.
In this session, learn how to build an Apache Spark or Spark Streaming application that can interact with HBase. In addition, you'll walk through how to implement common, real-world batch design patterns to optimize for performance and scale.
Apache Tajo: A Big Data Warehouse System on Hadoop
Presented by Jae-hwa Jeong, Apache Tajo committer and senior research engineer at Gruter, in Bigdata World Convention 2014 at Oct.23, Busan, Korea
HBaseCon 2012 | HBase and HDFS: Past, Present, Future - Todd Lipcon, Cloudera (Cloudera, Inc.)
Apache HDFS, the file system on which HBase is most commonly deployed, was originally designed for high-latency high-throughput batch analytic systems like MapReduce. Over the past two to three years, the rising popularity of HBase has driven many enhancements in HDFS to improve its suitability for real-time systems, including durability support for write-ahead logs, high availability, and improved low-latency performance. This talk will give a brief history of some of the enhancements from Hadoop 0.20.2 through 0.23.0, discuss some of the most exciting work currently under way, and explore some of the future enhancements we expect to develop in the coming years. We will include both high-level overviews of the new features as well as practical tips and benchmark results from real deployments.
Adobe has packaged HBase in Docker containers and uses Marathon and Mesos to schedule them—allowing them to decouple the HBase RegionServer from the host, express resource requirements declaratively, and open the door for unassisted real-time deployments, elastic (up and down) real-time scalability, and more.
Digital Library Collection Management using HBase (HBaseCon)
Speaker: Ron Buckley (OCLC)
OCLC has been working over the last year to move its massive repository to HBase. This talk will focus on the impetus behind the move, implementation details and technology choices we've made (key design, shredding PDFs and other digital objects into HBase, scaling), and the value-add that HBase brings to digital collection management.
At Twitter we started out with a large monolithic cluster that served most of the use-cases. As the usage expanded and the cluster grew accordingly, we realized we needed to split the cluster by access pattern. This allows us to tune the access policy, SLA, and configuration for each cluster. We will explain our various use-cases, their performance requirements, and operational considerations and how those are served by the corresponding clusters. We will discuss what our baseline Hadoop node looks like. Various, sometimes competing, considerations such as storage size, disk IO, CPU throughput, fewer fast cores versus many slower cores, 1GE bonded network interfaces versus a single 10 GE card, 1T, 2T or 3T disk drives, and power draw all need to be considered in a trade-off where cost and performance are major factors. We will show how we have arrived at quite different hardware platforms at Twitter, not only saving money, but also increasing performance.
HBaseCon 2017: Efficient and portable data processing with Apache Beam and HBase (HBaseCon)
In this talk we introduce Apache Beam, a unified model to create efficient and portable data processing pipelines. Beam uses a single set of abstractions to implement both batch and streaming computations that can be executed in different environments, e.g. Apache Spark, Apache Flink and Google Dataflow. Beam not only does data processing, but can be used as a tool to ingest/extract data to/from different data stores including HBase. We will present interaction scenarios between HBase and Beam and explore Beam's Input/Output (IO) model and how we leverage it to provide support for HBase.
Hadoop is a software framework for distributed processing of large datasets across large clusters of computers.
Large datasets: terabytes or petabytes of data
Large clusters: hundreds or thousands of nodes
Speaker: Vladimir Rodionov (bigbase.org)
This talk introduces a completely new implementation of multilayer caching in HBase called BigBase. BigBase has a big advantage over HBase 0.94/0.96 because of its ability to utilize all available server RAM in the most efficient way, and because of a novel implementation of an L3-level cache on fast SSDs. The talk will show that different types of caches in BigBase work best for different types of workloads, and that a combination of these caches (L1/L2/L3) increases the overall performance of HBase by a very wide margin.
HBaseCon 2015: HBase Operations in a Flurry (HBaseCon)
With multiple clusters of 1,000+ nodes replicated across multiple data centers, Flurry has learned many operational lessons over the years. In this talk, you'll explore the challenges of maintaining and scaling Flurry's cluster, how we monitor, and how we diagnose and address potential problems.
This text discusses the technological advances and developments that have emerged in each era of history, in which the human being has been the protagonist of their invention and evolution.
The Apache Hadoop community is gearing up for the upcoming release of Apache Hadoop 0.23 - the first major release since 0.20 in 2009. This release has major enhancements to Hadoop such as HDFS Federation for hyper-scale and a Next Generation MapReduce framework. Arun, the Apache Hadoop Release Master for 0.23, will cover the highlights of the release and talk about the efforts undertaken to test, stabilize and release Hadoop.next. The talk covers some of the timelines for the release, as well as our plans for compatibility and upgrade paths for existing users of Hadoop.
Presented at Bay Area Hadoop User Group at Yahoo on 8/25/2011.
Apache HBase: Where We've Been and What's Upcoming (huguk)
Jon Hsieh, Software Engineer @ Cloudera and HBase Committer
Apache HBase is a distributed non-relational database that provides low-latency random read/write access to massive quantities of data. This talk will be broken up into two parts. First, I'll talk about how, in the past few years, HBase has been deployed in production at companies like Facebook, Pinterest, Groupon, and eBay, and about the vibrant community of contributors from around the world, including folks at Cloudera, Salesforce.com, Intel, Hortonworks, Yahoo!, and Xiaomi. Second, I'll talk about the features in the newest release, 0.96.x, and in the upcoming 0.98.x release.
Tcloud Computing Hadoop Family and Ecosystem Service 2013.Q2 (tcloudcomputing-tw)
The presentation is designed for those interested in Hadoop technology and covers topics such as community history, current development status, service features, the distributed computing framework, and big data development scenarios in the enterprise.
Hadoop Summit Europe Talk 2014: Apache Hadoop YARN: Present and FutureVinod Kumar Vavilapalli
Title: Apache Hadoop YARN: Present and Future
Abstract: Apache Hadoop YARN evolves the Hadoop compute platform from being centered only around MapReduce to being a generic data processing platform that can take advantage of a multitude of programming paradigms, all on the same data. In this talk, we'll talk about the journey of YARN from a concept to being the cornerstone of the Hadoop 2 GA releases. We'll cover the current status of YARN, how it is faring today, and how it stands apart from the monochromatic world that is Hadoop 1.0. We'll then move on to the exciting future of YARN - features that are making YARN a first-class resource-management platform for enterprise Hadoop: rolling upgrades, high availability, support for long-running services alongside applications, fine-grained isolation for multi-tenancy, preemption, application SLAs, and application history, to name a few.
This is the basis for some talks I've given at the Microsoft Technology Center, the Chicago Mercantile Exchange, and local user groups over the past two years. It's a bit dated now, but it might be useful to some people. If you like it, have feedback, or would like someone to explain Hadoop or how it and other new tools can help your company, let me know.
1. Apache Hadoop 0.23
What it takes and what it means…
Page 1
Arun C. Murthy
Founder/Architect, Hortonworks
@acmurthy (@hortonworks)
2. Hello! I’m Arun
• Founder/Architect at Hortonworks Inc.
– Formerly, Architect Hadoop MapReduce, Yahoo
– Responsible for running Hadoop MR as a service for all of Yahoo (50k-node footprint)
– Yes, I took the 3am calls!
• Apache Hadoop, ASF
– VP, Apache Hadoop, ASF (Chair of Apache Hadoop PMC)
– Long-term Committer/PMC member (full time ~6 years)
– Release Manager - hadoop-0.23
3. Releases so far…
• Started for Nutch… Yahoo picked it up in early 2006, hired Doug Cutting
• Initially, we did monthly releases (0.1, 0.2 …)
• Quarterly after hadoop-0.15 until hadoop-0.20 in 04/2009…
• hadoop-0.20 is still the basis of all current, stable, Hadoop distributions
– Apache Hadoop 0.20.2xx
– CDH3.*
– HDP1.*
• hadoop-0.20.203 (security) – 05/2011
• hadoop-0.20.205 (security + append for HBase) – 10/2011
[Timeline, 2006-2012: hadoop-0.1.0 → hadoop-0.10.0 → hadoop-0.20.0 (2009) → hadoop-0.20.205 → hadoop-0.23.0]
4. hadoop-0.23
• First stable release off Apache Hadoop trunk in over 30 months…
• The alpha (hadoop-0.23.0) is currently under vote by the Hadoop PMC
• Significant major features
• Several, several enhancements
6. MapReduce - YARN
• NextGen Hadoop Data Processing Framework
• Support MR and other paradigms
• Mahadev Konar (Hortonworks) – Tue 4.30pm
[Diagram: YARN architecture. Clients submit jobs to the ResourceManager; each application gets an App Master running in a container on a NodeManager; App Masters send Resource Requests to the ResourceManager; NodeManagers report Node Status and host the application's containers; MapReduce status flows from the App Master back to the client.]
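The flow in the architecture diagram can be sketched as a toy model (the class names, memory sizes, and placement logic here are illustrative only, not the real YARN API): the ResourceManager places an App Master in a container on some NodeManager, and the App Master then requests further containers for its tasks.

```python
class NodeManager:
    """Toy NodeManager: tracks free memory and the containers launched on it."""
    def __init__(self, name, capacity_mb):
        self.name = name
        self.free_mb = capacity_mb
        self.containers = []

    def launch(self, container):
        self.free_mb -= container["mb"]
        self.containers.append(container)


class ResourceManager:
    """Toy scheduler: place each container request on the first node with room."""
    def __init__(self, nodes):
        self.nodes = nodes

    def allocate(self, app_id, mb):
        for node in self.nodes:   # in YARN, NodeManagers heartbeat their free capacity
            if node.free_mb >= mb:
                container = {"app": app_id, "mb": mb, "node": node.name}
                node.launch(container)
                return container
        return None               # request waits until capacity frees up


rm = ResourceManager([NodeManager("nm1", 4096), NodeManager("nm2", 4096)])
am = rm.allocate("app_0001", 1024)                         # RM places the App Master
tasks = [rm.allocate("app_0001", 2048) for _ in range(3)]  # AM asks for task containers
```

The point of the split, as the slide says, is that the ResourceManager only does placement; per-application logic lives in the App Master, so paradigms other than MapReduce can run on the same cluster.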
7. Performance
• 2x+ across the board
• HDFS read/write
– CRC32
– fadvise
– Shortcut for local reads
• MapReduce
– Unlock lots of improvements from Terasort record (Owen/Arun, 2009)
– Shuffle 30%+
– Small Jobs – Uber AM
• Todd Lipcon (Cloudera) – Wed 10am
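The CRC32 item above can be illustrated with a toy version of HDFS-style checksumming (a sketch only: HDFS stores a checksum per fixed-size chunk of each block, and 0.23's speedup came from a faster CRC implementation; the 512-byte chunk size matches HDFS's default bytes-per-checksum, but the function names are mine):

```python
import zlib

CHUNK_SIZE = 512  # HDFS default bytes-per-checksum

def chunk_checksums(data: bytes, chunk_size: int = CHUNK_SIZE) -> list:
    """Compute one CRC32 per fixed-size chunk, as HDFS does for each block."""
    return [zlib.crc32(data[i:i + chunk_size])
            for i in range(0, len(data), chunk_size)]

def verify(data: bytes, checksums: list, chunk_size: int = CHUNK_SIZE) -> bool:
    """Re-compute and compare; any mismatch signals on-disk corruption."""
    return chunk_checksums(data, chunk_size) == checksums

block = b"x" * 1300                          # a toy "block" spanning three chunks
sums = chunk_checksums(block)
assert verify(block, sums)                   # clean read passes
corrupted = block[:600] + b"y" + block[601:]
assert not verify(corrupted, sums)           # a single flipped byte is caught
```

Because this check runs on every read, making the CRC itself cheaper translates directly into higher HDFS read throughput.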
8. HDFS NameNode HA
• The famous SPOF
• https://issues.apache.org/jira/browse/HDFS-1623
• Well on the way to a fix in hadoop-0.23.1/2
• Suresh Srinivas (Hortonworks), Aaron Myers (Cloudera) – Tue 2.15pm
9. More…
• HDFS write pipeline improvements for HBase
– Append/flush etc.
• Build - Full Mavenization
• EditLogs re-write
– https://issues.apache.org/jira/browse/HDFS-1073
• Tonnes more …
10. Deployment goals
• Clusters of 6,000 machines
– Each machine with 16+ cores, 48G/96G RAM, 24TB/36TB disks
– 200+ PB (raw) per cluster
– 100,000+ concurrent tasks
– 10,000 concurrent jobs
• Yahoo: 50,000+ machines
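A quick arithmetic sanity check on the targets above (assuming the larger 36 TB-per-machine configuration and decimal TB-to-PB conversion):

```python
machines = 6000
disk_tb = 36                  # TB of raw disk per machine (upper configuration)
concurrent_tasks = 100_000

raw_pb = machines * disk_tb / 1000            # 216.0 PB raw per cluster -> "200+ PB"
tasks_per_node = concurrent_tasks / machines  # ~16.7, about one task per core on 16+ core nodes

print(raw_pb, round(tasks_per_node, 1))       # 216.0 16.7
```

So the "200+ PB (raw)" and "100,000+ concurrent tasks" figures are consistent with 6,000 machines of the stated hardware.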
11. What does it take to get there?
• Testing, *lots* of it
• Benchmarks – At least as good as the last one
• Integration testing
– HBase
– Pig
– Hive
– Oozie
• Deployment discipline
12. Testing
• Why is it hard?
– MapReduce is, effectively, a very wide API
– Add Streaming
– Add Pipes
– Oh, Pig/Hive etc. etc.
• Functional tests
– Nightly
– Nearly 1000 functional tests for MapReduce alone
– Several hundred for Pig/Hive etc.
• Scale tests
– Simulation
• Longevity tests
• Stress tests
13. Benchmarks
• Benchmark every part of the HDFS & MR pipeline
– HDFS read/write throughput
– NN operations
– Scan, Shuffle, Sort
• GridMixv3
– Run production traces in test clusters
– Thousands of jobs
– Stress mode vs. Replay mode
14. Integration Testing
• Several projects in the ecosystem
– HBase
– Pig
– Hive
– Oozie
• Cycle
– Functional
– Scale
– Rinse, repeat
15. Deployment
• Alpha/Test (early UAT)
– Starting Nov, 2011
– Small scale (500-800 nodes)
• Alpha
– Jan, 2012
– Majority of users
– 2000 nodes per cluster, > 10,000 nodes in all
• Beta
– Misnomer: 100s of PB, Millions of user applications
– Significantly wide variety of applications and load
– 4000+ nodes per cluster, > 20000 nodes in all
– Late Q1, 2012
• Production
– Well, it’s production
– Mid-to-late Q2 2012