U B E R | Data
Hadoop Infrastructure
@Uber: Past, Present and Future
Mayank Bansal
U B E R | Data
Who Am I
Senior Software Engineer, Uber
Hadoop Committer, Oozie PMC
Past: Sr. Staff @ eBay, worked on Hadoop
Sr. Eng @ Yahoo, worked on Oozie
U B E R | Data
Agenda
• Past
• Challenges
• Present
• Done Along the Way
• Future
• Work Ahead
U B E R | Data
Uber’s Mission
“Transportation as reliable as running water, everywhere, for everyone”
75+ Countries, 500+ Cities
And growing…
U B E R | Data
How Uber works
U B E R | Data
How Uber works
U B E R | Data
How Uber works
U B E R | Data
Data Driven Decisions
U B E R | Data
Uber’s Data Audience
● 1000s of City Operators (Uber Ops!)
○ On-the-ground teams who run and scale Uber’s transportation network
● 100s of Data Scientists and Analysts
○ Spread across various functional groups including Engineering, Marketing, BizDev, etc.
● 10s of Engineering Teams
○ Focused on building automated data applications
U B E R | Data
Data Infra Once Upon a Time… (2014)
Sources: Kafka logs, key-value DB, RDBMS DBs, S3, applications, …
ETL into Vertica (data warehouse) and EMR
Consumers: Business Ops, A/B experiments, ad hoc analytics, City Ops, Data Science
U B E R | Data
Pain Points
● Scalability
○ Data grew faster than we expected
● Reliability
○ There were no checks in place to validate the data
U B E R | Data
Hadoop Scale (2015)
~Few servers, some data
No Hive, no Presto
~100 jobs/day
Some Spark apps
U B E R | Data
Data Infrastructure Today
Sources: Kafka8 logs, Schemaless DB, SOA DBs, service accounts, …
ETL into HDFS, with Hive, Spark | Presto, and Vertica on top
Consumers: machine learning, experimentation, data science, ad hoc analytics, Ops/Data Science, City Ops
U B E R | Data
Hadoop Scale Today
~Few thousand servers
Many, many PBs
~20k Hive queries/day
~100k Presto queries/day
100k jobs/day
Few thousand Spark apps/day
U B E R | Data
A few things we solved along the way…
● Strict Schema Management
○ Because our largest data audience is SQL savvy! (1000s of Uber Ops!)
○ SQL = Strict Schema
● Big Data Processing Tools Unlocked - Hive, Presto and Spark
○ Migrate SQL-savvy users from Vertica to Hive & Presto (1000s of Ops & 100s of data scientists & analysts)
○ Spark for more advanced users - 100s of data scientists
U B E R | Data
A few things we solved along the way…
● Scalable Ingestion Model
○ Data grows exponentially
○ Need to think about this from the beginning
● Data Tools
○ Automated Hive registration (Hdrone)
○ Janus, an HTTP endpoint for running Hive and Presto queries (see the sketch after this list)
○ Used by the query builder
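A minimal sketch of how a client might submit a query through an HTTP query gateway like Janus. The endpoint URL, payload fields, and response shape below are illustrative assumptions for this sketch, not Janus’s actual API.

    import requests  # standard HTTP client

    # Hypothetical Janus-style gateway endpoint; URL and payload fields are
    # assumptions for illustration only.
    JANUS_URL = "http://janus.example.internal/api/v1/query"

    def run_query(sql, engine="presto"):
        """Submit a SQL query to the gateway and return result rows (sketch)."""
        resp = requests.post(JANUS_URL, json={"engine": engine, "sql": sql})
        resp.raise_for_status()
        return resp.json()["rows"]  # assumed response field

    # Example: the same SQL could be routed to Hive or Presto by the gateway.
    rows = run_query("SELECT city_id, COUNT(*) FROM trips GROUP BY city_id")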
U B E R | Data
Yay, We Did it !!!
U B E R | Data
Now What ???
U B E R | Data
Hadoop Evolution @ Uber
2014: Few nodes, some data
2015: ~100s of nodes, a few PB of data
2016: ~1000 nodes, ~10s of PB of data
2017: ~5000 nodes, ~100s of PB of data
For comparison, Hadoop @ eBay: 3000+ nodes, 30,000+ cores, 50+ PB
U B E R | Data
Hadoop Cluster Utilization
• Over-provisioning for the peak loads
• Over capacity in anticipation of future growth
U B E R | Data
Mesos Evolution @ Uber
2014: 0 nodes
2015: Few nodes
2016: ~1000s of nodes
2017: ~10s of thousands of nodes
For comparison, Hadoop @ eBay: 3000+ nodes, 30,000+ cores, 50+ PB
U B E R | Data
Mesos Cluster Utilization
• Over-provisioning for the peak loads
• Over capacity in anticipation of future growth
U B E R | Data
End Goal
Online services and batch (Presto) workloads sharing the same resources
U B E R | Data
What do we need?
GLOBAL VIEW OF RESOURCES
U B E R | Data
Available Resource Managers
U B E R | Data
Mesos vs YARN
• Single-level scheduler (YARN) vs. two-level scheduler (Mesos) - Mesos scales better
• Both use cgroups for isolation - similar isolation
• CPU and memory as resources (YARN) vs. CPU, memory and disk as resources (Mesos) - Mesos handles disk better
• Works well with Hadoop workloads (YARN) vs. works well with longer-running services (Mesos) - this is important
• YARN supports time-based reservations; Mesos does not support reservations - important for batch SLAs
• Dominant resource scheduling (YARN) vs. scheduling done by frameworks on a case-by-case basis (Mesos) - better for batch (sketch below)
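To make the "dominant resource scheduling" row concrete, here is a minimal sketch of Dominant Resource Fairness, the idea behind YARN’s fair scheduling across CPU and memory. The capacities, usage numbers, and function names are illustrative, not YARN’s implementation.

    # Minimal Dominant Resource Fairness (DRF) sketch; numbers are illustrative.
    CAPACITY = {"cpu": 1000, "mem_gb": 4000}

    def dominant_share(usage):
        """A user's dominant share is its largest share across resource types."""
        return max(usage[r] / CAPACITY[r] for r in CAPACITY)

    def next_user_to_serve(usage_by_user):
        """DRF gives the next container to the user with the lowest dominant share."""
        return min(usage_by_user, key=lambda u: dominant_share(usage_by_user[u]))

    usage = {
        "etl":       {"cpu": 300, "mem_gb": 400},   # dominant share = 0.30 (cpu)
        "analytics": {"cpu": 100, "mem_gb": 1000},  # dominant share = 0.25 (mem)
    }
    print(next_user_to_serve(usage))  # -> "analytics"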
U B E R | Data
In a Nutshell
Let’s tie them together:
YARN is good for Hadoop / batch
Mesos is good for longer-running services
U B E R | Data
U B E R | Data
• Myriad is a Mesos framework for Apache YARN
• Mesos manages data center resources
• YARN manages Hadoop workloads
• Myriad
• Gets resources from Mesos
• Launches Node Managers (see the sketch after this list)
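A minimal sketch of what a Myriad-like framework does: accept Mesos resource offers and launch YARN NodeManagers on them. Offers are modeled as plain dicts rather than the real Mesos or Myriad APIs; every name and number below is an illustrative assumption.

    # Illustrative model of a Myriad-style offer loop (not the real Mesos API).
    NM_CPU, NM_MEM_GB = 8, 32  # assumed size of one NodeManager

    def on_resource_offers(offers, launch_nodemanager, decline):
        """For each Mesos offer, launch a NodeManager if it fits, else decline."""
        for offer in offers:  # offer: {"host": str, "cpu": float, "mem_gb": float}
            if offer["cpu"] >= NM_CPU and offer["mem_gb"] >= NM_MEM_GB:
                # Hand this slice of the data center over to YARN.
                launch_nodemanager(offer["host"], cpu=NM_CPU, mem_gb=NM_MEM_GB)
            else:
                decline(offer)  # Mesos keeps these resources for other frameworks

    # Example usage with stubbed callbacks:
    on_resource_offers(
        [{"host": "h1", "cpu": 16, "mem_gb": 64}, {"host": "h2", "cpu": 2, "mem_gb": 8}],
        launch_nodemanager=lambda host, cpu, mem_gb: print("NM launched on", host),
        decline=lambda offer: print("declined", offer["host"]),
    )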
U B E R | Data
Myriad’s Limitations: Static Resource Partitioning
• YARN will handle the resources handed over to it
• Mesos will work with the rest of the resources
U B E R | Data
Myriad’s Limitations: Resource Oversubscription
• YARN will never be able to do oversubscription
• Node Managers will go away
• Fragmentation of resources
• Mesos oversubscription can kill YARN too
U B E R | Data
Myriad’s Limitations
• No global quota enforcement
• No global priorities
U B E R | Data
Myriad’s Limitations
• Elastic resource management
• Utilization
• Stability
• Long list…
U B E R | Data
Unified Scheduler
U B E R | Data
Few Takeaways …
• We need one scheduling layer across all workloads
• Partitioning resources is not good
• Can save at least 20-30% of resources (see the arithmetic sketch after this list)
• Stability and simplicity win in production
• Multiple levels of resource management and scheduling will not scale
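A small arithmetic sketch of where savings on that order can come from: statically partitioned pools are each sized for their own peak, while a shared pool only needs to cover the combined peak. The peak numbers are made up for illustration.

    # Illustrative arithmetic: sharing vs. static partitioning.
    # Assumed peaks (cores); batch peaks at night, online peaks during the day.
    batch_peak, online_peak = 6000, 8000
    combined_peak = 10000   # assumed: the two peaks do not fully overlap

    partitioned = batch_peak + online_peak   # each pool sized for its own peak
    shared = combined_peak                   # one pool sized for the combined peak
    savings = 1 - shared / partitioned
    print(f"{savings:.0%} fewer cores")      # -> 29% fewer cores with these numbers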
U B E R | Data
High Level Characteristics
• Global quota management
• Central scheduling policies
• Oversubscription for both online and batch
• Isolation and bin packing (see the sketch after this list)
• SLA guarantees at the global level
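A minimal sketch of how global quota enforcement and bin packing could fit together in one placement step of a unified scheduler. The quota model, host representation, and best-fit policy are illustrative assumptions, not Peloton’s implementation.

    # Illustrative placement step: enforce the team's global quota, then
    # bin-pack the task onto the fullest host that still fits.
    QUOTA_CPU = {"maps": 2000, "marketplace": 3000}   # assumed global quotas (cores)

    def place(task, usage_by_team, hosts):
        """task: {"team": str, "cpu": int}; hosts: [{"name": str, "free_cpu": int}]."""
        team = task["team"]
        if usage_by_team.get(team, 0) + task["cpu"] > QUOTA_CPU[team]:
            return None  # global quota exhausted, the task waits
        # Best-fit bin packing: the host with the least free CPU that still fits.
        candidates = [h for h in hosts if h["free_cpu"] >= task["cpu"]]
        if not candidates:
            return None
        host = min(candidates, key=lambda h: h["free_cpu"])
        host["free_cpu"] -= task["cpu"]
        usage_by_team[team] = usage_by_team.get(team, 0) + task["cpu"]
        return host["name"]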
U B E R | Data
Unified Scheduler
U B E R | Data
Different Schedulers
U B E R | Data
Peloton
U B E R | Data
Peloton - Architecture
U B E R | Data
Peloton – Initial Results
U B E R | Data
Peloton – Done so far
• Batch workloads support
• Spark support
• GPU support
• Distributed TensorFlow support
• Gang scheduling (see the sketch after this list)
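A minimal sketch of the idea behind gang scheduling (useful for Spark executors and distributed TensorFlow workers): place every task in the gang atomically, or place none of them. The data structures are illustrative, not Peloton’s API.

    # Illustrative gang-scheduling admission: all-or-nothing placement.
    def schedule_gang(gang_cpu_demands, hosts):
        """gang_cpu_demands: CPU per task; hosts: {host: free_cpu}.
        Returns {task_index: host} if the whole gang fits, else None."""
        trial = dict(hosts)            # plan against a copy first
        placement = {}
        for i, cpu in enumerate(gang_cpu_demands):
            fit = next((h for h, free in trial.items() if free >= cpu), None)
            if fit is None:
                return None            # one task doesn't fit -> place nothing
            trial[fit] -= cpu
            placement[i] = fit
        hosts.update(trial)            # commit only when every task was placed
        return placement

    # Example: a 3-worker gang on two hosts.
    print(schedule_gang([4, 4, 4], {"h1": 8, "h2": 4}))  # -> all three placed
    print(schedule_gang([4, 4, 4], {"h1": 4, "h2": 4}))  # -> None (gang waits)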
U B E R | Data
Peloton – WIP
• YARN APIs
• Stateful and stateless services
• Separate placement engines
• Stateful
• Stateless
• Control plane
• Peloton deploys Peloton
U B E R | Data
Peloton – Timelines
• Beta Released
• Production Early Q3
• Open Source – Q3-Q4 time frame
U B E R | Data
Peloton – Team
Min Shi Jimmy Eskil
Zhitao Tengfei Anant Mayank
U B E R | Data
U B E R | Data
Questions?
mabansal@uber.com
mayank@apache.org
U B E R | Data
Thank You !!!
