Spark Meetup at Uber

Slides from the Spark Meetup at Uber

1. DATA: Spark & Hadoop @ Uber
2. Who We Are
Early Engineers On Hadoop team @ Uber
Kelvin Chu, Reza Shiftehfar, Vinoth Chandar
3. Agenda
● Intro to Data @ Uber
● Trips Pipeline Into Warehouse
● Paricon
● INotify DStream
● Future
4. Uber’s Mission
“Transportation as reliable as running water, everywhere, for everyone”
300+ Cities, 60+ Countries, and growing...
5. Data @ Uber
● Impact of Data is Huge!
○ 2000+ Unique Users Operating a massive transportation system
● Running critical business operations
○ Payments, Fraud, Marketing Spend, Background Checks …
● Unique & Interesting Problems
○ Supply vs Demand - Growth
○ Geo-Temporal Analytics
● Latency Is King
○ Enormous business value in making data available asap
6. Data Architecture: Circa 2014
[Architecture diagram: Kafka logs, Schemaless databases, and RDBMS tables feed a Bulk Uploader into Amazon S3; EMR and Celery/Python ETL load the OLAP warehouse, which serves applications and adhoc SQL]
7. Challenges
● Scaling to high volume Kafka streams
○ eg: Event data coming from phones
● Merged Views of DB Changelogs across DCs
○ Some of the most important data - trips (duh!)
● Fragile ingestion model
○ Projections/Transformation in pipelines
○ Data Lake philosophy - raw data on HDFS, transform later using Spark
● Free-form JSON data → Data Breakages
● First order of business - Reliable Data
8. New World Order: Hadoop & Spark
[Architecture diagram: Kafka logs, Schemaless databases, and RDBMS tables land as raw data in Amazon S3 and HDFS via data delivery services and Spark/Spark Streaming; Paricon, Spark SQL, Spark/Hive, and Spark jobs on Oozie cook the raw data for the OLAP warehouse, applications, adhoc SQL, and machine learning]
9. Trips Pipeline : Problem
● Most Valuable Dataset in Uber (100% Accuracy)
● Trips stored in Uber’s ‘schemaless’ datastores (sharded MySQL), across DCs, cross-replicated
● Need a consolidated view across DCs, quickly (~1-2 hr end-to-end)
[Diagram: Trip Store in DC1 and Trip Store in DC2 each take writes in their own DC, kept in sync by multi-master XDC replication]
10. Trips Pipeline : Architecture
11. Trips Pipeline : ETL via SparkSQL
● Decouples raw ingestion from Relational Warehouse table model
○ Ability to provision multiple tables off same data set
● Picks latest changelog entry in the files
○ Applies them in order
● Applies projections & row level transformations
○ Produce ingestible data into Warehouse
● Uses HiveContext to gain access to UDFs
○ explode() etc to flatten JSON arrays (see the sketch below)
● Scheduled Spark Job via Oozie
○ Runs every hour (tunable)
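As a rough illustration of this ETL step (a minimal sketch only: the paths, the fare_components array column, and its fields are hypothetical, and Spark 1.4+ APIs are assumed):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object TripsEtlSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("trips-etl-sketch"))
    val hc = new HiveContext(sc)

    // Hypothetical input: consolidated changelog rows written by the raw ingestion step.
    hc.read.json("hdfs:///data/trips/changelog/2015/09/01")
      .registerTempTable("trip_changelog")

    // HiveContext gives access to Hive UDFs such as explode(); here a hypothetical
    // JSON array column "fare_components" is flattened into one row per element.
    val flattened = hc.sql(
      """SELECT row_key, fc.amount, fc.currency
        |FROM trip_changelog
        |LATERAL VIEW explode(fare_components) f AS fc""".stripMargin)

    // Warehouse-ingestible output; the real job is scheduled hourly via Oozie.
    flattened.write.parquet("hdfs:///warehouse/trips/2015/09/01")
  }
}
```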
12. Paricon : PARquet Inference and CONversion
● Running in production since February 2015
○ first Spark application at Uber
13. Motivation 1: Data Breakage & Evolution
[Diagram: upstream data producers write JSON to S3 and downstream data consumers read it; the data evolves over time ... and one day it breaks]
14. Motivation 1: Why Schema
● Contract
○ multiple teams
○ producers
○ consumers
● Avoid data breakage
○ because we have schema evolution systems
● Data to persist in a typed manner
○ analytics
● Serve as documentation
○ understand data faster
● Unit testable
15. Paricon : Workflow
[Workflow diagram: Transfer → Infer → Convert → Validate; JSON/Gzip files on S3 are transferred, an Avro schema is inferred, data is converted to Parquet on HDFS, and schemas are reviewed / consumed through an in-house Schema Repository and Management Systems]
16. Motivation 2: Why Parquet
● Supports schema
● 2 to 4 times FASTER than json/gzip
○ column pruning
■ wide tables at Uber
○ filter predicate push-down
○ compression
● Strong Spark support
○ SparkSQL
○ schema evolution (see the sketch below)
■ schema merging in Spark v1.3
■ merge old and new compatible schema versions
■ no “Alter table ...”
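A minimal illustration of Parquet schema merging, assuming Spark 1.5+ where merging is requested via the mergeSchema read option (paths are hypothetical; in Spark 1.3/1.4 merging happened by default when reading Parquet):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object SchemaMergeSketch {
  def main(args: Array[String]): Unit = {
    val sc  = new SparkContext(new SparkConf().setAppName("schema-merge-sketch"))
    val sql = new SQLContext(sc)

    // Two Parquet paths written with old and new compatible schemas (hypothetical);
    // Spark merges them into a single DataFrame schema, with no "ALTER TABLE".
    val merged = sql.read
      .option("mergeSchema", "true")
      .parquet("hdfs:///data/topic-foo/2015/01", "hdfs:///data/topic-foo/2015/06")

    merged.printSchema()
  }
}
```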
17. Paricon : Transfer
● distcp on Spark
○ only subset of command-line options currently
● Approach (sketched below)
○ compute the files list and assign them to RDD partitions
○ avoid stragglers by randomly grouping different dates
● Extras
○ Uber specific logic
■ filename conventions
■ backup policies
○ internal Spark eco-system
○ faster homegrown delta computation
○ get around s3a problem in Hadoop 2.6
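A minimal sketch of the distcp-on-Spark approach described above (the source/destination URIs, partition count, and file layout are hypothetical; the real Paricon transfer adds the Uber-specific filename and backup logic):

```scala
import scala.util.Random
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, FileUtil, Path}
import org.apache.spark.{SparkConf, SparkContext}

object SparkDistcpSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("spark-distcp-sketch"))

    // 1. Compute the list of source files on the driver (hypothetical locations).
    val srcRoot = new Path("s3a://uber-logs/topic-foo/")
    val dstRoot = "hdfs:///raw/topic-foo/"
    val srcFiles = FileSystem.get(srcRoot.toUri, new Configuration())
      .listStatus(srcRoot).map(_.getPath.toString)

    // 2. Shuffle so files from different dates mix across partitions,
    //    which avoids straggler partitions made of one huge day.
    val shuffled = Random.shuffle(srcFiles.toSeq)

    // 3. Copy each file from inside the executors.
    sc.parallelize(shuffled, numSlices = 64).foreach { src =>
      val conf = new Configuration()
      val from = new Path(src)
      val to   = new Path(dstRoot + from.getName)
      FileUtil.copy(from.getFileSystem(conf), from,
                    to.getFileSystem(conf), to,
                    false /* deleteSource */, conf)
    }
  }
}
```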
18. Paricon : Infer
● Infer by JsonRDD
○ but not directly
● Challenge: Data is dirty
○ garbage in garbage out
● Two passes approach (sketched below)
○ first: data cleaning
○ second: JsonRDD inference
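A rough two-pass sketch, assuming Spark 1.4+ where sqlContext.read.json(rdd) runs the JsonRDD-style schema inference; cleanRecord is a stand-in for the rules engine on the next slide, and the paths are hypothetical:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object PariconInferSketch {
  // Stand-in for the rules-based cleaner: keep only records that look like JSON objects.
  def cleanRecord(json: String): Option[String] = {
    val t = json.trim
    if (t.startsWith("{") && t.endsWith("}")) Some(t) else None
  }

  def main(args: Array[String]): Unit = {
    val sc  = new SparkContext(new SparkConf().setAppName("paricon-infer-sketch"))
    val sql = new SQLContext(sc)

    // Pass 1: data cleaning.
    val raw     = sc.textFile("hdfs:///raw/topic-foo/2015/09/01/*.gz")
    val cleaned = raw.flatMap(cleanRecord)

    // Pass 2: let Spark SQL's JSON inference derive a schema from the cleaned records.
    val df = sql.read.json(cleaned)
    println(df.schema.treeString)   // inspect / export the inferred schema
  }
}
```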
19. Paricon : Infer
● Data cleaning (see the sketch below)
○ structured as rules-based engine
○ each rule is an expectation
○ all rules are heuristics
■ based on business domain knowledge
○ the rules are pluggable based on topics
● Struct@JsonRDD vs Avro:
○ illegal characters in field names
○ repeating group names
○ more
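One possible shape for such a pluggable, rules-based cleaner (class and rule names are illustrative, not Paricon's actual code):

```scala
// Each rule encodes one heuristic expectation about a record; the set of rules
// is chosen per topic, and a record survives only if every rule accepts it.
trait CleaningRule extends Serializable {
  def name: String
  def accepts(record: String): Boolean
}

object NonEmptyJsonObject extends CleaningRule {
  val name = "non-empty-json-object"
  def accepts(record: String): Boolean = {
    val t = record.trim
    t.startsWith("{") && t.endsWith("}") && t.length > 2
  }
}

object LegalFieldNames extends CleaningRule {
  val name = "legal-field-names"
  // Avro-style field names: reject records whose keys contain illegal characters.
  private val key = "\"([^\"]+)\"\\s*:".r
  def accepts(record: String): Boolean =
    key.findAllMatchIn(record).forall(_.group(1).matches("[A-Za-z_][A-Za-z0-9_]*"))
}

class RulesEngine(rules: Seq[CleaningRule]) extends Serializable {
  def clean(record: String): Option[String] =
    if (rules.forall(_.accepts(record))) Some(record) else None
}
```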
20. Paricon : Convert
● Incremental conversion (sketched below)
○ assign days to RDD partitions
○ computation and checkpoint unit: day
○ new job or after failure: work on those partial days only
● Most code among the four tasks
○ multiple source formats (encoded vs non-encoded)
○ data cleaning based on inferred schema
○ home grown JSON decoder for Avro
○ file stitching
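A simplified sketch of the day-level checkpointing idea (paths and the _SUCCESS marker convention are hypothetical, and days are processed sequentially here rather than mapped onto RDD partitions):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.{SparkConf, SparkContext}

object IncrementalConvertSketch {
  def isDayDone(fs: FileSystem, day: String): Boolean =
    fs.exists(new Path(s"/parquet/topic-foo/$day/_SUCCESS"))

  def convertDay(sc: SparkContext, day: String): Unit = {
    // ... read the day's JSON, clean it against the inferred schema,
    //     decode to Avro records, write stitched Parquet files ...
  }

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("paricon-convert-sketch"))
    val fs = FileSystem.get(new Configuration())

    val allDays     = (1 to 30).map(d => f"2015/09/$d%02d")
    val pendingDays = allDays.filterNot(isDayDone(fs, _))

    // Day is the unit of computation and checkpointing: a new run, or a run
    // restarted after a failure, only touches days without a completion marker.
    pendingDays.foreach(convertDay(sc, _))
  }
}
```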
21. Stitching : Motivation
[Chart: file sizes and number of files relative to the HDFS block size]
● Inefficient for HDFS
● Many large files
○ break them
● But a lot more small files
○ stitch them
22. Stitching : Goal
[Diagram: each Parquet file contains one Parquet block and fits just under one HDFS block, instead of Parquet blocks straddling HDFS block boundaries]
● One parquet block per file
● Parquet file slightly less than HDFS block
23. Stitching : Algorithms
● Algo1: Estimate a constant before conversion
○ pros: easy to do
○ cons: does not work well with temporal variation
● Algo2: Estimate during conversion, per RDD partition (sketched below)
○ each day has its own estimate
○ may even self-tune during the day
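A toy version of the second algorithm (the block size, compression ratio, and sampling are illustrative): estimate bytes per record from a sample of the day's data, then pick a records-per-file count that lands each Parquet file slightly under the HDFS block size.

```scala
object StitchingEstimateSketch {
  // Aim slightly below the HDFS block size (256 MB assumed here).
  val hdfsBlockBytes  = 256L * 1024 * 1024
  val targetFileBytes = (hdfsBlockBytes * 0.95).toLong

  /** Records to pack per Parquet file for one day / RDD partition. */
  def recordsPerFile(sampleRecords: Seq[String], parquetRatio: Double = 0.3): Long = {
    // Average raw record size, measured on a sample of this partition's data.
    val avgRawBytes = sampleRecords.map(_.getBytes("UTF-8").length.toLong).sum /
                      math.max(sampleRecords.size, 1)
    // Guessed size per record after Parquet encoding and compression
    // (parquetRatio is a per-topic tuning knob, not a measured constant).
    val avgParquetBytes = math.max((avgRawBytes * parquetRatio).toLong, 1L)
    targetFileBytes / avgParquetBytes
  }
}
```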
24. Stitching : Experiments
● I/O cost model
○ N: number of Parquet files
○ Si: size of the i-th Parquet file
○ B: HDFS block size
○ First part: local I/O - files slightly smaller than the HDFS block
○ Second part: network I/O - penalty of files going over a block
● Benchmark queries
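The formula itself is not recoverable from the transcript; purely as an illustration of how these quantities could combine (c_local and c_remote are hypothetical per-byte costs, not from the slide), a two-part read cost of the form

    cost = c_local * sum_{i=1..N} min(Si, B) + c_remote * sum_{i=1..N} max(Si - B, 0)

charges local I/O for the bytes of each file that fit within its first HDFS block, and a network penalty for the bytes that spill past it.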
25. Paricon : Validate
● Modeled as “Source and converted tables join” (sketched below)
○ equi-join on primary key
○ compare the counts
○ compare the columns content
● SparkSQL
○ easy for implementation
○ hard for performance tuning
● Debugging tools
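A sketch of this validation in Spark SQL (the paths, the trip_id key, and the compared columns are made up): count both sides, then equi-join on the primary key and flag rows whose column contents differ.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object PariconValidateSketch {
  def main(args: Array[String]): Unit = {
    val sc  = new SparkContext(new SparkConf().setAppName("paricon-validate-sketch"))
    val sql = new SQLContext(sc)

    // Hypothetical locations of the raw and converted data for one day.
    sql.read.json("hdfs:///raw/topic-foo/2015/09/01").registerTempTable("source")
    sql.read.parquet("hdfs:///parquet/topic-foo/2015/09/01").registerTempTable("converted")

    // Compare counts.
    val srcCount = sql.sql("SELECT COUNT(*) FROM source").first().getLong(0)
    val cnvCount = sql.sql("SELECT COUNT(*) FROM converted").first().getLong(0)

    // Equi-join on the primary key and compare column contents (made-up columns).
    val mismatches = sql.sql(
      """SELECT s.trip_id
        |FROM source s JOIN converted c ON s.trip_id = c.trip_id
        |WHERE s.fare <> c.fare OR s.status <> c.status""".stripMargin)

    println(s"source=$srcCount converted=$cnvCount mismatched=${mismatches.count()}")
  }
}
```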
26. Some Production Numbers
● Inferred: >120 topics
● Converted: >40 topics
● Largest single job so far
○ process 15TB compressed (140TB uncompressed) data
○ one single topic
○ recover from multiple failures by checkpoints
● Numbers are increasing ...
27. Lessons
● Implement custom finer checkpointing
○ S3 data network fee
○ job/task failures -> download all data repeatedly
○ to save money and time
● There is no perfect data cleaning
○ 100% clean is not needed often
● Schema parsing implementation
○ tricky and takes much time for testing
28. Komondor: Problem Statement
● Current Kafka->HDFS ingestion service does too much work:
○ Consume from Kafka -> Write Sequence Files -> Convert to Parquet -> Upload to HDFS, in a HIVE-compatible way
○ Parquet generation needs a lot of memory
○ Local writing and uploading is slow
● Need to decouple raw ingestion from consumable data
○ Move heavy lifting into Spark -> Keep raw-data delivery service lean
● Streaming job to keep converting raw data into Parquet, as it lands!
29. Komondor: Kafka Ingestion Service
[Diagram: Kafka → Komondor (streaming raw data delivery) → HDFS; raw data becomes consumable data through streaming ingestion plus batch verification & file stitching]
30. Komondor: Goals
● Fast raw data into permanent storage
● Spark Streaming Ingestor to ‘cook’ raw data
○ For now, Parquet generation
○ But opens up polyglot world for ORC, RCFile, ...
● De-duplication of raw data before consumption
○ Shields downstream consumers from at-least-once delivery of pipelines
○ Simply replay events for an entire day, in the event of pipeline outages
● Improved wellness of HDFS
○ Avoiding too many small files in HDFS
○ File stitcher job to combine small files from past days
31. INotify DStream: Komondor De-Duplication
32. INotify DStream: Motivation
● Streaming Job to pick up raw data files
○ Keeps end-to-end latency low vs batch job
● Spark Streaming FileDStream not sufficient
○ Only works 1 directory deep
■ Need at least two levels for <topic>/<dc>/
○ Provides the file contents directly
■ Loses valuable information in the file name, eg: partition num
○ Checkpoint contains an entire file list
■ Will not scale to millions of files
○ Too much overhead to run one Job Per Topic
33. INotify DStream: HDFS INotify
● Similar to Linux inotify for watching file system changes
● Exposes the HDFS Edit Log as an event stream (see the sketch below)
○ CREATE, CLOSE, APPEND, RENAME, METADATA, UNLINK events
○ Introduced at Hadoop Summit 2015
● Provides a transaction id
○ Client can use it to resume from a given position
● Event log is purged every time the FSImage is uploaded
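A minimal consumer of this API, assuming Hadoop 2.7+ where the inotify stream delivers EventBatch objects (the NameNode URI is hypothetical):

```scala
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hdfs.client.HdfsAdmin
import org.apache.hadoop.hdfs.inotify.Event

object HdfsInotifySketch {
  def main(args: Array[String]): Unit = {
    val admin  = new HdfsAdmin(URI.create("hdfs://namenode:8020"), new Configuration())
    // Resume from a previously saved transaction id (0L here for simplicity).
    val stream = admin.getInotifyEventStream(0L)

    while (true) {
      val batch = stream.take()          // blocks until the next batch of events
      val txid  = batch.getTxid          // checkpoint this to resume later
      batch.getEvents.foreach {
        case e: Event.CloseEvent  => println(s"closed:  ${e.getPath} (txid=$txid)")
        case e: Event.CreateEvent => println(s"created: ${e.getPath} (txid=$txid)")
        case _                    => // APPEND / RENAME / METADATA / UNLINK ignored here
      }
    }
  }
}
```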
34. INotify DStream: Implementation
● Provides the HDFS INotify events as a Spark DStream (skeleton below)
○ Implementation very similar to KafkaDirectDStream
● Checkpointing is straightforward:
○ Transactions have unique IDs
○ Just save the transaction ID to permanent storage
● Filed SPARK-10555, vote it up if you think it is useful :)
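A skeletal, untested sketch of what such a DStream could look like, wiring the previous snippet into Spark Streaming's InputDStream; the actual implementation proposed in SPARK-10555 differs in the details:

```scala
import java.net.URI
import scala.collection.mutable.ArrayBuffer
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hdfs.client.HdfsAdmin
import org.apache.hadoop.hdfs.inotify.Event
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.{StreamingContext, Time}
import org.apache.spark.streaming.dstream.InputDStream

// Simple, serializable view of an HDFS inotify event (names are illustrative).
case class FileEvent(path: String, kind: String, txid: Long)

/** Emits one RDD of new-file events per batch, resuming from a transaction id. */
class InotifyDStreamSketch(ssc: StreamingContext, nameNodeUri: String, startTxid: Long)
  extends InputDStream[FileEvent](ssc) {

  private var lastTxid = startTxid   // the only state that needs checkpointing

  override def start(): Unit = {}
  override def stop(): Unit = {}

  override def compute(validTime: Time): Option[RDD[FileEvent]] = {
    val admin  = new HdfsAdmin(URI.create(nameNodeUri), new Configuration())
    val stream = admin.getInotifyEventStream(lastTxid)
    val events = ArrayBuffer[FileEvent]()

    // Drain whatever the NameNode has right now; poll() returns null when empty.
    var batch = stream.poll()
    while (batch != null) {
      lastTxid = batch.getTxid
      batch.getEvents.foreach {
        case e: Event.CloseEvent  => events += FileEvent(e.getPath, "CLOSE",  lastTxid)
        case e: Event.CreateEvent => events += FileEvent(e.getPath, "CREATE", lastTxid)
        case _                    => // RENAME / APPEND / METADATA / UNLINK ignored here
      }
      batch = stream.poll()
    }
    Some(ssc.sparkContext.parallelize(events))
  }
}
```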
35. INotify DStream: Early Results
● Pretty stable when running on YARN
● HDFS iNotify reads ALL events from the NameNode
● Have to add filtering
○ to catch only events of interest (paths/extensions)
○ Performed at the Spark level
● Memory usage increases on the NN when iNotify is running
36. INotify DStream: Future Uses, XDC Replication
● Open possibility, provided INotify is a charm in production
● Uber is thinking about an all active-active data architecture
○ This means n HDFS clusters that need to be in sync
● Typical batch-based distcp creates bursty network utilization
○ Or go through scheduling trouble to smooth it out
○ INotify DStream provides a way to keep shipping files as they land
○ Power of Spark to do any heavy lifting such as filtering sensitive data
37. Future/Ongoing Work
Our engines are revved up
Forecast: Sunny & Awesome with lots of Spark!
38. Future/Ongoing Work
● Spark SQL Based ETL-Platform
○ Powers all tables into warehouse
● Open up SQL-On-Hadoop via Spark SQL/Hive
○ Spark Shell is already so nifty!
● Machine Learning Platform using Spark
○ MLlib / GraphX possibilities
● Standardized Support For Spark jobs
● Apollo: Next Gen Real-time analytics using Spark Streaming
○ Watch for our next talk! ;)
39. We Are Hiring!!! :)
40. Thank You (Special kudos to Uber Facilities & Security)
41. Questions?
42. Extra Slides
43. Trips Pipeline : Consolidating Changelogs
● Data Model, very similar to BigTable/HBase (see the sketch below)
○ row_key : uuid for trip
○ col_key : One column in trip record
○ version & body : version of the column & json blob
○ cell : Unique tuple of {row_key, col_key, version}
● Provides REST endpoint to tail cell change log for every shard
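The same data model expressed as types (a paraphrase for illustration, not Uber's actual schema):

```scala
// One cell change in the Schemaless changelog: one trip row, one column,
// one version of that column's JSON body.
case class Cell(
  rowKey: String,   // uuid for the trip
  colKey: String,   // one column of the trip record, e.g. "FARE"
  version: Long,    // version of the column
  body: String      // JSON blob for this column at this version
)

// A cell is uniquely identified by the tuple {row_key, col_key, version}.
case class CellKey(rowKey: String, colKey: String, version: Long)
```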
44. Trips Pipeline : Challenge
● Existing ingestion turned cell changes into Warehouse upserts,
○ Losing the version information
○ Unable to reject older (& duplicate) cell changes in logs, coming from XDC replication
Example cell changes for one trip:
{ trip-xxx : {“FARE”=>{f1:{body:12.35,version:11}, f2:{body:val2,version:10}},
              {“ETA”=>{f3:{body:val3,version:13}, f4:{body:val4,version:10}} }
Resulting warehouse rows:
trip-uuid   FARE_f1   ETA_f3   ETA_f4
trip-xxx    12.35     4         5
trip-xyz    14.50     2         1
45. Spark At Uber
● Today
○ Paricon: Turn Historical Json Into Parquet Gold Mine
○ Streamio/Spark SQL : Deliver Global View of Trip Database into Warehouse in near real-time
● Tomorrow
○ INotify DStream :
■ Komondor - The ‘Uber’ data ingestor
■ XDC Data Replicator
○ Adhoc SQL Access to data: Hive On Spark/Spark SQL
○ Spark Apps: Directly accessing data on HDFS
46. Trips Pipeline : Raw Row Images in HDFS
● Streamio : Generic connector of partitioned streams
○ Pluggable in & out stream implementations
● Tails cell changes from both DCs into a Kafka topic
● Uses HBase to construct full row image (latest value for each column for a trip)
○ Logs ‘row changelog’ to HDFS
● Preserves version of latest cell for each column/row
○ Can efficiently de-duplicate/reconcile
● Extensible to all Schemaless datastores
