
Real-time fraud detection at 1M+ scale on the Hadoop stack


  1. Real-time fraud detection at 1M+ scale on the Hadoop stack
  Ishan Chhabra, Nitin Aggarwal
  Rocketfuel Inc
  2. Agenda
  • Rocketfuel and the advertising auction process
  • Various kinds of fraud
  • Problem statement
  • Helios: architecture
  • Implementation in the Hadoop ecosystem
  • Details about the HDFS spout and datacube
  • Key takeaways
  3. Rocketfuel Inc
  • AdTech firm that enables marketers using AI and big data
  • Scores 120+ billion ad auctions a day
  • Handles 1-2 million TPS at peak traffic
  4. Auction process [diagram: steps in the auction flow, including (4b) notification and (5) record impression]
  5. Exchange - Rocketfuel discrepancy [diagram: (4b) notification vs. (5) record impression, where count(4b) != count(5)]
  6. Rocketfuel - Advertiser discrepancy [diagram: (5) record impression, where count(5) != count(6)]
  7. Common causes
  • Fraud
    – Bot networks and malware
    – Hidden ad slots
  • Human error
    – Site- or browser-specific issues with the ad JavaScript
    – Bugs in the ad JavaScript
    – Interactions with 3rd-party JavaScript in the ad or on the site
  8. Need for real time
  • Micro-patterns that change frequently
  • Latency has a big business impact; delays in reacting lead to lost money
  • Discrepancies often arise from breakages and sudden, unexpected changes
  9. Goal: significantly reduce the money lost on both ends by reacting to these micro-patterns in near real time
  10. Data flow [diagram: multiple bidding sites feeding the analytics site]
  11. Data flow [diagram: bids and notifications (batched and delayed) and impressions (near real time) flowing from the bidding site to the analytics site]
  12. Problem statement
  • 3 streams with varying delays (2 from HDFS, 1 from Kafka)
  • Join and aggregate
  • Filter among 2^n feature combinations to identify the top culprits (OLAP cube)
  • Feed results back into bidding
  13. Lambda architecture [diagram: logs feed both a near real-time pipeline (Storm and HBase on YARN via Slider) and a batch pipeline, which feed back into the serving infra (bidders and ad servers)]
  14. Helios: an abstraction for real-time learning
  • Real-time processing of data streams from sources like Kafka and HDFS, with efficient joins
  • Processing of joined event views to generate different analytics, using HBase and MapReduce
  • OLAP support
  • Joins with dimensional data for different use cases
  15. Helios architecture [diagram: logs flow into a Storm cluster (Slider and YARN), then an HBase cluster (Slider and YARN), then OLAP metrics, feeding back into the serving infra (bidders and ad servers)]
  16. Step 1a: ingesting events from Kafka [diagram: logs from the serving infra (bidders and ad servers) flow via Kafka into the Storm cluster (Slider and YARN)]
  17. Processing Kafka events in real time
  • Relies on log streams written to Kafka by Scribe
  • Kafka topic with 200+ partitions
  • Data produced and written via Scribe from more than 3K nodes
  • Uses the upstream Kafka spout to read data (wiring sketched below)
    – Spout granularity is at the record level
    – Uses ZooKeeper extensively for bookkeeping
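  A minimal sketch of how such a topology might wire in the upstream Kafka spout, using the storm-kafka client bundled with Storm 0.9.x/0.10.x. The topic name, ZooKeeper quorum, and parallelism below are illustrative assumptions, not the actual Helios values.

    // Sketch only: wiring the upstream Kafka spout into a topology (Storm 0.9.x/0.10.x packages).
    // Topic name, ZK quorum, and parallelism are hypothetical placeholders.
    import backtype.storm.spout.SchemeAsMultiScheme;
    import backtype.storm.topology.TopologyBuilder;
    import storm.kafka.BrokerHosts;
    import storm.kafka.KafkaSpout;
    import storm.kafka.SpoutConfig;
    import storm.kafka.StringScheme;
    import storm.kafka.ZkHosts;

    public class KafkaIngestTopology {
        public static TopologyBuilder build() {
            BrokerHosts hosts = new ZkHosts("zk1:2181,zk2:2181,zk3:2181");  // assumed ZK quorum
            SpoutConfig cfg = new SpoutConfig(hosts, "impression-events",   // assumed topic name
                    "/helios/kafka-spout", "helios-kafka-spout");           // ZK root + consumer id
            cfg.scheme = new SchemeAsMultiScheme(new StringScheme());       // emit records as strings

            TopologyBuilder builder = new TopologyBuilder();
            builder.setSpout("kafka-spout", new KafkaSpout(cfg), 200);      // roughly 1 executor per partition
            // builder.setBolt(...) would attach the join/aggregation bolts here.
            return builder;
        }
    }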
  18. Processing Kafka events in real time
  • Topology statistics:
    – Runs on YARN as an application, so easily scalable
      • Container memory: 2700 MB
    – Runs with 25 workers (5 executors/worker)
    – Supervisor JVM opts:
      • -Xms512m -Xmx512m -XX:PermSize=64m -XX:MaxPermSize=64m
    – Worker JVM opts:
      • -Xmx1800m -Xms1800m
    – Processes nearly 100K events per second
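  For reference, the worker count and worker JVM options above are typically expressed through the topology Config (supervisor JVM opts are set cluster-side); a minimal sketch using the values from this slide:

    // Sketch: expressing the slide's worker count and JVM opts as Storm topology config.
    import backtype.storm.Config;

    public class TopologyConfigSketch {
        public static Config kafkaTopologyConf() {
            Config conf = new Config();
            conf.setNumWorkers(25);                                  // 25 workers, 5 executors each
            conf.put(Config.TOPOLOGY_WORKER_CHILDOPTS,
                     "-Xmx1800m -Xms1800m");                         // worker JVM opts from the slide
            // Supervisor JVM opts (-Xms512m -Xmx512m -XX:PermSize=64m ...) belong in storm.yaml
            // (supervisor.childopts) on the cluster, not in per-topology config.
            return conf;
        }
    }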
  19. Step 1b: ingesting events from HDFS [diagram: logs from the serving infra (bidders and ad servers) land on HDFS and flow into the Storm cluster (Slider and YARN)]
  20. Processing HDFS events in real time
  • Relies on log streams written to HDFS by Scribe
  • WAN limitations call for heavy compression
  • Shipped via DistCp rather than Kafka
  • Uses an in-house Storm spout to read streams from HDFS
  21. Processing bid logs in real time
  Storm topology statistics:
  • Runs on YARN as an application via Slider (easily scalable)
    – Container memory: 2700 MB
  • Currently runs with 350 workers (~10 executors/worker)
  • Supervisor JVM opts:
    – -Xms512m -Xmx512m -XX:PermSize=64m -XX:MaxPermSize=64m
  • Worker JVM opts:
    – -Xmx1800m -Xms1800m
  • Processes nearly 1.5-2.0 million events per second (100B+ events per day)
  22. HDFS spout architecture
  • Master-slave architecture
  • Spout granularity is at the file level, with record-level offset bookkeeping
  • Uses ZooKeeper extensively for bookkeeping
    – Curator and its recipes make life a lot easier
  • Heavily influenced by the Kafka spout
  23. HDFS spout architecture [diagram: spout leader and spout workers coordinating via ZooKeeper nodes: unassigned, locked, checkpoint, done, offset, offset-lock]
  24. HDFS spout architecture
  • Assignment Manager (AM):
    – Elected via a leader election algorithm
    – Polls HDFS periodically to identify new files, based on timestamps and partitioned paths
    – Publishes files to be processed as work tasks in ZooKeeper (ZK)
    – Manages time and path offsets, and cleans up done nodes
    – Creates periodic done-markers on HDFS
  (A sketch of the AM loop follows.)
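  A minimal sketch of what the AM side might look like using Curator's leader election; the ZK paths, polling interval, and HDFS layout are illustrative assumptions rather than details from the talk, and offset management and error handling are omitted.

    // Sketch: Assignment Manager using Curator's LeaderLatch and publishing work tasks to ZK.
    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.recipes.leader.LeaderLatch;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AssignmentManager {
        private final CuratorFramework zk;
        private final FileSystem fs;
        private final LeaderLatch latch;

        public AssignmentManager(CuratorFramework zk, FileSystem fs) throws Exception {
            this.zk = zk;
            this.fs = fs;
            this.latch = new LeaderLatch(zk, "/helios/hdfs-spout/leader");  // assumed path
            latch.start();
        }

        public void run(Path inputDir, long lastSeenMtime) throws Exception {
            while (true) {
                if (latch.hasLeadership()) {
                    // Poll the partitioned input path for files newer than the stored offset.
                    for (FileStatus status : fs.listStatus(inputDir)) {
                        if (status.getModificationTime() > lastSeenMtime) {
                            String task = status.getPath().toUri().getPath().replace('/', '_');
                            // Publish the file as an unassigned work task for the workers.
                            zk.create().creatingParentsIfNeeded()
                              .forPath("/helios/hdfs-spout/unassigned/" + task);
                            lastSeenMtime = status.getModificationTime();
                        }
                    }
                }
                Thread.sleep(30_000);  // assumed polling interval
            }
        }
    }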
  25. HDFS spout architecture
  • Worker (W):
    – Selects a work task from the available ones in ZK when done with its current work, using ephemeral-node locking
    – Checkpoints the file's record offset in ZK to save progress
    – Creates a done node in ZK after processing the file
  26. HDFS spout architecture
  Bookkeeping node hierarchy:
  • Pluggable backend: the current implementation uses ZK
  • Work life cycle
    – unassigned: file added here by the AM
    – locked: created by a worker on selecting work
    – checkpoint: periodic checkpointing happens here
    – processed: created by a worker on completion
  • Offset management
    – offset: stores the HDFS path and time offset
    – offset-lock: ephemeral lock for offset updates
  (A worker-side sketch over this node hierarchy follows.)
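  A minimal worker-side sketch over the node hierarchy above, again using Curator; the node names mirror the slide, but the root path and the claim/checkpoint details are assumptions for illustration.

    // Sketch: a worker claiming a task with an ephemeral lock, checkpointing, and marking it done.
    // Record reading and retries are elided; paths follow the slide's hierarchy.
    import org.apache.curator.framework.CuratorFramework;
    import org.apache.zookeeper.CreateMode;

    public class SpoutWorker {
        private static final String ROOT = "/helios/hdfs-spout";  // assumed root path
        private final CuratorFramework zk;

        public SpoutWorker(CuratorFramework zk) {
            this.zk = zk;
        }

        /** Try to claim one unassigned file; returns the task name or null if none could be locked. */
        public String claimTask() throws Exception {
            for (String task : zk.getChildren().forPath(ROOT + "/unassigned")) {
                try {
                    // The ephemeral node doubles as the lock; it disappears if this worker dies.
                    zk.create().creatingParentsIfNeeded()
                      .withMode(CreateMode.EPHEMERAL).forPath(ROOT + "/locked/" + task);
                    return task;
                } catch (Exception alreadyLocked) {
                    // Another worker won the race for this task; try the next one.
                }
            }
            return null;
        }

        /** Periodically record how far into the file we have emitted tuples. */
        public void checkpoint(String task, long recordOffset) throws Exception {
            byte[] data = Long.toString(recordOffset).getBytes("UTF-8");
            String path = ROOT + "/checkpoint/" + task;
            if (zk.checkExists().forPath(path) == null) {
                zk.create().creatingParentsIfNeeded().forPath(path, data);
            } else {
                zk.setData().forPath(path, data);
            }
        }

        /** Mark the file fully processed so the AM can clean it up. */
        public void markDone(String task) throws Exception {
            zk.create().creatingParentsIfNeeded().forPath(ROOT + "/processed/" + task);
        }
    }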
  27. HDFS spout architecture
  • Spout failures
    – Slave: its work is made available again by the master
    – Master: one of the slaves becomes master via leader election and gives up its slave duties
  • Spouts contend for work assignment via ZK ephemeral nodes
  • Leverages the organization's partitioned data directories and done-marker model
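  One hedged way to wire the master failover is Curator's LeaderLatchListener, so a slave that wins the election can switch roles; the handoff hooks below (stopWorking/startWorking/startAssigning) are hypothetical names, not methods from the talk.

    // Sketch: reacting to leadership changes so a slave can take over AM duties on master failure.
    import org.apache.curator.framework.recipes.leader.LeaderLatch;
    import org.apache.curator.framework.recipes.leader.LeaderLatchListener;

    public class RoleSwitcher implements LeaderLatchListener {
        @Override
        public void isLeader() {
            // This spout instance won the election: give up slave duties, start assigning work.
            stopWorking();
            startAssigning();
        }

        @Override
        public void notLeader() {
            // Leadership lost (e.g., ZK session expiry): fall back to being a plain worker.
            startWorking();
        }

        public void register(LeaderLatch latch) {
            latch.addListener(this);
        }

        private void stopWorking()    { /* hand the current file back to the unassigned pool */ }
        private void startWorking()   { /* resume claiming tasks as a worker */ }
        private void startAssigning() { /* begin polling HDFS and publishing tasks */ }
    }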
  28. Comparison with the official HDFS spout
  STORM-1199 (official)
  • Uses HDFS for bookkeeping
  • Moves or renames source files
  • All-slave architecture; all spouts contend for failed work
  • No leverage of partitioned data
  • Kerberos support
  In-house implementation
  • Uses ZK for bookkeeping
  • No changes to source files
  • Master-slave architecture with leader election
  • Leverages partitioned data and done-markers
  • No Kerberos support
  29. Step 2: join via HBase [diagram: the Storm cluster (Slider and YARN) writes joined events into the HBase cluster (Slider and YARN), along with done-markers]
  30. HBase for joining streams of data
  • Uses the request-id as the row key to join the different streams
  • Different column qualifiers for the different event streams
  • HBase cluster configuration
    – Runs on YARN as a service via Slider
    – Region servers: 40 instances with 4 GB memory each
    – Optimized for writes, with a large MemStore
    – Compactions tuned to avoid unnecessary merging of files, since they expire quickly (low retention)
      • Date-based compactions are available in HBase 2.0
  • Write throughput: 1M+ TPS
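  A minimal sketch of the key/qualifier layout described above, using the standard HBase client API; the table name, column family, and qualifier names are assumptions for illustration.

    // Sketch: each stream writes its event under the same row key (request-id) but a different
    // column qualifier, so a single row accumulates the joined view. Family/qualifiers are hypothetical.
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class EventJoinWriter {
        private static final byte[] CF = Bytes.toBytes("e");  // assumed column family

        /** Called by the bolt handling one stream, e.g. qualifier = "bid", "notification", "impression". */
        public void writeEvent(Table table, String requestId, String qualifier, byte[] event)
                throws java.io.IOException {
            Put put = new Put(Bytes.toBytes(requestId));
            put.addColumn(CF, Bytes.toBytes(qualifier), event);
            table.put(put);
        }

        /** Reads the joined view: all stream events that have arrived for this request-id. */
        public Result readJoinedView(Table table, String requestId) throws java.io.IOException {
            return table.get(new Get(Bytes.toBytes(requestId)));
        }
    }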
  31. Observations from running Storm at scale
  • ZeroMQ was more stable than Netty in 0.9.x
    – Many Netty optimizations are available in 0.10.x
  • Local-shuffle mode is helpful for large data volumes
  • Heartbeat intervals need tuning
    – (task|worker|supervisor).heartbeat.frequency.secs
    – Pacemaker: available in 1.0
  • Code sync interval needs tuning
    – Distributed cache: available in 1.0
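  As a hedged illustration, the heartbeat keys named above are normally set cluster-wide in storm.yaml, or via the Config map where a per-topology override applies; the values below are placeholders, not the settings used in the talk.

    // Sketch: loosening heartbeat frequencies for a large topology. Values are illustrative only;
    // the cluster-level keys usually live in storm.yaml rather than topology config.
    import backtype.storm.Config;

    public class HeartbeatTuning {
        public static Config tuned() {
            Config conf = new Config();
            conf.put("task.heartbeat.frequency.secs", 10);        // defaults are a few seconds
            conf.put("worker.heartbeat.frequency.secs", 10);
            conf.put("supervisor.heartbeat.frequency.secs", 10);
            return conf;
        }
    }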
  32. Step 3: scan the joined view and populate OLAP [diagram: done-markers trigger an MR job that scans the joined event streams and writes OLAP metrics]
  33. OLAP with multi-dimensional data
  • Developed a MapReduce-backed workflow
    – Cron-triggered hourly jobs based on done-markers
    – Scans data from HBase using snapshots
    – Semantics for hour boundaries
    – Event metric reporting
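  A minimal sketch of setting up a MapReduce scan over an HBase snapshot, the standard way to read a snapshot without loading the region servers; the snapshot name, mapper class, and restore directory are assumptions.

    // Sketch: configuring an MR job to scan an hourly HBase snapshot of the joined-events table.
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.mapreduce.Job;

    public class SnapshotScanJob {
        public static Job create(String snapshotName) throws Exception {
            Job job = Job.getInstance(HBaseConfiguration.create(), "helios-olap-" + snapshotName);
            Scan scan = new Scan();
            scan.setCaching(500);          // larger batches for a full scan
            scan.setCacheBlocks(false);    // don't pollute the block cache

            TableMapReduceUtil.initTableSnapshotMapperJob(
                    snapshotName,                              // hourly snapshot name (assumed)
                    scan,
                    JoinedEventMapper.class,                   // hypothetical mapper emitting cube measures
                    ImmutableBytesWritable.class,
                    LongWritable.class,
                    job,
                    true,                                      // add HBase jars to the job classpath
                    new Path("/tmp/helios-snapshot-restore")); // assumed restore dir
            return job;
        }

        /** Hypothetical mapper; the real one would feed records into the datacube IO layer. */
        public static class JoinedEventMapper
                extends org.apache.hadoop.hbase.mapreduce.TableMapper<ImmutableBytesWritable, LongWritable> {
        }
    }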
  34. OLAP with multi-dimensional data
  • Modular API for processing records
    – Pluggable architecture for different use cases
    – OLAP implemented as a first-class use case
  • Uses the datacube library (Urban Airship) to generate OLAP data
    – Configurable metric reporting
  35. OLAP with multi-dimensional data
  Datacube for OLAP
  • Library developed at Urban Airship
  • About the API (sketched below)
    – Dimensions and rollups are defined for the cube
    – IO library for writing measures into the cube
    – Pluggable databases: HBase, in-memory map
    – ID service: optimization that encodes values via ID substitution
    – Support for bulk loading and backfilling
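  A small sketch of defining a cube with the datacube library, adapted from memory of the project's README; the dimensions chosen here (time, exchange), the byte sizes, and the exact constructor signatures are assumptions and may not match the library version used in the talk.

    // Sketch: defining dimensions and rollups for a cube of discrepancy counts (datacube library).
    // Names and sizes are illustrative; signatures may differ between library versions.
    import com.google.common.collect.ImmutableList;
    import com.urbanairship.datacube.DataCube;
    import com.urbanairship.datacube.Dimension;
    import com.urbanairship.datacube.Rollup;
    import com.urbanairship.datacube.bucketers.HourDayMonthBucketer;
    import com.urbanairship.datacube.bucketers.StringToBytesBucketer;
    import com.urbanairship.datacube.ops.LongOp;
    import org.joda.time.DateTime;

    import java.util.List;

    public class FraudCubeSketch {
        public static DataCube<LongOp> build() {
            // Time bucketed by hour/day/month; a string "exchange" dimension with ID substitution.
            HourDayMonthBucketer timeBucketer = new HourDayMonthBucketer();
            Dimension<DateTime> time = new Dimension<DateTime>("time", timeBucketer, false, 8);
            Dimension<String> exchange =
                    new Dimension<String>("exchange", new StringToBytesBucketer(), true, 4);

            // Rollups decide which combinations of dimensions get materialized counters.
            Rollup hourlyByExchange = new Rollup(exchange, time, HourDayMonthBucketer.hours);
            Rollup hourlyTotal = new Rollup(time, HourDayMonthBucketer.hours);

            List<Dimension<?>> dims = ImmutableList.<Dimension<?>>of(time, exchange);
            List<Rollup> rollups = ImmutableList.of(hourlyByExchange, hourlyTotal);
            DataCube<LongOp> cube = new DataCube<LongOp>(dims, rollups);

            // Writes then go through a DataCubeIo wrapping this cube plus a DbHarness
            // (HBase-backed in production, in-memory map in tests), roughly:
            //   cubeIo.writeSync(new LongOp(1),
            //       new WriteBuilder(cube).at(time, eventTime).at(exchange, exchangeName));
            return cube;
        }
    }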
  36. OLAP with multi-dimensional data
  New features (in our fork)
  • Reverse lookups for scans
  • New InputFormat for MR jobs
  • Prefix hashes (data and lookups) for load distribution
  • Improved DB performance by using the AsyncHBase library for efficient reads/writes
  MR job statistics
  • Uses HBase snapshots
  • MR job runs every hour (runtime: 5-15 min)
  • An hour is closed with a delay of 30-60 minutes on average, accounting for log rotation and shipping (Scribe) latencies
  37. Step 4: scan the OLAP cube for top feature vectors [diagram: done-markers trigger an MR job that scans the OLAP metrics and emits feature vectors]
  38. OLAP with multi-dimensional data
  Serializing the OLAP view
  • A customizable MapReduce job scans the OLAP data (backed by HBase) and writes it to HDFS
  • Other jobs can process this easily accessible data from HDFS and upload computed feedback stats to stores like MySQL
  MR job statistics
  • MR job runs every hour (runtime: 2-5 min)
  39. DevOps automation
  • Monitoring service
  • Topology submission service
  40. Key takeaways
  • The Hadoop ecosystem offers a productive stack for high-velocity real-time learning problems
  • YARN makes it easy to experiment with and tweak vertical-to-horizontal scalability ratios
  41. THANKS! ANY QUESTIONS? Reach us at ichhabra@rocketfuel.com and naggarwal@rocketfuel.com
