Cassandra Day SV 2014: Spark, Shark, and Apache Cassandra

This session covers our experience using the Spark and Shark frameworks to run real-time queries on top of Cassandra data. We will start by surveying the current Cassandra analytics landscape, including Hadoop and HIVE, and touch on the use of custom input formats to extract data from Cassandra. We will then dive into Spark and Shark, two memory-based cluster computing frameworks, and see how they enable often dramatic improvements in query speed and productivity over today's standard solutions.


  1. Interactive Analytics With Spark And Cassandra. Evan Chan, Ooyala, Inc. April 7th, 2014
  2. Who is this guy? • Staff Engineer, Compute and Data Services, Ooyala • Building multiple web-scale real-time systems on top of C*, Kafka, Storm, etc. • Scala/Akka guy • Very excited by open source, big data projects • @evanfchan
  3. Agenda • Cassandra at Ooyala • What problem are we trying to solve? • Spark and Shark • Integrating Cassandra and Spark • Our Spark/Cassandra architecture
  4. CASSANDRA AT OOYALA
  5. OOYALA: Powering personalized video experiences across all screens.
  6. COMPANY OVERVIEW • Founded in 2007, commercially launched in 2009 • 230+ employees in Silicon Valley, LA, NYC, London, Paris, Tokyo, Sydney & Guadalajara • Global footprint: 200M unique users, 110+ countries, and more than 6,000 websites • Over 1 billion videos played per month and 2 billion analytic events per day • 25% of U.S. online viewers watch video powered by Ooyala
  7. TRUSTED VIDEO PARTNER [logo slide: strategic partners and customers]
  8. We are a large Cassandra user • 12 clusters ranging in size from 3 to 107 nodes • Total of 28TB of data managed over ~220 nodes • Powers all of our analytics infrastructure • Traditional analytics aggregations • Recommendations and trends • DSE/C* 1.0.x, 1.1.x, 1.2.6
  9. Becoming a big Spark user... • Started investing in Spark at the beginning of 2013 • 2 teams of developers building on Spark • Actively contributing to the Spark developer community • Deploying Spark to a large (>100 node) production cluster • Spark community very active, huge amount of interest
  10. WHAT PROBLEM ARE WE TRYING TO SOLVE?
  11. From mountains of raw data...
  12. ...to nuggets of truth. • Quickly • Painlessly • At scale?
  13. Today: Precomputed Aggregates • Video metrics computed along several high-cardinality dimensions • Very fast lookups, but inflexible and hard to change • Most computed aggregates are never read • What if we need more dynamic queries? • Top content for mobile users in France • Engagement curves for users who watched recommendations • Data mining, trends, machine learning
  14. The Static - Dynamic Continuum • 100% Precomputation: super fast lookups; inflexible, wasteful; best for the 80% most common queries • 100% Dynamic: always compute results from raw data; flexible but slow
  15. Where We Want To Be: partly dynamic • Pre-aggregate the most common queries • Flexible, fast dynamic queries • Easily generate many materialized views
  16. WHY SPARK?
  17. Introduction To Spark • In-memory distributed computing framework • Created by UC Berkeley AMP Lab in 2010 • Targeted problems that MR is bad at: iterative algorithms (machine learning), interactive data mining • More general purpose than Hadoop MR • Top-level Apache project • Active contributions from Intel, Yahoo, and lots of others
  18. Spark Vs Hadoop [diagram: Hadoop's pipeline of HDFS, Map, Reduce, Map, Reduce contrasted with a Spark flow of map(), join(), and cache() transforms over two data sources, kept in memory]
  19. Throughput: Memory Is King [benchmark chart: 6-node C*/DSE 1.1.9 cluster, Spark 0.7.0; Spark cached RDD 10-50x faster than raw Cassandra]
  20. Developers Love It • "I wrote my first aggregation job in 30 minutes" • High-level "distributed collections" API • No Hadoop cruft • Full power of Scala, Java, Python • Interactive REPL shell
  21. Spark Vs Hadoop Word Count

Spark:

    file = spark.textFile("hdfs://...")
    file.flatMap(line => line.split(" "))
        .map(word => (word, 1))
        .reduceByKey(_ + _)

Hadoop MapReduce:

    package org.myorg;

    import java.io.IOException;
    import java.util.*;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.conf.*;
    import org.apache.hadoop.io.*;
    import org.apache.hadoop.mapreduce.*;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

    public class WordCount {

      public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
          String line = value.toString();
          StringTokenizer tokenizer = new StringTokenizer(line);
          while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, one);
          }
        }
      }

      public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          context.write(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        Job job = new Job(conf, "wordcount");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);
      }

    }
  22. One Platform To Rule Them All • HIVE on Spark (Shark) • Spark Streaming: discretized stream processing • SQL, Graph, ML, Streaming all in one framework • Much higher code sharing/reuse • Easy integration between components • Fewer platforms == lower TCO • Integration with Mesos, YARN helps share resources
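For a taste of the streaming piece, here is a minimal Spark Streaming word count in Scala. This is a sketch, not from the deck: it uses the later org.apache.spark.streaming API rather than the 0.7-era one, and the socket source, host, and port are arbitrary.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object StreamingWordCount {
      def main(args: Array[String]): Unit = {
        // local[2]: one thread for the receiver, one for processing
        val conf = new SparkConf().setAppName("StreamingWordCount").setMaster("local[2]")
        // 1-second micro-batches: the "discretized stream" model
        val ssc = new StreamingContext(conf, Seconds(1))

        // Hypothetical source: lines of text arriving on a local socket
        val lines = ssc.socketTextStream("localhost", 9999)

        // The same collections-style operators as batch Spark
        lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()

        ssc.start()
        ssc.awaitTermination()
      }
    }

The point of the slide is exactly this reuse: the flatMap/map/reduceByKey line is identical to the batch word count on slide 21.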
  23. Shark - Hive On Spark • 100% HiveQL compatible • 10-100x faster than HIVE, answers in seconds • Reuse UDFs, SerDes, StorageHandlers • Can use DSE / CassandraFS for the Metastore
  24. INTEGRATING CASSANDRA & SPARK
  25. Our Spark/Shark/Cassandra Stack [diagram: Node1, Node2, and Node3 each run Cassandra, an InputFormat/SerDe layer, a Spark Worker, and Shark; a Spark Master and a Job Server sit alongside]
  26. OPTIONS FOR READING FROM C* • Hadoop InputFormat: ColumnFamilyInputFormat (reads all rows from one CF); CqlPagingInputFormat, etc. (CQL3, secondary indexes); roll your own (join multiple CFs, etc.) • Spark native RDD: sc.parallelize(rowkeys).flatMap(readColumns(_)); JdbcRDD + Cassandra JDBC driver
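A rough sketch of the InputFormat route, using newAPIHadoopRDD() as slide 29 below mentions. The class and helper names follow the C* 1.2-era Hadoop support from memory, and the keyspace, table, and addresses are invented, so treat this as approximate:

    import java.nio.ByteBuffer
    import java.util.{Map => JMap}

    import org.apache.cassandra.hadoop.ConfigHelper
    import org.apache.cassandra.hadoop.cql3.{CqlConfigHelper, CqlPagingInputFormat}
    import org.apache.hadoop.conf.Configuration
    import org.apache.spark.SparkContext

    val sc = new SparkContext("local[4]", "cql-read-demo")

    val conf = new Configuration()
    ConfigHelper.setInputInitialAddress(conf, "127.0.0.1")   // any C* node
    ConfigHelper.setInputRpcPort(conf, "9160")
    ConfigHelper.setInputColumnFamily(conf, "demo_ks", "events")  // invented keyspace/CF
    ConfigHelper.setInputPartitioner(conf, "Murmur3Partitioner")  // must match the cluster
    CqlConfigHelper.setInputCQLPageRowSize(conf, "1000")

    // CqlPagingInputFormat yields (key columns, value columns), both as
    // Map[column name -> ByteBuffer]
    val rows = sc.newAPIHadoopRDD(
      conf,
      classOf[CqlPagingInputFormat],
      classOf[JMap[String, ByteBuffer]],
      classOf[JMap[String, ByteBuffer]])

    println(s"Read ${rows.count()} CQL rows")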
  27. ColumnFamilyInputFormat [diagram: a column family with columns id, video, type; Record1 -> (10, 1), Record2 -> (11, 5)] • Must read from all rows • One CF only, not very flexible
  28. Spark RDD • RDD = Resilient Distributed Dataset • Multiple partitions living on different nodes • Each partition has records [diagram: Partitions 1 and 2 on Node 1, Partitions 3 and 4 on Node 2]
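A quick illustrative way (not from the deck) to see that partition structure from Scala:

    import org.apache.spark.SparkContext

    val sc = new SparkContext("local[4]", "partition-demo")

    // 12 records spread across 4 partitions, as in the diagram
    val rdd = sc.parallelize(1 to 12, 4)

    // Show which records live in which partition
    rdd.mapPartitionsWithIndex { (i, records) =>
      Iterator(s"partition $i: ${records.mkString(", ")}")
    }.collect().foreach(println)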
  29. InputFormat Vs RDD • InputFormat: supports Hadoop, HIVE, Spark, Shark. RDD: Spark/Shark only. • InputFormat: have to implement multiple classes (InputFormat, RecordReader, Writable, etc.); clunky API. RDD: one class, simple API. • InputFormat: two APIs, and you often need to implement both (HIVE needs the older...). RDD: just one API. • You can easily use InputFormats in Spark using newAPIHadoopRDD() (see the sketch after slide 26 above). • Writing a custom RDD could have saved us lots of time.
  30. Skipping The InputFormat [diagram: Rowkey1-Rowkey4 map to Row 1-Row 4 data across Node 1 and Node 2]

    sc.parallelize(rowkeys).flatMap(readColumns(_))
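The deck shows only that one-liner; here is one hypothetical way readColumns might be fleshed out, using the DataStax Java driver 2.x with invented keyspace, table, and column names (the talk does not show its implementation):

    import com.datastax.driver.core.Cluster
    import org.apache.spark.SparkContext
    import scala.collection.JavaConverters._

    val sc = new SparkContext("local[4]", "rowkey-scan-demo")

    // Hypothetical row keys; the deck doesn't show how they are obtained
    val rowkeys = Seq("2013-04-05T00:00Z#id1", "2013-04-05T00:00Z#id2")

    // Hypothetical helper: each task connects and pulls one row key's columns
    def readColumns(rowkey: String): List[(String, String)] = {
      val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
      try {
        val session = cluster.connect("demo_ks")
        val rs = session.execute(
          "SELECT column1, value FROM columns_cf WHERE key = ?", rowkey)
        rs.all().asScala.toList.map(r => (r.getString("column1"), r.getString("value")))
      } finally cluster.close()
    }

    // The line from the slide: fan row keys out across the cluster and
    // read each row's columns in parallel
    val rows = sc.parallelize(rowkeys).flatMap(readColumns(_))
    rows.collect().foreach(println)

In practice you would open one connection per partition via mapPartitions rather than one per row key; the per-key connection here just keeps the sketch short.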
  31. OUR CASSANDRA / SPARK ARCHITECTURE
  32. From Raw Logs To Fast Queries [diagram: Raw Logs -> Spark (process) -> C* columnar storage holding OLAP Tables 1-3 -> Spark for predefined queries and Shark for ad-hoc HiveQL]
  33. Why Cassandra Alone Isn't Enough • Over 30 million multi-dimensional fact table rows per day • Materializing every possible answer isn't remotely feasible • Multi-dimensional filtering and grouping alone leads to many billions of possible answers • Querying fact tables in Cassandra is too slow: reading millions or billions of random rows; CQL doesn't support grouping
  34. Our Approach • Use Cassandra to store the raw fact tables • Optimize the schema for OLAP workloads: fast full-table reads, easily read fewer columns • Use Spark for fast random row access and fast distributed computation
  35. An OLAP Schema for Cassandra [diagram of three column families: Index CF: row key 2013-04-05T00:00Z#id1 -> shard keys uuid-part-0, uuid-part-1, ...; Columns CF: per shard, each logical column (country, city, ...) holds chunked sections of rows (rows 0-9999, rows 10000-19999, ...); Metadata CF: e.g. 2013-04-05T00:00-part-0 -> {columns: ["country", "city", "plays"]}] • Optimized for: selective column loading, maximum throughput and compression
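To make "selective column loading" concrete, a speculative read against such a layout (all keyspace, table, and column names are invented; the deck shows no code for this):

    import com.datastax.driver.core.Cluster

    val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
    val session = cluster.connect("demo_ks")

    // Fetch only the "country" column's row-chunks for one shard: because each
    // logical column's chunks live under their own column name, the other
    // columns (city, plays, ...) are never read at all.
    val countryChunks = session.execute(
      "SELECT column1, value FROM columns_cf WHERE key = ? AND column1 = ?",
      "uuid-part-0", "country")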
  36. OLAP Workflow [diagram: aggregation jobs and query jobs are submitted through a REST Job Server to Spark executors; the aggregation job reads the dataset from Cassandra and writes aggregates; query jobs run against the aggregates and return query results]
  37. Querying Data In Spark [diagram: Partition 1 (record1-3) and Partition 2 (record4-6) on Node 1; Partition 3 (record7-9) and Partition 4 (record10-12) on Node 2] • Convert to a Spark SQL / Shark table and query via Shark / Spark SQL • rdd.group / .map / .sort / .join etc. • rdd.reduce / .collect / .top
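An illustrative Scala pipeline of the kind the diagram describes, with invented sample data and field names:

    import org.apache.spark.SparkContext

    val sc = new SparkContext("local[4]", "query-demo")

    // (country, plays) fact records -- invented sample data
    val facts = sc.parallelize(Seq(("FR", 3), ("US", 5), ("FR", 7), ("JP", 2)))

    // group / map ... then a distributed top-N collected back to the driver
    val topCountries = facts
      .reduceByKey(_ + _)             // aggregate plays per country
      .map { case (c, n) => (n, c) }  // key by count so we can rank
      .top(2)                         // rdd.top: distributed top-N

    topCountries.foreach(println)     // e.g. (10,FR), (5,US)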
  38. Fault Tolerance • Cached dataset lives in the Java heap only: what if the process dies? • Spark lineage: automatic recomputation from source, but this is expensive! • Can also replicate the cached dataset to survive single-node failures • Persist materialized views back to C*, then load into cache: now the recovery path is much faster
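The replication option maps onto Spark's replicated storage levels; a minimal sketch:

    import org.apache.spark.SparkContext
    import org.apache.spark.storage.StorageLevel

    val sc = new SparkContext("local[4]", "cache-demo")
    val dataset = sc.parallelize(1 to 1000000)

    // MEMORY_ONLY_2 keeps each cached partition on two nodes, so a single
    // node failure doesn't force recomputation from the lineage
    dataset.persist(StorageLevel.MEMORY_ONLY_2)
    println(dataset.count())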
  39. DEMO
  40. Creating a Shark Table
  41. Creating a Cached Table
  42. Querying a Cached Table (demo screenshots; a rough sketch of the three steps follows)
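Roughly, the three demo steps might look like the following, with invented table and column names, and a SharkContext bootstrap approximated from memory of Shark 0.8-era docs (the exact API may differ by version):

    import shark.{SharkContext, SharkEnv}

    // Approximate bootstrap; treat as an assumption, not the demo's actual code
    val sc: SharkContext = SharkEnv.initWithSharkContext("shark-demo")

    // 1. Create a Shark table (hypothetical schema)
    sc.sql("CREATE TABLE plays (country STRING, video_id INT, num_plays INT)")

    // 2. Cache it: Shark caches tables whose names end in _cached
    sc.sql("CREATE TABLE plays_cached AS SELECT * FROM plays")

    // 3. Query the in-memory table with plain HiveQL
    val results = sc.sql("SELECT country, SUM(num_plays) FROM plays_cached GROUP BY country")
    results.foreach(println)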
  43. THANK YOU. And YES, We're HIRING!! ooyala.com/careers @evanfchan
  44. Industry Trends • Fast execution frameworks: Impala • In-memory databases: VoltDB, Druid • Streaming and real-time • Higher-level, productive data ...
  45. PERFORMANCE #'S (6-node C*/DSE 1.1.9 cluster, Spark 0.7.0) • Spark: C* -> OLAP aggregates, cold cache, 1.4 million events: 130 seconds • C* -> OLAP aggregates, warmed cache: 20-30 seconds • OLAP aggregate query via Spark (56k records): 60 ms
  46. EXAMPLE: OLAP PROCESSING [diagram: raw C* events (e.g. 2013-04-05T00:00Z#id... -> {video: 10, ...}, {video: 20, ...}) flow into Spark workers that build OLAP aggregates; the aggregates are unioned as cached materialized views to answer Query 1: Plays by Provider and Query 2: Top content for mobile]