
What's new with Apache Spark?


Amsterdam Apache Spark Meetup 
2014-11-24
http://www.meetup.com/Amsterdam-Spark/events/207220772/



  1. What's new with Apache Spark? Amsterdam Apache Spark Meetup 2014-11-24 meetup.com/Amsterdam-Spark/events/207220772/ Paco Nathan @pacoid
  2. What is Spark?
  3. What is Spark? Developed in 2009 at UC Berkeley AMPLab, then open sourced in 2010, Spark has since become one of the largest OSS communities in big data, with over 200 contributors in 50+ organizations. spark.apache.org “Organizations that are looking at big data challenges – including collection, ETL, storage, exploration and analytics – should consider Spark for its in-memory performance and the breadth of its model. It supports advanced analytics solutions on Hadoop clusters, including the iterative model required for machine learning and graph analysis.” Gartner, Advanced Analytics and Data Science (2014)
  4. What is Spark?
  5. What is Spark? Spark Core is the general execution engine for the Spark platform that other functionality is built atop: • in-memory computing capabilities deliver speed • general execution model supports a wide variety of use cases • ease of development – native APIs in Java, Scala, Python (+ SQL, Clojure, R)
  6. What is Spark? WordCount in 3 lines of Spark vs. WordCount in 50+ lines of Java MR (see the sketch below)
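For reference, a minimal sketch of the canonical 3-line Scala WordCount (assuming the spark-shell’s predefined sc; the HDFS paths are placeholders):

    val file = sc.textFile("hdfs://...")
    // split lines into words, pair each word with 1, then sum the counts per word
    val counts = file.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
    counts.saveAsTextFile("hdfs://...")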
  7. What is Spark? Sustained exponential growth, as one of the most active Apache projects. ohloh.net/orgs/apache
  8. TL;DR: Smashing The Previous Petabyte Sort Record databricks.com/blog/2014/11/05/spark-officially-sets-a-new-record-in-large-scale-sorting.html
  9. A Brief History
  10. A Brief History: Functional Programming for Big Data. Theory, eight decades ago: what can be computed? Haskell Curry (haskell.org), Alonzo Church (wikipedia.org). Praxis, four decades ago: algebra for applicative systems. John Backus (acm.org), David Turner (wikipedia.org). Reality, two decades ago: machine data from web apps. Pattie Maes (MIT Media Lab)
  11. A Brief History: Functional Programming for Big Data. Circa late 1990s: explosive growth of e-commerce and machine data implied that workloads could not fit on a single computer anymore… notable firms led the shift to horizontal scale-out on clusters of commodity hardware, especially for machine learning use cases at scale
  12. [Diagram: a typical late-1990s e-commerce architecture – web apps and middleware feeding customer transactions into an RDBMS; SQL query result sets driving recommenders + classifiers (algorithmic modeling); logs and event history flowing through ETL into a DW for aggregation and dashboards; stakeholders spanning customers, product, engineering, and UX]
  13. Amazon: “Early Amazon: Splitting the website” – Greg Linden glinden.blogspot.com/2006/02/early-amazon-splitting-website.html • eBay: “The eBay Architecture” – Randy Shoup, Dan Pritchett addsimplicity.com/adding_simplicity_an_engi/2006/11/you_scaled_your.html addsimplicity.com.nyud.net:8080/downloads/eBaySDForum2006-11-29.pdf • Inktomi (YHOO Search): “Inktomi’s Wild Ride” – Eric Brewer (0:05:31 ff) youtu.be/E91oEn1bnXM • Google: “Underneath the Covers at Google” – Jeff Dean (0:06:54 ff) youtu.be/qsan-GQaeyk perspectives.mvdirona.com/2008/06/11/JeffDeanOnGoogleInfrastructure.aspx • MIT Media Lab: “Social Information Filtering for Music Recommendation” – Pattie Maes pubs.media.mit.edu/pubs/papers/32paper.ps ted.com/speakers/pattie_maes.html
  14. A Brief History: Functional Programming for Big Data. Circa 2002: mitigate risk of large distributed workloads lost due to disk failures on commodity hardware… “Google File System” – Sanjay Ghemawat, Howard Gobioff, Shun-Tak Leung research.google.com/archive/gfs.html • “MapReduce: Simplified Data Processing on Large Clusters” – Jeffrey Dean, Sanjay Ghemawat research.google.com/archive/mapreduce.html
  15. TL;DR: Generational trade-offs for handling Big Compute. Photo from John Wilkes’ keynote talk @ #MesosCon 2014
  16. TL;DR: Generational trade-offs for handling Big Compute: cheap memory → recompute (RDD); cheap storage → replicate (DFS); cheap network → reference (URI)
  17. A Brief History: Functional Programming for Big Data. Timeline: 2002: MapReduce @ Google; 2004: MapReduce paper; 2006: Hadoop @ Yahoo!; 2008: Hadoop Summit; 2010: Spark paper; 2014: Apache Spark becomes a top-level project
  18. A Brief History: Functional Programming for Big Data. MR doesn’t compose well for large applications, and so specialized systems emerged as workarounds: general batch processing (MapReduce) alongside specialized systems for iterative, interactive, streaming, graph, etc. workloads (Pregel, Giraph, Dremel, Drill, Tez, Impala, GraphLab, S4, Storm, F1, MillWheel)
  19. A Brief History: Functional Programming for Big Data. Circa 2010: a unified engine for enterprise data workflows, based on commodity hardware a decade later… “Spark: Cluster Computing with Working Sets” – Matei Zaharia, Mosharaf Chowdhury, Michael Franklin, Scott Shenker, Ion Stoica people.csail.mit.edu/matei/papers/2010/hotcloud_spark.pdf • “Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing” – Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael Franklin, Scott Shenker, Ion Stoica usenix.org/system/files/conference/nsdi12/nsdi12-final138.pdf
  20. A Brief History: Functional Programming for Big Data. In addition to simple map and reduce operations, Spark supports SQL queries, streaming data, and complex analytics such as machine learning and graph algorithms out-of-the-box. Better yet, combine these capabilities seamlessly into one integrated workflow…
  21. A Brief History: Key distinctions for Spark vs. MapReduce • generalized patterns ⇒ unified engine for many use cases • lazy evaluation of the lineage graph ⇒ reduces wait states, better pipelining (see the sketch below) • generational differences in hardware ⇒ off-heap use of large memory spaces • functional programming / ease of use ⇒ reduction in cost to maintain large apps • lower overhead for starting jobs • less expensive shuffles
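To make the lazy-evaluation point concrete, a small sketch (again assuming the shell’s sc; paths are hypothetical): transformations only record lineage, and nothing executes until an action runs, which lets Spark pipeline the whole chain in one pass.

    // transformations: recorded in the lineage graph, nothing executes yet
    val lines  = sc.textFile("hdfs://...")
    val errors = lines.filter(_.startsWith("ERROR"))

    // action: triggers evaluation of the entire pipeline at once,
    // so Spark can fuse the read and the filter without a wait state
    val numErrors = errors.count()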
  22. Spark Deconstructed
  23. Spark Deconstructed: Log Mining Example
      // load error messages from a log into memory
      // then interactively search for various patterns
      // https://gist.github.com/ceteri/8ae5b9509a08c08a1132

      // base RDD
      val lines = sc.textFile("hdfs://...")

      // transformed RDDs
      val errors = lines.filter(_.startsWith("ERROR"))
      val messages = errors.map(_.split("\t")).map(r => r(1))
      messages.cache()

      // action 1
      messages.filter(_.contains("mysql")).count()

      // action 2
      messages.filter(_.contains("php")).count()
  24. Spark Deconstructed: Log Mining Example. [Diagram: Driver and three Workers] We start with Spark running on a cluster… submitting code to be evaluated on it:
  25. Spark Deconstructed: Log Mining Example (a callout on the slide, “discussing the other part”, points at action 2)
      // base RDD
      val lines = sc.textFile("hdfs://...")

      // transformed RDDs
      val errors = lines.filter(_.startsWith("ERROR"))
      val messages = errors.map(_.split("\t")).map(r => r(1))
      messages.cache()

      // action 1
      messages.filter(_.contains("mysql")).count()

      // action 2
      messages.filter(_.contains("php")).count()
  26. Spark Deconstructed: Log Mining Example. At this point, take a look at the transformed RDD operator graph:
      scala> messages.toDebugString
      res5: String =
      MappedRDD[4] at map at <console>:16 (3 partitions)
        MappedRDD[3] at map at <console>:16 (3 partitions)
          FilteredRDD[2] at filter at <console>:14 (3 partitions)
            MappedRDD[1] at textFile at <console>:12 (3 partitions)
            HadoopRDD[0] at textFile at <console>:12 (3 partitions)
  27. Spark Deconstructed: Log Mining Example. [Diagram: Driver and three Workers] (same code as slide 23; the callout marks action 2 as “the other part”)
  28. Spark Deconstructed: Log Mining Example. [Diagram: Driver; the Workers hold HDFS blocks 1, 2, 3] (same code)
  29. Spark Deconstructed: Log Mining Example. [Diagram: unchanged; Driver and Workers with blocks 1–3] (same code)
  30. Spark Deconstructed: Log Mining Example. [Diagram: each Worker reads its HDFS block] (same code)
  31. Spark Deconstructed: Log Mining Example. [Diagram: each Worker processes its block, then caches the data (caches 1–3)] (same code)
  32. Spark Deconstructed: Log Mining Example. [Diagram: cached partitions resident on the Workers] (same code)
  33. Spark Deconstructed: Log Mining Example. [Diagram: Driver; Workers with blocks 1–3 and caches 1–3] (same code as slide 23, with the “discussing the other part” callout)
  34. Spark Deconstructed: Log Mining Example. [Diagram: each Worker processes from its cache] (same code, with the “discussing the other part” callout)
  35. Spark Deconstructed: Log Mining Example. [Diagram: Driver; Workers with blocks 1–3 and caches 1–3] (same code, with the “discussing the other part” callout)
  36. Spark Deconstructed: Looking at the RDD transformations and actions from another perspective… [Diagram: transformations chain RDD → RDD → RDD → RDD; an action on the final RDD then produces a value] (same code as slide 23)
  37. Spark Deconstructed: [Diagram: the base RDD]
      // base RDD
      val lines = sc.textFile("hdfs://...")
  38. Spark Deconstructed: [Diagram: transformations chain RDD → RDD → RDD → RDD]
      // transformed RDDs
      val errors = lines.filter(_.startsWith("ERROR"))
      val messages = errors.map(_.split("\t")).map(r => r(1))
      messages.cache()
  39. Spark Deconstructed: [Diagram: an action on the final RDD returns a value]
      // action 1
      messages.filter(_.contains("mysql")).count()
  40. Unifying the Pieces
  41. Unifying the Pieces: Spark SQL
      // http://spark.apache.org/docs/latest/sql-programming-guide.html

      val sqlContext = new org.apache.spark.sql.SQLContext(sc)
      import sqlContext._

      // define the schema using a case class
      case class Person(name: String, age: Int)

      // create an RDD of Person objects and register it as a table
      val people = sc.textFile("examples/src/main/resources/people.txt")
        .map(_.split(","))
        .map(p => Person(p(0), p(1).trim.toInt))

      people.registerTempTable("people")

      // SQL statements can be run using the SQL methods provided by sqlContext
      val teenagers = sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")

      // results of SQL queries are SchemaRDDs and support all the
      // normal RDD operations…
      // columns of a row in the result can be accessed by ordinal
      teenagers.map(t => "Name: " + t(0)).collect().foreach(println)
  42. Unifying the Pieces: Spark Streaming
      // http://spark.apache.org/docs/latest/streaming-programming-guide.html

      import org.apache.spark.streaming._
      import org.apache.spark.streaming.StreamingContext._

      // create a StreamingContext with a SparkConf configuration
      val ssc = new StreamingContext(sparkConf, Seconds(10))

      // create a DStream that will connect to serverIP:serverPort
      val lines = ssc.socketTextStream(serverIP, serverPort)

      // split each line into words
      val words = lines.flatMap(_.split(" "))

      // count each word in each batch
      val pairs = words.map(word => (word, 1))
      val wordCounts = pairs.reduceByKey(_ + _)

      // print a few of the counts to the console
      wordCounts.print()

      ssc.start()             // start the computation
      ssc.awaitTermination()  // wait for the computation to terminate
  43. Unifying the Pieces: MLlib
      “MLI: An API for Distributed Machine Learning” – Evan Sparks, Ameet Talwalkar, et al., International Conference on Data Mining (2013) http://arxiv.org/abs/1310.5426

      // http://spark.apache.org/docs/latest/mllib-guide.html

      import org.apache.spark.mllib.clustering.KMeans
      import org.apache.spark.mllib.linalg.Vectors

      // train_data: an RDD of Vector (illustrative parsing of space-separated doubles)
      val train_data = sc.textFile("data/train.txt").map(s => Vectors.dense(s.split(' ').map(_.toDouble)))
      val model = KMeans.train(train_data, 10, 20)  // k = 10, maxIterations = 20

      // evaluate the model
      val test_data = sc.textFile("data/test.txt").map(s => Vectors.dense(s.split(' ').map(_.toDouble)))
      test_data.map(t => model.predict(t)).collect().foreach(println)
  44. Unifying the Pieces: GraphX
      // http://spark.apache.org/docs/latest/graphx-programming-guide.html

      import org.apache.spark.graphx._
      import org.apache.spark.rdd.RDD

      case class Peep(name: String, age: Int)

      val vertexArray = Array(
        (1L, Peep("Kim", 23)), (2L, Peep("Pat", 31)),
        (3L, Peep("Chris", 52)), (4L, Peep("Kelly", 39)),
        (5L, Peep("Leslie", 45))
      )
      val edgeArray = Array(
        Edge(2L, 1L, 7), Edge(2L, 4L, 2),
        Edge(3L, 2L, 4), Edge(3L, 5L, 3),
        Edge(4L, 1L, 1), Edge(5L, 3L, 9)
      )

      val vertexRDD: RDD[(Long, Peep)] = sc.parallelize(vertexArray)
      val edgeRDD: RDD[Edge[Int]] = sc.parallelize(edgeArray)
      val g: Graph[Peep, Int] = Graph(vertexRDD, edgeRDD)

      val results = g.triplets.filter(t => t.attr > 7)

      for (triplet <- results.collect) {
        println(s"${triplet.srcAttr.name} loves ${triplet.dstAttr.name}")
      }
  45. Spark Streaming
  46. Spark Streaming: Requirements. Let’s consider the top-level requirements for a streaming framework: • clusters scalable to hundreds of nodes • low latency, in the range of seconds (meets 90% of use-case needs) • efficient recovery from failures (which is a hard problem in CS) • integration with batch: many companies run the same business logic both online and offline
  47. Spark Streaming: Requirements. Therefore, run a streaming computation as a series of very small, deterministic batch jobs: • chop up the live stream into batches of X seconds • Spark treats each batch of data as an RDD and processes it using RDD operations • finally, the processed results of the RDD operations are returned in batches
  48. Spark Streaming: Requirements. Therefore, run a streaming computation as a series of very small, deterministic batch jobs (see the sketch below): • batch sizes as low as ½ second, latency of about 1 second • potential for combining batch processing and streaming processing in the same system
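In code, the batch size is simply the interval handed to the StreamingContext; a minimal sketch (the app name and 1-second interval are illustrative):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf().setAppName("MicroBatchSketch")
    // every 1-second slice of the live stream becomes one small,
    // deterministic batch job over an RDD
    val ssc = new StreamingContext(conf, Seconds(1))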
  49. Spark Streaming: Integration. Data can be ingested from many sources: Kafka, Flume, Twitter, ZeroMQ, TCP sockets, etc. Results can be pushed out to filesystems, databases, live dashboards, etc. (see the sketch below). Spark’s built-in machine learning algorithms and graph processing algorithms can be applied to data streams
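For instance, a sketch of the push-out side using DStream.saveAsTextFiles, which writes each batch’s results to the filesystem (the wordCounts stream is assumed from slide 42; the path prefix is hypothetical):

    // assuming wordCounts: DStream[(String, Int)] as on slide 42;
    // each batch is written to a new directory named <prefix>-<batch time>.txt
    wordCounts.saveAsTextFiles("hdfs:///streaming/wordcounts", "txt")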
  50. Spark Streaming: Timeline. 2012: project started; 2013: alpha release (Spark 0.7); 2014: graduated (Spark 0.9). “Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing” – Matei Zaharia, Tathagata Das, Haoyuan Li, Timothy Hunter, Scott Shenker, Ion Stoica, Berkeley EECS (2012-12-14) www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-259.pdf. Project lead: Tathagata Das @tathadas
  51. Spark Streaming: Requirements. Typical kinds of applications: • datacenter operations • web app funnel metrics • ad optimization • anti-fraud • various telematics …and much, much more!
  52. Spark Streaming: Some Excellent Resources. Programming Guide: spark.apache.org/docs/latest/streaming-programming-guide.html • TD @ Spark Summit 2014: youtu.be/o-NXwFrNAWQ?list=PLTPXxbhUt-YWGNTaDj6HSjnHMxiTD1HCR • “Deep Dive into Spark Streaming”: slideshare.net/spark-project/deep-divewithsparkstreaming-tathagatadassparkmeetup20130617 • Spark Reference Applications: databricks.gitbooks.io/databricks-spark-reference-applications/
  53. Quiz: name the bits and pieces…
      import org.apache.spark.streaming._
      import org.apache.spark.streaming.StreamingContext._

      // create a StreamingContext with a SparkConf configuration
      val ssc = new StreamingContext(sparkConf, Seconds(10))

      // create a DStream that will connect to serverIP:serverPort
      val lines = ssc.socketTextStream(serverIP, serverPort)

      // split each line into words
      val words = lines.flatMap(_.split(" "))

      // count each word in each batch
      val pairs = words.map(word => (word, 1))
      val wordCounts = pairs.reduceByKey(_ + _)

      // print a few of the counts to the console
      wordCounts.print()

      ssc.start()
      ssc.awaitTermination()
  54. A Look Ahead…
  55. A Look Ahead… 1. Greater Stability and Robustness • improved high availability via write-ahead logs, enabled as an optional feature for Spark 1.2 (see the sketch below) • NB: Spark Standalone can already restart the driver • excellent discussion of fault-tolerance (2012): cs.duke.edu/~kmoses/cps516/dstream.html • stay tuned: meetup.com/spark-users/events/218108702/
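A minimal sketch of enabling that optional write-ahead log in Spark 1.2 for receiver-based streams, via the spark.streaming.receiver.writeAheadLog.enable flag; the checkpoint path and app name are illustrative:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf()
      .setAppName("HADemo")
      // optional in Spark 1.2: journal received data before processing it
      .set("spark.streaming.receiver.writeAheadLog.enable", "true")
    val ssc = new StreamingContext(conf, Seconds(10))
    ssc.checkpoint("hdfs:///streaming/checkpoints")  // the WAL lives alongside checkpoints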
  56. A Look Ahead… 2. Support for more environments, i.e., beyond Hadoop • three use cases currently depend on HDFS • those are being abstracted out • could then use Cassandra, etc.
  57. A Look Ahead… 3. Improved support for Python • e.g., Kafka is not exposed through Python yet (a goal for the next release)
  58. A Look Ahead… 4. Better flow control • a somewhat longer-term goal, plus it is a hard problem in general • poses interesting challenges beyond what other streaming systems have faced
  59. Demos
  60. Demos, as time permits: brand new Python support for Streaming in 1.2: github.com/apache/spark/tree/master/examples/src/main/python/streaming • Twitter Streaming Language Classifier: databricks.gitbooks.io/databricks-spark-reference-applications/content/twitter_classifier/README.html • For more Spark learning resources online: databricks.com/spark-training-resources
  61. Demo: PySpark Streaming Network Word Count
      import sys
      from pyspark import SparkContext
      from pyspark.streaming import StreamingContext

      sc = SparkContext(appName="PyStreamNWC", master="local[*]")
      ssc = StreamingContext(sc, 5)  # 5-second batches

      lines = ssc.socketTextStream(sys.argv[1], int(sys.argv[2]))

      counts = lines.flatMap(lambda line: line.split(" ")) \
                    .map(lambda word: (word, 1)) \
                    .reduceByKey(lambda a, b: a + b)

      counts.pprint()

      ssc.start()
      ssc.awaitTermination()
  62. Demo: PySpark Streaming Network Word Count – Stateful
      import sys
      from pyspark import SparkContext
      from pyspark.streaming import StreamingContext

      def updateFunc(new_values, last_sum):
          return sum(new_values) + (last_sum or 0)

      sc = SparkContext(appName="PyStreamNWC", master="local[*]")
      ssc = StreamingContext(sc, 5)  # 5-second batches
      ssc.checkpoint("checkpoint")

      lines = ssc.socketTextStream(sys.argv[1], int(sys.argv[2]))

      counts = lines.flatMap(lambda line: line.split(" ")) \
                    .map(lambda word: (word, 1)) \
                    .updateStateByKey(updateFunc) \
                    .transform(lambda x: x.sortByKey())

      counts.pprint()

      ssc.start()
      ssc.awaitTermination()
  63. Complementary Frameworks
  64. Spark Integrations: Discover Insights, Clean Up Your Data, Run Sophisticated Analytics, Integrate With Many Other Systems, Use Lots of Different Data Sources: cloud-based notebooks… ETL… the Hadoop ecosystem… widespread use of PyData… advanced analytics in streaming… rich custom search… web apps for data APIs… low-latency + multi-tenancy…
  65. Spark Integrations: Unified platform for building Big Data pipelines. Databricks Cloud: databricks.com/blog/2014/07/14/databricks-cloud-making-big-data-easy.html youtube.com/watch?v=dJQ5lV5Tldw#t=883
  66. Spark Integrations: The proverbial Hadoop ecosystem. Spark + Hadoop + HBase + etc.: mapr.com/products/apache-spark vision.cloudera.com/apache-spark-in-the-apache-hadoop-ecosystem/ hortonworks.com/hadoop/spark/ databricks.com/blog/2014/05/23/pivotal-hadoop-integrates-the-full-apache-spark-stack.html [Diagram: unified compute atop the Hadoop ecosystem]
  67. Spark Integrations: Leverage widespread use of Python. Spark + PyData: spark-summit.org/2014/talk/A-platform-for-large-scale-neuroscience cwiki.apache.org/confluence/display/SPARK/PySpark+Internals [Diagram: unified compute + PyData]
  68. Spark Integrations: Advanced analytics for streaming use cases. Kafka + Spark + Cassandra: datastax.com/documentation/datastax_enterprise/4.5/datastax_enterprise/spark/sparkIntro.html http://helenaedelson.com/?p=991 github.com/datastax/spark-cassandra-connector github.com/dibbhatt/kafka-spark-consumer [Diagram: data streams → unified compute → columnar key-value store]
  69. Spark Integrations: Rich search, immediate insights. Spark + ElasticSearch: databricks.com/blog/2014/06/27/application-spotlight-elasticsearch.html elasticsearch.org/guide/en/elasticsearch/hadoop/current/spark.html spark-summit.org/2014/talk/streamlining-search-indexing-using-elastic-search-and-spark [Diagram: unified compute + document search]
  70. Spark Integrations: Building data APIs with web apps. Spark + Play: typesafe.com/blog/apache-spark-and-the-typesafe-reactive-platform-a-match-made-in-heaven [Diagram: unified compute + web apps]
  71. Spark Integrations: The case for multi-tenancy. Spark + Mesos: spark.apache.org/docs/latest/running-on-mesos.html + Mesosphere + Google Cloud Platform: ceteri.blogspot.com/2014/09/spark-atop-mesos-on-google-cloud.html [Diagram: unified compute + cluster resources]
  72. Because Use Cases
  73. Because Use Cases: 40+ known production use cases
  74. Because Use Cases: Twitter. “Spark at Twitter: Evaluation & Lessons Learnt” – Sriram Krishnan slideshare.net/krishflix/seattle-spark-meetup-spark-at-twitter • Spark can be more interactive, efficient than MR • support for iterative algorithms and caching • more generic than traditional MapReduce • Why is Spark faster than Hadoop MapReduce? • fewer I/O synchronization barriers • less expensive shuffle • the more complex the DAG, the greater the performance improvement
  75. Because Use Cases: eBay/PayPal. “Using Spark to Ignite Data Analytics” ebaytechblog.com/2014/05/28/using-spark-to-ignite-data-analytics/
  76. Because Use Cases: Yahoo! “Hadoop and Spark Join Forces in Yahoo” – Andy Feng spark-summit.org/talk/feng-hadoop-and-spark-join-forces-at-yahoo/
  77. Because Use Cases: Spotify. “Collaborative Filtering with Spark” – Chris Johnson slideshare.net/MrChrisJohnson/collaborative-filtering-with-spark • collaborative filtering (ALS) for music recommendation (see the sketch below) • Hadoop suffers from I/O overhead • shows a progression of code rewrites, converting a Hadoop-based app into efficient use of Spark
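For context, a minimal MLlib ALS sketch in Scala (not Spotify’s code; the ratings path and parameters are illustrative, and sc is the shell’s SparkContext):

    import org.apache.spark.mllib.recommendation.{ALS, Rating}

    // one "user,item,rating" triple per line
    val ratings = sc.textFile("hdfs:///ratings.csv").map { line =>
      val Array(user, item, rating) = line.split(',')
      Rating(user.toInt, item.toInt, rating.toDouble)
    }

    // train a collaborative filter: 10 latent factors, 10 iterations
    val model = ALS.train(ratings, 10, 10)
    println(model.predict(1, 42))  // predicted rating of item 42 by user 1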
  78. Because Use Cases: Sharethrough. “Sharethrough Uses Spark Streaming to Optimize Bidding in Real Time” – Russell Cardullo, Michael Ruggier, 2014-03-25 databricks.com/blog/2014/03/25/sharethrough-and-spark-streaming.html • the profile of a 24 x 7 streaming app is different than an hourly batch job… • take time to validate output against the input… • confirm that supporting objects are being serialized… • the output of your Spark Streaming job is only as reliable as the queue that feeds Spark… • monoids…
  79. Because Use Cases: Ooyala. “Productionizing a 24/7 Spark Streaming service on YARN” – Issac Buenrostro, Arup Malakar, 2014-06-30 spark-summit.org/2014/talk/productionizing-a-247-spark-streaming-service-on-yarn • state-of-the-art ingestion pipeline, processing over two billion video events a day • how do you ensure 24/7 availability and fault tolerance? • what are the best practices for Spark Streaming and its integration with Kafka and YARN? • how do you monitor and instrument the various stages of the pipeline?
  80. Because Use Cases: Viadeo. “Spark Streaming As Near Realtime ETL” – Djamel Zouaoui, 2014-09-18 slideshare.net/DjamelZouaoui/spark-streaming • Spark Streaming is topology-free • workers and receivers are autonomous and independent • paired with Kafka, RabbitMQ • 8 machines / 120 cores • use case for a recommender system • issues: how to handle lost data, serialization
  81. Because Use Cases: Stratio. “Stratio Streaming: a new approach to Spark Streaming” – David Morales, Oscar Mendez, 2014-06-30 spark-summit.org/2014/talk/stratio-streaming-a-new-approach-to-spark-streaming • Stratio Streaming is the union of a real-time messaging bus with a complex event processing engine using Spark Streaming • allows the creation of streams and queries on the fly • paired with the Siddhi CEP engine and Apache Kafka • added global features to the engine, such as auditing and statistics
  82. Because Use Cases: Guavus. “Guavus Embeds Apache Spark into its Operational Intelligence Platform Deployed at the World’s Largest Telcos” – Eric Carr, 2014-09-25 databricks.com/blog/2014/09/25/guavus-embeds-apache-spark-into-its-operational-intelligence-platform-deployed-at-the-worlds-largest-telcos.html • 4 of the top 5 mobile network operators, 3 of the top 5 Internet backbone providers, 80% of MSOs in NorAm • analyzing 50% of US mobile data traffic, 2.5+ PB/day • latency is critical for resolving operational issues before they cascade: 2.5 MM transactions per second • “analyze first,” not “store first, ask questions later”
  83. Because Use Cases: Typesafe. “Why Spark is the Next Top (Compute) Model” – Dean Wampler slideshare.net/deanwampler/spark-the-next-top-compute-model • Hadoop: most algorithms are much harder to implement in this restrictive map-then-reduce model • Spark: fine-grained “combinators” for composing algorithms • slide #67, any questions?
  84. Because Use Cases: DataStax. “Installing the Cassandra / Spark OSS Stack” – Al Tobey tobert.github.io/post/2014-07-15-installing-cassandra-spark-stack.html • install + config for Cassandra and Spark together • spark-cassandra-connector integration • examples show a Spark shell that can access tables in Cassandra as RDDs with types pre-mapped and ready to go
  85. Because Use Cases: Conviva. “One platform for all: real-time, near-real-time, and offline video analytics on Spark” – Davis Shepherd, Xi Liu spark-summit.org/talk/one-platform-for-all-real-time-near-real-time-and-offline-video-analytics-on-spark
  86. Resources
  87. certification: Apache Spark developer certificate program • http://oreilly.com/go/sparkcert • defined by Spark experts @Databricks • assessed by O’Reilly Media • establishes the bar for Spark expertise
  88. community: spark.apache.org/community.html • video+slide archives: spark-summit.org • events worldwide: goo.gl/2YqJZK • resources: databricks.com/spark-training-resources • workshops: databricks.com/spark-training (Intro to Spark, Spark AppDev, Spark DevOps, Spark DataSci, Distributed ML on Spark, Streaming Apps on Spark, Spark + Cassandra)
  89. books: “Fast Data Processing with Spark” – Holden Karau, Packt (2013) shop.oreilly.com/product/9781782167068.do • “Spark in Action” – Chris Fregly, Manning (2015*) sparkinaction.com/ • “Learning Spark” – Holden Karau, Andy Konwinski, Matei Zaharia, O’Reilly (2015*) shop.oreilly.com/product/0636920028512.do
  90. events: Big Data Spain, Madrid, Nov 17-18: bigdataspain.org • Strata EU, Barcelona, Nov 19-21: strataconf.com/strataeu2014 • Data Day Texas, Austin, Jan 10: datadaytexas.com • Strata CA, San Jose, Feb 18-20: strataconf.com/strata2015 • Spark Summit East, NYC, Mar 18-19: spark-summit.org/east • Strata EU, London, May 5-7: strataconf.com/big-data-conference-uk-2015 • Spark Summit 2015, SF, Jun 15-17: spark-summit.org
  91. presenter: monthly newsletter for updates, events, conf summaries, etc.: liber118.com/pxn/ • “Just Enough Math” – O’Reilly (2014) justenoughmath.com, preview: youtu.be/TQ58cWgdCpA • “Enterprise Data Workflows with Cascading” – O’Reilly (2013) shop.oreilly.com/product/0636920028536.do
