
2017 High Performance Database with Scala, Akka, Spark

Here is my talk at Scala by the Bay 2017, Building a High-Performance Database with Scala, Akka, and Spark. It covers the integration of Akka and Spark; when to use actors, futures, and reactive streams; back pressure; reactive monitoring with Kamon; and machine-speed Scala: how to avoid allocation, copying, and deserialization using high-performance Filo vectors and BinaryRecords.
http://github.com/filodb/FiloDB
http://github.com/velvia/filo

Published in: Engineering

2017 High Performance Database with Scala, Akka, Spark

  1. 1. Building a High-Performance Database with Scala, Akka, and Spark Evan Chan November 2017
  2. 2. Who am I User and contributor to Spark since 0.9, Cassandra since 0.6 Created Spark Job Server and FiloDB Talks at Spark Summit, Cassandra Summit, Strata, Scala Days, etc. http://velvia.github.io/
  3. 3. Why Build a New Streaming Database?
  4. 4. Needs • Ingest HUGE streams of events — IoT etc. • Real-time, low latency, and somewhat flexible queries • Dashboards, quick answers on new data • Flexible schemas and query patterns • Keep your streaming pipeline super simple • Streaming = hardest to debug. Simplicity rules!
  5. 5. Message Queue Events Stream Processing Layer State / Database Happy Users
  6. 6. Spark + HDFS Streaming Kafka Spark Streaming Many small files (microbatches) Dedup, consolidate job Larger efficient files • High latency • Big impedance mismatch between streaming systems and a file system designed for big blobs of data
  7. 7. Cassandra? • Ingest HUGE streams of events — IoT etc. • C* is not efficient for writing raw events • Real-time, low latency, and somewhat flexible queries • C* is real-time, but only low latency for simple lookups. Add Spark => much higher latency • Flexible schemas and query patterns • C* only handles simple lookups
  8. 8. Introducing FiloDB A distributed, columnar time-series/event database. Built for streaming. http://www.github.com/filodb/FiloDB
  9. 9. Message Queue Events Spark Streaming Short term storage, K-V Adhoc, SQL, ML Cassandra FiloDB: Events, ad-hoc, batch Spark Dashboards, maps
  10. 10. 100% Reactive • Scala • Akka Cluster • Spark • Monix / Reactive Streams • Typesafe Config for all configuration • Scodec, Ficus, Enumeratum, Scalactic, etc. • Even most of the performance critical parts are written in Scala :)
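As a small sketch of the "Typesafe Config for all configuration" point on the slide above (the filodb.memtable.max-rows-per-table key is an illustrative assumption, not an actual FiloDB setting):

    import com.typesafe.config.ConfigFactory

    object SettingsExample extends App {
      // Parse an inline config for the sketch; in a real deployment this comes from application.conf.
      val config = ConfigFactory.parseString(
        """filodb.memtable.max-rows-per-table = 200000"""
      ).withFallback(ConfigFactory.load())

      // Hypothetical key, shown only to illustrate typed config access.
      val maxRows = config.getInt("filodb.memtable.max-rows-per-table")
      println(s"Max rows per memtable: $maxRows")
    }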
  11. 11. Scala, Akka, and Spark for Database
  12. 12. Why use Scala and Akka? • Akka Cluster! • Just the right abstractions - streams, futures, Akka, type safety…. • Failure handling and supervision are critical for databases • All the pattern matching and immutable goodness :)
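A minimal sketch of the supervision point above (actor names and the failure policy are illustrative; FiloDB's real actors differ): a parent actor restarts ingestion workers on transient failures and stops them on unrecoverable ones.

    import akka.actor.{Actor, OneForOneStrategy, Props, SupervisorStrategy}
    import akka.actor.SupervisorStrategy.{Restart, Stop}
    import scala.concurrent.duration._

    class DatasetWorker extends Actor {
      def receive: Receive = {
        case rows: Seq[_] => // ... ingest rows; may throw on bad input or I/O errors
      }
    }

    class IngestionSupervisor extends Actor {
      // Restart a worker on transient I/O failures, stop it on anything unrecoverable.
      override val supervisorStrategy: SupervisorStrategy =
        OneForOneStrategy(maxNrOfRetries = 3, withinTimeRange = 1.minute) {
          case _: java.io.IOException => Restart
          case _: Exception           => Stop
        }

      private val worker = context.actorOf(Props[DatasetWorker], "dataset-worker")

      def receive: Receive = { case msg => worker forward msg }
    }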
  13. 13. Scala Big Data Projects • Spark • GeoMesa • Khronus - Akka time-series DB • Sirius - Akka distributed KV Store • FiloDB!
  14. 14. Actors vs Futures vs Observables
  15. 15. One FiloDB Node NodeCoordinatorActor (NCA) DatasetCoordinatorActor (DsCA) DatasetCoordinatorActor (DsCA) Active MemTable Flushing MemTable Reprojector ColumnStore Data, commands
  16. 16. Akka vs Futures NodeCoordinatorActor (NCA) DatasetCoordinatorActor (DsCA) DatasetCoordinatorActor (DsCA) Active MemTable Flushing MemTable Reprojector ColumnStore Data, commands Akka - control flow Core I/O - Futures/Observables
  17. 17. Akka vs Futures • Akka Actors: • External FiloDB node API (remote + cluster) • Async messaging with clients • Cluster/distributed state management • Futures and Observables: • Core I/O • Columnar data processing / ingestion • Type-safe processing stages
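A minimal sketch of this split (CoordinatorActor, ColumnStore, and the message names are illustrative): the actor owns the external protocol and control flow, while core I/O returns Futures whose results are piped back to the caller without blocking.

    import akka.actor.Actor
    import akka.pattern.pipe
    import scala.concurrent.Future

    case class DropDataset(name: String)
    sealed trait Response
    case object DatasetDropped extends Response
    case class Failed(ex: Throwable) extends Response

    trait ColumnStore {
      def dropDataset(name: String): Future[Response]   // core I/O lives in Future-land
    }

    class CoordinatorActor(columnStore: ColumnStore) extends Actor {
      import context.dispatcher

      def receive: Receive = {
        case DropDataset(name) =>
          // The actor never blocks; the Future's result is piped back to the sender.
          columnStore.dropDataset(name)
            .recover { case ex => Failed(ex) }
            .pipeTo(sender())
      }
    }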
  18. 18. Futures for Single Actions /** * Clears all data from the column store for that given projection, for all versions. * More like a truncation, not a drop. * NOTE: please make sure there are no reprojections or writes going on before calling this */ def clearProjectionData(projection: Projection): Future[Response] /** * Completely and permanently drops the dataset from the column store. * @param dataset the DatasetRef for the dataset to drop. */ def dropDataset(dataset: DatasetRef): Future[Response] /** * Appends the ChunkSets and incremental indices in the segment to the column store. * @param segment the ChunkSetSegment to write / merge to the columnar store * @param version the version # to write the segment to * @return Success. Future.failure(exception) otherwise. */ def appendSegment(projection: RichProjection, segment: ChunkSetSegment, version: Int): Future[Response]
  19. 19. Monix / Reactive Streams • http://monix.io • “observable sequences that are exposed as asynchronous streams, expanding on the observer pattern, strongly inspired by ReactiveX and by Scalaz, but designed from the ground up for back-pressure and made to cleanly interact with Scala’s standard library, compatible out-of-the-box with the Reactive Streams protocol” • Much better than Future[Iterator[_]]
  20. 20. Monix / Reactive Streams def readChunks(projection: RichProjection, columns: Seq[Column], version: Int, partMethod: PartitionScanMethod, chunkMethod: ChunkScanMethod = AllChunkScan): Observable[ChunkSetReader] = { scanPartitions(projection, version, partMethod) // Partitions to pipeline of single chunks .flatMap { partIndex => stats.incrReadPartitions(1) readPartitionChunks(projection.datasetRef, version, columns, partIndex, chunkMethod) // Collate single chunks to ChunkSetReaders }.scan(new ChunkSetReaderAggregator(columns, stats)) { _ add _ } .collect { case agg: ChunkSetReaderAggregator if agg.canEmit => agg.emit() } } }
  21. 21. Functional Reactive Stream Processing • Ingest stream merged with flush commands • Built in async/parallel tasks via mapAsync • Notify on end of stream, errors val combinedStream = Observable.merge(stream.map(SomeData), flushStream) combinedStream.map { case SomeData(records) => shard.ingest(records) None case FlushCommand(group) => shard.switchGroupBuffers(group) Some(FlushGroup(shard.shardNum, group, shard.latestOffset)) }.collect { case Some(flushGroup) => flushGroup } .mapAsync(numParallelFlushes)(shard.createFlushTask _) .foreach { x => } .recover { case ex: Exception => errHandler(ex) }
  22. 22. Akka Cluster and Spark
  23. 23. Spark/Akka Cluster Setup Driver NodeClusterActor Client Executor NCA DsCA1 DsCA2 Executor NCA DsCA1 DsCA2
  24. 24. Adding one executor Driver NodeClusterActor Client executor1 NCA DsCA1 DsCA2 State: Executors -> (executor1) MemberUp ActorSelection ActorRef
  25. 25. Adding second executor Driver NodeClusterActor Client executor1 NCA DsCA1 DsCA2 State: Executors -> (executor1, executor2) MemberUp ActorSelection ActorRef executor2 NCA DsCA1 DsCA2
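A sketch of the membership flow drawn on slides 24-25 (the /user/coordinator path and the state representation are assumptions): on MemberUp, the NodeClusterActor resolves the new node's coordinator via ActorSelection and records the resulting ActorRef in its executor state.

    import akka.actor.{Actor, ActorIdentity, ActorRef, Identify}
    import akka.cluster.Cluster
    import akka.cluster.ClusterEvent.{InitialStateAsEvents, MemberUp}

    class NodeClusterActorSketch extends Actor {
      val cluster = Cluster(context.system)
      var executors = Set.empty[ActorRef]    // State: Executors -> (executor1, executor2, ...)

      override def preStart(): Unit =
        cluster.subscribe(self, initialStateMode = InitialStateAsEvents, classOf[MemberUp])
      override def postStop(): Unit = cluster.unsubscribe(self)

      def receive: Receive = {
        case MemberUp(member) =>
          // Ask the coordinator actor on the newly joined node to identify itself.
          context.actorSelection(s"${member.address}/user/coordinator") ! Identify(member.address)

        case ActorIdentity(_, Some(ncaRef)) =>
          executors += ncaRef                // the ActorSelection resolved to a concrete ActorRef
          context.watch(ncaRef)

        case ActorIdentity(_, None) =>       // that node does not host a coordinator; ignore
      }
    }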
  26. 26. Sending a command Driver NodeClusterActor Client Executor NCA DsCA1 DsCA2 Executor NCA DsCA1 DsCA2 Flush()
  27. 27. Yes, Akka in Spark • Columnar ingestion is stateful - need stickiness of state. This is inherently difficult in Spark. • Akka (cluster) gives us a separate, asynchronous control channel to talk to FiloDB ingestors • Spark only gives data flow primitives, not async messaging • We need to route incoming records to the correct ingestion node. Sorting data is inefficient and forces all nodes to wait for sorting to be done.
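A sketch of that routing idea (PartitionMapSketch, IngestRows, and the hashing scheme are illustrative, not FiloDB's actual shard mapping): each Spark task looks up the ingestion node actor for a record's partition key and sends rows there directly, instead of sorting or shuffling the data.

    import akka.actor.ActorRef

    case class IngestRows(rows: Seq[Map[String, Any]])

    class PartitionMapSketch(nodes: Vector[ActorRef]) {
      // Route by hash of the partition key; the real map is maintained by the NodeClusterActor.
      def actorFor(partitionKey: String): ActorRef =
        nodes((partitionKey.hashCode & 0x7fffffff) % nodes.size)
    }

    def routeBatch(batch: Seq[Map[String, Any]], partMap: PartitionMapSketch): Unit =
      batch.groupBy(row => partMap.actorFor(row("partitionKey").toString))
           .foreach { case (nodeActor, rows) => nodeActor ! IngestRows(rows) }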
  28. 28. Data Ingestion Setup Executor NCA DsCA1 DsCA2 task0 task1 Row Source Actor Row Source Actor Executor NCA DsCA1 DsCA2 task0 task1 Row Source Actor Row Source Actor Node Cluster Actor Partition Map
  29. 29. FiloDB NodeFiloDB Node FiloDB separate nodes Executor NCA DsCA1 DsCA2 task0 task1 Row Source Actor Row Source Actor Executor NCA DsCA1 DsCA2 task0 task1 Row Source Actor Row Source Actor Node Cluster Actor Partition Map
  30. 30. Testing Akka Cluster • MultiNodeSpec / sbt-multi-jvm • NodeClusterSpec • Tests joining of different cluster nodes and partition map updates • Is partition map updated properly if a cluster node goes down — inject network failures • Lessons
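A hedged sketch of the MultiNodeSpec layout (names and assertions are illustrative; FiloDB's NodeClusterSpec does more): sbt-multi-jvm discovers each *MultiJvmNodeN class and runs it in its own JVM, and enterBarrier keeps the nodes in lock-step.

    import akka.cluster.Cluster
    import akka.remote.testkit.{MultiNodeConfig, MultiNodeSpec}
    import akka.testkit.ImplicitSender
    import org.scalatest.{BeforeAndAfterAll, Matchers, WordSpecLike}

    object TwoNodeConfig extends MultiNodeConfig {
      val first  = role("first")
      val second = role("second")
    }

    // sbt-multi-jvm runs each of these in a separate JVM, keyed by the MultiJvmNodeN suffix.
    class PartitionMapSpecMultiJvmNode1 extends PartitionMapSpec
    class PartitionMapSpecMultiJvmNode2 extends PartitionMapSpec

    abstract class PartitionMapSpec extends MultiNodeSpec(TwoNodeConfig)
      with WordSpecLike with Matchers with BeforeAndAfterAll with ImplicitSender {
      import TwoNodeConfig._

      override def initialParticipants: Int = roles.size
      override def beforeAll(): Unit = multiNodeSpecBeforeAll()
      override def afterAll(): Unit  = multiNodeSpecAfterAll()

      "A two-node FiloDB cluster" must {
        "join and agree on membership" in {
          // Both JVMs join the cluster seeded on the first node.
          Cluster(system).join(node(first).address)
          enterBarrier("joined")
          // ...assert that the partition map now covers both nodes...
          enterBarrier("finished")
        }
      }
    }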
  31. 31. Kamon Tracing • http://kamon.io • One trace can encapsulate multiple Future steps all executing on different threads • Tunable tracing levels • Summary stats and histograms for segments • Super useful for production debugging of reactive stack
  32. 32. Kamon Tracing def appendSegment(projection: RichProjection, segment: ChunkSetSegment, version: Int): Future[Response] = Tracer.withNewContext("append-segment") { val ctx = Tracer.currentContext stats.segmentAppend() if (segment.chunkSets.isEmpty) { stats.segmentEmpty() return(Future.successful(NotApplied)) } for { writeChunksResp <- writeChunks(projection.datasetRef, version, segment, ctx) writeIndexResp <- writeIndices(projection, version, segment, ctx) if writeChunksResp == Success } yield { ctx.finish() writeIndexResp } } private def writeChunks(dataset: DatasetRef, version: Int, segment: ChunkSetSegment, ctx: TraceContext): Future[Response] = { asyncSubtrace(ctx, "write-chunks", "ingestion") { val binPartition = segment.binaryPartition val segmentId = segment.segmentId val chunkTable = getOrCreateChunkTable(dataset) Future.traverse(segment.chunkSets) { chunkSet => chunkTable.writeChunks(binPartition, version, segmentId, chunkSet.info.id, chunkSet.chunks, stats) }.map { responses => responses.head } } }
  33. 33. Kamon Metrics • Uses HDRHistogram for much finer and more accurate buckets • Built-in metrics for Akka actors, Spray, Akka-Http, Play, etc. etc. KAMON trace name=append-segment n=2863 min=765952 p50=2113536 p90=3211264 p95=3981312 p99=9895936 p999=16121856 max=19529728 KAMON trace-segment name=write-chunks n=2864 min=436224 p50=1597440 p90=2637824 p95=3424256 p99=9109504 p999=15335424 max=18874368 KAMON trace-segment name=write-index n=2863 min=278528 p50=432128 p90=544768 p95=598016 p99=888832 p999=2260992 max=8355840
  34. 34. Validation: Scalactic private def getColumnsFromNames(allColumns: Seq[Column], columnNames: Seq[String]): Seq[Column] Or BadSchema = { if (columnNames.isEmpty) { Good(allColumns) } else { val columnMap = allColumns.map { c => c.name -> c }.toMap val missing = columnNames.toSet -- columnMap.keySet if (missing.nonEmpty) { Bad(MissingColumnNames(missing.toSeq, "projection")) } else { Good(columnNames.map(columnMap)) } } } for { computedColumns <- getComputedColumns(dataset.name, allColIds, columns) dataColumns <- getColumnsFromNames(columns, normProjection.columns) richColumns = dataColumns ++ computedColumns // scalac has problems dealing with (a, b, c) <- getColIndicesAndType... apparently segStuff <- getColIndicesAndType(richColumns, Seq(normProjection.segmentColId), "segment") keyStuff <- getColIndicesAndType(richColumns, normProjection.keyColIds, "row") partStuff <- getColIndicesAndType(richColumns, dataset.partitionColumns, "partition") } yield { • Notice how multiple validations compose!
  35. 35. Machine-Speed Scala
  36. 36. How do you go REALLY fast? • Don’t serialize • Don’t allocate • Don’t copy
  37. 37. Filo fast • Filo binary vectors - 2 billion records/sec • Spark InMemoryColumnStore - 125 million records/sec • Spark CassandraColumnStore - 25 million records/sec
  38. 38. Filo: High Performance Binary Vectors • Designed for NoSQL, not a file format • random or linear access • on or off heap • missing value support • Scala only, but cross-platform support possible. http://github.com/velvia/filo is a binary data vector library designed for extreme read performance with minimal deserialization costs.
  39. 39. Billions of Ops / Sec • JMH benchmark: 0.5ns per FiloVector element access / add • 2 Billion adds per second - single threaded • Who said Scala cannot be fast? • Spark API (row-based) limits performance significantly val randomInts = (0 until numValues).map(i => util.Random.nextInt) val randomIntsAray = randomInts.toArray val filoBuffer = VectorBuilder(randomInts).toFiloBuffer val sc = FiloVector[Int](filoBuffer) @Benchmark @BenchmarkMode(Array(Mode.AverageTime)) @OutputTimeUnit(TimeUnit.MICROSECONDS) def sumAllIntsFiloApply(): Int = { var total = 0 for { i <- 0 until numValues optimized } { total += sc(i) } total }
  40. 40. JVM Inlining • Very small methods can be inlined by the JVM • final def avoids virtual method dispatch. • Thus methods in traits, abstract classes not inlinable val base = baseReader.readInt(0) final def apply(i: Int): Int = base + dataReader.read(i) case (32, _) => new TypedBufferReader[Int] { final def read(i: Int): Int = reader.readInt(i) } final def readInt(i: Int): Int = unsafe.getInt(byteArray, (offset + i * 4).toLong) 0.5ns/read is achieved through a stack of very small methods:
  41. 41. BinaryRecord • Tough problem: FiloDB must handle many different datasets, each with different schemas • Cannot rely on static types and standard serialization mechanisms - case classes, Protobuf, etc. • Serialization very costly, especially strings • Solution: BinaryRecord
  42. 42. BinaryRecord II • BinaryRecord is a binary (i.e. transport-ready) record class that supports any schema or mix of column types • Values can be extracted or written with no serialization cost • UTF8-encoded string class • String compare as fast as native Java strings • Immutable API once built
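An illustrative sketch of the idea (not FiloDB's actual BinaryRecord layout): a record is just a byte array plus field offsets, so primitive fields are read in place and UTF8 strings are compared byte-by-byte, with no objects allocated and nothing deserialized.

    import java.nio.{ByteBuffer, ByteOrder}

    final class SimpleBinaryRecord(val bytes: Array[Byte], val fieldOffsets: Array[Int]) {
      private val buf = ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN)

      def getInt(field: Int): Int   = buf.getInt(fieldOffsets(field))    // read in place, no deserialization
      def getLong(field: Int): Long = buf.getLong(fieldOffsets(field))

      // Strings stored as [length: Int][UTF8 bytes]; compare bytes directly, no String allocated.
      def stringCompare(field: Int, other: SimpleBinaryRecord, otherField: Int): Int = {
        val off1 = fieldOffsets(field)
        val off2 = other.fieldOffsets(otherField)
        val len1 = buf.getInt(off1)
        val len2 = other.buf.getInt(off2)
        var i = 0
        while (i < math.min(len1, len2)) {
          val cmp = (bytes(off1 + 4 + i) & 0xff) - (other.bytes(off2 + 4 + i) & 0xff)
          if (cmp != 0) return cmp
          i += 1
        }
        len1 - len2
      }
    }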
  43. 43. Use Case: Sorting • Regular sorting: deserialize record, create sort key, compare sort key • BinaryRecord sorting: binary compare fields directly — no deserialization, no object allocations
  44. 44. Regular Sorting Protobuf/Avro etc record Deserialized instance Sort Key Protobuf/Avro etc record Deserialized instance Sort Key Cmp
  45. 45. BinaryRecord Sorting • BinaryRecord sorting: binary compare fields directly — no deserialization, no object allocations name: Str age: Int lastTimestamp: Long group: Str name: Str age: Int lastTimestamp: Long group: Str
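Continuing the SimpleBinaryRecord sketch above, sorting reduces to a byte comparison per field; no records are deserialized and no sort-key objects are built (field index 0 stands in for the name field here).

    // Orders records by one string field, comparing UTF8 bytes in place.
    def recordOrdering(nameField: Int = 0): Ordering[SimpleBinaryRecord] =
      new Ordering[SimpleBinaryRecord] {
        def compare(a: SimpleBinaryRecord, b: SimpleBinaryRecord): Int =
          a.stringCompare(nameField, b, nameField)
      }

    // records.sorted(recordOrdering()) sorts without deserializing any record or building sort keys.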
  46. 46. SBT-JMH • Super useful tool to leverage JMH, the best micro benchmarking harness • JMH is written by the JDK folks
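Typical sbt-jmh wiring looks roughly like this (the plugin version and the benchmark filter are illustrative; check the sbt-jmh README for current values):

    // project/plugins.sbt
    addSbtPlugin("pl.project13.scala" % "sbt-jmh" % "0.3.7")

    // build.sbt: keep benchmarks in their own module and enable the plugin there
    lazy val benchmarks = (project in file("benchmarks"))
      .enablePlugins(JmhPlugin)

    // From the sbt shell, e.g.:
    //   benchmarks/jmh:run -i 10 -wi 5 -f 1 .*FiloVector.*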
  47. 47. In Summary • Scala, Akka, reactive can give you both awesome abstractions AND performance • Use Akka for distribution, state, protocols • Use reactive/Monix for functional, concurrent stream processing • Build (or use FiloDB’s) fast low-level abstractions with good APIs
  48. 48. Thank you Scala OSS!
