Breakthrough OLAP
Performance with
Cassandra and Spark
Evan Chan
August 2015
Who am I?
Distinguished Engineer, Tuplejump
@evanfchan
http://velvia.github.io
User and contributor to Spark since 0.9, Cassandra since 0.6
Co-creator and maintainer of Spark Job Server
About Tuplejump
Tuplejump is a big data technology leader providing solutions for rapid insights from data.
Calliope - the first Spark-Cassandra integration
Stargate - an open source Lucene indexer for Cassandra
SnackFS - open source HDFS for Cassandra
Didn't I attend the same talk last year?
Similar title, but mostly new material
Will reveal new open source projects! :)
Problem Space
Need analytical database / queries on structured big data
Something SQL-like, very flexible and fast
Pre-aggregation too limiting
Fast data / constant updates
Ideally, want my queries to run over fresh data too
Example: Video analytics
Typical collection and analysis of consumer events
3 billion new events every day
Video publishers want updated stats, the sooner the better
Pre-aggregation only enables simple dashboard UIs
What if one wants to offer more advanced analysis, or a
generic data query API?
E.g., top countries filtered by device type, OS, browser
Requirements
Scalable - rules out PostgreSQL, etc.
Easy to update and ingest new data
Not traditional OLAP cubes - that's not what I'm talking
about
Very fast for analytical queries - OLAP not OLTP
Extremely flexible queries
Preferably open source
Parquet
Widely used, lots of support (Spark, Impala, etc.)
Problem: Parquet is read-optimized, not easy to use for writes
Cannot support idempotent writes
Optimized for writing very large chunks, not small updates
Not suitable for time series, IoT, etc.
Often needs multiple passes of jobs for compaction of small
files, deduplication, etc.
 
People really want a database-like abstraction, not a file format!
Turns out this has been solved before!
Even Facebook uses Vertica
MPP Databases
Easy writes plus fast queries, with constant transfers
Automatic query optimization by storing intermediate query
projections
C-Store paper - Stonebraker, et al. (Brown Univ)
What's wrong with MPP Databases?
Closed source
$$$
Usually don't scale horizontally that well (or cost is prohibitive)
Cassandra
Horizontally scalable
Very flexible data modelling (lists, sets, custom data types)
Easy to operate
Perfect for ingestion of real time / machine data
Best of breed storage technology, huge community
BUT: Simple queries only
OLTP-oriented
Apache Spark
Horizontally scalable, in-memory queries
Functional Scala transforms - map, filter, groupBy, sort
etc.
SQL, machine learning, streaming, graph, R, and many more plugins, all on ONE
platform - feed your SQL results into a logistic regression (see the sketch
below), easy!
Huge number of connectors with every single storage
technology
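A rough sketch of that SQL-to-MLlib hand-off (assumes a sqlContext like the one
on the caching slide later; the events table and label/f1/f2 columns are made
up purely for illustration):

import org.apache.spark.mllib.classification.LogisticRegressionWithSGD
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

// Hypothetical: query with Spark SQL, then train an MLlib model on the result
val training = sqlContext.sql("SELECT label, f1, f2 FROM events")
  .map(row => LabeledPoint(
    row.getDouble(0),
    Vectors.dense(row.getDouble(1), row.getDouble(2))))

val model = LogisticRegressionWithSGD.train(training, 100)  // 100 iterations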
Spark provides the missing fast, deep
analytics piece of Cassandra!
Spark and Cassandra
OLAP Architectures
Separate Storage and Query Layers
Combine best of breed storage and query platforms
Take full advantage of evolution of each
Storage handles replication for availability
Query can replicate data for scaling read concurrency -
independent!
Spark as Cassandra's Cache
Spark SQL
Appeared with Spark 1.0
In-memory columnar store
Parquet, Json, Cassandra connector, Avro, many more
SQL as well as DataFrames (Pandas-style) API
Indexing integrated into data sources (eg C* secondary
indexes)
Write custom functions in Scala... take that, Hive UDFs!
Integrates well with MLBase, Scala/Java/Python
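A minimal sketch of such a custom function, using the Spark 1.3+ UDF
registration API (the gdelt table and actor1name column are placeholders):

// Register a plain Scala function as a SQL UDF
sqlContext.udf.register("strLen", (s: String) => if (s == null) 0 else s.length)

sqlContext.sql("SELECT actor1name, strLen(actor1name) FROM gdelt LIMIT 10").show()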
Connecting Spark to Cassandra
Datastax's Spark Cassandra Connector
Tuplejump's Calliope
 
Get started in one line with spark-shell!
bin/spark-shell \
  --packages com.datastax.spark:spark-cassandra-connector_2.10:1.4.0-M3 \
  --conf spark.cassandra.connection.host=127.0.0.1
Caching a SQL Table from Cassandra
DataFrames support in Cassandra Connector 1.4.0 (and 1.3.0):
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
val df = sqlContext.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("table" -> "gdelt", "keyspace" -> "test"))
  .load()
df.registerTempTable("gdelt")
sqlContext.cacheTable("gdelt")
sqlContext.sql("SELECT count(monthyear) FROM gdelt").show()
 
Spark does no caching by default - you will always be reading
from C*!
How Spark SQL's Table Caching Works
Spark Cached Tables can be Really Fast
GDELT dataset, 4 million rows, 60 columns, localhost
Method     Time (secs)
Uncached   317
Cached     0.38
 
Almost a 1000x speedup!
On an 8-node EC2 c3.XL cluster, 117 million rows, can run
common queries 1-2 seconds against cached dataset.
Tuning Connector Partitioning
spark.cassandra.input.split.size
Guideline: One split per partition, one partition per CPU core
Much more parallelism won't speed up job much, but will
starve other C* requests
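A minimal sketch of setting the split size programmatically - it is an ordinary
Spark config property, so it can also be passed via --conf on the command line;
the value below is only an example, not a recommendation:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("cassandra-olap")
  .set("spark.cassandra.connection.host", "127.0.0.1")
  .set("spark.cassandra.input.split.size", "100000")  // approx. C* partitions per Spark split (example value)
val sc = new SparkContext(conf)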
Lesson #1: Take Advantage of Spark
Caching!
Problems with Cached Tables
Still have to read the data from Cassandra first, which is slow
Amount of RAM: your entire data + extra for conversion to
cached table
Cached tables only live in Spark executors - by default
tied to single context - not HA
once any executor dies, must re-read data from C*
Caching takes time: convert from RDD[Row] to compressed
columnar format
Cannot easily combine new RDD[Row] with cached tables
(and keep speed)
Problems with Cached Tables
If you don't have enough RAM, Spark can cache your tables
partly to disk. This is still way, way, faster than scanning an entire
C* table. However, cached tables are still tied to a single Spark
context/application.
Also: rdd.cache() is NOT the same as SQLContext's cacheTable!
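To make the distinction concrete, a small sketch (reusing the gdelt DataFrame
from the earlier slide):

df.rdd.cache()                    // plain RDD cache: deserialized Row objects, row-oriented
sqlContext.cacheTable("gdelt")    // Spark SQL cache: compressed, columnar, used by the SQL engine
sqlContext.uncacheTable("gdelt")  // frees the columnar cache when finished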
What about C* Secondary Indexing?
Spark-Cassandra Connector and Calliope can both reduce I/O by
using Cassandra secondary indices. Does this work with caching?
No, not really, because only the filtered rows would be cached.
Subsequent queries against this limited cached table would not
give you expected results.
Tachyon Off-Heap Caching
Intro to Tachyon
Tachyon: an in-memory cache for HDFS and other binary data
sources
Keeps data off-heap, so multiple Spark applications/executors
can share data
Solves HA problem for data
Wait, wait, wait!
What am I caching exactly? Tachyon is designed for caching files
or binary blobs.
A serialized form of CassandraRow/CassandraRDD?
Raw output from Cassandra driver?
What you really want is this:
Cassandra SSTable -> Tachyon (as row cache) -> CQL -> Spark
Bad programmers worry about the code. Good
programmers worry about data structures.
- Linus Torvalds
 
Are we really thinking holistically about data modelling, caching,
and how it affects the entire systems architecture?
Efficient Columnar Storage in Cassandra
Wait, I thought Cassandra was columnar?
How Cassandra stores your CQL Tables
Suppose you had this CQL table:
CREATE TABLE employees (   -- table name added here so the statement is complete
  department text,
  empId text,
  first text,
  last text,
  age int,
  PRIMARY KEY (department, empId)
);
How Cassandra stores your CQL Tables
PartitionKey  01:first  01:last   01:age  02:first  02:last   02:age
Sales         Bob       Jones     34      Susan     O'Connor  40
Engineering   Dilbert   P         ?       Dogbert   Dog       1
 
Each row is stored contiguously. All columns in row 2 come after
row 1.
To analyze only age, C* still has to read every field.
Cassandra is really a row-based, OLTP-oriented datastore.
Unless you know how to use it otherwise :)
The traditional row-based data storage
approach is dead
- Michael Stonebraker
Columnar Storage (Memory)
Name column
  index:  0   1
  value:  0   1

Dictionary: {0: "Barak", 1: "Hillary"}

Age column
  index:  0   1
  value:  46  66
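A toy Scala sketch of the dictionary encoding shown above (illustrative only,
not any particular library's implementation):

val names = Seq("Barak", "Hillary", "Barak", "Hillary")

val dictionary: Map[String, Int] =
  names.distinct.zipWithIndex.toMap            // Map("Barak" -> 0, "Hillary" -> 1)
val encoded: Seq[Int] = names.map(dictionary)  // Seq(0, 1, 0, 1) - small ints instead of strings

// Decoding just reverses the lookup
val reverse = dictionary.map(_.swap)
val decoded = encoded.map(reverse)             // back to the original strings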
Columnar Storage (Cassandra)
Review: each physical row in Cassandra (e.g. a "partition key")
stores its columns together on disk.
 
Schema CF
  Rowkey  Type
  Name    StringDict
  Age     Int

Data CF
  Rowkey  0   1
  Name    0   1
  Age     46  66
Columnar Format solves I/O
Compression
Dictionary compression - HUGE savings for low-cardinality
string columns
RLE, other techniques
Reduce I/O
Only columns needed for query are loaded from disk
Batch multiple rows in one cell for efficiency (avoid cluster key
overhead)
Columnar Format solves Caching
Use the same format on disk, in cache, in memory scan
Caching works a lot better when the cached object is the
same!!
No data format dissonance means bringing in new bits of data
and combining with existing cached data is seamless
So, why isn't everybody doing this?
No columnar storage format designed to work with NoSQL
stores
Efficient conversion to/from columnar format a hard problem
Most infrastructure is still row oriented
Spark SQL/DataFrames based on RDD[Row]
Spark Catalyst is a row-oriented query parser
All hard work leads to profit, but mere talk leads
to poverty.
- Proverbs 14:23
Columnar Storage Performance Study
 
http://github.com/velvia/cassandra-gdelt
GDELT Dataset
1979 to now
60 columns, 250 million+ rows, 250GB+
Let's compare Cassandra I/O only, no caching or Spark
Global Database of Events, Language, and Tone
The scenarios
1. Narrow table - CQL table with one row per partition key
2. Wide table - wide rows with 10,000 logical rows per partition
key
3. Columnar layout - 1000 rows per columnar chunk, wide rows,
with dictionary compression
First 4 million rows, localhost, SSD, C* 2.0.9, LZ4 compression.
Compaction performed before read benchmarks.
Query and ingest times
Scenario      Ingest    Read all columns  Read one column
Narrow table  1927 sec  505 sec           504 sec
Wide table    3897 sec  365 sec           351 sec
Columnar      93 sec    8.6 sec           0.23 sec
 
On reads, using a columnar format is up to 2190x faster, while
ingestion is 20-40x faster.
Of course, real life perf gains will depend heavily on query,
table width, etc. etc.
Disk space usage
Scenario      Disk used
Narrow table  2.7 GB
Wide table    1.6 GB
Columnar      0.34 GB
The disk space usage helps explain some of the numbers.
Towards Extreme Query Performance
The filo project
Filo is a binary data vector library designed for extreme read performance
with minimal deserialization costs.
http://github.com/velvia/filo
Designed for NoSQL, not a file format
random or linear access
on or off heap
missing value support
Scala only, but cross-platform support possible
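Not Filo's actual API - just a generic java.nio illustration of the idea: store
a column as a binary blob (on- or off-heap) and read values in place, with no
per-element objects to deserialize:

import java.nio.{ByteBuffer, ByteOrder}

val numValues = 1000
val buf = ByteBuffer.allocateDirect(numValues * 4).order(ByteOrder.LITTLE_ENDIAN)
(0 until numValues).foreach(i => buf.putInt(i * 4, i))  // write ints at fixed offsets

def intAt(i: Int): Int = buf.getInt(i * 4)              // random access, zero deserialization

var total = 0L
var i = 0
while (i < numValues) { total += intAt(i); i += 1 }     // linear scan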
What is the ceiling?
This Scala loop can read integers from a binary Filo blob at a rate
of 2 billion integers per second - single threaded:
def sumAllInts(): Int = {
  var total = 0
  for { i <- 0 until numValues optimized } {
    total += sc(i)
  }
  total
}
Vectorization of Spark Queries
The Tungsten project.
Process many elements from the same column at once, keep data
in L1/L2 cache.
Coming in Spark 1.4 through 1.6
Hot Column Caching in Tachyon
Has a "table" feature, originally designed for Shark
Keep hot columnar chunks in shared off-heap memory for fast
access
Introducing FiloDB
 
http://github.com/velvia/FiloDB
What's in the name?
Rich sweet layers of distributed, versioned database goodness
Distributed
Apache Cassandra. Scale out with no SPOF. Cross-datacenter
replication. Proven storage and database technology.
Versioned
Incrementally add a column or a few rows as a new version. Easily
control what versions to query. Roll back changes inexpensively.
Stream out new versions as continuous queries :)
Columnar
Parquet-style storage layout
Retrieve select columns and minimize I/O for OLAP queries
Add a new column without having to copy the whole table
Vectorization and lazy/zero serialization for extreme
efficiency
100% Reactive
Built completely on the Typesafe Platform:
Scala 2.10 and SBT
Spark (including custom data source)
Akka Actors for rational scale-out concurrency
Futures for I/O
Phantom Cassandra client for reactive, type-safe C* I/O
Typesafe Config
Spark SQL Queries!
SELECT first, last, age FROM customers
WHERE _version > 3 AND age < 40 LIMIT 100
Read from and write to Spark DataFrames
Append/merge to FiloDB table from Spark Streaming
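A sketch of what that looks like from Spark - the data source name
("filodb.spark") and the option keys are assumptions here; check the FiloDB
README for the exact strings:

val customers = sqlContext.read
  .format("filodb.spark")
  .option("dataset", "customers")
  .load()
customers.registerTempTable("customers")

sqlContext.sql(
  """SELECT first, last, age FROM customers
    |WHERE _version > 3 AND age < 40 LIMIT 100""".stripMargin).show()

// Writing back from a DataFrame would go through the same data source, e.g.:
// someDF.write.format("filodb.spark").option("dataset", "customers").save()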
FiloDB vs Parquet
Comparable read performance - with lots of space to improve
Assuming co-located Spark and Cassandra
On localhost, both subsecond for simple queries (GDELT
1979-1984)
FiloDB has more room to grow - due to hot column caching
and much less deserialization overhead
Lower memory requirement due to much smaller block sizes
Much better fit for IoT / Machine / Time-series applications
Limited support for types
array / set / map support not there, but will be added later
Where FiloDB Fits In
Use regular C* denormalized tables for OLTP and single-key
lookups
Use FiloDB for the remaining ad-hoc or more complex
analytical queries
Simplify your analytics infrastructure!
No need to export to Hadoop/Parquet/data warehouse.
Use Spark and C* for both OLAP and OLTP!
Perform ad-hoc OLAP analysis of your time-series, IoT data
Simplify your Lambda Architecture...
(https://www.mapr.com/developercentral/lambda-architecture)
With Spark, Cassandra, and FiloDB
Ma, where did all the components go?
You mean I don't have to deal with Hadoop?
Use Cassandra as a front end to store IoT data first
Exactly-Once Ingestion from Kafka
New rows appended via Kafka
Writes are idempotent - no need to dedup!
Converted to columnar chunks on ingest and stored in C*
Only necessary columnar chunks are read into Spark for
minimal I/O
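A rough sketch of the ingestion shape using Spark Streaming's direct Kafka API -
the topic name, parsing step, and FiloDB write path are all assumptions, not
the project's documented API:

import kafka.serializer.StringDecoder
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val ssc = new StreamingContext(sc, Seconds(10))
val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, Set("events"))

stream.foreachRDD { rdd =>
  // parse messages into rows, then append via the FiloDB data source; since each
  // row carries its own key, re-delivered messages overwrite themselves
  // (idempotent writes - no separate dedup pass), e.g.:
  // parsedDF.write.format("filodb.spark").option("dataset", "events").save()
}

ssc.start()
ssc.awaitTermination()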
You can help!
Send me your use cases for OLAP on Cassandra and Spark
Especially IoT and Geospatial
Email if you want to contribute
Thanks...
to the entire OSS community, but in particular:
Lee Mighdoll, Nest/Google
Rohit Rai and Satya B., Tuplejump
My colleagues at Socrata
 
If you want to go fast, go alone. If you want to go
far, go together.
-- African proverb
DEMO TIME
GDELT: Regular C* Tables vs FiloDB
Extra Slides
When in doubt, use brute force
- Ken Thompson
Automatic Columnar Conversion using
Custom Indexes
Write to Cassandra as you normally do
Custom indexer takes changes, merges and compacts into
columnar chunks behind scenes
Implementing Lambda is Hard
Use real-time pipeline backed by a KV store for new updates
Lots of moving parts
Key-value store, real time sys, batch, etc.
Need to run similar code in two places
Still need to deal with ingesting data to Parquet/HDFS
Need to reconcile queries against two different places
