This document provides an overview of Chronix Spark, a framework for time series processing with Apache Spark. It introduces Chronix Spark's time series data model, a set of univariate, multi-dimensional numeric time series, and its core abstractions ChronixRDD and MetricTimeSeries. It then shows how time series data stored in Apache Solr can be queried and processed in a distributed manner with Spark, and how large volumes of time series can be stored and retrieved efficiently and analysed and visualised with Spark and other tools.
3. TIME SERIES 101
WE'RE SURROUNDED BY TIME SERIES
▸ Operational data: Monitoring data, performance metrics, log events, …
▸ Data Warehouse: Dimension time
▸ Measured Me: Activity tracking, ECG, …
▸ Sensor telemetry: Sensor data, …
▸ Financial data: Stock charts, …
▸ Climate data: Temperature, …
▸ Web tracking: Clickstreams, …
4. TIME SERIES 101
TIME SERIES: BASIC TERMS
▸ observation
▸ univariate time series
▸ multivariate time series
▸ multi-dimensional time series (time series tensor)
▸ time series set
5. TIME SERIES 101
OPERATIONS ON TIME SERIES (EXAMPLES)
▸ Time series → Time series: align, diff, downsampling, outlier
▸ Time series → Scalar: min/max, avg/med, slope, std-dev
7. Monitoring data analysis of a business-critical, worldwide distributed software system. Enable root cause analysis and anomaly detection.
▸ > 1,000 nodes worldwide
▸ > 10 processes per node
▸ > 20 metrics per process (OS, JVM, app-specific)
▸ measured every second
= about 6.3 trillion observations p.a.
Data retention: 5 years.
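A rough back-of-the-envelope check of that figure (assuming about 31.5 million seconds per year):
1,000 nodes × 10 processes × 20 metrics × 31,536,000 s ≈ 6.3 × 10^12 ≈ 6.3 trillion observations per year.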
11. THE CHRONIX STACK
THE CHRONIX STACK
▸ Core: Chronix Storage, Chronix Server, Chronix Spark, Chronix Format, Chronix Analytics
▸ Collection: Chronix Collector, Logstash, fluentd, collectd, jmx, ssh
▸ Visualization: Grafana, Zeppelin
12. THE CHRONIX STACK
[Architecture diagram: per node, the Chronix Server provides distributed data & data retrieval; Chronix Spark performs the distributed processing; results flow back for result processing.]
icon credits to Nimal Raj (database), Arthur Shlain (console) and alvarobueno (tasklist)
19. CHRONIX SPARK
TIME SERIES MODEL
Set of univariate multi-dimensional numeric time series
▸ set … because it's more flexible and better to parallelise if operations can input and output multiple time series.
▸ univariate … because multivariate will introduce too much complexity (and we have our set to bundle multiple time series).
▸ multi-dimensional … because the ability to slice & dice in the set of time series is very convenient for a lot of use cases.
▸ numeric … because it's the most common use case.
A single time series is identified by a combination of its non-temporal dimensional values (e.g. unit "mem usage" + host "aws42" + process "tomcat").
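To make that identity rule concrete, here is a minimal construction sketch. It only uses the MetricTimeSeries.Builder calls that appear in the code slides later in this deck (Builder(metric), attributes(…), addAll(timestamps, values)); the attribute map type and the concrete dimension keys are assumptions for illustration.

// Sketch: a univariate, multi-dimensional time series identified by its dimensions.
Map<String, Object> dimensions = new HashMap<>();
dimensions.put("host", "aws42");        // non-temporal dimension (assumed key)
dimensions.put("process", "tomcat");    // non-temporal dimension (assumed key)

MetricTimeSeries ts = new MetricTimeSeries
    .Builder("mem usage")               // the metric
    .attributes(dimensions)             // the multi-dimensionality
    .build();

// univariate, numeric observations: one array of timestamps, one array of values
ts.addAll(new long[]{1460000000000L, 1460000001000L},
          new double[]{512.0, 518.5});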
22. CHRONIX SPARK
SPARK APIS FOR DATA PROCESSING
            RDD      DataFrame   Dataset
typed       yes      no          yes
optimized   medium   highly      highly
mature      yes      yes         no
SQL         no       yes         no
23. CHRONIX SPARK
THE MetricTimeSeries DATA TYPE
▸ access all timestamps
▸ access all observations as a stream
▸ the multi-dimensionality: get/set dimensions (attributes)
▸ access all numeric values (univariate)
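A small access sketch matching the bullets above, restricted to the accessors that actually appear in the code slides later in this deck (getMetric(), attributes(), getTimestampsAsArray(), getValuesAsArray()); the exact return type of attributes() is an assumption, and the observation-stream accessor is omitted because its name is not shown here.

// Sketch: reading a MetricTimeSeries back.
void inspect(MetricTimeSeries ts) {
    String metric = ts.getMetric();                   // metric name
    Map<String, Object> dims = ts.attributes();       // dimensions, e.g. host, process
    long[] timestamps = ts.getTimestampsAsArray();    // all timestamps
    double[] values = ts.getValuesAsArray();          // all numeric values (univariate)
    System.out.println(metric + " " + dims + ": "
        + timestamps.length + " observations, " + values.length + " values");
}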
24. CHRONIX SPARK
THE OVERALL DATA MODEL
▸ ChronixRDD: an RDD of MetricTimeSeries
▸ MetricTimeSeries: one univariate, multi-dimensional time series
▸ MetricObservation: a single observation of a time series
▸ Conversions: toDataFrame() → DataFrame, toDataset() → Dataset<MetricTimeSeries>, toObservationsDataset() → Dataset<MetricObservation>
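A hedged usage sketch of those conversions: the method names are taken from the diagram above, but the SQLContext parameter used here is an assumption, since the slide only shows the method names.

// Sketch only: method names from the data model above, SQLContext parameter assumed.
SQLContext sqlContext = new SQLContext(sc);

DataFrame df                     = rdd.toDataFrame(sqlContext);           // flat, SQL-friendly view
Dataset<MetricTimeSeries> tsDs   = rdd.toDataset(sqlContext);             // typed: one element per time series
Dataset<MetricObservation> obsDs = rdd.toObservationsDataset(sqlContext); // typed: one element per observation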
25. CHRONIX SPARK
ChronixSparkContext
RDD on all time series matched by a SolrQuery:
/**
* @param query Solr query
* @param zkHost Zookeeper host
* @param collection the Solr collection of chronix time series data
* @param chronixStorage a ChronixSolrCloudStorage instance
* @return ChronixRDD of time series
*/
public ChronixRDD query(
final SolrQuery query,
final String zkHost,
final String collection,
final ChronixSolrCloudStorage chronixStorage) {
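  // (The slide cuts off here. A plausible completion, based on the
  // queryChronixChunks() and joinChunks() slides later in this deck:
  // stream the chunks from each Solr shard, then join them into logical
  // time series. Exception handling is omitted in this sketch.)
  return queryChronixChunks(query, zkHost, collection, chronixStorage).joinChunks();
}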
26. CHRONIX SPARK
SAMPLE CODE
//Create Chronix Spark context from a SparkContext / JavaSparkContext
ChronixSparkContext csc = new ChronixSparkContext(sc);
//Read data into ChronixRDD
SolrQuery query = new SolrQuery(
    "metric:\"java.lang:type=Memory/HeapMemoryUsage/used\"");
ChronixRDD rdd = csc.query(query,
"localhost:9983", //ZooKeeper host
"chronix", //Solr collection for Chronix
new ChronixSolrCloudStorage());
//Calculate the overall min/max/mean of all time series in the RDD
double min = rdd.min();
double max = rdd.max();
double mean = rdd.mean();
27. DEMO TIME
‣ 8,707 time series with 76,983,735 observations
‣ one MacBook with 4 cores
https://github.com/ChronixDB/chronix.spark/tree/master/chronix-infrastructure-local
29. CHRONIX SPARK WONDERLAND
‣ Data sharding
‣ Fast index-based queries and aggregations
‣ Efficient storage format
‣ Heavy-lifting distributed processing
‣ Catalyst processing optimizer
‣ Post-processing on a smaller set of time series (e.g. complex analysis algorithms)
31. … with a few custom extensions.
▸ Index machine.
▸ Powerful query language based on Lucene.
▸ Powerful aggregation features (facets); e.g. grouping works far better than in Spark (see the facet sketch below).
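As an illustration of pushing a group-by down to Solr via facets, here is a minimal SolrJ sketch; the dimension field "host" and the metric pattern are assumed example values, not taken from the Chronix schema.

// Sketch: let Solr do the grouping (faceting) instead of Spark.
SolrQuery facetQuery = new SolrQuery("metric:*HeapMemoryUsage*");
facetQuery.setFacet(true);          // enable faceting
facetQuery.addFacetField("host");   // "group by host" on the Solr side
facetQuery.setRows(0);              // we only need the facet counts, not the documents

// Submit facetQuery with any SolrClient (e.g. a CloudSolrClient pointed at
// ZooKeeper) and read the per-host counts from QueryResponse#getFacetField("host").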
33. CHRONIX SPARK WONDERLAND
STORAGE FORMAT
TIME SERIES (chunk; the diagram repeats this record once per chunk)
‣ start: TimeStamp
‣ end: TimeStamp
‣ unit: String
‣ dimensions: Map<String, String>
‣ values: byte[]
▸ Chunking: 1 logical time series = n physical time series (chunks), all with the same identity, each containing a fixed amount of observations. 1 chunk = 1 Solr document.
▸ Binary encoding of all timestamp/value pairs. Delta-encoded and bitwise compressed (see the delta-encoding sketch below).
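To illustrate what delta encoding of the timestamps means, here is a generic sketch; this is not the actual Chronix codec, which additionally applies bitwise compression to the deltas.

// Generic delta-encoding sketch (not the actual Chronix format):
// store the first timestamp absolutely, then only the (usually small and
// regular) differences between consecutive timestamps.
long[] deltaEncode(long[] timestamps) {
    long[] deltas = new long[timestamps.length];
    if (timestamps.length == 0) return deltas;
    deltas[0] = timestamps[0];                          // absolute start
    for (int i = 1; i < timestamps.length; i++) {
        deltas[i] = timestamps[i] - timestamps[i - 1];  // e.g. mostly 1000 for 1s sampling
    }
    return deltas;
}

long[] deltaDecode(long[] deltas) {
    long[] timestamps = new long[deltas.length];
    if (deltas.length == 0) return timestamps;
    timestamps[0] = deltas[0];
    for (int i = 1; i < deltas.length; i++) {
        timestamps[i] = timestamps[i - 1] + deltas[i];
    }
    return timestamps;
}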
34. CHRONIX SPARK WONDERLAND
CHRONIX FORMAT: OPTIMAL CHUNK SIZE AND COMPRESSION CODEC
[Chart result: GZIP compression at a chunk size of 128 kBytes]
Florian Lautenschlager, Michael Philippsen, Andreas Kumlehn, Josef Adersberger
Chronix: Efficient Storage and Query of Operational Time Series
International Conference on Software Maintenance and Evolution 2016 (submitted)
35. CHRONIX SPARK WONDERLAND
BENCHMARK: STORAGE DEMAND
Florian Lautenschlager, Michael Philippsen, Andreas Kumlehn, Josef Adersberger
Chronix: Efficient Storage and Query of Operational Time Series
International Conference on Software Maintenance and Evolution 2016 (submitted)
36. CHRONIX SPARK WONDERLAND
BENCHMARK: PERFORMANCE
Florian Lautenschlager, Michael Philippsen, Andreas Kumlehn, Josef Adersberger
Chronix: Efficient Storage and Query of Operational Time Series
International Conference on Software Maintenance and Evolution 2016 (submitted)
DISCLAIMER: BENCHMARK PERFORMED ON A SINGLE NODE ONLY
39. CHRONIX SPARK WONDERLAND
ChronixRDD CREATION: GET THE CHUNKS
public ChronixRDD queryChronixChunks(
final SolrQuery query,
final String zkHost,
final String collection,
final ChronixSolrCloudStorage<MetricTimeSeries> chronixStorage)
throws SolrServerException, IOException {
// first get a list of replicas to query for this collection
List<String> shards = chronixStorage.getShardList(zkHost, collection);
// parallelize the requests to the shards
JavaRDD<MetricTimeSeries> docs = jsc.parallelize(shards, shards.size()).flatMap(
(FlatMapFunction<String, MetricTimeSeries>) shardUrl -> chronixStorage.streamFromSingleNode(
new KassiopeiaSimpleConverter(), shardUrl, query)::iterator);
return new ChronixRDD(docs);
}
▸ Figure out all Solr shards
▸ Query each shard in parallel and convert SolrDocuments to MetricTimeSeries
40. CHRONIX SPARK WONDERLAND
ChronixRDD CREATION: JOIN THEM TOGETHER TO A LOGICAL TIME SERIES
public ChronixRDD joinChunks() {
JavaPairRDD<MetricTimeSeriesKey, Iterable<MetricTimeSeries>> groupRdd
= this.groupBy(MetricTimeSeriesKey::new);
JavaPairRDD<MetricTimeSeriesKey, MetricTimeSeries> joinedRdd
= groupRdd.mapValues((Function<Iterable<MetricTimeSeries>, MetricTimeSeries>) mtsIt -> {
MetricTimeSeriesOrdering ordering = new MetricTimeSeriesOrdering();
List<MetricTimeSeries> orderedChunks = ordering.immutableSortedCopy(mtsIt);
MetricTimeSeries result = null;
for (MetricTimeSeries mts : orderedChunks) {
if (result == null) {
result = new MetricTimeSeries
.Builder(mts.getMetric())
.attributes(mts.attributes()).build();
}
result.addAll(mts.getTimestampsAsArray(), mts.getValuesAsArray());
}
return result;
});
JavaRDD<MetricTimeSeries> resultJavaRdd =
joinedRdd.map((Tuple2<MetricTimeSeriesKey, MetricTimeSeries> mtTuple) -> mtTuple._2);
  return new ChronixRDD(resultJavaRdd);
}
▸ Group chunks according to their identity
▸ Join chunks into a logical time series
42. PERFORMANCE
THE SECRET OF DISTRIBUTED PERFORMANCE
Rule 1: Be as close to the data as possible!
(CPU cache > memory > local disk > network)
Rule 2: Reduce data volume as early as possible!
(as long as you don't sacrifice parallelization)
Rule 3: Parallelize as much as possible!
(max = #cores)
[Diagram: horizontal processing = distribution / parallelization; vertical processing = divide & conquer]
43. PERFORMANCE
THE RULES APPLIED
‣ Rule 1: Be as close to the data as possible!
1. Solr caching
2. Spark in-memory processing with activated RDD compression
3. Binary protocol between Solr and Spark
‣ Rule 2: Reduce data volume as early as possible!
‣ Efficient storage format (Chronix Format)
‣ Predicate pushdown to Solr (query)
‣ Group-by & aggregation pushdown to Solr (faceting within a query)
‣ Rule 3: Parallelize as much as possible!
‣ Scale-out on data-level with SolrCloud
‣ Scale-out on processing-level with Spark
51. ROADMAP
THINGS TO COME
see https://github.com/ChronixDB/chronix.spark/issues
Releases: v0.4 (06/16), v0.5 (08/16), v0.6 (10/16), v1.0 (12/16)
Planned features across these releases:
‣ More actions and transformations
‣ Bulk transfer Solr request handler
‣ Streaming access
‣ R wrapper
‣ Reduce memory overhead
‣ Data locality (co-location)
‣ SparkML support
‣ Custom Dataset encoder
‣ SolrRDD adapter
‣ Incorporate alien technology
56. THE COMPETITORS / ALTERNATIVES
THE COMPETITORS / ALTERNATIVES
▸ Small Time Series Data
▸ Matlab (Econometrics toolbox)
▸ Python (Pandas)
▸ R (zoo, xts)
▸ SAS (ETS)
▸ …
▸ Big Time Series Data
▸ InfluxDB
▸ Graphite
▸ OpenTSDB
▸ KairosDB
▸ Prometheus
▸ …
57. THE COMPETITORS / ALTERNATIVES
BIG DATA LANDSCAPE
https://github.com/qaware/big-data-landscape
58. THE COMPETITORS / ALTERNATIVES
CHRONIX RDD VS. SPARK-TS
▸ Spark-TS provides no specific time series storage; it uses the Spark persistence mechanisms instead. This leads to less efficient storage usage and fewer opportunities for performance optimizations via predicate pushdown.
▸ In contrast to Spark-TS, Chronix does not align all time series values on one vector of timestamps. This leads to greater flexibility in time series aggregation.
▸ Chronix provides multi-dimensional time series, as this is very useful for data warehousing and APM.
▸ Chronix has support for Datasets, as this will be an important Spark API in the near future. But Chronix currently doesn't support an IndexedRowMatrix for SparkML.
▸ Chronix is purely written in Java. There is no explicit support for Python and Scala yet.
▸ Chronix does not support ZonedTime, as this would make things considerably more complicated.
61. APACHE SPARK
SPARK TERMINOLOGY (1/2)
▸ RDD: Has transformations and actions. Hides data partitioning & distributed computation. References a set of partitions ("output partitions"), materialized or not, and has dependencies on other RDDs ("input partitions"). RDD operations are evaluated as late as possible (when an action is called). Unless it is the root RDD, the partitions of an RDD are kept in memory, but they can be persisted on request.
▸ Partitions: (Logical) chunks of data. The default unit and level of parallelism; inside a partition everything is a sequential operation on records. A partition has to fit into memory. Can have different representations (in-memory, on disk, off-heap, …).
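A tiny, generic Spark (Java API) sketch of the lazy evaluation and persistence behaviour described above; it uses plain Spark classes only and is not Chronix-specific.

// Generic Spark example, not Chronix-specific.
JavaRDD<Integer> numbers = jsc.parallelize(Arrays.asList(1, 2, 3, 4, 5, 6), 3); // 3 partitions
JavaRDD<Integer> doubled = numbers.map(x -> x * 2);       // transformation: nothing computed yet

doubled.persist(StorageLevel.MEMORY_ONLY());              // keep partitions in memory once computed
System.out.println(doubled.getNumPartitions());           // 3: the unit and level of parallelism
System.out.println(doubled.count());                      // action: only now a job is launched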
62. APACHE SPARK
SPARK TERMINOLOGY (2/2)
▸ Job: A computation which is launched when an action is called on an RDD.
▸ Task: The atomic unit of work (a function). Bound to exactly one partition.
▸ Stage: A set of task pipelines which can be executed in parallel on one executor.
▸ Shuffling: Occurs when partitions need to be transferred between executors. Shuffle write = outbound partition transfer; shuffle read = inbound partition transfer.
▸ DAG Scheduler: Computes the DAG of stages from the RDD DAG. Determines the preferred location for each task.
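To tie these terms together, a generic word-count-style sketch (again plain Spark, not Chronix-specific): the mapToPair step is pipelined within one stage, reduceByKey introduces a shuffle boundary and thus a second stage, and the collect() action launches the job.

// Generic Spark example, not Chronix-specific.
JavaRDD<String> words = jsc.parallelize(Arrays.asList("a", "b", "a", "c", "b", "a"));

JavaPairRDD<String, Integer> counts = words
    .mapToPair(w -> new Tuple2<>(w, 1))   // narrow transformation: pipelined tasks, stage 1
    .reduceByKey(Integer::sum);           // shuffle: partitions are redistributed by key (stage 2)

counts.collect();                         // action: launches a job consisting of the two stages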