1. Web Analytics at Scale with Druid at naver.com
Jason Heo (analytic.js.heo@navercorp.com)
Doo Yong Kim (dooyong.kim@navercorp.com)
2. Agenda
• Part 1
  • About naver.com
  • What is & Why Druid
  • The Architecture of our service
• Part 2
  • Druid Segment File Structure
  • Spark Druid Connector
  • TopN Query
  • Plywood & Split-Apply-Combine
  • How to fix TopN's unstable results
• Appendix
4. About Speakers
• Jason Heo
  • Developed analytics systems at Naver
  • Working with databases since 2000
  • Author of 3 MySQL books
  • Currently works with Elasticsearch, Spark, Kudu, and Druid
• Doo Yong Kim
  • Working on a Spark and Druid-based OLAP platform
  • Implemented search infrastructure at coupang.com
  • Interested in MPP and advanced file formats for big data
5. Platforms we've tested so far
• Storage formats: Parquet, ORC, Carbon Data, Kudu
• Query engines: SparkSQL, Hive, Impala, Drill, Presto, Kylin, Phoenix
• Combined storage and query: Elasticsearch, ClickHouse, Druid
6. What is & Why Druid
• What is Druid?
• Our requirements
• Why Druid?
• Experimental results
7. What is Druid? (from Hortonworks)
• Column-oriented distributed datastore
• Real-time streaming ingestion
• Scalable to petabytes of data
• Approximate algorithms (HyperLogLog, Theta Sketch)
https://www.slideshare.net/HadoopSummit/scalable-realtime-analytics-using-druid
8. What is Druid? – From my point of view
• Druid is a cumbersome version of Elasticsearch (without the search features)
• Similarities
  • Secondary indexes
  • DSLs for queries
  • Flow of query processing
  • Terms Aggregation ↔ TopN Query, Coordinator ↔ Broker, Data Node ↔ Historical
• Differences – Druid is:
  • more complicated to operate
  • better with much more data
  • better for ultra-high cardinality
  • lighter on GC overhead
  • better for Spark connectivity (for full scans)
10. What is Druid? – Queries
• SQL can be converted to Druid DSL
• No JOIN
• Queries flow from the Broker to Real-time Nodes and Historicals

groupBy query:
{
  "queryType": "groupBy",
  "dataSource": "sample_data",
  "dimensions": ["country", "device"],
  "filter": {...},
  "aggregations": [...],
  "limitSpec": {...}
}

topN query:
{
  "queryType": "topN",
  "dataSource": "sample_data",
  "dimension": "sample_dim",
  "filter": {...},
  "aggregations": [...],
  "threshold": 5
}

SELECT ... FROM dataSource
11. Why Druid? – Requirements
1. Random Access (OLTP):
   SELECT COUNT(*) FROM logs WHERE url = ?;
2. Most Viewed:
   SELECT url, COUNT(*) FROM logs GROUP BY url ORDER BY COUNT(*) DESC LIMIT 10;
3. Full Aggregation:
   SELECT visitor, COUNT(*) FROM logs GROUP BY visitor;
4. JOIN:
   SELECT ... FROM logs INNER JOIN users GROUP BY ... HAVING ...
12. Why Druid?
For OLTP:
• Supports bitmap indexes
• Fast random access
For OLAP:
• Supports TopN queries (100x faster than groupBy queries)
• Supports complex queries (JOIN, HAVING, etc.) with our Spark Druid Connector
A perfect solution for both OLTP and OLAP.

1. Random Access     ★★★★☆
2. Most Viewed       ★★★★★
3. Full Aggregation  ★★★★☆
4. JOIN              ★★★★☆
13. Comparison – Elasticsearch
Pros:
• Fast random access
• Terms Aggregation (≈ TopN query)
• Easy to manage
Cons:
• Slow full scan with es-hadoop
• Low performance for multi-field terms aggregation (esp. high cardinality)
• GC overhead

1. Random Access     ★★★★★
2. Most Viewed       ★★★☆☆
3. Full Aggregation  ☆☆☆☆☆
4. JOIN              ☆☆☆☆☆
14. Comparison – Kudu + Impala
Pros:
• Fast random access via primary key
• Fast OLAP with Impala
Cons:
• No secondary index
• No TopN query

1. Random Access     ★★★★★ (PK) / ★☆☆☆☆ (non-PK)
2. Most Viewed       ☆☆☆☆☆
3. Full Aggregation  ★★★★★
4. JOIN              ★★★★★
16. Experimental Results – Notes
Random Access:
• ES: Lucene index
• Kudu+Impala: primary key
• Druid: bitmap index
Most Viewed:
• ES: Terms Aggregation
• Kudu+Impala: GROUP BY
• Druid: TopN (Split-Apply-Combine for multiple fields)
Data sets:
• 210 million rows
• Same parallelism
• Same number of shards/partitions/segments
17. The Architecture of our service
[Architecture diagram – main components:]
• Ingestion: logs flow through Kafka, where they are transformed and duplicates
  are removed; the results are stored as Parquet. A daily batch job ingests the
  Parquet files into Druid (batch ingestion), while the Kafka Indexing Service
  handles real-time ingestion.
• Druid cluster: Coordinator, Overlord, MiddleManager/Peon, Historical (segment
  files), Broker.
• Query layer: Zeppelin → Plywood → Druid DSL via an API server; SparkSQL via
  the Spark Thrift Server, with Spark executors reading segment files directly.
23. Druid Storage Model – Dimension
Country (dimension) column: Korea, UK, Korea, Korea, Korea, UK

Dictionary for country:      Dictionary-encoded values:
  Korea ↔ 0                    0, 1, 0, 0, 0, 1
  UK    ↔ 1

Bitmaps (one per distinct value):
  Korea → 101110
  UK    → 010001   (UK appears in the 2nd and 6th rows)
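The dictionary encoding and per-value bitmaps above can be sketched in a few lines of Python (a minimal illustration of the idea, not Druid's actual implementation, which uses compressed bitmaps such as Roaring/Concise):

```python
# Sketch of Druid-style dimension storage: dictionary-encode a column,
# then build one bitmap (here a plain 0/1 list) per distinct value.
def encode_dimension(values):
    dictionary = {}   # value -> integer id (Korea -> 0, UK -> 1)
    encoded = []      # dictionary-encoded column
    bitmaps = {}      # value -> one bit per row
    for row, value in enumerate(values):
        if value not in dictionary:
            dictionary[value] = len(dictionary)
            bitmaps[value] = [0] * len(values)
        encoded.append(dictionary[value])
        bitmaps[value][row] = 1
    return dictionary, encoded, bitmaps

country = ["Korea", "UK", "Korea", "Korea", "Korea", "UK"]
dictionary, encoded, bitmaps = encode_dimension(country)
print(dictionary)          # {'Korea': 0, 'UK': 1}
print(encoded)             # [0, 1, 0, 0, 0, 1]
print(bitmaps["Korea"])    # [1, 0, 1, 1, 1, 0]  -> "101110" from the slide
print(bitmaps["UK"])       # [0, 1, 0, 0, 0, 1]  -> "010001"
```

Filtering `country = 'Korea'` then reduces to reading one bitmap instead of scanning the column.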
24. Druid Storage Model – Metric
Country (dimension) | duration (metric)
Korea               | 13
UK                  | 2
Korea               | 15
Korea               | 29
Korea               | 30
UK                  | 14
The metric is stored as a plain column of values: 13, 2, 15, 29, 30, 14
25. Druid Storage Model – Filtering
SELECT country, device, duration
FROM logs
WHERE country = 'Korea'
  AND device LIKE 'Iphone%'
• country = 'Korea' → filter by the country bitmap
• device LIKE 'Iphone%' → the bitmap cannot serve the LIKE, so filter each row manually
• Row flow: bitmap filtering on country → manual filtering on device → read duration
  → ('Korea', 'Iphone 6s', 13)
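The two-stage flow above can be sketched as: AND the bitmaps for the exact-match filter, then apply the remaining predicate row by row. The device/duration values are hypothetical example data:

```python
# Sketch of slide 25: filter rows with a bitmap first (country = 'Korea'),
# then check the non-bitmap predicate (device LIKE 'Iphone%') per row.
rows = [
    ("Korea", "Iphone 6s", 13),
    ("UK",    "Galaxy S9",  2),
    ("Korea", "Iphone X",  15),
    ("Korea", "Pixel 2",   29),
]

# Bitmap lookup for country = 'Korea' (precomputed at ingest time in Druid).
korea_bitmap = [1 if country == "Korea" else 0 for country, _, _ in rows]

result = [
    (country, device, duration)
    for flag, (country, device, duration) in zip(korea_bitmap, rows)
    if flag and device.startswith("Iphone")   # manual filter after the bitmap
]
print(result)  # [('Korea', 'Iphone 6s', 13), ('Korea', 'Iphone X', 15)]
```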
27. Spark Druid Connector
1. 3 ways to implement; our implementation
2. What is needed to implement it
3. Sample code, performance test
4. How to implement it
28. Spark Druid Connector – 3 Ways to Implement
1st way: the Spark Driver rewrites the SQL as DSL and sends it to the Druid Broker
• Good if the SQL is rewritable to DSL
• But the DSL does not support all of SQL (e.g. JOIN, sub-queries)
2nd way: Spark Executors send a Select DSL to the Historicals and receive large JSON results
• Easy to implement; no need to understand the Druid index library
• Ser/de is expensive
• Parallelism is bounded by the number of Historicals
29. Spark Druid Connector – 3 Ways to Implement
3rd way: Spark Executors, allocated onto the Historical nodes, read Druid segment files directly using the Druid library
• Similar to the way Parquet is read
• Difficult to implement; need to understand the Druid segment library
We chose this way!
35. Spark Druid Connector – Partition Pruning
WHERE country = 'Korea'
  AND _time = CAST('2018-05-23' AS TIMESTAMP)
Segments can be pruned by the interval condition and by single-dimension partitioning:
1. Interval condition: the serverview returns only the matched segments
2. Single-dimension partitioning: compare each segment's start and end keys with the given filter
The Spark Driver asks the Druid Coordinator:
GET /.../logs/intervals/2018-05-23/serverview
[
  {
    "segment": {
      "shardSpec": {
        "dimension": "country",
        "start": null, "end": "b", ...
      },
      "id": "segmentId"
    },
    "servers": [
      {"host": "host1"},
      {"host": "host2"}
    ]
  },
  { "segment": ... },
  ...
]
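The single-dimension pruning step can be sketched as a range check per segment: keep a segment only if the filter value falls in its [start, end) key range, with null meaning unbounded. The segment IDs and key ranges below are hypothetical:

```python
# Sketch of single-dimension partition pruning (slide 35).
# Each entry mimics the shardSpec from the serverview response;
# None (null) means the range is unbounded on that side.
segments = [
    {"id": "seg_a", "start": None, "end": "b"},
    {"id": "seg_b", "start": "b",  "end": "m"},
    {"id": "seg_c", "start": "m",  "end": None},
]

def prune(segments, value):
    kept = []
    for seg in segments:
        after_start = seg["start"] is None or seg["start"] <= value
        before_end = seg["end"] is None or value < seg["end"]
        if after_start and before_end:   # value lies in [start, end)
            kept.append(seg["id"])
    return kept

# country = 'Korea' can only live in the segment covering keys below "b".
print(prune(segments, "Korea"))  # ['seg_a']
```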
36. Spark Druid Connector – Spark Filters to Druid Filters
WHERE country = 'Korea'
  AND city = 'Seoul'
buildScan(requiredColumns: [country, device, duration],
          filters: [EqualTo(country, Korea), EqualTo(city, Seoul)])
Spark's filters are converted into Druid's DimFilter:

private def toDruidDimFilters(sparkFilter: Filter): DimFilter = {
  sparkFilter match {
    ...
    case EqualTo(attribute, value) =>
      new SelectorDimFilter(
        attribute,
        value.toString,
        null
      )
    case GreaterThan(attribute, value) => ...
  }
}
37. Spark Druid Connector – Attaching Locality (RACK_LOCAL)
• getPreferredLocations(partition: Partition)
  • Returns the hosts holding the Druid segments
• Caution: Spark does not always guarantee that executors launch on preferred locations
  • Set spark.locality.wait to a very large value
38. Spark Druid Connector – How to Implement
Done! Now Spark executors can read records from Druid segment files
(Spark → Spark Druid Connector → segment file).
41. TopN Query – We heavily use TopN queries
TopN query flow (N=100):
1. Each Historical node returns its local top 100 results (from its segment cache)
2. The Broker merges each node's results into the final records
3. The client gets the merged results from the Broker
42. TopN Query – Example
Top 3 countries ORDER BY SUM(duration)

Top 3 of Historical a:    Top 3 of Historical b:    Top 3 of Historical c:
country | SUM(duration)   country | SUM(duration)   country | SUM(duration)
korea   | 114             uk      | 67               korea   | 87
uk      | 47              korea   | 24               uk      | 57
usa     | 21              usa     | 3                china   | 33

Broker merge:             Top 3 result:
country | SUM(duration)   country | SUM(duration)
korea   | 225             korea   | 225
uk      | 171             uk      | 171
china   | 33              china   | 33
usa     | 24
43. TopN is an approximate approach

Historical a:             Historical b:             Historical c:
country | SUM(duration)   country | SUM(duration)   country | SUM(duration)
korea   | 114             uk      | 67               korea   | 87
uk      | 47              korea   | 24               uk      | 57
usa     | 21              usa     | 3                usa     | 22
china   | 17              china   | 33

Merged top 3:
country | SUM(duration)
korea   | 225
uk      | 171
china   | 33   ← Missing! Each node sent only its local top 3,
               so china's 17 and usa's contributions never reach the Broker.
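The miss above can be reproduced with a small sketch: each node returns only its local top 3, so contributions that are never locally top 3 are lost before the Broker merges. The per-node sums follow the example tables:

```python
from collections import Counter

def local_topn(node_sums, n):
    # Each Historical returns only its local top-n (country, sum) pairs.
    return Counter(node_sums).most_common(n)

def broker_merge(local_results, n):
    # The Broker sums whatever partial results it received, then takes top-n.
    merged = Counter()
    for result in local_results:
        for country, total in result:
            merged[country] += total
    return merged.most_common(n)

node_a = {"korea": 114, "uk": 47, "usa": 21, "china": 17}
node_b = {"uk": 67, "korea": 24, "usa": 3, "china": 1}
node_c = {"korea": 87, "uk": 57, "usa": 22, "china": 33}

approx = broker_merge([local_topn(s, 3) for s in (node_a, node_b, node_c)], 3)
exact = broker_merge([Counter(s).most_common() for s in (node_a, node_b, node_c)], 3)
print(approx)  # china counted as 33: its 17 and 1 were dropped locally
print(exact)   # china's true total is 51
```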
51. Stable TopN – Motivation
Under a replication factor of 2+, the same query can return different results.
Segments Seg_1, Seg_2, Seg_3 are replicated on Historical 1 and Historical 2:
• First run: Historical 1 computes TopN(Seg_1 + Seg_2), Historical 2 computes TopN(Seg_3)
• Second run: Historical 1 computes TopN(Seg_1), Historical 2 computes TopN(Seg_2 + Seg_3)
First result != second result: the answer depends on which replica serves which segments.
52. Stable TopN – by_segment patch
Bypass the Historical-side TopN merge; the Broker merges the TopN results per segment, keyed by segment ID:
• First run: TopN(Seg_1) + TopN(Seg_2) from Historical 1, TopN(Seg_3) from Historical 2
• Second run: TopN(Seg_1) from Historical 1, TopN(Seg_2) + TopN(Seg_3) from Historical 2
First result == second result: always identical, regardless of replica placement.
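The per-segment merge can be sketched as follows: since the TopN of a given segment is the same no matter which Historical serves it, merging one result per distinct segment ID makes the answer independent of replica placement. The segment contents are hypothetical:

```python
from collections import Counter

# Sketch of the by_segment idea (slide 52): compute TopN per segment and
# merge at the Broker keyed by segment ID, so replica placement cannot
# change the result.
segments = {
    "seg_1": {"korea": 114, "uk": 47},
    "seg_2": {"uk": 67, "korea": 24},
    "seg_3": {"korea": 87, "china": 33},
}

def stable_topn(served_segment_ids, n):
    # Whichever Historical serves a segment, its per-segment TopN is
    # identical; the Broker merges one result per distinct segment ID.
    merged = Counter()
    for seg_id in set(served_segment_ids):
        for country, total in Counter(segments[seg_id]).most_common(n):
            merged[country] += total
    return merged.most_common(n)

# Two different replica assignments serve the same set of segments:
first = stable_topn(["seg_1", "seg_2", "seg_3"], 3)
second = stable_topn(["seg_3", "seg_1", "seg_2"], 3)
print(first)
print(first == second)  # True: always identical
```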
65. Extending Druid Query – Motivation
Two queries are needed to build the following table:
1. A TopN query for the top 3 countries
2. An aggregation query for the total duration
Country | SUM(duration) | Ratio over total duration
korea   | 225           | 20%
uk      | 171           | 15.2%
usa     | 33            | 2.9%
Can we do it at once?
66. Extending Druid Query – Background
Yes we can! Just do the TopN operation and the SUM operation simultaneously.

Segment data:        Aggregated in a map:     Final records (top 3):
country | duration   country | SUM(duration)  country | SUM(duration)
korea   | 100        korea   | 114            korea   | 114
korea   | 14         china   | 17             uk      | 47
uk      | 40         usa     | 21             usa     | 21
uk      | 7          uk      | 47
usa     | 21
china   | 17
The total duration equals the sum of all metric values!
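The idea above can be sketched as a single scan that feeds both the per-country map (for TopN) and the grand total (for the ratio column), so no second query is needed. The rows are the example segment data:

```python
# Sketch of slide 66: one pass over segment rows computes both the
# per-country sums (for TopN) and the grand total (for the ratio column).
rows = [("korea", 100), ("korea", 14), ("uk", 40),
        ("uk", 7), ("usa", 21), ("china", 17)]

sums = {}
total = 0
for country, duration in rows:       # single scan over the segment
    sums[country] = sums.get(country, 0) + duration
    total += duration                # total = sum of all metric values

top3 = sorted(sums.items(), key=lambda kv: kv[1], reverse=True)[:3]
table = [(c, s, round(100 * s / total, 1)) for c, s in top3]
print(total)   # 199
print(table)   # (country, SUM(duration), ratio over total in %)
```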
67. Extending Druid Query in TopN
We customized Druid to calculate the total edits and the TopN metric at once:
on the Historical, rows flow through the cursor into the TopN queue while a
count metric accumulates, and the Broker merges both.

User request:
{
  "queryType": "topN",
  ...
  "metric": "edits",
  "accMetrics": ["edits"],
  ...
}
Druid response:
{
  ...
  "edits": 33,
  "__acc_edits": 1234,
  ...
}
69. Druid Spark Batch
We modified the original Druid Spark Batch:
• https://github.com/metamx/druid-spark-batch
• The original version is from Metamarkets (the creator of Druid)
• We added some features:
  • Parquet input
  • Single-dimension partitioning
  • Query granularity
  • The same ingest spec as Druid MapReduce Batch
70. Druid Spark Batch – Benchmark
[Bar charts, summarized:]
• Disk read/write (GB, lower is better): MapReduce 37.1 vs Spark 7
• Ingest time, single-dim partition, 3 segments of 430MB each (seconds, lower is better): MapReduce 759 vs Spark 2260
• Ingest time, single-dim partition, 11 segments of 135MB each (seconds, lower is better): MapReduce 333 vs Spark 376