ELK: Moose-ively scaling your log system
Lessons From Etsy’s 3-year Journey with Elasticsearch, Logstash and Kibana
Agenda
ACT 1: Sizing Up Your Elasticsearch Cluster
ACT 2: Monitoring And Scaling Logstash
ACT 3: Beyond Web-Scale: Moose-Scale Elasticsearch
REQUIREMENTS FROM YOU:
1. ASK QUESTIONS
2. DISCUSS THE TOPICS
HANG IN THERE,
SOME OF THIS IS
A LITTLE DRY!
PROLOGUE
Etsy’s ELK clusters
GUESS THE CLUSTER SIZE: STORAGE CAPACITY
Etsy’s ELK clusters
• Number of clusters: 6
• Combined cluster size: 300 ES instances, 141 physical servers, 4200 CPU cores, 38Tb RAM, 1.5Pb storage
• Log lines indexed: 10 billion/day, up to 400k/sec
PROLOGUE
Healthy advice
Healthy Advice
• Rename your cluster from “elasticsearch” to something else. When you end up with two Elasticsearch clusters on your network, you’ll be glad you did.
• Oops, deleted all the indices again! Set action.destructive_requires_name: true
• Always use SSDs. This is not optional.
• If you’re seeing this talk, you probably need 10G networking too.
• Use curator. We developed our own version before it was available.
ACT 1, SCENE 1
Sizing up your Elasticsearch Cluster
What resources influence cluster make-up?
• CPU
- Cores > clock speed
• Memory
- Number of documents
- Number of shards
• Disk I/O
- SSD sustained write rates
• Network bandwidth
- 10G mandatory on large installations for fast recovery / relocation
What resources influence cluster memory?
• Memory
- Segment memory: ~4 bytes of RAM per document = ~4Gb per billion log lines
- Field data memory: approximately the same as segment memory (less for older, less-accessed data)
- Filter cache: ~1/4 to 1/2 of segment memory, depending on searches
- All the rest (at least 50% of system memory) for OS file cache
- You can't have enough memory!
What resources influence cluster I/O?
• Disk I/O
- SSD sustained write rates
- Calculate shard recovery speed if one node fails:
- Shard size = (Daily storage / number of shards)
- (Shards per node * shard size) / (disk write speed / shards per node)
• Eg: 30Gb shards, 2 shards per node, 250MB/s write speed (sketched below):
- (2 * 30Gb) / (250MB/s / 2) = 60Gb / 125MB/s ≈ 8 minutes
• How long are you comfortable losing resilience?
• How many nodes are you comfortable losing?
• Multiple nodes per server increase recovery time
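A minimal Python sketch of the recovery-time arithmetic above (the numbers are the slide's example values, not measurements):

  shard_size_gb = 30        # example shard size
  shards_per_node = 2
  disk_write_mb_s = 250.0   # sustained SSD write speed, MB/s

  # Recovering shards share the disk, so each shard effectively writes at
  # disk_write_mb_s / shards_per_node.
  data_mb = shards_per_node * shard_size_gb * 1024
  effective_mb_s = disk_write_mb_s / shards_per_node
  print(f"~{data_mb / effective_mb_s / 60:.0f} minutes to recover")  # ~8 minutes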
What resources influence cluster networking?
• Network bandwidth
- 10G mandatory on large installations for fast recovery / relocation
- 10 minute recovery vs 50+ minute recovery:
• 1G Bottleneck: Network uplink
• 10G Bottleneck: Disk speed
ACT 1, SCENE 2
Sizing up your Logstash Cluster
Sizing Up Your Logstash Cluster: Resources
CPU
Sizing Up Logstash: CPU
• Rule 1: Buy as many of the fastest CPU cores as you can afford
• Rule 2: See rule 1
• More filtering == more CPU
WE'LL RETURN TO
LOGSTASH CPU SHORTLY!
BUT FIRST…
ACT 2
Monitoring
Marvel
• Easy to use
• Data saved to ES
• So many metrics!
• No integration
• Costs $$$

Roll your own
• Time to develop
• Integrates with your systems
• Re-inventing the wheel
• Free (libre, not gratis)
Monitoring: Elasticsearch
• Metrics are exposed in several places:
- _cat API: covers most metrics, human readable
- _stats API, _nodes API: cover everything, JSON, easy to parse
• Send to Graphite
• Create dashboards
Monitoring: Systems
• SSD endurance
• Monitor how often Logstash says the pipeline is blocked. If it happens frequently, find out why; the likely causes are covered later in this talk.
Monitoring: Systems
• Dynamic disk space thresholds (sketched below)
• ((num_servers - failure_capacity) / num_servers) - 15%
- 100 servers
- Allow up to 6 to fail
- Disk space alert threshold = ((100 - 6) / 100) - 15% = 79%
• Let your configuration management system tune this up and down for you, as you add and remove nodes from your cluster.
• The additional 15% gives you some extra time to order or build more nodes.
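The same threshold calculation as a small Python sketch; plug in the values from your own inventory:

  num_servers = 100
  failure_capacity = 6   # nodes you allow to fail before you worry
  headroom = 0.15        # extra margin to buy time for new hardware

  threshold = (num_servers - failure_capacity) / num_servers - headroom
  print(f"alert when disk usage exceeds {threshold:.0%}")  # 79%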
ACT 3, SCENE 1
Scaling Logstash
Scaling Logstash: What impacts performance?
• Line length
• Grok pattern complexity - regex is slow
• Plugins used
• Garbage collection
- Increase heap size
• Hyperthreading
- Measure, then turn it off
Scaling Logstash: Measure Twice
• Writing your logs as JSON has little benefit unless you do away with grok, kv, etc. Logstash still has to convert the incoming string to a Ruby hash anyway.
HOW MUCH DOES
RUBY LOVE
CREATING OBJECTS?
Scaling Logstash: Garbage Collection
• Defaults are usually OK
• Make sure you’re graphing GC
• Ruby LOVES to generate objects: monitor your GC as you scale
• Write plugins thoughtfully with GC in mind:
- Bad: 1_000_000.times { "This is a string" }
            user     system      total        real
       time 0.130000   0.000000   0.130000 (  0.132482)
- Good: foo = 'This is a string'; 1_000_000.times { foo }
            user     system      total        real
       time 0.060000   0.000000   0.060000 (  0.055005)
Scaling Logstash
Plugin performance
Scaling Logstash: Plugin Performance: Baseline
• How to establish a baseline
• Measure again with some filters
• Measure again with more filters
• Establish the costs of each filter
• Community filters are for the general case
- Write your own for your specific case
- Easy to do
• Run all benchmarks for at least 5 mins, with a large data set
Scaling Logstash: Plugin Performance: Baseline
• Establish baseline throughput: Python, StatsD, Graphite
• Simple logstash config, 10m apache log lines, no filtering:
input {
  file {
    path => "/var/log/httpd/access.log"
    start_position => "beginning"
  }
}
output {
  stdout { codec => "dots" }
}
Scaling Logstash: Plugin Performance: Baseline
• Establish baseline throughput: Python, StatsD, Graphite
• Python script to send logstash throughput to statsd:
- sudo pip install statsd
#!/usr/bin/env python
import statsd, sys

c = statsd.StatsClient('localhost', 8125)
while True:
    sys.stdin.read(1)
    c.incr('logstash.testing.throughput', rate=0.001)

• Why don't we use the statsd output plugin? It slows down output!
Scaling Logstash: Plugin Performance: Baseline
• Establish baseline throughput
• Tie it all together:
- logstash -f logstash.conf | pv -W | python throughput.py
(Throughput graph: the periodic dips are garbage collection.)
HOW MUCH DID
GROK SLOW DOWN
PROCESSING IN 1.5?
Scaling Logstash: Plugin Performance: Grok
• Add a simple grok filter
• grok { match => [ "message", "%{ETSY_APACHE_ACCESS}" ] }
• 80% slow down with only 1 worker
Oops! Only one filter worker!
Scaling Logstash: Plugin Performance: Grok
• Add a simple grok filter
• grok { match => [ "message", "%{APACHE_ACCESS}" ] }
• Add: -w <num_cpu_cores>, throughput still drops 33%: 65k/s -> 42k/s
(Throughput graph series: no grok / 1 worker, 1 grok / 1 worker, 1 grok / 32 workers.)
YOUR BASELINE IS THE
MINIMUM AMOUNT OF
WORK YOU NEED TO DO
Scaling Logstash: Plugin Performance: kv
• Add a kv filter, too:
kv { field_split => "&" source => "qs" target => "foo" }
• Throughput similar, 10% drop (40k/s)
• Throughput more variable due to heavier GC
DON’T BE AFRAID
TO REWRITE
PLUGINS!
Scaling Logstash: Plugin Performance
• kv is slow, we wrote a `splitkv` plugin for query strings, etc:
kvarray = text.split(@field_split).map { |afield|
  pairs = afield.split(@value_split)
  if pairs[0].nil? || !(pairs[0] =~ /^[0-9]/).nil? || pairs[1].nil? ||
     (pairs[0].length < @min_key_length && !@preserve_keys.include?(pairs[0]))
    next
  end
  if !@trimkey.nil?
    # 2 if's are faster (0.26s) than gsub (0.33s)
    #pairs[0] = pairs[0].slice(1..-1) if pairs[0].start_with?(@trimkey)
    #pairs[0].chop! if pairs[0].end_with?(@trimkey)
    # BUT! in-place tr is 6% faster than 2 if's (0.52s vs 0.55s)
    pairs[0].tr!(@trimkey, '') if pairs[0].start_with?(@trimkey)
  end
  if !@trimval.nil?
    pairs[1].tr!(@trimval, '') if pairs[1].start_with?(@trimval)
  end
  pairs
}
kvarray.delete_if { |x| x == nil }
return Hash[kvarray]
SPLITKV LOGSTASH CPU:
BEFORE: 100% BUSY
AFTER: 33% BUSY
Scaling Logstash: Elasticsearch Output
• Logstash output settings directly impact CPU on Logstash machines
- Increase flush_size from 500 to 5000, or more.
- Increase idle_flush_time from 1s to 5s
- Increase output workers
- Results vary by log lines - test for yourself:
• Make a change, wait 15 minutes, evaluate
• With the default flush_size of 500, we peaked at 50% CPU on the logstash cluster and ~40k log lines/sec. Bumping it to 10,000 and increasing idle_flush_time from 1s to 5s got us over 150k log lines/sec at 25% CPU.
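The Logstash output settings above are the real lever; as a hedged illustration of why batch size matters, here is the same idea with the elasticsearch-py bulk helper, where chunk_size plays the role of flush_size (the index name and documents are made up):

  from elasticsearch import Elasticsearch, helpers

  es = Elasticsearch(["http://localhost:9200"])
  docs = ({"_index": "logstash-test", "_source": {"message": f"line {i}"}}
          for i in range(100_000))

  # Larger chunks mean fewer, bigger bulk requests: less per-line overhead
  # on the sender, at the cost of larger in-flight batches.
  helpers.bulk(es, docs, chunk_size=5000)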
Scaling Logstash: Elasticsearch Output
Scaling Logstash
Pipeline performance
Before Logstash 2.3:
  Edit …/vendor/…/lib/logstash/pipeline.rb
  Change SizedQueue.new(20) to SizedQueue.new(500)
After Logstash 2.3:
  --pipeline-batch-size=500
This is best changed at the end of tuning. Impacted by output plugin performance.
Scaling Logstash
Testing configuration changes
Scaling Logstash: Adding Context
• Discovering pipeline latency
mutate { add_field => [ "index_time", "%{+YYYY-MM-dd HH:mm:ss Z}" ] }
• Which logstash server processed a log line?
mutate { add_field => [ "logstash_host", "<%= node[:fqdn] %>" ] }
• Hash your log lines to enable replaying logs
- Check out the hashid plugin to avoid duplicate lines (see the sketch below)
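A hedged Python sketch of the idea behind hashing log lines (the hashid plugin is the real mechanism; this only shows why a content-derived document ID makes replays idempotent):

  import hashlib

  def line_id(line: str) -> str:
      # Same line in, same ID out: indexing a replayed line under this _id
      # overwrites the existing document instead of creating a duplicate.
      return hashlib.sha1(line.encode("utf-8")).hexdigest()

  print(line_id('127.0.0.1 - - [15/Jun/2016:00:00:00 +0000] "GET / HTTP/1.1" 200'))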
Scaling Logstash: Etsy Plugins
http://github.com/etsy/logstash-plugins
Scaling Logstash: Adding Context
• ~10% hit from adding context
SERVERSPEC
Scaling Logstash: Testing Configuration Changes
describe package('logstash'), :if => os[:family] == 'redhat' do
  it { should be_installed }
end

describe command('chef-client') do
  its(:exit_status) { should eq 0 }
end

describe command('logstash -t -f ls.conf.test') do
  its(:exit_status) { should eq 0 }
end

describe command('logstash -f ls.conf.test') do
  its(:stdout) { should_not match(/parse_fail/) }
end

describe command('restart logstash') do
  its(:exit_status) { should eq 0 }
end

describe command('sleep 15') do
  its(:exit_status) { should eq 0 }
end

describe service('logstash'), :if => os[:family] == 'redhat' do
  it { should be_enabled }
  it { should be_running }
end

describe port(5555) do
  it { should be_listening }
end
Scaling Logstash: Testing Configuration Changes
input {
  generator {
    lines => [ '<Apache access log>' ]
    count => 1
    type => "access_log"
  }
  generator {
    lines => [ '<Application log>' ]
    count => 1
    type => "app_log"
  }
}
Scaling Logstash: Testing Configuration Changes
filter {
  if [type] == "access_log" {
    grok {
      match => [ "message", "%{APACHE_ACCESS}" ]
      tag_on_failure => [ "parse_fail_access_log" ]
    }
  }
  if [type] == "app_log" {
    grok {
      match => [ "message", "%{APACHE_INFO}" ]
      tag_on_failure => [ "parse_fail_app_log" ]
    }
  }
}
Scaling Logstash: Testing Configuration Changes
output {
  stdout {
    codec => json_lines
  }
}
Scaling Logstash: Summary
• Faster CPUs matter
- CPU cores > CPU clock speed
• Increase pipeline size
• Lots of memory
- 18Gb+ to prevent frequent garbage collection
• Scale horizontally
• Add context to your log lines
• Write your own plugins, share with the world
• Benchmark everything
ACT 3, SCENE 2
Scaling Elasticsearch
Scaling Elasticsearch
Let's establish our baseline
Scaling Elasticsearch: Baseline with Defaults
• Logstash output: Default options + 4 workers
• Elasticsearch: Default options + 1 shard, no replicas
We can do better!
Scaling Elasticsearch
What Impacts Indexing Performance?
Scaling Elasticsearch: What impacts indexing performance?
• Line length and analysis, default mapping
• doc_values - required, not a magic fix:
- Uses more CPU time
- Uses more disk space, disk I/O at indexing
- Helps avoid blowing out memory
- If you start using too much memory for fielddata, look at the biggest memory hogs and move them to doc_values
• Available network bandwidth for recovery
Scaling Elasticsearch: What impacts indexing performance?
• CPU:
- Analysis
- Mapping
• Default mapping creates tons of .raw fields
- doc_values
- Merging
- Recovery
Scaling Elasticsearch: What impacts indexing performance?
• Memory:
- Indexing buffers
- Garbage collection
- Number of segments and unoptimized indices
• Network:
- Recovery speed
• Translog portion of recovery stalls indexing: faster network == shorter stall
Scaling Elasticsearch
Memory
Scaling Elasticsearch: Where does memory go?
• Example memory distribution with 32Gb heap:
- Field data: 10%
- Filter cache: 10%
- Index buffer: 500Mb
- Segment cache (~4 bytes per doc): how many docs can you store per node? (sketched below)
• 32Gb - (32Gb / 10) - (32Gb / 10) - 500Mb = ~25Gb for segment cache
• 25Gb / 4b = 6.7bn docs across all shards
• 10bn docs/day, 200 shards = 50m docs/shard
- 1 daily shard per node: 6.7bn / 50m / 1 = 134 days
- 5 daily shards per node: 6.7bn / 50m / 5 = 26 days
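The same arithmetic as a runnable Python sketch (1Gb = 2**30 bytes here; results truncate to match the slide's figures):

  heap = 32 * 2**30
  segment_cache = heap - heap // 10 - heap // 10 - 500 * 2**20  # ~25Gb left
  docs_per_node = segment_cache / 4          # ~4 bytes per doc
  docs_per_shard = 10_000_000_000 / 200      # 10bn docs/day over 200 shards

  for shards_per_day in (1, 5):
      days = docs_per_node / docs_per_shard / shards_per_day
      print(f"{shards_per_day} daily shard(s) per node: {int(days)} days")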
Scaling Elasticsearch: Doc Values
• Doc values help reduce memory
• Doc values cost CPU and storage
- Some fields with doc_values:
  1.7G Aug 11 18:42 logstash-2015.08.07/7/index/_1i4v_Lucene410_0.dvd
- All fields with doc_values:
  106G Aug 13 20:33 logstash-2015.08.12/38/index/_2a9p_Lucene410_0.dvd
• Don't blindly enable Doc Values for every field
- Find your most frequently used fields, and convert them to Doc Values
- curl -s 'http://localhost:9200/_cat/fielddata?v' | less -S
Scaling Elasticsearch: Doc Values
• Example field data usage:

  total    request_uri  _size   owner   ip_address
  117.1mb  11.2mb       28.4mb  8.6mb   4.3mb
  96.3mb   7.7mb        19.7mb  9.1mb   4.4mb
  93.7mb   7mb          18.4mb  8.8mb   4.1mb
  139.1mb  11.2mb       27.7mb  13.5mb  6.6mb
  96.8mb   7.8mb        19.1mb  8.8mb   4.4mb
  145.9mb  11.5mb       28.6mb  13.4mb  6.7mb
  95mb     7mb          18.9mb  8.7mb   5.3mb
  122mb    11.8mb       28.4mb  8.9mb   5.7mb
  97.7mb   6.8mb        19.2mb  8.9mb   4.8mb
  88.9mb   7.6mb        18.2mb  8.4mb   4.6mb
  96.5mb   7.7mb        18.3mb  8.8mb   4.7mb
  147.4mb  11.6mb       27.9mb  13.2mb  8.8mb
  146.7mb  10mb         28.7mb  13.6mb  7.2mb
Scaling Elasticsearch: Memory
• Run instances with 128Gb or 256Gb RAM
• Configure RAM for optimal hardware configuration
- Haswell/Skylake Xeon CPUs have 4 memory channels
• Multiple instances of Elasticsearch
- Do you name your instances by hostname? Give each instance its own node.name!
Scaling Elasticsearch
CPUs
Scaling Elasticsearch: CPUs
• CPU intensive activities
- Indexing: analysis, merging, compression
- Searching: computations, decompression
• For write-heavy workloads
- Number of CPU cores impacts number of concurrent index operations
- Choose more cores over higher clock speed
Scaling Elasticsearch: That Baseline Again…
• Remember our baseline?
• Why was it so slow?
Scaling Elasticsearch: That Baseline Again…
[logstash-2016.06.15][0] stop throttling indexing: numMergesInFlight=4, maxNumMerges=5
MERGING SUCKS
Scaling Elasticsearch: Merging
• Step 1: Increase shard count from 1 to 5
• Step 2: Disable merge throttling on ES < 2.0: index.store.throttle.type: none
Much better!
Scaling Elasticsearch: Split Hosts
• Oops, we maxed out CPU! Time to add more nodes
Scaling Elasticsearch: Split Hosts
• Running Logstash and Elasticsearch on separate hosts
Scaling Elasticsearch: Split Hosts
• Running Logstash and Elasticsearch on separate hosts: 50% throughput improvement: 13k/s -> 19k/s
CPU IS REALLY
IMPORTANT
DOES
HYPERTHREADING
HELP?
Scaling Elasticsearch: Hyperthreading
• YES! About 20% of our performance! Leave it on.
WHAT ELSE
HELPS?
CPU SCALING GOVERNORS! BUT HOW MUCH?
Scaling Elasticsearch: CPU Governor
• # echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
• ~15-30% performance improvement. Remember to apply at boot!
Scaling Elasticsearch
Storage
A STORY OF
SSDS
Scaling Elasticsearch: Disk I/O
Scaling Elasticsearch: Disk I/O
• Common advice
- Use SSD
- RAID 0
- Software RAID is sufficient
Scaling Elasticsearch: Disk I/O
• Uncommon advice
- Good SSDs are important. Cheap SSDs will make you very, very sad
- Don’t use multiple data paths, use RAID 0 instead. Heavy translog writes to one disk will bottleneck
- If you have heavy merging, but CPU and disk I/O to spare: in the extreme case, increase index.merge.scheduler.max_thread_count (but try not to…)
Scaling Elasticsearch: Disk I/O
• Uncommon advice
- Reduced durability: index.translog.durability: async
  Translog fsync() every 5s; may be sufficient with replication
- Cluster recovery eats disk I/O. Be prepared to tune it up and down during recovery (see the sketch below), eg:
  indices.recovery.max_bytes_per_sec: 300mb
  cluster.routing.allocation.cluster_concurrent_rebalance: 24
  cluster.routing.allocation.node_concurrent_recoveries: 2
- Any amount of consistent I/O wait indicates a suboptimal state
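A hedged sketch of turning those recovery throttles up with the elasticsearch-py client during a planned recovery (the values mirror the slide; remember to set them back afterwards):

  from elasticsearch import Elasticsearch

  es = Elasticsearch(["http://localhost:9200"])
  es.cluster.put_settings(body={"transient": {
      "indices.recovery.max_bytes_per_sec": "300mb",
      "cluster.routing.allocation.cluster_concurrent_rebalance": 24,
      "cluster.routing.allocation.node_concurrent_recoveries": 2,
  }})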
CHOOSE YOUR SSDS WISELY
Scaling Elasticsearch: Choosing SSDs
• Consumer grade drives
- Slower writes
- Cheap
- Lower endurance, fewer disk writes per day
• Enterprise grade drives
- Fast
- Expensive
- Higher endurance, higher disk writes per day
Scaling Elasticsearch: Choosing SSDs
• Read intensive
- Lower endurance, 1-3 DWPD
- Lower write speeds, least expensive
• Mixed use
- Moderate endurance, 10 DWPD
- Balanced read/write performance, pricing middle ground
• Write intensive
- High endurance, 25 DWPD
- High write speeds, most expensive
YOU MENTIONED
AN FSYNC()
TUNABLE?
Scaling Elasticsearch: That Baseline Again…
• Remember this graph? Let's make it better!
Scaling Elasticsearch: Reduced Durability
• Benchmark: Reduced durability.
Old baseline: ~20k-25k. New baseline: similar, but smoother:
WHY WAS THE
IMPROVEMENT
SMALLER?
Scaling Elasticsearch: Thanks, Merges
• MERRRRRRGGGGGGGGGGGGGGGIIIIIIIINNNNGGGGGG!!
• $ curl -s 'http://localhost:9200/_nodes/hot_threads?threads=10' | grep %
  73.6% (367.8ms out of 500ms) 'elasticsearch[es][bulk][T#25]'
  66.8% (334.1ms out of 500ms) 'elasticsearch[es][[logstash][1]: Lucene Merge Thread #139]'
  66.3% (331.6ms out of 500ms) 'elasticsearch[es][[logstash][3]: Lucene Merge Thread #183]'
  66.1% (330.7ms out of 500ms) 'elasticsearch[es][[logstash][1]: Lucene Merge Thread #140]'
  66.1% (330.4ms out of 500ms) 'elasticsearch[es][[logstash][4]: Lucene Merge Thread #158]'
  62.9% (314.7ms out of 500ms) 'elasticsearch[es][[logstash][3]: Lucene Merge Thread #189]'
  62.4% (312.2ms out of 500ms) 'elasticsearch[es][[logstash][2]: Lucene Merge Thread #160]'
  61.8% (309.2ms out of 500ms) 'elasticsearch[es][[logstash][1]: Lucene Merge Thread #115]'
  57.6% (287.7ms out of 500ms) 'elasticsearch[es][[logstash][0]: Lucene Merge Thread #155]'
  55.6% (277.9ms out of 500ms) 'elasticsearch[es][[logstash][2]: Lucene Merge Thread #161]'
LET'S FIX THIS
MERGING…
…AFTER SOME
LAST WORDS ON
DISK I/O
Scaling Elasticsearch: Multi-tiered Storage
• Put your most accessed indices across more servers, with more memory and faster CPUs.
• Spec out “cold” storage
- SSDs still necessary! Don't even think about spinning platters
- Cram bigger SSDs per server
• Set index.codec: best_compression
• Move indices, re-optimize
• elasticsearch-curator makes this easy
Scaling Elasticsearch
Merging
WHY DOES THE DEFAULT
CONFIGURATION
MERGE SO MUCH?
Scaling Elasticsearch: Default Mapping
• $ curl 'http://localhost:9200/_template/logstash?pretty'

"string_fields" : {
  "mapping" : {
    "index" : "analyzed",
    "omit_norms" : true,
    "type" : "string",
    "fields" : {
      "raw" : {
        "ignore_above" : 256,
        "index" : "not_analyzed",
        "type" : "string"
      }
    }
  },
  "match_mapping_type" : "string",
  "match" : "*"
}

Do you see it?
Scaling Elasticsearch: Custom Mapping
• $ curl 'http://localhost:9200/_template/logstash?pretty'

"string_fields" : {
  "mapping" : {
    "index" : "not_analyzed",
    "omit_norms" : true,
    "type" : "string"
  },
  "match_mapping_type" : "string",
  "match" : "*"
}
Scaling Elasticsearch: Custom Mapping
• A small help… unfortunately, the server is maxed out now! Expect this to normally have a bigger impact :-)
Scaling Elasticsearch
Indexing performance
Scaling Elasticsearch: Indexing Performance
• Increasing the bulk thread pool queue can help under bursty indexing
- Be aware of the consequences: you're hiding a performance problem
• Increase index buffer
• Increase refresh time, from 1s to 5s
• Spread indexing requests to multiple hosts
• Increase output workers until you stop seeing improvements. We use num_cpu/2 with the transport protocol
• Increase flush_size until you stop seeing improvements. We use 10,000
• Disk I/O performance
Scaling Elasticsearch: Indexing Performance
• Indexing protocols
- HTTP
- Node
- Transport
• Transport is still slightly more performant, but HTTP has closed the gap.
• Node is generally not worth it: longer start up, more resources, more fragile, more work for the cluster.
Scaling Elasticsearch: Indexing Performance
• Custom mapping template
- The default template creates an additional not_analyzed .raw field for every field
- Every field is analyzed, which eats CPU
- The extra fields eat more disk
- Dynamic fields and Hungarian notation
• Use a custom template which has dynamic fields enabled, but not_analyzed. Ditch .raw fields, unless you really need them
• This change dropped Elasticsearch cluster CPU usage from 28% to 15%
Scaling Elasticsearch: Indexing Performance
• Message complexity matters. Adding new lines of ~20k characters, against an average of 1.5k, tanked the indexing rate for all log lines:
Scaling Elasticsearch: Indexing Performance
• ruby { code => "if event['message'].length > 10240 then event['message'] = event['message'].slice!(0,10240) end" }
Scaling Elasticsearch: Indexing Performance
• Speeding up Elasticsearch lets Logstash do more work!
Scaling Elasticsearch
Index Size
Scaling Elasticsearch: Indices
• Tune shards per index
- num_shards = (num_nodes - failed_node_limit) / (number_of_replicas + 1)
- With 50 nodes, allowing 4 to fail at any time, and 1x replication:
  num_shards = (50 - 4) / (1 + 1) = 23
• If your shards are larger than 25Gb, increase shard count accordingly.
• Tune indices.memory.index_buffer_size (see the sketch below)
- index_buffer_size = num_active_shards * 500Mb
- “Active shards”: any shard updated in the last 5 minutes
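Both formulas as a Python sketch, using the slide's example values (num_active_shards is an assumption for illustration):

  num_nodes = 50
  failed_node_limit = 4
  number_of_replicas = 1

  num_shards = (num_nodes - failed_node_limit) // (number_of_replicas + 1)
  print(f"num_shards = {num_shards}")  # 23

  # "Active shards" = shards updated in the last 5 minutes.
  num_active_shards = 10
  print(f"index_buffer_size ~ {num_active_shards * 500}Mb")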
Scaling Elasticsearch: Indices
• Tune refresh_interval
- Defaults to 1s - way too frequent!
- Increase to 5s (see the sketch below)
- Tuning higher may cause more disk thrashing
- Goal: flush as much as your disk's buffer can take
• Example: Samsung SM863 SSDs:
- DRAM buffer: 1Gb
- Flush speed: 500Mb/sec
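A hedged sketch of applying the refresh_interval change with elasticsearch-py (it is a dynamic, per-index setting, so this works on live indices):

  from elasticsearch import Elasticsearch

  es = Elasticsearch(["http://localhost:9200"])
  es.indices.put_settings(index="logstash-*",
                          body={"index": {"refresh_interval": "5s"}})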
Thank you!
Q&A
@avleen
http://github.com/etsy/logstash-plugins
SECRET ACT 4
Filesystem Comparison
Scaling Elasticsearch: Optimize Indices
Unoptimized: 5230 segments, 29Gb memory, 10.5Tb disk space
Optimized:   124 segments,  23Gb memory, 10.1Tb disk space
The Easy Way:
ruby {
  code => "event['message'] = event['message'].slice!(0,10240)"
}

The Thoughtful Way:
ruby {
  code => "if event['message'].length > 10240 then
             event['message'] = event['message'].slice!(0,10240)
           end"
}

More Related Content

What's hot

About elasticsearch
About elasticsearchAbout elasticsearch
About elasticsearch
Minsoo Jun
 
My first 90 days with ClickHouse.pdf
My first 90 days with ClickHouse.pdfMy first 90 days with ClickHouse.pdf
My first 90 days with ClickHouse.pdf
Alkin Tezuysal
 
Vitess VReplication: Standing on the Shoulders of a MySQL Giant
Vitess VReplication: Standing on the Shoulders of a MySQL GiantVitess VReplication: Standing on the Shoulders of a MySQL Giant
Vitess VReplication: Standing on the Shoulders of a MySQL Giant
Matt Lord
 
Introduction to Elasticsearch
Introduction to ElasticsearchIntroduction to Elasticsearch
Introduction to Elasticsearch
Ismaeel Enjreny
 
Elastic Stack Introduction
Elastic Stack IntroductionElastic Stack Introduction
Elastic Stack Introduction
Vikram Shinde
 
Introduction to Elasticsearch
Introduction to ElasticsearchIntroduction to Elasticsearch
Introduction to Elasticsearch
Ruslan Zavacky
 
MySQL 8.0 EXPLAIN ANALYZE
MySQL 8.0 EXPLAIN ANALYZEMySQL 8.0 EXPLAIN ANALYZE
MySQL 8.0 EXPLAIN ANALYZE
Norvald Ryeng
 
MySQL Administrator 2021 - 네오클로바
MySQL Administrator 2021 - 네오클로바MySQL Administrator 2021 - 네오클로바
MySQL Administrator 2021 - 네오클로바
NeoClova
 
AWS RDS Benchmark - Instance comparison
AWS RDS Benchmark - Instance comparisonAWS RDS Benchmark - Instance comparison
AWS RDS Benchmark - Instance comparison
Roberto Gaiser
 
Off-heaping the Apache HBase Read Path
Off-heaping the Apache HBase Read Path Off-heaping the Apache HBase Read Path
Off-heaping the Apache HBase Read Path
HBaseCon
 
Best Practices for Data Warehousing with Amazon Redshift | AWS Public Sector ...
Best Practices for Data Warehousing with Amazon Redshift | AWS Public Sector ...Best Practices for Data Warehousing with Amazon Redshift | AWS Public Sector ...
Best Practices for Data Warehousing with Amazon Redshift | AWS Public Sector ...
Amazon Web Services
 
TiDB Introduction
TiDB IntroductionTiDB Introduction
TiDB Introduction
Morgan Tocker
 
Redpanda and ClickHouse
Redpanda and ClickHouseRedpanda and ClickHouse
Redpanda and ClickHouse
Altinity Ltd
 
Streaming Operational Data with MariaDB MaxScale
Streaming Operational Data with MariaDB MaxScaleStreaming Operational Data with MariaDB MaxScale
Streaming Operational Data with MariaDB MaxScale
MariaDB plc
 
Introduction to elasticsearch
Introduction to elasticsearchIntroduction to elasticsearch
Introduction to elasticsearch
pmanvi
 
RocksDB Performance and Reliability Practices
RocksDB Performance and Reliability PracticesRocksDB Performance and Reliability Practices
RocksDB Performance and Reliability Practices
Yoshinori Matsunobu
 
Optimizing queries MySQL
Optimizing queries MySQLOptimizing queries MySQL
Optimizing queries MySQL
Georgi Sotirov
 
Open ebs 101
Open ebs 101Open ebs 101
Open ebs 101
LibbySchulze
 
FLiP Into Trino
FLiP Into TrinoFLiP Into Trino
FLiP Into Trino
Timothy Spann
 
Elastic search overview
Elastic search overviewElastic search overview
Elastic search overview
ABC Talks
 

What's hot (20)

About elasticsearch
About elasticsearchAbout elasticsearch
About elasticsearch
 
My first 90 days with ClickHouse.pdf
My first 90 days with ClickHouse.pdfMy first 90 days with ClickHouse.pdf
My first 90 days with ClickHouse.pdf
 
Vitess VReplication: Standing on the Shoulders of a MySQL Giant
Vitess VReplication: Standing on the Shoulders of a MySQL GiantVitess VReplication: Standing on the Shoulders of a MySQL Giant
Vitess VReplication: Standing on the Shoulders of a MySQL Giant
 
Introduction to Elasticsearch
Introduction to ElasticsearchIntroduction to Elasticsearch
Introduction to Elasticsearch
 
Elastic Stack Introduction
Elastic Stack IntroductionElastic Stack Introduction
Elastic Stack Introduction
 
Introduction to Elasticsearch
Introduction to ElasticsearchIntroduction to Elasticsearch
Introduction to Elasticsearch
 
MySQL 8.0 EXPLAIN ANALYZE
MySQL 8.0 EXPLAIN ANALYZEMySQL 8.0 EXPLAIN ANALYZE
MySQL 8.0 EXPLAIN ANALYZE
 
MySQL Administrator 2021 - 네오클로바
MySQL Administrator 2021 - 네오클로바MySQL Administrator 2021 - 네오클로바
MySQL Administrator 2021 - 네오클로바
 
AWS RDS Benchmark - Instance comparison
AWS RDS Benchmark - Instance comparisonAWS RDS Benchmark - Instance comparison
AWS RDS Benchmark - Instance comparison
 
Off-heaping the Apache HBase Read Path
Off-heaping the Apache HBase Read Path Off-heaping the Apache HBase Read Path
Off-heaping the Apache HBase Read Path
 
Best Practices for Data Warehousing with Amazon Redshift | AWS Public Sector ...
Best Practices for Data Warehousing with Amazon Redshift | AWS Public Sector ...Best Practices for Data Warehousing with Amazon Redshift | AWS Public Sector ...
Best Practices for Data Warehousing with Amazon Redshift | AWS Public Sector ...
 
TiDB Introduction
TiDB IntroductionTiDB Introduction
TiDB Introduction
 
Redpanda and ClickHouse
Redpanda and ClickHouseRedpanda and ClickHouse
Redpanda and ClickHouse
 
Streaming Operational Data with MariaDB MaxScale
Streaming Operational Data with MariaDB MaxScaleStreaming Operational Data with MariaDB MaxScale
Streaming Operational Data with MariaDB MaxScale
 
Introduction to elasticsearch
Introduction to elasticsearchIntroduction to elasticsearch
Introduction to elasticsearch
 
RocksDB Performance and Reliability Practices
RocksDB Performance and Reliability PracticesRocksDB Performance and Reliability Practices
RocksDB Performance and Reliability Practices
 
Optimizing queries MySQL
Optimizing queries MySQLOptimizing queries MySQL
Optimizing queries MySQL
 
Open ebs 101
Open ebs 101Open ebs 101
Open ebs 101
 
FLiP Into Trino
FLiP Into TrinoFLiP Into Trino
FLiP Into Trino
 
Elastic search overview
Elastic search overviewElastic search overview
Elastic search overview
 

Viewers also liked

AppSec And Microservices
AppSec And MicroservicesAppSec And Microservices
AppSec And Microservices
Sam Newman
 
AppSec & Microservices - Velocity 2016
AppSec & Microservices - Velocity 2016AppSec & Microservices - Velocity 2016
AppSec & Microservices - Velocity 2016
Sam Newman
 
Launching a Rocketship Off Someone Else's Back
Launching a Rocketship Off Someone Else's BackLaunching a Rocketship Off Someone Else's Back
Launching a Rocketship Off Someone Else's Back
joshelman
 
Elk stack
Elk stackElk stack
Elk stack
Jilles van Gurp
 
Etsy @ Nagios World Conf 2013
Etsy @ Nagios World Conf 2013Etsy @ Nagios World Conf 2013
Etsy @ Nagios World Conf 2013
Avleen Vig
 
Open design at large scale
Open design at large scaleOpen design at large scale
Open design at large scale
shykes
 
Failing at Scale - PNWPHP 2016
Failing at Scale - PNWPHP 2016Failing at Scale - PNWPHP 2016
Failing at Scale - PNWPHP 2016
Chris Tankersley
 
Сергей Татаринцев — Тестирование CSS-регрессий с Gemini
Сергей Татаринцев — Тестирование CSS-регрессий с GeminiСергей Татаринцев — Тестирование CSS-регрессий с Gemini
Сергей Татаринцев — Тестирование CSS-регрессий с Gemini
Yandex
 
Emacs: многофункциональный комбайн
Emacs: многофункциональный комбайнEmacs: многофункциональный комбайн
Emacs: многофункциональный комбайн
Alex Ott
 
Elasticsearch + Cascading for Scalable Log Processing
Elasticsearch + Cascading for Scalable Log ProcessingElasticsearch + Cascading for Scalable Log Processing
Elasticsearch + Cascading for Scalable Log Processing
Cascading
 
App::highlight - a simple grep-like highlighter app
App::highlight - a simple grep-like highlighter appApp::highlight - a simple grep-like highlighter app
App::highlight - a simple grep-like highlighter app
Alex Balhatchet
 
A sample data visualisation web application
A sample data visualisation web applicationA sample data visualisation web application
A sample data visualisation web application
sandugandhi
 
BlinkDB 紹介
BlinkDB 紹介BlinkDB 紹介
BlinkDB 紹介
Masafumi Oyamada
 
'Scalable Logging and Analytics with LogStash'
'Scalable Logging and Analytics with LogStash''Scalable Logging and Analytics with LogStash'
'Scalable Logging and Analytics with LogStash'
Cloud Elements
 
Data Driven Monitoring
Data Driven MonitoringData Driven Monitoring
Data Driven Monitoring
Daniel Schauenberg
 
Perspectives on Docker
Perspectives on DockerPerspectives on Docker
Perspectives on Docker
RightScale
 
Toronto High Scalability meetup - Scaling ELK
Toronto High Scalability meetup - Scaling ELKToronto High Scalability meetup - Scaling ELK
Toronto High Scalability meetup - Scaling ELK
Andrew Trossman
 
Twitter 與 ELK 基本使用
Twitter 與 ELK 基本使用Twitter 與 ELK 基本使用
Twitter 與 ELK 基本使用
Mark Dai
 
Scaling Elasticsearch at Synthesio
Scaling Elasticsearch at SynthesioScaling Elasticsearch at Synthesio
Scaling Elasticsearch at Synthesio
Fred de Villamil
 
Mysql casual talks vol4
Mysql casual talks vol4Mysql casual talks vol4
Mysql casual talks vol4
matsuo kenji
 

Viewers also liked (20)

AppSec And Microservices
AppSec And MicroservicesAppSec And Microservices
AppSec And Microservices
 
AppSec & Microservices - Velocity 2016
AppSec & Microservices - Velocity 2016AppSec & Microservices - Velocity 2016
AppSec & Microservices - Velocity 2016
 
Launching a Rocketship Off Someone Else's Back
Launching a Rocketship Off Someone Else's BackLaunching a Rocketship Off Someone Else's Back
Launching a Rocketship Off Someone Else's Back
 
Elk stack
Elk stackElk stack
Elk stack
 
Etsy @ Nagios World Conf 2013
Etsy @ Nagios World Conf 2013Etsy @ Nagios World Conf 2013
Etsy @ Nagios World Conf 2013
 
Open design at large scale
Open design at large scaleOpen design at large scale
Open design at large scale
 
Failing at Scale - PNWPHP 2016
Failing at Scale - PNWPHP 2016Failing at Scale - PNWPHP 2016
Failing at Scale - PNWPHP 2016
 
Сергей Татаринцев — Тестирование CSS-регрессий с Gemini
Сергей Татаринцев — Тестирование CSS-регрессий с GeminiСергей Татаринцев — Тестирование CSS-регрессий с Gemini
Сергей Татаринцев — Тестирование CSS-регрессий с Gemini
 
Emacs: многофункциональный комбайн
Emacs: многофункциональный комбайнEmacs: многофункциональный комбайн
Emacs: многофункциональный комбайн
 
Elasticsearch + Cascading for Scalable Log Processing
Elasticsearch + Cascading for Scalable Log ProcessingElasticsearch + Cascading for Scalable Log Processing
Elasticsearch + Cascading for Scalable Log Processing
 
App::highlight - a simple grep-like highlighter app
App::highlight - a simple grep-like highlighter appApp::highlight - a simple grep-like highlighter app
App::highlight - a simple grep-like highlighter app
 
A sample data visualisation web application
A sample data visualisation web applicationA sample data visualisation web application
A sample data visualisation web application
 
BlinkDB 紹介
BlinkDB 紹介BlinkDB 紹介
BlinkDB 紹介
 
'Scalable Logging and Analytics with LogStash'
'Scalable Logging and Analytics with LogStash''Scalable Logging and Analytics with LogStash'
'Scalable Logging and Analytics with LogStash'
 
Data Driven Monitoring
Data Driven MonitoringData Driven Monitoring
Data Driven Monitoring
 
Perspectives on Docker
Perspectives on DockerPerspectives on Docker
Perspectives on Docker
 
Toronto High Scalability meetup - Scaling ELK
Toronto High Scalability meetup - Scaling ELKToronto High Scalability meetup - Scaling ELK
Toronto High Scalability meetup - Scaling ELK
 
Twitter 與 ELK 基本使用
Twitter 與 ELK 基本使用Twitter 與 ELK 基本使用
Twitter 與 ELK 基本使用
 
Scaling Elasticsearch at Synthesio
Scaling Elasticsearch at SynthesioScaling Elasticsearch at Synthesio
Scaling Elasticsearch at Synthesio
 
Mysql casual talks vol4
Mysql casual talks vol4Mysql casual talks vol4
Mysql casual talks vol4
 

Similar to ELK: Moose-ively scaling your log system

Scaling an ELK stack at bol.com
Scaling an ELK stack at bol.comScaling an ELK stack at bol.com
Scaling an ELK stack at bol.com
Renzo Tomà
 
Am I reading GC logs Correctly?
Am I reading GC logs Correctly?Am I reading GC logs Correctly?
Am I reading GC logs Correctly?
Tier1 App
 
Benchmarking Solr Performance at Scale
Benchmarking Solr Performance at ScaleBenchmarking Solr Performance at Scale
Benchmarking Solr Performance at Scale
thelabdude
 
Problems with PostgreSQL on Multi-core Systems with MultiTerabyte Data
Problems with PostgreSQL on Multi-core Systems with MultiTerabyte DataProblems with PostgreSQL on Multi-core Systems with MultiTerabyte Data
Problems with PostgreSQL on Multi-core Systems with MultiTerabyte Data
Jignesh Shah
 
Couchbase live 2016
Couchbase live 2016Couchbase live 2016
Couchbase live 2016
Pierre Mavro
 
(WEB401) Optimizing Your Web Server on AWS | AWS re:Invent 2014
(WEB401) Optimizing Your Web Server on AWS | AWS re:Invent 2014(WEB401) Optimizing Your Web Server on AWS | AWS re:Invent 2014
(WEB401) Optimizing Your Web Server on AWS | AWS re:Invent 2014
Amazon Web Services
 
Oracle Database In-Memory Option in Action
Oracle Database In-Memory Option in ActionOracle Database In-Memory Option in Action
Oracle Database In-Memory Option in Action
Tanel Poder
 
In Memory Database In Action by Tanel Poder and Kerry Osborne
In Memory Database In Action by Tanel Poder and Kerry OsborneIn Memory Database In Action by Tanel Poder and Kerry Osborne
In Memory Database In Action by Tanel Poder and Kerry Osborne
Enkitec
 
Analyze database system using a 3 d method
Analyze database system using a 3 d methodAnalyze database system using a 3 d method
Analyze database system using a 3 d method
Ajith Narayanan
 
KSQL Performance Tuning for Fun and Profit ( Nick Dearden, Confluent) Kafka S...
KSQL Performance Tuning for Fun and Profit ( Nick Dearden, Confluent) Kafka S...KSQL Performance Tuning for Fun and Profit ( Nick Dearden, Confluent) Kafka S...
KSQL Performance Tuning for Fun and Profit ( Nick Dearden, Confluent) Kafka S...
confluent
 
Docker Logging and analysing with Elastic Stack
Docker Logging and analysing with Elastic StackDocker Logging and analysing with Elastic Stack
Docker Logging and analysing with Elastic Stack
Jakub Hajek
 
Docker Logging and analysing with Elastic Stack - Jakub Hajek
Docker Logging and analysing with Elastic Stack - Jakub Hajek Docker Logging and analysing with Elastic Stack - Jakub Hajek
Docker Logging and analysing with Elastic Stack - Jakub Hajek
PROIDEA
 
Corralling Big Data at TACC
Corralling Big Data at TACCCorralling Big Data at TACC
Corralling Big Data at TACC
inside-BigData.com
 
Logstash
LogstashLogstash
Logstash
琛琳 饶
 
AWS re:Invent presentation: Unmeltable Infrastructure at Scale by Loggly
AWS re:Invent presentation: Unmeltable Infrastructure at Scale by Loggly AWS re:Invent presentation: Unmeltable Infrastructure at Scale by Loggly
AWS re:Invent presentation: Unmeltable Infrastructure at Scale by Loggly
SolarWinds Loggly
 
Super scaling singleton inserts
Super scaling singleton insertsSuper scaling singleton inserts
Super scaling singleton inserts
Chris Adkin
 
Logs @ OVHcloud
Logs @ OVHcloudLogs @ OVHcloud
Logs @ OVHcloud
OVHcloud
 
Infrastructure review - Shining a light on the Black Box
Infrastructure review - Shining a light on the Black BoxInfrastructure review - Shining a light on the Black Box
Infrastructure review - Shining a light on the Black Box
Miklos Szel
 
Datadog: a Real-Time Metrics Database for One Quadrillion Points/Day
Datadog: a Real-Time Metrics Database for One Quadrillion Points/DayDatadog: a Real-Time Metrics Database for One Quadrillion Points/Day
Datadog: a Real-Time Metrics Database for One Quadrillion Points/Day
C4Media
 
Aioug vizag oracle12c_new_features
Aioug vizag oracle12c_new_featuresAioug vizag oracle12c_new_features
Aioug vizag oracle12c_new_features
AiougVizagChapter
 

Similar to ELK: Moose-ively scaling your log system (20)

Scaling an ELK stack at bol.com
Scaling an ELK stack at bol.comScaling an ELK stack at bol.com
Scaling an ELK stack at bol.com
 
Am I reading GC logs Correctly?
Am I reading GC logs Correctly?Am I reading GC logs Correctly?
Am I reading GC logs Correctly?
 
Benchmarking Solr Performance at Scale
Benchmarking Solr Performance at ScaleBenchmarking Solr Performance at Scale
Benchmarking Solr Performance at Scale
 
Problems with PostgreSQL on Multi-core Systems with MultiTerabyte Data
Problems with PostgreSQL on Multi-core Systems with MultiTerabyte DataProblems with PostgreSQL on Multi-core Systems with MultiTerabyte Data
Problems with PostgreSQL on Multi-core Systems with MultiTerabyte Data
 
Couchbase live 2016
Couchbase live 2016Couchbase live 2016
Couchbase live 2016
 
(WEB401) Optimizing Your Web Server on AWS | AWS re:Invent 2014
(WEB401) Optimizing Your Web Server on AWS | AWS re:Invent 2014(WEB401) Optimizing Your Web Server on AWS | AWS re:Invent 2014
(WEB401) Optimizing Your Web Server on AWS | AWS re:Invent 2014
 
Oracle Database In-Memory Option in Action
Oracle Database In-Memory Option in ActionOracle Database In-Memory Option in Action
Oracle Database In-Memory Option in Action
 
In Memory Database In Action by Tanel Poder and Kerry Osborne
In Memory Database In Action by Tanel Poder and Kerry OsborneIn Memory Database In Action by Tanel Poder and Kerry Osborne
In Memory Database In Action by Tanel Poder and Kerry Osborne
 
Analyze database system using a 3 d method
Analyze database system using a 3 d methodAnalyze database system using a 3 d method
Analyze database system using a 3 d method
 
KSQL Performance Tuning for Fun and Profit ( Nick Dearden, Confluent) Kafka S...
KSQL Performance Tuning for Fun and Profit ( Nick Dearden, Confluent) Kafka S...KSQL Performance Tuning for Fun and Profit ( Nick Dearden, Confluent) Kafka S...
KSQL Performance Tuning for Fun and Profit ( Nick Dearden, Confluent) Kafka S...
 
Docker Logging and analysing with Elastic Stack
Docker Logging and analysing with Elastic StackDocker Logging and analysing with Elastic Stack
Docker Logging and analysing with Elastic Stack
 
Docker Logging and analysing with Elastic Stack - Jakub Hajek
Docker Logging and analysing with Elastic Stack - Jakub Hajek Docker Logging and analysing with Elastic Stack - Jakub Hajek
Docker Logging and analysing with Elastic Stack - Jakub Hajek
 
Corralling Big Data at TACC
Corralling Big Data at TACCCorralling Big Data at TACC
Corralling Big Data at TACC
 
Logstash
LogstashLogstash
Logstash
 
AWS re:Invent presentation: Unmeltable Infrastructure at Scale by Loggly
AWS re:Invent presentation: Unmeltable Infrastructure at Scale by Loggly AWS re:Invent presentation: Unmeltable Infrastructure at Scale by Loggly
AWS re:Invent presentation: Unmeltable Infrastructure at Scale by Loggly
 
Super scaling singleton inserts
Super scaling singleton insertsSuper scaling singleton inserts
Super scaling singleton inserts
 
Logs @ OVHcloud
Logs @ OVHcloudLogs @ OVHcloud
Logs @ OVHcloud
 
Infrastructure review - Shining a light on the Black Box
Infrastructure review - Shining a light on the Black BoxInfrastructure review - Shining a light on the Black Box
Infrastructure review - Shining a light on the Black Box
 
Datadog: a Real-Time Metrics Database for One Quadrillion Points/Day
Datadog: a Real-Time Metrics Database for One Quadrillion Points/DayDatadog: a Real-Time Metrics Database for One Quadrillion Points/Day
Datadog: a Real-Time Metrics Database for One Quadrillion Points/Day
 
Aioug vizag oracle12c_new_features
Aioug vizag oracle12c_new_featuresAioug vizag oracle12c_new_features
Aioug vizag oracle12c_new_features
 

More from Avleen Vig

Burnout (LISA 2016)
Burnout (LISA 2016)Burnout (LISA 2016)
Burnout (LISA 2016)
Avleen Vig
 
Don't Burn Out or Fade Away
Don't Burn Out or Fade AwayDon't Burn Out or Fade Away
Don't Burn Out or Fade Away
Avleen Vig
 
Postmortems at Etsy
Postmortems at EtsyPostmortems at Etsy
Postmortems at Etsy
Avleen Vig
 
Successful remote engineering (sre con eu, may 2015)
Successful remote engineering (sre con eu, may 2015)Successful remote engineering (sre con eu, may 2015)
Successful remote engineering (sre con eu, may 2015)
Avleen Vig
 
Successful remote engineering, Software GR
Successful remote engineering, Software GRSuccessful remote engineering, Software GR
Successful remote engineering, Software GR
Avleen Vig
 
The Interruptive Nature of Operations (2014, Velocity Barcelona)
The Interruptive Nature of Operations (2014, Velocity Barcelona)The Interruptive Nature of Operations (2014, Velocity Barcelona)
The Interruptive Nature of Operations (2014, Velocity Barcelona)
Avleen Vig
 
The Interruptive Nature of Operations: A World of Squirrels and Shiny Objects
The Interruptive Nature of Operations: A World of Squirrels and Shiny ObjectsThe Interruptive Nature of Operations: A World of Squirrels and Shiny Objects
The Interruptive Nature of Operations: A World of Squirrels and Shiny Objects
Avleen Vig
 
Operational Impact of Continuous Deployment
Operational Impact of Continuous DeploymentOperational Impact of Continuous Deployment
Operational Impact of Continuous DeploymentAvleen Vig
 

More from Avleen Vig (8)

Burnout (LISA 2016)
Burnout (LISA 2016)Burnout (LISA 2016)
Burnout (LISA 2016)
 
Don't Burn Out or Fade Away
Don't Burn Out or Fade AwayDon't Burn Out or Fade Away
Don't Burn Out or Fade Away
 
Postmortems at Etsy
Postmortems at EtsyPostmortems at Etsy
Postmortems at Etsy
 
Successful remote engineering (sre con eu, may 2015)
Successful remote engineering (sre con eu, may 2015)Successful remote engineering (sre con eu, may 2015)
Successful remote engineering (sre con eu, may 2015)
 
Successful remote engineering, Software GR
Successful remote engineering, Software GRSuccessful remote engineering, Software GR
Successful remote engineering, Software GR
 
The Interruptive Nature of Operations (2014, Velocity Barcelona)
The Interruptive Nature of Operations (2014, Velocity Barcelona)The Interruptive Nature of Operations (2014, Velocity Barcelona)
The Interruptive Nature of Operations (2014, Velocity Barcelona)
 
The Interruptive Nature of Operations: A World of Squirrels and Shiny Objects
The Interruptive Nature of Operations: A World of Squirrels and Shiny ObjectsThe Interruptive Nature of Operations: A World of Squirrels and Shiny Objects
The Interruptive Nature of Operations: A World of Squirrels and Shiny Objects
 
Operational Impact of Continuous Deployment
Operational Impact of Continuous DeploymentOperational Impact of Continuous Deployment
Operational Impact of Continuous Deployment
 

Recently uploaded

PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
ControlCase
 
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
Product School
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
Guy Korland
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
Alison B. Lowndes
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a button
DianaGray10
 
Elevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object CalisthenicsElevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object Calisthenics
Dorra BARTAGUIZ
 
UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3
DianaGray10
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
Laura Byrne
 
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdfFIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Thierry Lestable
 
Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...
Product School
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
DianaGray10
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
Safe Software
 
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Jeffrey Haguewood
 
Knowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backKnowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and back
Elena Simperl
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
Ana-Maria Mihalceanu
 
Key Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdfKey Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdf
Cheryl Hung
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
Kari Kakkonen
 
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
BookNet Canada
 
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
James Anderson
 

Recently uploaded (20)

PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
 
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a button
 
Elevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object CalisthenicsElevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object Calisthenics
 
UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
 
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdfFIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
 
Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
 
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
 
Knowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backKnowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and back
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
 
Key Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdfKey Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdf
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
 
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
 
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
 

ELK: Moose-ively scaling your log system

• ((num_servers - failure_capacity) / num_servers) - 15% (see the sketch below)
- 100 servers, allowing up to 6 to fail at any time:
Disk space alert threshold = ((100 - 6) / 100) - 15% = 79%
• Let your configuration management system tune this up and down for you, as you add and remove nodes from your cluster.
• The additional 15% gives you some extra time to order or build more nodes.
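A minimal sketch of that calculation, e.g. for a config management template; the variable names are illustrative, and the inputs come from your own inventory:

num_servers      = 100
failure_capacity = 6     # nodes you can tolerate losing
headroom         = 0.15  # time to order or build more nodes

# Dynamic disk space alert threshold
threshold = ((num_servers - failure_capacity).to_f / num_servers) - headroom
puts "Alert when disk usage exceeds #{(threshold * 100).round}%"   # => 79%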
25
ACT 3, SCENE 1
Scaling Logstash
26
Scaling Logstash: What impacts performance?
• Line length
• Grok pattern complexity - regex is slow
• Plugins used
• Garbage collection
- Increase heap size
• Hyperthreading
- Measure, then turn it off
27
Scaling Logstash: Measure Twice
• Writing your logs as JSON has little benefit unless you do away with grok, kv, etc. Logstash still has to convert the incoming string to a Ruby hash anyway.
28
HOW MUCH DOES RUBY LOVE CREATING OBJECTS?
29
Scaling Logstash: Garbage Collection
• Defaults are usually OK
• Make sure you're graphing GC
• Ruby LOVES to generate objects: monitor your GC as you scale
• Write plugins thoughtfully with GC in mind (reproduction sketch below):
- Bad:  1_000_000.times { "This is a string" }
        user      system    total     real
        0.130000  0.000000  0.130000  (0.132482)
- Good: foo = 'This is a string'; 1_000_000.times { foo }
        user      system    total     real
        0.060000  0.000000  0.060000  (0.055005)
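A sketch of how to reproduce that comparison with Ruby's stdlib Benchmark; your numbers will vary by machine:

require 'benchmark'

Benchmark.bm(22) do |bm|
  # Bad: allocates a fresh String object on every iteration
  bm.report('new string each time') { 1_000_000.times { "This is a string" } }
  # Good: allocate once, reuse the same object
  foo = 'This is a string'
  bm.report('reused string')        { 1_000_000.times { foo } }
end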
31
Scaling Logstash: Plugin Performance: Baseline
• How to establish a baseline
- Measure with no filters
- Measure again with some filters
- Measure again with more filters
- Establish the cost of each filter
• Community filters are for the general case
- You should write your own for your specific case
- Easy to do
• Run all benchmarks for at least 5 mins, with a large data set
32
Scaling Logstash: Plugin Performance: Baseline
• Establish baseline throughput: Python, StatsD, Graphite
• Simple logstash config, 10m apache log lines, no filtering:

input {
  file {
    path => "/var/log/httpd/access.log"
    start_position => "beginning"
  }
}
output {
  stdout { codec => "dots" }
}
33
Scaling Logstash: Plugin Performance: Baseline
• Establish baseline throughput: Python, StatsD, Graphite
• Python script to send logstash throughput to statsd:
- sudo pip install statsd

#!/usr/bin/env python
import statsd, sys
c = statsd.StatsClient('localhost', 8125)
while True:
    sys.stdin.read(1)
    c.incr('logstash.testing.throughput', rate=0.001)

• Why don't we use the statsd output plugin? It slows down output!
34
Scaling Logstash: Plugin Performance: Baseline
• Establish baseline throughput
• Tie it all together:
- logstash -f logstash.conf | pv -W | python throughput.py
• The periodic dips in the resulting throughput graph? Garbage collection!
35
HOW MUCH DID GROK SLOW DOWN PROCESSING IN 1.5?
36
Scaling Logstash: Plugin Performance: Grok
• Add a simple grok filter:
grok { match => [ "message", "%{ETSY_APACHE_ACCESS}" ] }
• 80% slowdown - oops, only one filter worker!
37
Scaling Logstash: Plugin Performance: Grok
• Add a simple grok filter:
grok { match => [ "message", "%{APACHE_ACCESS}" ] }
• Add -w <num_cpu_cores> (example below); throughput still drops 33%: 65k/s -> 42k/s
(Chart compares: no grok / 1 worker, 1 grok / 1 worker, 1 grok / 32 workers)
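For example, a sketch of running with one filter worker per core (on the Logstash versions discussed here, -w sets the number of filter workers):

logstash -f logstash.conf -w 32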
38
YOUR BASELINE IS THE MINIMUM AMOUNT OF WORK YOU NEED TO DO
39
Scaling Logstash: Plugin Performance: kv
• Add a kv filter, too:
kv { field_split => "&" source => "qs" target => "foo" }
• Throughput similar, 10% drop (40k/s)
• Throughput more variable due to heavier GC
40
DON'T BE AFRAID TO REWRITE PLUGINS!
41
Scaling Logstash: Plugin Performance
• kv is slow, so we wrote a `splitkv` plugin for query strings, etc:

kvarray = text.split(@field_split).map { |afield|
  pairs = afield.split(@value_split)
  if pairs[0].nil? || !(pairs[0] =~ /^[0-9]/).nil? || pairs[1].nil? ||
     (pairs[0].length < @min_key_length && !@preserve_keys.include?(pairs[0]))
    next
  end
  if !@trimkey.nil?
    # 2 if's are faster (0.26s) than gsub (0.33s)
    #pairs[0] = pairs[0].slice(1..-1) if pairs[0].start_with?(@trimkey)
    #pairs[0].chop! if pairs[0].end_with?(@trimkey)
    # BUT! in-place tr is 6% faster than 2 if's (0.52s vs 0.55s)
    pairs[0].tr!(@trimkey, '') if pairs[0].start_with?(@trimkey)
  end
  if !@trimval.nil?
    pairs[1].tr!(@trimval, '') if pairs[1].start_with?(@trimval)
  end
  pairs
}
kvarray.delete_if { |x| x == nil }
return Hash[kvarray]
42
SPLITKV LOGSTASH CPU:
BEFORE: 100% BUSY
AFTER: 33% BUSY
43
Scaling Logstash: Elasticsearch Output
• Logstash output settings directly impact CPU on Logstash machines
- Increase flush_size from 500 to 5000, or more
- Increase idle_flush_time from 1s to 5s
- Increase output workers
- Results vary by log lines - test for yourself (example stanza below):
• Make a change, wait 15 minutes, evaluate
• With logstash's default flush_size of 500, we peaked at 50% CPU on the logstash cluster and ~40k log lines/sec. Bumping it to 10,000 and increasing idle_flush_time from 1s to 5s got us over 150k log lines/sec at 25% CPU.
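A sketch of the tuned output stanza described above; hostnames and exact values are illustrative, not prescriptive - benchmark your own:

output {
  elasticsearch {
    hosts           => ["es01:9200", "es02:9200"]
    flush_size      => 10000
    idle_flush_time => 5
    workers         => 4
  }
}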
46
…/vendor/…/lib/logstash/pipeline.rb
• Before Logstash 2.3: change SizedQueue.new(20) to SizedQueue.new(500)
• After Logstash 2.3: --pipeline-batch-size=500
• This is best changed at the end of tuning; it's impacted by output plugin performance.
48
Scaling Logstash: Adding Context
• Discovering pipeline latency:
mutate { add_field =>
  [ "index_time", "%{+YYYY-MM-dd HH:mm:ss Z}" ]
}
• Which logstash server processed a log line?
mutate { add_field =>
  [ "logstash_host", "<%= node[:fqdn] %>" ]
}
• Hash your log lines to enable replaying logs
- Check out the hashid plugin to avoid duplicate lines
49
Scaling Logstash: Etsy Plugins
http://github.com/etsy/logstash-plugins
50
Scaling Logstash: Adding Context
• ~10% hit from adding context
52
Scaling Logstash: Testing Configuration Changes

describe package('logstash'), :if => os[:family] == 'redhat' do
  it { should be_installed }
end
describe command('chef-client') do
  its(:exit_status) { should eq 0 }
end
describe command('logstash -t -f ls.conf.test') do
  its(:exit_status) { should eq 0 }
end
describe command('logstash -f ls.conf.test') do
  its(:stdout) { should_not match(/parse_fail/) }
end
describe command('restart logstash') do
  its(:exit_status) { should eq 0 }
end
describe command('sleep 15') do
  its(:exit_status) { should eq 0 }
end
describe service('logstash'), :if => os[:family] == 'redhat' do
  it { should be_enabled }
  it { should be_running }
end
describe port(5555) do
  it { should be_listening }
end
53
Scaling Logstash: Testing Configuration Changes

input {
  generator {
    lines => [ '<Apache access log>' ]
    count => 1
    type  => "access_log"
  }
  generator {
    lines => [ '<Application log>' ]
    count => 1
    type  => "app_log"
  }
}
54
Scaling Logstash: Testing Configuration Changes

filter {
  if [type] == "access_log" {
    grok {
      match          => [ "message", "%{APACHE_ACCESS}" ]
      tag_on_failure => [ "parse_fail_access_log" ]
    }
  }
  if [type] == "app_log" {
    grok {
      match          => [ "message", "%{APACHE_INFO}" ]
      tag_on_failure => [ "parse_fail_app_log" ]
    }
  }
}
55
Scaling Logstash: Testing Configuration Changes

output {
  stdout { codec => json_lines }
}
56
Scaling Logstash: Summary
• Faster CPUs matter
- CPU cores > CPU clock speed
• Increase pipeline size
• Lots of memory
- 18Gb+ to prevent frequent garbage collection
• Scale horizontally
• Add context to your log lines
• Write your own plugins, share with the world
• Benchmark everything
57
ACT 3, SCENE 2
Scaling Elasticsearch
59
Scaling Elasticsearch: Baseline with Defaults
• Logstash output: default options + 4 workers
• Elasticsearch: default options + 1 shard, no replicas
• We can do better!
60
Scaling Elasticsearch: What Impacts Indexing Performance?
61
Scaling Elasticsearch: What impacts indexing performance?
• Line length and analysis, default mapping
• doc_values - required, but not a magic fix:
- Uses more CPU time
- Uses more disk space, and disk I/O at indexing
- Helps keep fielddata from blowing out memory
- If you start using too much memory for fielddata, look at the biggest memory hogs and move them to doc_values
• Available network bandwidth for recovery
62
Scaling Elasticsearch: What impacts indexing performance?
• CPU:
- Analysis
- Mapping (the default mapping creates tons of .raw fields)
- doc_values
- Merging
- Recovery
63
Scaling Elasticsearch: What impacts indexing performance?
• Memory:
- Indexing buffers
- Garbage collection
- Number of segments and unoptimized indices
• Network:
- Recovery speed
- The translog portion of recovery stalls indexing; faster network == shorter stall
65
Scaling Elasticsearch: Where does memory go?
• Example memory distribution with a 32Gb heap:
- Field data: 10%
- Filter cache: 10%
- Index buffer: 500Mb
- Segment cache (~4 bytes per doc): how many docs can you store per node?
• 32Gb - (32Gb / 10) - (32Gb / 10) - 500Mb = ~25Gb for segment cache
• 25Gb / 4b = 6.7bn docs across all shards
• 10bn docs/day, 200 shards = 50m docs/shard (calculation sketch below):
- 1 daily shard per node: 6.7bn / 50m / 1 = 134 days of retention
- 5 daily shards per node: 6.7bn / 50m / 5 = 26 days of retention
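A sketch of that back-of-the-envelope calculation; all inputs are the slide's assumptions, so adjust for your own cluster:

heap           = 32 * 2**30                            # 32Gb heap
segment_cache  = heap - 2 * (heap / 10) - 500 * 2**20  # minus field data, filter cache, index buffer
docs_per_node  = segment_cache / 4                     # ~4 bytes per doc
docs_per_shard = 10_000_000_000 / 200                  # 10bn docs/day across 200 daily shards
[1, 5].each do |shards_per_node|
  days = docs_per_node / (docs_per_shard * shards_per_node)
  puts "#{shards_per_node} daily shard(s) per node: ~#{days} days"   # => 134, then 26
end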
66
Scaling Elasticsearch: Doc Values
• Doc values help reduce memory
• Doc values cost CPU and storage
- Some fields with doc_values:
  1.7G  Aug 11 18:42 logstash-2015.08.07/7/index/_1i4v_Lucene410_0.dvd
- All fields with doc_values:
  106G  Aug 13 20:33 logstash-2015.08.12/38/index/_2a9p_Lucene410_0.dvd
• Don't blindly enable doc values for every field
- Find your most frequently used fields, and convert them to doc values
- curl -s 'http://localhost:9200/_cat/fielddata?v' | less -S
67
Scaling Elasticsearch: Doc Values
• Example field data usage:

total      request_uri   _size    owner    ip_address
117.1mb    11.2mb        28.4mb   8.6mb    4.3mb
96.3mb     7.7mb         19.7mb   9.1mb    4.4mb
93.7mb     7mb           18.4mb   8.8mb    4.1mb
139.1mb    11.2mb        27.7mb   13.5mb   6.6mb
96.8mb     7.8mb         19.1mb   8.8mb    4.4mb
145.9mb    11.5mb        28.6mb   13.4mb   6.7mb
95mb       7mb           18.9mb   8.7mb    5.3mb
122mb      11.8mb        28.4mb   8.9mb    5.7mb
97.7mb     6.8mb         19.2mb   8.9mb    4.8mb
88.9mb     7.6mb         18.2mb   8.4mb    4.6mb
96.5mb     7.7mb         18.3mb   8.8mb    4.7mb
147.4mb    11.6mb        27.9mb   13.2mb   8.8mb
146.7mb    10mb          28.7mb   13.6mb   7.2mb
68
Scaling Elasticsearch: Memory
• Run instances with 128Gb or 256Gb RAM
• Configure RAM for optimal hardware configuration
- Haswell/Skylake Xeon CPUs have 4 memory channels
• Multiple instances of Elasticsearch per server (sketch below)
- Do you name your instances by hostname? Give each instance its own node.name!
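A sketch of the per-instance settings when running two Elasticsearch instances on one server (elasticsearch.yml for each instance; names, ports, and paths are illustrative):

# Instance A
node.name: es-host01-a
http.port: 9200
transport.tcp.port: 9300
path.data: /data/ssd0

# Instance B
node.name: es-host01-b
http.port: 9201
transport.tcp.port: 9301
path.data: /data/ssd1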
70
Scaling Elasticsearch: CPUs
• CPU intensive activities
- Indexing: analysis, merging, compression
- Searching: computations, decompression
• For write-heavy workloads
- Number of CPU cores impacts number of concurrent index operations
- Choose more cores over higher clock speed
71
Scaling Elasticsearch: That Baseline Again…
• Remember our baseline?
• Why was it so slow?
72
Scaling Elasticsearch: That Baseline Again…
[logstash-2016.06.15][0] stop throttling indexing:
numMergesInFlight=4, maxNumMerges=5
MERGING SUCKS
73
Scaling Elasticsearch: Merging
• Step 1: Increase shard count from 1 to 5
• Step 2: Disable merge throttling, on ES < 2.0 (see the curl sketch below):
index.store.throttle.type: none
• Much better!
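One way to apply this at runtime on ES 1.x - a sketch; the cluster-wide equivalent of the index-level setting above is indices.store.throttle.type:

curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": { "indices.store.throttle.type": "none" }
}'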
74
Scaling Elasticsearch: Split Hosts
• Oops, we maxed out CPU! Time to add more nodes
75
Scaling Elasticsearch: Split Hosts
• Running Logstash and Elasticsearch on separate hosts
76
Scaling Elasticsearch: Split Hosts
• Running Logstash and Elasticsearch on separate hosts:
50% throughput improvement: 13k/s -> 19k/s
79
Scaling Elasticsearch: Hyperthreading
• YES! About 20% of our performance! Leave it on.
82
Scaling Elasticsearch: CPU Governor
• # echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
• ~15-30% performance improvement. Remember to apply at boot!
86
Scaling Elasticsearch: Disk I/O
• Common advice
- Use SSD
- RAID 0
- Software RAID is sufficient
87
Scaling Elasticsearch: Disk I/O
• Uncommon advice
- Good SSDs are important. Cheap SSDs will make you very, very sad
- Don't use multiple data paths; use RAID 0 instead
  Heavy translog writes to one disk will bottleneck
- If you have heavy merging, but CPU and disk I/O to spare:
  Extreme case: increase index.merge.scheduler.max_thread_count
  (But try not to…)
88
Scaling Elasticsearch: Disk I/O
• Uncommon advice
- Reduced durability:
  index.translog.durability: async
  Translog fsync() every 5s - may be sufficient with replication
- Cluster recovery eats disk I/O. Be prepared to tune it up and down during recovery (sketch below), eg:
  indices.recovery.max_bytes_per_sec: 300mb
  cluster.routing.allocation.cluster_concurrent_rebalance: 24
  cluster.routing.allocation.node_concurrent_recoveries: 2
- Any amount of consistent I/O wait indicates a suboptimal state
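A sketch of bumping those recovery settings at runtime, then dialing them back once recovery completes; the values are the slide's examples, not universal defaults:

curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": {
    "indices.recovery.max_bytes_per_sec": "300mb",
    "cluster.routing.allocation.cluster_concurrent_rebalance": 24,
    "cluster.routing.allocation.node_concurrent_recoveries": 2
  }
}'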
90
Scaling Elasticsearch: Choosing SSDs
• Consumer grade drives
- Slower writes
- Cheap
- Lower endurance, fewer disk writes per day
• Enterprise grade drives
- Fast
- Expensive
- Higher endurance, more disk writes per day
91
Scaling Elasticsearch: Choosing SSDs
• Read intensive
- Lower endurance, 1-3 DWPD
- Lower write speeds, least expensive
• Mixed use
- Moderate endurance, 10 DWPD
- Balanced read/write performance, pricing middle ground
• Write intensive
- High endurance, 25 DWPD
- High write speeds, most expensive
93
Scaling Elasticsearch: That Baseline Again…
• Remember this graph? Let's make it better!
94
Scaling Elasticsearch: Reduced Durability
• Benchmark: reduced durability.
Old baseline: ~20k-25k/s. New baseline: similar, but smoother:
96
Scaling Elasticsearch: Thanks, Merges
• MERRRRRRGGGGGGGGGGGGGGGIIIIIIIINNNNGGGGGG!!
• $ curl -s 'http://localhost:9200/_nodes/hot_threads?threads=10' | grep %
 73.6% (367.8ms out of 500ms) 'elasticsearch[es][bulk][T#25]'
 66.8% (334.1ms out of 500ms) 'elasticsearch[es][[logstash][1]: Lucene Merge Thread #139]'
 66.3% (331.6ms out of 500ms) 'elasticsearch[es][[logstash][3]: Lucene Merge Thread #183]'
 66.1% (330.7ms out of 500ms) 'elasticsearch[es][[logstash][1]: Lucene Merge Thread #140]'
 66.1% (330.4ms out of 500ms) 'elasticsearch[es][[logstash][4]: Lucene Merge Thread #158]'
 62.9% (314.7ms out of 500ms) 'elasticsearch[es][[logstash][3]: Lucene Merge Thread #189]'
 62.4% (312.2ms out of 500ms) 'elasticsearch[es][[logstash][2]: Lucene Merge Thread #160]'
 61.8% (309.2ms out of 500ms) 'elasticsearch[es][[logstash][1]: Lucene Merge Thread #115]'
 57.6% (287.7ms out of 500ms) 'elasticsearch[es][[logstash][0]: Lucene Merge Thread #155]'
 55.6% (277.9ms out of 500ms) 'elasticsearch[es][[logstash][2]: Lucene Merge Thread #161]'
99
Scaling Elasticsearch: Multi-tiered Storage
• Put your most accessed indices across more servers, with more memory and faster CPUs
• Spec out "cold" storage
- SSDs still necessary! Don't even think about spinning platters
- Cram bigger SSDs per server
• Set index.codec: best_compression
• Move indices, re-optimize (see the sketch below)
• elasticsearch-curator makes this easy
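A sketch of demoting an old index to the cold tier with allocation filtering; the box_type attribute is an assumption here - it must match a node attribute you set on your cold-tier nodes:

curl -XPUT 'http://localhost:9200/logstash-2016.05.01/_settings' -d '{
  "index.routing.allocation.require.box_type": "cold"
}'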
101
WHY DOES THE DEFAULT CONFIGURATION MERGE SO MUCH?
102
Scaling Elasticsearch: Default Mapping
• $ curl 'http://localhost:9200/_template/logstash?pretty'

"string_fields" : {
  "mapping" : {
    "index" : "analyzed",
    "omit_norms" : true,
    "type" : "string",
    "fields" : {
      "raw" : {
        "ignore_above" : 256,
        "index" : "not_analyzed",
        "type" : "string"
      }
    }
  },
  "match_mapping_type" : "string",
  "match" : "*"
}

Do you see it?
103
Scaling Elasticsearch: Default Mapping
• Do you see it? "match": "*" applies this to every string field: each one is analyzed, and each one gets an extra not_analyzed .raw copy.
104
Scaling Elasticsearch: Custom Mapping
• $ curl 'http://localhost:9200/_template/logstash?pretty'

"string_fields" : {
  "mapping" : {
    "index" : "not_analyzed",
    "omit_norms" : true,
    "type" : "string"
  },
  "match_mapping_type" : "string",
  "match" : "*"
}
105
Scaling Elasticsearch: Custom Mapping
• A small help.. unfortunately the server is maxed out now! Expect this to normally have a bigger impact :-)
107
Scaling Elasticsearch: Indexing Performance
• Increasing the bulk thread pool queue can help under bursty indexing
- Be aware of the consequences: you're hiding a performance problem
• Increase index buffer
• Increase refresh time, from 1s to 5s
• Spread indexing requests to multiple hosts
• Increase output workers until you stop seeing improvements
- We use num_cpu/2 with the transport protocol
• Increase flush_size until you stop seeing improvements
- We use 10,000
• Disk I/O performance
108
Scaling Elasticsearch: Indexing Performance
• Indexing protocols:
- HTTP
- Node
- Transport
• Transport is still slightly more performant, but HTTP has closed the gap.
• Node is generally not worth it: longer start up, more resources, more fragile, more work for the cluster.
109
Scaling Elasticsearch: Indexing Performance
• Custom mapping template
- The default template creates an additional not_analyzed .raw field for every field
- Every field is analyzed, which eats CPU
- The extra field eats more disk
- Dynamic fields and Hungarian notation
• Use a custom template which has dynamic fields enabled, but has them not_analyzed
- Ditch .raw fields, unless you really need them
• This change dropped Elasticsearch cluster CPU usage from 28% to 15%
110
Scaling Elasticsearch: Indexing Performance
• Message complexity matters. Adding new lines of ~20k, compared to the average of 1.5k, tanked the indexing rate for all log lines:
111
Scaling Elasticsearch: Indexing Performance
• Truncate oversized messages in Logstash:

ruby {
  code => "if event['message'].length > 10240 then
             event['message'] = event['message'].slice!(0,10240)
           end"
}
112
Scaling Elasticsearch: Indexing Performance
• Speeding up Elasticsearch lets Logstash do more work!
114
Scaling Elasticsearch: Indices
• Tune shards per index
- num_shards = (num_nodes - failed_node_limit) / (number_of_replicas + 1)
- With 50 nodes, allowing 4 to fail at any time, and 1x replication:
  num_shards = (50 - 4) / (1 + 1) = 23
• If your shards are larger than 25Gb, increase shard count accordingly
• Tune indices.memory.index_buffer_size
- index_buffer_size = num_active_shards * 500Mb
- "Active shards": any shard updated in the last 5 minutes
115
Scaling Elasticsearch: Indices
• Tune refresh_interval
- Defaults to 1s - way too frequent!
- Increase to 5s (see the sketch below)
- Tuning higher may cause more disk thrashing
- Goal: flush no more than your disk's buffer can take
• Example: Samsung SM863 SSDs:
- DRAM buffer: 1Gb
- Flush speed: 500Mb/sec
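A sketch of raising refresh_interval on existing indices at runtime; it can also go in your index template, and the index pattern here is illustrative:

curl -XPUT 'http://localhost:9200/logstash-*/_settings' -d '{
  "index.refresh_interval": "5s"
}'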
117
118
SECRET ACT 4
Filesystem Comparison
119
Scaling Elasticsearch: Optimize Indices

              Unoptimized    Optimized
Segments      5230           124
Memory        29Gb           23Gb
Disk space    10.5Tb         10.1Tb
120
The Easy Way:
ruby {
  code => "event['message'] = event['message'].slice!(0,10240)"
}

The Thoughtful Way:
ruby {
  code => "if event['message'].length > 10240; then
             event['message'] = event['message'].slice!(0,10240)
           end"
}