Daily indices are a good start
[diagram: a series of daily indices; the newest one gets the indexing and most of the searches]
Indexing is faster in smaller indices
Cheap deletes
Search only needed indices
“Static” indices can be cached
The Black Friday problem*
* for logs. Metrics usually don’t suffer from this
Typical indexing performance graph for one shard*
* throttled so search performance remains decent
At the point where throughput drops, it's better to index into a new shard
Typically around 5-10GB per shard, YMMV
Slicing data by time
For spiky ingestion, use size-based indices
Make sure you rotate before the performance drop
(test on one node to get that limit)
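One way to implement size/age-based rotation is the rollover API. A minimal sketch (alias and index names, thresholds, and the periodic trigger are assumptions; Elasticsearch 5.0 only has max_age/max_docs conditions, so max_docs stands in for size here):

# bootstrap: a numbered index behind a write alias
curl -XPUT 'localhost:9200/logs-000001' -H 'Content-Type: application/json' -d '
{ "aliases": { "logs_write": {} } }'

# call this periodically; it creates logs-000002 and moves the alias when a condition matches
curl -XPOST 'localhost:9200/logs_write/_rollover' -H 'Content-Type: application/json' -d '
{ "conditions": { "max_docs": 50000000, "max_age": "7d" } }'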
Multi tier architecture (aka hot/cold)
[diagram: client, ingest, and dedicated master nodes in front of a layer of data nodes]
We can optimize the data nodes layer
Multi tier architecture (aka hot/cold)
[diagram: hot/cold tiers]
es_hot_1 (hot): holds today's index (logs_2016.11.11); handles indexing and most searches; needs good CPU and the best possible IO (SSD, or RAID0 for spinning disks)
es_cold_1, es_cold_2 (cold): hold the older indices (logs_2016.11.07 to logs_2016.11.10); serve long-running searches; need heap, plus IO for backup/replication and stats
Hot - cold architecture summary
Cost optimization - different hardware for each tier
Performance - the above, plus fewer shards and less overhead
Isolation - long running searches don't affect indexing
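A sketch of how the tiers are typically wired up with shard allocation filtering (the attribute name box_type and the index names are just examples):

# tag nodes via elasticsearch.yml (node.attr.box_type: hot / cold) or -Enode.attr.box_type=hot
# new indices start on the hot tier...
curl -XPUT 'localhost:9200/logs_2016.11.11/_settings' -H 'Content-Type: application/json' -d '
{ "index.routing.allocation.require.box_type": "hot" }'

# ...and get relocated to the cold tier once they are no longer written to
curl -XPUT 'localhost:9200/logs_2016.11.07/_settings' -H 'Content-Type: application/json' -d '
{ "index.routing.allocation.require.box_type": "cold" }'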
Elasticsearch high availability & fault tolerance
Dedicated masters are a must
discovery.zen.minimum_master_nodes = N/2 + 1
Keep your indices balanced
an unbalanced cluster can lead to instability
Balanced primaries are also good
helps with backups, moving indices to the cold tier, etc.
total_shards_per_node is your friend
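A sketch (index name and value are illustrative; pick the value based on your shard and node counts):

# at most 2 shards of this index per node, so primaries/replicas spread out evenly
curl -XPUT 'localhost:9200/logs_2016.11.11/_settings' -H 'Content-Type: application/json' -d '
{ "index.routing.allocation.total_shards_per_node": 2 }'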
Elasticsearch high availability & fault tolerance
When in AWS - spread between availability zones
bin/elasticsearch -Enode.attr.zone=zoneA
cluster.routing.allocation.awareness.attributes: zone
We need headroom for spikes
leave at least 20 - 30% for indexing & search spikes
Large machines with many shards?
look out for GC - many clusters have died because of it
consider running more, smaller ES instances instead
Which settings to tune
Merges → most indexing time
Refreshes → check refresh_interval
Flushes → normally OK with ES defaults
Relaxing the merge policy
Fewer merges ⇒ faster indexing/lower CPU while indexing
Slower searches, but:
- there’s more spare CPU
- aggregations aren’t as affected, and they are typically the bottleneck
especially for metrics
More open files (keep an eye on them!)
Increase index.merge.policy.segments_per_tier ⇒ more segments, fewer merges
Increase max_merge_at_once, too, but not as much ⇒ reduced spikes
Reduce max_merged_segment ⇒ no more huge merges, but more small ones
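A sketch of those three knobs via the index settings API (values are illustrative, not recommendations; the defaults are 10, 10 and 5gb):

curl -XPUT 'localhost:9200/logs_2016.11.11/_settings' -H 'Content-Type: application/json' -d '
{
  "index.merge.policy.segments_per_tier": 20,
  "index.merge.policy.max_merge_at_once": 15,
  "index.merge.policy.max_merged_segment": "2gb"
}'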
And even more settings
Refresh interval (index.refresh_interval)*
- 1s -> baseline indexing throughput
- 5s -> +25% to baseline throughput
- 30s -> +75% to baseline throughput
Higher indices.memory.index_buffer_size ⇒ higher throughput
Lower indices.queries.cache.size for high velocity data to free up heap
Omit norms (frequencies and positions, too?)
Don't store fields if _source is used
Don't use the catch-all (_all) field - it just duplicates data from the other fields
* https://sematext.com/blog/2013/07/08/elasticsearch-refresh-interval-vs-indexing-performance/
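A sketch of how these usually land in an index template (5.x-style mapping with a type; names and values are illustrative). indices.memory.index_buffer_size is a node-level setting and goes into elasticsearch.yml instead (e.g. indices.memory.index_buffer_size: 30%):

curl -XPUT 'localhost:9200/_template/logs' -H 'Content-Type: application/json' -d '
{
  "template": "logs_*",
  "settings": { "index.refresh_interval": "30s" },
  "mappings": {
    "log": {
      "_all": { "enabled": false },
      "properties": {
        "message": { "type": "text", "norms": false, "index_options": "docs" }
      }
    }
  }
}'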
Let’s dive deeper into storage
Not searching on a field, only aggregating on it ⇒ index=false
Not sorting/aggregating on a field ⇒ doc_values=false
Doc values can be used for retrieving (see docvalue_fields), so:
● Logs: use doc values for retrieving, exclude them from _source*
● Metrics: short fields normally ⇒ disable _source, rely on doc values
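A sketch of the logs case (index and field names are made up): keep a field out of _source and read it back from doc values at search time:

curl -XPUT 'localhost:9200/logs_dv_example' -H 'Content-Type: application/json' -d '
{
  "mappings": {
    "log": {
      "_source": { "excludes": ["response_time"] },
      "properties": { "response_time": { "type": "long" } }
    }
  }
}'

curl -XPOST 'localhost:9200/logs_dv_example/_search' -H 'Content-Type: application/json' -d '
{ "docvalue_fields": ["response_time"] }'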
Long retention for logs? For “old” indices:
● set index.codec=best_compression
● force merge to few segments
* though you’ll lose highlighting, update API, reindex API...
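A sketch for an "old" index (index.codec is a static setting, so the index is closed while changing it):

curl -XPOST 'localhost:9200/logs_2016.11.07/_close'
curl -XPUT 'localhost:9200/logs_2016.11.07/_settings' -H 'Content-Type: application/json' -d '
{ "index.codec": "best_compression" }'
curl -XPOST 'localhost:9200/logs_2016.11.07/_open'
curl -XPOST 'localhost:9200/logs_2016.11.07/_forcemerge?max_num_segments=1'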
Metrics: working around sparse data
Ideally, you’d have one index per metric type (what you can fetch with one call)
Combining them into one (sparse) index will impact performance (see LUCENE-7253)
One doc per metric: you’ll pay with space
Nested documents: you’ll pay with heap (bitset used for joins) and query latency
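A sketch of the "one index per metric type" layout (index and field names are hypothetical); every document carries only the fields of its metric type, so nothing is sparse:

curl -XPOST 'localhost:9200/_bulk' -H 'Content-Type: application/x-ndjson' -d '
{"index":{"_index":"metrics_cpu-2016.11.11","_type":"metric"}}
{"@timestamp":"2016-11-11T10:00:00Z","host":"es_hot_1","user":32.1,"system":8.4,"idle":59.5}
{"index":{"_index":"metrics_memory-2016.11.11","_type":"metric"}}
{"@timestamp":"2016-11-11T10:00:00Z","host":"es_hot_1","used_pct":71.3}
'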
What about the OS?
Say no to swap
Disk scheduler: CFQ for HDD, deadline for SSD
Mount options: noatime, nodiratime, data=writeback, nobarrier
because strict ordering is for the weak
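A sketch of those tweaks (device name, mount point, and filesystem are placeholders; persist sysctls in /etc/sysctl.conf and mount options in /etc/fstab):

# no swap (or at least make it very unlikely)
sudo swapoff -a
sudo sysctl -w vm.swappiness=1

# deadline scheduler for an SSD (per block device)
echo deadline | sudo tee /sys/block/sda/queue/scheduler

# example fstab entry for the data volume (ext4 shown)
# /dev/sda1  /var/lib/elasticsearch  ext4  noatime,nodiratime,data=writeback,nobarrier  0 0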
And hardware?
Hot tier. Typical bottlenecks: CPU and IO throughput
indexing is CPU-intensive
flushes and merges write (and read) lots of data
Cold tier: Memory (heap) and IO latency
more data here ⇒ more indices & shards ⇒ more heap
⇒ searches hit more files
many stats calls are per shard ⇒ potentially choke IO when cluster is idle
Generally:
network storage needs to be really good (esp. for cold tier)
network needs to be low latency (pings, cluster state replication)
network throughput is needed for replication/backup
AWS specifics
c3 instances work, but there’s not enough local SSD ⇒ EBS gp2 SSD*
c4 + EBS give similar performance, but cheaper
i2s are good, but expensive
d2s are better value, but can’t deal with many shards (spinning disk latency)
m4 + gp2 EBS are a good balance
gp2, because PIOPS is expensive and spinning is slow
3 IOPS/GB, but caps at 160MB/s or 10K IOPS (of up to 256kb) per drive
performance isn’t guaranteed (for gp2) ⇒ one slow drive slows RAID0
Enhanced Networking (and EBS Optimized if applicable) are a must
* And used local SSD as cache, with --cachemode writeback for async writing:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/lvm_cache_volume_creation.html
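A rough sketch of that setup with LVM cache (volume group, LV, device names and sizes are all placeholders):

# add the local SSD to the volume group holding the EBS-backed data LV
sudo vgextend vg_es /dev/nvme0n1
# create a writeback cache pool on the SSD...
sudo lvcreate --type cache-pool --cachemode writeback -L 200G -n es_cache vg_es /dev/nvme0n1
# ...and attach it to the data LV
sudo lvconvert --type cache --cachepool vg_es/es_cache vg_es/es_data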
The pipeline
A log shipper reads, buffers, and delivers:
read - files? sockets? network?
buffer - what if it fills up? how to buffer if $destination is down?
deliver - others besides Elasticsearch?
Processing - before or after the buffer? How?
Overview of 6 log shippers: sematext.com/blog/2016/09/13/logstash-alternatives/
Where to do processing
[diagram: Logstash (or Filebeat or…) → Buffer (Kafka/Redis) → Logstash → Elasticsearch; processing happens in the Logstash between the buffer and Elasticsearch]
Where to do processing
[diagram: same pipeline, but the Logstash after the buffer also feeds something else besides Elasticsearch]
Where to do processing
[diagram: same pipeline; with one Logstash feeding both destinations, the outputs need to be in sync]
Where to do processing
[diagram: Logstash → Kafka, then one Logstash per destination (Elasticsearch, something else), each consumer tracking its own offset; processing can happen in either consumer]
Where to do processing (syslog-ng, fluentd…)
[diagram: processing happens at the input stage, which then fans out to Elasticsearch and something else]
Where to do processing (rsyslogd…)
[diagram: processing can be attached at the input stage or at several later stages of the pipeline]
Zoom into processing
Ideally, log in JSON
Otherwise, parse
For performance and maintenance
(i.e. no need to update parsing rules)
Regex-based (e.g. grok)
Easy to build rules
Rules are flexible
Slow & O(n) on # of rules
Tricks:
Move matching patterns to the top of the list
Move broad patterns to the bottom
Skip patterns that include other patterns which already failed to match
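A minimal Logstash grok sketch of the ordering trick (patterns and field names are made up); grok tries the patterns in order and stops at the first match, so the common case goes first and the broad catch-all last:

filter {
  grok {
    match => {
      "message" => [
        "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}",
        "%{GREEDYDATA:msg}"
      ]
    }
  }
}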
Grammar-based
(e.g. liblognorm, PatternDB)
Faster: O(1) on # of rules
Logagent, Logstash, rsyslog, syslog-ng
References:
sematext.com/blog/2015/05/18/tuning-elasticsearch-indexing-pipeline-for-logs/
www.fernuni-hagen.de/imperia/md/content/rechnerarchitektur/rainer_gerhards.pdf
Back to buffers: check what happens when they fill up
Local files: when are they rotated/archived/deleted?
TCP: what happens when connection breaks/times out?
UNIX sockets: what happens when socket blocks writes?
UDP: network buffers should handle spiky load
Check/increase net.core.rmem_max and net.core.rmem_default
Unlike UDP & TCP, local (UNIX) sockets, both DGRAM and STREAM, are reliable/blocking
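For the UDP case, a sketch of bumping those buffers (values are illustrative; persist them in /etc/sysctl.conf):

sudo sysctl -w net.core.rmem_max=16777216
sudo sysctl -w net.core.rmem_default=262144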
Let’s talk protocols now
UDP: cool for the app (no failure/backpressure handling needed), but not reliable
TCP: more reliable, but not completely
- app gets ACK when the OS buffer gets the data ⇒ no retransmit if that buffer is lost*
Application-level ACKs may be needed
[diagram: sender and receiver, with ACKs flowing back from the receiver]
* more at blog.gerhards.net/2008/05/why-you-cant-build-reliable-tcp.html
Protocol → example shippers
HTTP → Logstash, rsyslog, syslog-ng, Fluentd, Logagent
RELP → rsyslog, Logstash
Beats → Filebeat, Logstash
Kafka → Fluentd, Filebeat, rsyslog, syslog-ng, Logstash
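As an example of one row above, Filebeat shipping to Logstash over the Beats protocol (5.x-style config; paths, hosts and port are placeholders):

# filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/app/*.log
output.logstash:
  hosts: ["logstash.example.com:5044"]

# logstash pipeline
input { beats { port => 5044 } }
output { elasticsearch { hosts => ["localhost:9200"] } }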
Wrapping up: where to log?
Is the data critical?
no → UDP. Increase network buffers on the destination so it can handle spiky traffic
yes → would you rather pay with RAM or with IO?
RAM → UNIX socket. Local shipper with memory buffers that can drop data if needed
IO → local files. Make sure rotation is in place or you'll run out of disk!