
Tuning Elasticsearch Indexing Pipeline for Logs


  1. Tuning Elasticsearch Indexing Pipeline for Logs (Radu Gheorghe, Rafał Kuć)
  2. Who are we? Radu, Rafał, Logsene
  3. The next hour: Logs ×20
  4. The tools: Logsene, 2.0 SNAPSHOT, 8.9.0, 1.5 RC2
  5. Let the games begin
  6. Logstash: multiple inputs, lots of filters, several outputs, lots of plugins
  7. How Logstash works: input (thread per input: file, tcp, redis, …) → filter (multiple workers: grok, geoip, …) → output (multiple workers: elasticsearch, solr, …)
  8. Scaling Logstash
  9. Logstash basic

     input {
       syslog { port => 13514 }
     }
     output {
       elasticsearch {
         protocol => "http"
         manage_template => false
         index => "test-index"
         index_type => "test-type"
       }
     }
  10. Logstash basic: 4K events per second, ~130% CPU utilization, 299MB RAM used
  11. Logstash basic
  12. Logstash with mutate (-w 3: 3 filter threads!)

     filter {
       mutate {
         remove_field => [ "severity", "facility", "priority", "@version", "timestamp", "host" ]
       }
     }
     output {
       elasticsearch {
         protocol => "http"
         manage_template => false
         index => "test-index"
         index_type => "test-type"
         flush_size => 1000
         workers => 5
       }
     }
  13. Logstash with mutate: 5K events per second, ~250% CPU utilization, 289MB RAM used
  14. Logstash with mutate
  15. Logstash with grok and tcp

     input {
       tcp { port => 13514 }
     }
     filter {
       grok {
         match => [ "message", "<%{NUMBER:priority}>%{SYSLOGTIMESTAMP:date} %{DATA:hostname} %{DATA:tag} %{DATA:what}:%{DATA:number}:" ]
       }
       mutate {
         remove_field => [ "message", "@version", "@timestamp", "host" ]
       }
     }
  16. Logstash with grok and tcp: 8K events per second, ~310% CPU utilization, 327MB RAM used
  17. Logstash with grok and tcp
  18. Logstash with JSON lines

     input {
       tcp {
         port => 13514
         codec => "json_lines"
       }
     }
  19. Logstash with JSON lines: 8K events per second, ~260% CPU utilization, 322MB RAM used
  20. Logstash with JSON lines
  21. Rsyslog: very fast, very light
  22. How rsyslog works: im* (imfile, imtcp, imjournal, …) → mm* (mmnormalize, mmjsonparse, …) → om* (omelasticsearch, omredis, …)
  23. Using rsyslog
  24. Rsyslog basic

     module(load="impstats" interval="10" resetCounters="on" log.file="/tmp/stats")
     module(load="imtcp")
     module(load="omelasticsearch")

     input(type="imtcp" port="13514")

     template(name="plain-syslog" type="list") {
       constant(value="{")
       constant(value="\"@timestamp\":\"")
       property(name="timereported" dateFormat="rfc3339")
       constant(value="\",\"host\":\"")
       property(name="hostname")
       constant(value="\",\"severity\":\"")
       property(name="syslogseverity-text")
       constant(value="\",\"facility\":\"")
       property(name="syslogfacility-text")
       constant(value="\",\"syslogtag\":\"")
       property(name="syslogtag" format="json")
       constant(value="\",\"message\":\"")
       property(name="msg" format="json")
       constant(value="\"}")
     }

     action(type="omelasticsearch"
            template="plain-syslog"
            searchIndex="test-index"
            searchType="test-type"
            bulkmode="on"
            action.resumeretrycount="-1")
  25. Rsyslog basic: 6K events per second, ~20% CPU utilization, 50MB RAM used
  26. Rsyslog basic
  27. Rsyslog queue and workers

     main_queue(
       queue.size="100000"              # capacity of the main queue
       queue.dequeuebatchsize="5000"    # process messages in batches of 5K
       queue.workerthreads="4"          # 4 threads for the main queue
     )

     action(name="send-to-es"
            type="omelasticsearch"
            template="plain-syslog"     # use the template defined earlier
            searchIndex="test-index"
            searchType="test-type"
            bulkmode="on"               # use the bulk API
            action.resumeretrycount="-1"   # retry indefinitely if ES is unreachable
     )
  28. Rsyslog queue and workers: 25K events per second, ~100% CPU utilization (1 core), 75MB RAM used (queue dependent)
  29. Rsyslog queue and workers
  30. Rsyslog + mmnormalize

     module(load="mmnormalize")

     action(type="mmnormalize"
            ruleBase="/opt/rsyslog_rulebase.rb"
            useRawMsg="on")

     template(name="lumberjack" type="list") {
       property(name="$!all-json")
     }

     $ cat /opt/rsyslog_rulebase.rb
     rule=:<%priority:number%>%date:date-rfc3164% %host:word% %syslogtag:word% %what:char-to:x3a%:%number:char-to:x3a%:
  31. Rsyslog + mmnormalize: 16K events per second, ~200% CPU utilization, 100MB RAM used (queue dependent)
  32. Rsyslog + mmnormalize
  33. Rsyslog with JSON parsing

     module(load="mmjsonparse")
     action(type="mmjsonparse")
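     Slide 33 shows only the two essential lines. A fuller sketch of the pipeline they plug into, assuming senders ship JSON (CEE-tagged, which is what mmjsonparse expects) over TCP; the template name and index names here are hypothetical:

     ```conf
     module(load="imtcp")
     module(load="mmjsonparse")
     module(load="omelasticsearch")

     input(type="imtcp" port="13514")

     # parse the JSON payload into rsyslog's $! property tree
     action(type="mmjsonparse")

     # emit the whole parsed tree as the document body
     template(name="all-json" type="list") {
       property(name="$!all-json")
     }

     action(type="omelasticsearch"
            template="all-json"
            searchIndex="test-index"
            searchType="test-type"
            bulkmode="on")
     ```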
  34. Rsyslog with JSON parsing: 20K events per second, ~130% CPU utilization, 70MB RAM used (queue dependent)
  35. Rsyslog with JSON parsing
  36. Disk-assisted queues

     main_queue(
       queue.filename="main_queue"      # write to disk if needed
       queue.maxdiskspace="5g"          # when to stop writing to disk
       queue.highwatermark="200000"     # start spilling to disk at this size
       queue.lowwatermark="100000"      # stop spilling when it gets back to this size
       queue.saveonshutdown="on"        # write queue contents to disk on shutdown
       queue.dequeueBatchSize="5000"
       queue.workerthreads="4"
       queue.size="10000000"            # absolute max queue size
     )
  37. Elasticsearch
  38. How Elasticsearch works: a JSON document arrives (via the bulk or single-doc API), goes first into the transaction log, then through analysis into the inverted index on the primary; Elasticsearch replicates it to the replica at the transaction log level
  39. ES horizontal scaling: one node, one shard
  40. ES horizontal scaling: two nodes, one shard each
  41. ES horizontal scaling: three nodes, one shard each
  42. ES horizontal scaling: three nodes, four shards each
  43. ES horizontal scaling: three nodes, four shards plus four replicas each
  44. Elasticsearch for tools tests: nothing is indexed, nothing is stored (_source disabled, _all disabled), no JVM tuning, refresh: -1, translog: size 2g, sync/flush interval 30m
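     This setup could be expressed as index settings at creation time. A sketch using the ES 1.x-era API; the index/type names are the deck's, but treat the exact settings keys as assumptions:

     ```json
     PUT /test-index
     {
       "settings": {
         "index": {
           "refresh_interval": "-1",
           "translog.flush_threshold_size": "2g"
         }
       },
       "mappings": {
         "test-type": {
           "_source": { "enabled": false },
           "_all": { "enabled": false }
         }
       }
     }
     ```

     The 30m translog sync/flush interval from the slide maps to additional index.translog settings whose exact names vary between ES versions, so they are left out here.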
  45. Tuning Elasticsearch: refresh_interval: 5s, doc_values: true, store.throttle.max_bytes_per_sec: 200mb
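     The first and last of these are dynamic settings, so they could be applied to a live index roughly like this (ES 1.x syntax); doc_values is enabled per field in the mapping, and the field name below is hypothetical:

     ```json
     PUT /test-index/_settings
     {
       "index": {
         "refresh_interval": "5s",
         "store.throttle.max_bytes_per_sec": "200mb"
       }
     }

     PUT /test-index/_mapping/test-type
     {
       "properties": {
         "response": {
           "type": "string",
           "index": "not_analyzed",
           "doc_values": true
         }
       }
     }
     ```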
  46. Tests: hardware and data: 2 × EC2 c3.large instances (2 vCPU, 3.5GB RAM, 2×16GB SSD in RAID0) vs. a stream of Apache logs
  47. Test requests. Filters: filter by client IP, filter by word in user agent, wildcard filter on domain. Aggregations: date histogram, top 10 response codes, # of unique IPs, top IPs per response per time
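     One such request, combining a client-IP filter with a date histogram and top-10 response codes, might look like this in ES 1.x query DSL; the field names (clientip, response) are assumptions, not taken from the deck:

     ```json
     POST /test-index/_search
     {
       "size": 0,
       "query": {
         "filtered": {
           "filter": { "term": { "clientip": "192.168.1.1" } }
         }
       },
       "aggs": {
         "over_time": {
           "date_histogram": { "field": "@timestamp", "interval": "hour" },
           "aggs": {
             "top_codes": { "terms": { "field": "response", "size": 10 } }
           }
         }
       }
     }
     ```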
  48. Test runs: 1. write throughput; 2. capacity of a single index; 3. capacity with time-based indices on a hot/cold setup
  49. Write throughput (one index)
  50. Capacity of one index (3200 EPS): 20 seconds @ 40-50M docs
  51. Capacity of one index (400 EPS): 15 seconds @ 40-50M docs
  52. Time-based indices: ideal shard size. Smaller indices: lighter indexing, easier to isolate hot data from cold data, easier to relocate. Bigger indices: less RAM, less management overhead, smaller cluster state. Without indexing, search latency is equal when dividing 32M documents into 1/2/4/8/16/32M indices.
  53. Time-based, 2 hot and 2 cold nodes. Before: 3200 EPS; after: 4800 EPS
  54. Time-based, 2 hot and 2 cold nodes. Before: 15s; after: 5s
  55. That's all folks!
  56. What to remember? Log in JSON, parallelize when possible, use time-based indices, use a hot/cold nodes policy
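     For the first point, an event shipped as one JSON object per line (field names illustrative) needs no grok or mmnormalize parsing at all:

     ```json
     {"@timestamp":"2015-05-05T12:00:00Z","host":"web1","severity":"info","syslogtag":"app1:","message":"user logged in"}
     ```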
  57. We are hiring! Dig Search? Dig Analytics? Dig Big Data? Dig Performance? Dig Logging? Dig working with and in open source? We're hiring worldwide!
  58. Thank you! Radu Gheorghe @radu0gheorghe, Rafał Kuć @kucrafal, Sematext @sematext

Editor's Notes

  • Rafał starts and passes mic to Radu
  • Rafal slide – describe the talk briefly

    !!! Ask people how many of the audience used the tools
  • Radu slide

    we did some tests, we’ll share configs and benchmarks – here are the versions
    Logstash 1.5 – the final version will be up soon
    Rsyslog 8.9 – the current stable (note: most distros come with 5.x or 7.x)

    ES is a search engine based on Apache Lucene
    Current version is 1.5, next major is 2.0 with lots of changes. Many related to Lucene 5.0

    Not the only tools for logging, there are many other tools, both open source and commercial, that can receive logs, parse them, buffer them and index them
  • Rafal slide
  • Rafal slide

    * Ask how many people know about Logstash
  • Rafal slide
  • Rafal slide
  • Radu
    Assume we want to centralize syslog
    Forward syslog via TCP/UDP on a port to Logstash
    On the Logstash side, you can use the TCP input to listen to that port and parse syslog messages
    You’d use the ES output to forward to ES
    you can use a Java binary, but HTTP is better
    Logstash comes with a template for ES index, but for perf tests we’ll use our own
    Specify where (index,type – like a DB and a table)
  • Radu
    - 1.3 CPUs
  • Radu – segue to tuning, pass the mic
  • Rafal

    Flush size – 1000 lowered from default 5000
  • Rafal
  • Rafal
  • Rafal
    Syslog is just TCP + Grok
    We changed that: we are not parsing the syslog format exactly, because we wanted to parse additional things and to show how to parse unstructured data
  • The bound was:
    - hardware (high CPU usage)
    - The JSON lines codec is not parallelized, while grok is
    - But if you want to do your homework you can do another run with JSON filter instead of codec and that will give the possibility of parallelization
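    That homework run could keep a plain tcp input and move JSON decoding into the filter stage, which scales with the -w filter workers. A sketch, assuming one JSON document per line arriving in the message field:

    ```conf
    input {
      tcp { port => 13514 }            # plain lines, no codec
    }
    filter {
      json { source => "message" }     # decoding now runs on all filter workers
      mutate { remove_field => [ "message" ] }
    }
    ```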
  • Radu
    Many people hate it, maybe because of docs
    I like it because it’s light and fast and has surprisingly rich functionality
  • Like Logstash, it’s modular, you can use inputs to get data in, message modifiers to parse data and outputs to pass it on
    The flow of data is a bit different
    Inputs may have multiple threads, and they write to a main queue
    On the main queue, worker threads can do filtering, format messages using templates (will talk later) and run actions (parsing/output)
    You can have action queues as well, with their own threads => async
    You can have rulesets, which let you separate flows of input – parse – output (e.g. One ruleset for local logs, one for remote logs)
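    A minimal ruleset sketch, binding one input to its own flow so remote logs bypass the default processing (names and the port are illustrative, matching the deck's examples):

    ```conf
    module(load="imtcp")
    module(load="omelasticsearch")

    # remote logs: ship straight to Elasticsearch
    ruleset(name="remote") {
      action(type="omelasticsearch"
             searchIndex="test-index"
             searchType="test-type"
             bulkmode="on")
    }

    # local logs keep flowing through the default ruleset
    input(type="imtcp" port="13514" ruleset="remote")
    ```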
  • Typical setup is to have it on each server, push to ES directly, buffer if necessary
  • Load modules
    Impstats is for monitoring, then tcp and ES
    Start the tcp listener
    Template – how the JSON that we send to ES will look
    Action – send to ES, using the template, specify index/type, use bulks, retry on failure
  • Bigger memory buffer
    Increase bulk size
    More worker threads
  • Not using more because ES is using the rest – Rafal will talk about that in a bit
    RAM has increased because of the queue size
  • Clear win
  • But not really apples for apples, because rsyslog has dedicated syslog parsers
    Still, not only for syslog, can parse unstructured data via mmnormalize
    Refer to a rulebase, which looks much like grok patterns, with two differences:
    Normally, patterns like number or date aren’t regexes but specific parsers. Faster but less flexible. The one above is equivalent to the Logstash grok seen earlier
    Builds a parse tree on startup, helps with speed if you have many rules
  • Radu
  • More throughput with less CPU usage
  • Before moving on, one more thing: in production you probably want to use disk assisted queues instead of in-memory queues like the ones we had here.

    A DA queue is an in-memory queue that can spill to disk. You specify that via a file name and give it thresholds.
    Spilling is smart:
    Normally everything stays in memory
    When the queue reaches the high watermark it starts writing to disk, in batches, and resumes writing to memory once it drains back to the low watermark
    Side-benefit: queue contents can be saved on shutdown and reloaded when rsyslog restarts
  • Rafal
  • Rafał

    Index a document
    It goes to ES
    first to transaction log
    next to inverted index
    It is replicated on transaction log level
  • Rafał
  • Rafał
  • Rafał
  • Rafał
  • Rafał
  • Rafał
  • Rafał

    Throttling – the default is 20mb/s and we are using 200mb/s, so we are actually going for ten times more (we are using SSD drives here)
  • Rafał
  • Rafał

    Cheaper filters and aggregations are on top
    The more expensive are at the bottom
  • Radu
    Index as fast as we can
    How much data can we put in a single index, at a decent indexing rate, before searches take too long
    a good practice is to have time-based indices (e.g. keep logs for a week, have one index per day). We want to benchmark that + separating indexing load from search load by putting today's index on different nodes than the "old" ones
  • Rafal

    Rate slowly goes down, because merges happen and because the index is slowly getting bigger
  • Rafał

    40-50M docs @ 20 seconds
    Most expensive query takes 20 sec on average
    Filters (quick ones) take under a second
    Some aggs take up to 5 seconds on average
  • Rafał

    Spikes because of merges; the big spike is because a merge happened, and after the merge the queries are actually faster
    Most expensive queries take 15 seconds

  • Radu
    Want to benchmark time-based indices. Because:
    Indexing is better because of merging
    Searching recent data is better because the index is smaller
    Deleting entire indices is better
    But what granularity?
    Use-cases for small (high indexing rate, short retention, CPU constraint) vs big (low indexing rate, long retention, memory constraint)
    granularity doesn't affect cold search performance
  • Rafal

    Tell about hot and cold setup
    The drop is because cold nodes were full
  • Rafal
  • Radu