
Javantura v3 - ELK – Big Data for DevOps – Maarten Mulders

  1. ELK – Big Data for DevOps // Javantura v3 // February 20, 2016 // Maarten Mulders // @mthmulders
  2. Agenda
     - E, L, K
     - Real-world use case
     - Q & A
  3. ELK?
  4. elastic (search): "search, analyze in real-time. sweet"
  5. logstash: "scrub, parse and enrich. like soap for your data"
  6. kibana: "line graphs, pie charts... yeah we got that"
  7. all together now
     - logstash → collect log files
     - elastic → storage and analysis
     - kibana → visualisation
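Putting those three roles into one pipeline gives the smallest possible ELK setup. A minimal sketch, assuming Logstash 2.x-era config syntax, a hypothetical log path and a local Elasticsearch on its default port (none of which come from the slides):

    input {
      file {
        path => "/var/log/app/application.log"   # hypothetical log file to tail
      }
    }
    output {
      elasticsearch {
        hosts => ["localhost:9200"]              # assumed local, single-node cluster
      }
    }

Kibana needs no pipeline configuration of its own; it visualises whatever Logstash has indexed into Elasticsearch.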
  8. input {
       file {
         path => "/path/to/file.log"
       }
     }
     output {
       file {
         path => "/path/to/copied.log"
       }
     }
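A config like this can be tried locally by pointing the standard Logstash launcher at it (bin/logstash -f copy.conf, with a file name of your choosing); Logstash then tails the input file and appends each new line to the copy until stopped.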
  9. logstash: very modular, with various inputs, filters and outputs (a combined sketch follows below)
     - input: various application log files, but also syslog, stdin, xmpp, log4j socket, irc, ...
     - filter: extract semantics (geo info, grok), add information, remove information, match fields (cidr, dates, numbers, dns, user agent), ...
     - output: send events to another system such as graphite, elasticsearch, email, file, stdout, irc, jira, nagios, s3, redis, xmpp, ...
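To make that modularity concrete, here is a hedged illustration mixing one plugin from each category; the choice of syslog input, geoip/date filters and rubydebug output is mine, not the presenter's, and it assumes events carrying a 'clientip' field and an ISO8601 'timestamp' field:

    input {
      syslog {
        port => 5514                        # example port; the default 514 needs root
      }
    }
    filter {
      geoip {
        source => "clientip"                # enrich with geo info, assuming a 'clientip' field exists
      }
      date {
        match => ["timestamp", "ISO8601"]   # use the event's own timestamp instead of arrival time
      }
    }
    output {
      stdout {
        codec => rubydebug                  # pretty-print events while experimenting
      }
    }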
  10. elastic: search and analytics engine
      - very scalable
      - stores collected log events in a uniform way
      - events can be filtered and queried by clients (e.g. kibana)
  11. kibana: analytics and search dashboard for elastic
      - just html and javascript (dashboards can be saved to elastic, too)
      - filtering determines what data is used to populate the dashboard; queries categorise data inside the dashboard
  12. Real-world use case
  13. logstash setup (a sketch of the add/remove step follows below)
      - processes technical logging and audit logging
      - adds information (hostname, environment, application name)
      - removes information (sensitive details about customers, transactions)
      - transforms information to a more usable form
      - ships events to redis
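The add/remove step could look roughly like this; every field name and value below is an illustrative assumption, not something shown in the talk:

    filter {
      mutate {
        add_field => {
          "hostname"    => "app-server-01"   # hypothetical; in practice derived from the machine itself
          "environment" => "production"
          "application" => "payments"
        }
        remove_field => ["accountNumber"]    # drop a (hypothetical) sensitive customer field
      }
    }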
  14. elastic setup
      - large cluster that contains the data
      - one month of history
      - also hosts the kibana files and stores its dashboards
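One common way to get exactly "one month of history" is time-based indices: whatever consumes the redis buffer writes one index per day, and retention then amounts to deleting indices older than 30 days. A sketch using the stock Logstash index naming; the talk does not show this part, so the host is a placeholder:

    output {
      elasticsearch {
        hosts => ["elastic-host:9200"]       # hypothetical cluster address
        index => "logstash-%{+YYYY.MM.dd}"   # one index per day; old days can simply be deleted
      }
    }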
  15. kibana configuration
      - filters based on environment and timestamp (last 24h), automatically refreshed
      - queries for 'error', 'orange cell', specific error codes
      - rows and panels for optimal screen usage
  16. logstash input

      input {
        file {
          path => "/path/to/application.log"
          codec => multiline {
            pattern => "^%{TIMESTAMP_ISO8601} "   # a line that does NOT start with a timestamp...
            negate => true
            what => "previous"                    # ...belongs to the previous event (e.g. stack traces)
          }
          type => "application"
        }
        file {
          path => "/path/to/audit.log"
          type => "audit"
        }
      }
  17. logstash filters: regular application log file

      filter {
        if [type] == "application" {
          grok {
            match => {
              "message" => "(?m)%{TIMESTAMP_ISO8601:timestamp} \[%{DATA}\] %{LOGLEVEL:level} %{JAVACLASS} %{GREEDYDATA:line}"
            }
            remove_field => "message"   # the raw line is redundant once parsed
          }
        }
      }
  18. logstash filters (ctd): audit log file

      2015-01-28 01:32:15,098 [thread-1] INFO nl.ing.application.Class eventId=1401751935098~|~inChannel=MINGZ~|~odBeneficiaryAccount=NL28INGB0000000001

      filter {
        if [type] == "audit" {
          grok {
            match => {
              "message" => "(?m)%{TIMESTAMP_ISO8601:timestamp} \[%{DATA}\] %{LOGLEVEL} %{JAVACLASS} %{GREEDYDATA:audit_message}"
            }
            remove_field => "message"
          }
          mutate {
            gsub => ["audit_message", "~\|~", "`"]   # gsub takes a regex, so the | must be escaped
          }
          kv {
            source => "audit_message"
            field_split => "`"
            remove_field => "audit_message"
          }
          prune {
            blacklist_names => "^od.+$"              # drop fields starting with 'od' (sensitive details)
          }
        }
      }

      Resulting event:

      {
        timestamp: "2015-01-28 01:32:15,098",
        eventId: "1401751935098",
        inChannel: "MINGZ"
      }
  19. logstash filters (ctd): just in case...

      filter {
        if "_grokparsefailure" in [tags] {
          prune {
            blacklist_names => [ "message", "audit_message" ]   # never ship raw, possibly sensitive lines that failed to parse
          }
        }
      }
  20. logstash output

      output {
        redis {
          host => "redis-host"
          data_type => "list"   # push events onto a redis list, used as a buffer
          key => "logstash"
        }
      }
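The consuming side of that buffer is not shown in the slides; presumably a second Logstash instance reads from the same list with the matching input, along these lines (an assumption on my part, mirroring the output above):

    input {
      redis {
        host => "redis-host"
        data_type => "list"   # must mirror the shipper's settings
        key => "logstash"
      }
    }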
  21. Questions?
