ELK @ LinkedIn
Scaling ELK with Kafka
Introduction
Tin Le (tinle@linkedin.com)
Senior Site Reliability Engineer
Formerly part of the Mobile SRE team, responsible for the servers
handling mobile app (iOS, Android, Windows, RIM, etc.) traffic.
Now responsible for guiding ELK @ LinkedIn as a whole
Problems
● Multiple data centers, tens of thousands of servers,
hundreds of billions of log records
● Logging, indexing, searching, storing, visualizing and
analyzing all of those logs, all day, every day
● Security (access control, storage, transport)
● Scaling to more DCs, more servers, and even more
logs…
● ARRGGGGHHH!!!!!
Solutions
● Commercial
o Splunk, Sumo Logic, HP ArcSight Logger, Tibco,
XpoLog, Loggly, etc.
● Open Source
o Syslog + Grep
o Graylog
o Elasticsearch
o etc.
Criteria
● Scalable - horizontally, by adding more nodes
● Fast - as close to real time as possible
● Inexpensive
● Flexible
● Large user community (support)
● Open source
The winner is...
ELK! (Splunk ???)
ELK at LinkedIn
● 100+ ELK clusters across 20+ teams and 6
data centers
● Some of our larger clusters have:
o More than 32 billion docs (30+TB)
o Daily indices average 3.0 billion docs (~3TB)
ELK + Kafka
Summary: ELK is a popular open-source application stack for
visualizing and analyzing logs. It is currently used across many
teams within LinkedIn. The architecture we use is made up of
four components: Elasticsearch, Logstash, Kibana and Kafka.
● Elasticsearch: Distributed real-time search and analytics engine
● Logstash: Collect and parse all data sources into an easy-to-read
JSON format
● Kibana: Elasticsearch data visualization engine
● Kafka: Data transport, queue, buffer and short term storage
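To make the data flow concrete, here is a minimal Logstash pipeline sketch that consumes log events from Kafka and indexes them into Elasticsearch. The broker addresses, topic name and Elasticsearch hosts are placeholders, and the exact input options depend on the logstash-input-kafka plugin version in use.

input {
  kafka {
    # Placeholder brokers and topic -- adjust to your environment
    bootstrap_servers => "kafka1.example.com:9092,kafka2.example.com:9092"
    topics            => ["app_logs"]
    codec             => "json"
  }
}
output {
  elasticsearch {
    hosts => ["es-data1.example.com:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}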
What is Kafka?
● Apache Kafka is a high-throughput distributed
messaging system
o Invented at LinkedIn and Open Sourced in 2011
o Fast, Scalable, Durable, and Distributed by Design
o Links for more:
▪ http://kafka.apache.org
▪ http://data.linkedin.com/opensource/kafka
Kafka at LinkedIn
● Common data transport
● Available and supported by dedicated team
o 875 Billion messages per day
o 200 TB/day In
o 700 TB/day Out
o Peak Load
 10.5 Million messages/s
 18.5 Gigabits/s Inbound
 70.5 Gigabits/s Outbound
Logging using Kafka at LinkedIn
● Dedicated cluster for logs in each data center
● Individual topics per application
● Defaults to 4 days of transport-level retention
● Not currently replicating between data centers
● Common logging transport for all services, languages
and frameworks
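As a hedged sketch, a per-application log topic with roughly four days (345,600,000 ms) of retention could be created with the standard Kafka tooling along these lines; the script path, ZooKeeper address, topic name, partition count and replication factor are assumptions, and flags vary between Kafka releases.

# Illustrative only: create a dedicated per-application log topic
# with ~4 days of transport-level retention.
/opt/kafka/bin/kafka-topics.sh --create \
  --zookeeper zk.example.com:2181/kafka-logs \
  --topic app_frontend_logs \
  --partitions 8 \
  --replication-factor 2 \
  --config retention.ms=345600000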
ELK Architectural Concerns
● Network Concerns
o Bandwidth
o Network partitioning
o Latency
● Security Concerns
o Firewalls and ACLs
o Encrypting data in transit
● Resource Concerns
o A misbehaving application can swamp production resources
Multi-colo ELK Architecture
[Diagram: in each data center (DC1, DC2, DC3), LinkedIn services log to a local Kafka cluster (log transport), which feeds that DC's ELK search cluster; tribe nodes in the corp data centers aggregate the per-DC clusters behind the ELK dashboard.]
ELK Search Architecture
[Diagram: users query Kibana, which talks to an Elasticsearch tribe/client node; behind it sit dedicated Elasticsearch master nodes and data nodes, with each data node fed by a Logstash instance consuming from Kafka.]
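For reference, a tribe node that federates the per-DC clusters is configured in elasticsearch.yml roughly as below. This is a sketch based on the tribe-node feature of the Elasticsearch releases of that era; the cluster names and hosts are assumptions.

# elasticsearch.yml on a tribe (federating) node -- names and hosts are examples
tribe:
  elk_dc1:
    cluster.name: elk-dc1
    discovery.zen.ping.unicast.hosts: ["es-master-dc1.example.com"]
  elk_dc2:
    cluster.name: elk-dc2
    discovery.zen.ping.unicast.hosts: ["es-master-dc2.example.com"]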
Operational Challenges
● Data, lots of it.
o Transporting, queueing, storing, securing,
reliability…
o Ingesting & Indexing fast enough
o Scaling infrastructure
o Which data? (are we collecting the right data?)
o Formats, mapping, transformation
▪ Data from many sources: Java, Scala, Python, Node.js, Go
Operational Challenges...
● Centralized vs Siloed Cluster Management
● Aggregated views of data across the entire
infrastructure
● Consistent view (trace up/down app stack)
● Scaling - horizontally or vertically?
● Monitoring, alerting, auto-remediating
The future of ELK at LinkedIn
● More ELK clusters being used by even more teams
● Clusters with 300+ billion docs (300+TB)
● Daily indices averaging 10+ billion docs (10TB); moving to
hourly indices
● ~5,000 shards per cluster
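A move to hourly indices could be expressed in the Logstash Elasticsearch output simply by using an hour-granular index name pattern; the example below is illustrative, not our production config.

output {
  elasticsearch {
    hosts => ["es-data1.example.com:9200"]
    # Hour-granular index, e.g. logstash-2015.06.01.13
    index => "logstash-%{+YYYY.MM.dd.HH}"
  }
}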
Extra slides
The next two slides contain example Logstash
configs showing how we use the pipe input plugin
with the Kafka Console Consumer, and how to
monitor Logstash using the metrics filter.
KCC pipe input config
pipe {
  type    => "mobile"
  command => "/opt/bin/kafka-console-consumer/kafka-console-consumer.sh --formatter com.linkedin.avro.KafkaMessageJsonWithHexFormatter --property schema.registry.url=http://schema-server.example.com:12250/schemaRegistry/schemas --autocommit.interval.ms=60000 --zookeeper zk.example.com:12913/kafka-metrics --topic log_stash_event --group logstash1"
  codec   => "json"
}
Monitoring Logstash metrics
filter {
  metrics {
    meter   => "events"
    add_tag => "metric"
  }
}
output {
  if "metric" in [tags] {
    stdout {
      codec => line {
        format => "Rate: %{events.rate_1m}"
      }
    }
  }
}
Editor's Notes
  1. Over 50% of site traffic comes in via mobile.
  2. Many applications. Mobile frontend logs average 2.4TB in size (3+ billion docs).
  3. Evaluated and used by some teams. Some acquisitions use commercial solutions. Commercial solutions were cost-prohibitive.
  4. Expect storage size to increase as we migrate to doc_values.
  5. Leveraging the open source stack. Large community. Leveraging a common data transport. Rock solid, proven, with a dedicated support team.
  6. Fast: a single Kafka broker can handle hundreds of megabytes of reads and writes per second from thousands of clients. Scalable: Kafka is designed to allow a single cluster to serve as the central data backbone for a large organization. It can be elastically and transparently expanded without downtime. Data streams are partitioned and spread over a cluster of machines to allow data streams larger than the capability of any single machine and to allow clusters of coordinated consumers. Durable: messages are persisted on disk and replicated within the cluster to prevent data loss. Each broker can handle terabytes of messages without performance impact. Distributed by design: Kafka has a modern cluster-centric design that offers strong durability and fault-tolerance guarantees.
  7. These numbers are from the LinkedIn Kafka presentation at ApacheCon 2015: over 1,100 brokers across 50+ clusters, over 32,000 topics, and over 350,000 partitions, not including replication.
  8. Log retention is long enough to cover a 3-day weekend, so we can re-index from Kafka if we encounter issues. Dedicated log clusters isolate this traffic from other clusters. Two years ago we used our own Kafka input plugin, then switched to KCC via the pipe input for performance reasons. Monitoring: it's important to monitor Logstash nodes. LS has a bug where an error in any of its inputs, filters or outputs will stop the entire LS process. See the metrics filter config at the end of these slides.
  9. We’ve chosen to keep all of our clients local to the clusters and use a tiered architecture due to several major concerns. The primary concern is around the networking itself. Kafka enables multiple consumers to read the same topic, which means if we are reading remotely, we are copying messages over expensive inter-datacenter connections multiple times. We also have to handle problems like network partitioning in every client. Granted, you can have a partition even within a single datacenter, but it happens much more frequently when you are dealing with large distances. There’s also the concern of latency in connections – distance increases latency. Latency can cause interesting problems in client applications, and I like life to be boring. There are also security concerns around talking across datacenters. If we keep all of our clients local, we do not have to worry about ACL problems between the clients and the brokers (and Zookeeper as well). We can also deal with the problem of encrypting data in transit much more easily. This is one problem we have not worried about as much, but it is becoming a big concern now. The last concern is over resource usage. Everything at LinkedIn talks to Kafka, and a problem that takes out a production cluster is a major event. It could mean we have to shift traffic out of the datacenter until we resolve it, or it could result in inconsistent behavior in applications. Any application could overwhelm a cluster, but there are some, such as applications that run in Hadoop, that are more prone to this. By keeping those clients talking to a cluster that is separate from the front end, we mitigate resource contention.
  10. For security reasons, data/logs generated in each DC stay there and are indexed by the local ELK cluster, with aggregated views via tribe nodes. All Logstash instances use common filters to catch the most common data leakage. How services log to Kafka: a common logging library is imposed; all services use it, and it automatically logs WARN and above to Kafka.
  11. General architecture for each ELK cluster. Dedicated masters. Tribe client node (HTTP services).
  12. Data: reliable transport, storing, queueing, consuming, indexing. Some data (Java service logs, for example) is not in the right format. Solutions: Kafka as transport, storage queue and backbone; more Logstash instances and more Kafka partitions. Using KCC we can consume faster than ES can index. To increase indexing speed: more ES nodes (horizontal), more shards (distribute work), and customized templates (see the sketch after these notes).
  13. Using local Kafka log clusters instead of aggregated metrics. Tribe nodes to aggregate clusters. We use an internal tool called Nurse to monitor and auto-remediate (restart) hung or dead instances of LS and ES.
  14. These numbers are estimates based on growth rate and plans. Besides logs, we have other application use cases internally.
  15. This is how we use the Logstash pipe input plugin to call out to the Kafka Console Consumer. This currently gives us the highest ingestion throughput.
  16. It's important to monitor Logstash nodes. LS has a bug where an error in any of its inputs, filters or outputs will stop the entire LS process. You can use the Logstash metrics filter to make sure that LS is still processing data. Sometimes LS runs but no data goes through; this will let you know when that happens.
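To illustrate the "customized templates" point from note 12, an index template along the following lines could pre-set shard counts and relax the refresh interval for log indices. The template name, index pattern and values are assumptions, using the index template API of the Elasticsearch releases of that era.

# Example only: pre-set shards/replicas and refresh interval for log indices
curl -XPUT 'http://es-master.example.com:9200/_template/logstash_logs' -d '{
  "template": "logstash-*",
  "settings": {
    "number_of_shards": 10,
    "number_of_replicas": 1,
    "refresh_interval": "30s"
  }
}'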