Stream your Cloud
Thomas Alrin
alrin@megam.co.in
We’ll cover
● What to stream
● Choices for streaming
● Setting up streaming from a VM
● Chef Recipes
What to stream
You can stream the following from the cloud:
● Traces (Logs)
● Metrics
● Monitoring
● Status
Scenario
Your app or service runs in the cloud.
We need the log files of your App
● Web Server logs, container logs, app logs
We need the log files of your Service
● Service logs
SaaS Vendors
You can get this as a SaaS service from vendors such as Loggly or Papertrail.
We plan to build a streamer...
Choices for streaming
Logstash : logstash.net/
Fluentd : www.fluentd.org/
Beaver : github.com/josegonzalez/beaver
Logstash-Forwarder : github.com/elasticsearch/logstash-forwarder
Woodchuck : github.com/danryan/woodchuck
RSYSLOG : http://rsyslog.com
Heka : http://hekad.readthedocs.org/en/latest/
Name                 Language      Collector  Shipper  Footprint  Ease of setting up
Logstash             JRuby (JVM)   Yes        No       High       > Easy
Fluentd              Ruby          Yes        No       High       > Easy
Beaver               Python        No         Yes      Low        Easy
Logstash-Forwarder   Go            No         Yes      Low        Difficult (uses SSL)
Woodchuck            Ruby          No         Yes      High       > Easy
RSYSLOG              C             Yes        Yes      Low        Difficult
Heka                 Go            Yes        Yes      Low        Easy
Our requirements
Two sets of logs to collect:
● All the traces from when a VM is spun up.
● All the traces from the application or service inside the VM.
Publish them to an in-memory store (queue) that can be accessed by a key.
We tried: Logstash, Beaver, Logstash-forwarder, Woodchuck, Heka, RSYSLOG
We use:   Heka, Beaver, RSYSLOG
[Architecture diagram: megamd writes per-VM log files (howdy.log, howdy_err.log) under /usr/share/megam/megamd/logs for fir.domain.com, doe.domain.com, gir.domain.com and her.domain.com; a shipper/agent publishes each VM's logs over AMQP to its own queue (Queue#1 … Queue#4).]
How does it work?
Heka resides inside our Megam Engine (megamd). Its job is to collect the trace
information when a VM is run:
1. Reads the dynamically created VM execution log files.
2. Formats the log contents as JSON for every VM execution.
3. Publishes the log contents to a queue.
Beaver resides in each of the VMs and does the following:
1. Reads the log files inside the VM.
2. Formats the log contents as JSON.
3. Publishes the log contents to a queue.
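A minimal sketch of this read → JSON → publish flow, assuming RabbitMQ and the Ruby Bunny gem; the VM name, log path, queue naming and JSON fields below are illustrative placeholders, not Megam's actual message schema:

require "bunny"
require "json"

vm_name  = "pogo.domain.com"                      # hypothetical VM / node name
log_path = "/var/log/upstart/gulpd.log"           # hypothetical log file

conn = Bunny.new("amqp://guest:guest@localhost")
conn.start
ch = conn.create_channel
q  = ch.queue("#{vm_name}_log", durable: false)   # one queue per VM, keyed by its name

File.foreach(log_path) do |line|                  # 1. read the log file
  msg = { "Hostname" => vm_name,                  # 2. format the line as JSON
          "Logger"   => File.basename(log_path),
          "Payload"  => line.chomp }
  ch.default_exchange.publish(msg.to_json,        # 3. publish to the per-VM queue
                              routing_key: q.name)
end

conn.close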
Logstash
● A centralized logging framework that can transfer logs from multiple hosts
to a central location.
● Written in JRuby, hence it needs a JVM.
● The JVM consumes a lot of memory.
● Logstash is ideal as a centralized collector, not as a shipper.
Logstash Shipper Scenario
Let us ship logs from a VM:
/usr/share/megam/megamd/logs/*/* to Redis or AMQP.
e.g.:
../megamd/logs/pogo.domain.com/howdy.log → queue named “pogo.domain.com” in AMQP.
../megamd/logs/doe.domain.com/howdy.log → queue named “doe.domain.com” in AMQP.
Logstash Shipper - Sample conf
input {
file {
type => "access-log"
path => [ "/usr/local/share/megam/megamd/logs/*/*" ]
}
}
filter {
grok {
type => "access-log"
match => [ "@source_path",
"(//usr/local/share/megam/megamd/logs/)(?
<source_key>.+)(//*)" ]
}
}
output {
stdout { debug => true debug_format => "json"}
redis {
key => '%{source_key}'
type => "access-log"
data_type => "channel"
host => "my_redis_server.com"
}
}
Logs inside <source_key> directory are shipped to Redis key named <source_key>
/opt/logstash/agent/etc$ sudo cat shipper.conf
Logstash: Start the agent

java -jar /opt/logstash/agent/lib/logstash-1.4.2.jar agent -f /opt/logstash/agent/etc/shipper.conf

If you don’t have a JRE installed:

sudo apt-get install openjdk-7-jre-headless
Heka
● Mozilla uses it internally.
● Written in Go; compiles to a native binary.
● Ideal both as a centralized collector and as a shipper.
● We picked Heka.
● Our modified version:
○ https://github.com/megamsys/heka
Installation
Download the .deb from https://github.com/mozilla-services/heka/releases
or build from source:
git clone https://github.com/megamsys/heka.git
cd heka
source build.sh
cd build
make deb
dpkg -i heka_0.6.0_amd64.deb
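A quick sanity check after installing; a sketch that assumes the package name and version produced by the build above:

dpkg -s heka    # confirm the package is installed and inspect its version
which hekad     # confirm the daemon is on the PATH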
Our Heka usage
[Diagram: the Megam Engine (megamd) writes logs, Heka reads them and publishes to a RabbitMQ queue, which feeds a realtime streamer.]
Heka configuration

nano /etc/hekad.toml

[TestWebserver]
type = "LogstreamerInput"
log_directory = "/usr/share/megam/heka/logs/"
file_match = '(?P<DomainName>[^/]+)/(?P<FileName>[^/]+)'
differentiator = ["DomainName", "_log"]

[AMQPOutput]
url = "amqp://guest:guest@localhost/"
exchange = "test_tom"
queue = true
exchangeType = "fanout"
message_matcher = 'TRUE'
encoder = "JsonEncoder"

[JsonEncoder]
fields = [ "Timestamp", "Type", "Logger", "Payload", "Hostname" ]
Run Heka

sudo hekad -config="/etc/hekad.toml"

We can see output like the following in the queue:

{"Timestamp":"2014-07-08T12:53:44.004Z","Type":"logfile","Logger":"tom.com_log","Payload":"TEST\u000a","Hostname":"alrin"}
Beaver
● Beaver is a lightweight Python log file shipper used to send logs to an
intermediate broker for further processing.
● Beaver is ideal when the VM does not have enough memory for a large JVM
application to run as a shipper.
Our Beaver usage
[Diagram: a Beaver agent in each VM (VM#1, VM#2 … VM#n) and Heka inside the Megam Engine (megamd) publish logs to RabbitMQ queues, which feed a realtime streamer.]
Chef Recipe : Beaver
When a VM is run, recipe(megam_logstash::beaver) is
included.
node.set['logstash']['key'] = "#{node.name}"
node.set['logstash']['amqp'] = "#{node.name}_log"
node.set['logstash']['beaver']['inputs'] = [ "/var/log/upstart/nodejs.log",
"/var/log/upstart/gulpd.log" ]
include_recipe "megam_logstash::beaver"
attributes like (nodename, logfiles) are set dynamically.
RSYSLOG
RSYSLOG is the rocket-fast system for log processing. It offers high
performance, strong security features and a modular design.
Megam uses RSYSLOG to ship logs from VMs to Elasticsearch.
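As an illustration of that path, a minimal rsyslog sketch, assuming rsyslog v8 with the imfile and omelasticsearch modules; the file, tag and index names are placeholders, and in practice the Chef recipe on the next slide generates the real configuration:

# /etc/rsyslog.d/60-elasticsearch.conf -- illustrative only
module(load="imfile")              # tail plain log files
module(load="omelasticsearch")     # ship events to Elasticsearch

input(type="imfile"
      File="/var/log/upstart/gulpd.log"
      Tag="gulpd:")

action(type="omelasticsearch"
       server="monitor.megam.co.in"
       searchIndex="node-name"     # placeholder; set per node in practice
       bulkmode="on")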
Chef Recipe: Rsyslog
When a VM is run, the recipe megam_logstash::rsyslog is included.

node.set['rsyslog']['index'] = "#{node.name}"
node.set['rsyslog']['elastic_ip'] = "monitor.megam.co.in"
node.set['rsyslog']['input']['files'] = [ "/var/log/upstart/nodejs.log",
                                          "/var/log/upstart/gulpd.log" ]
include_recipe "megam_logstash::rsyslog"

Attributes such as the node name and log files are set dynamically.
For more details
http://www.gomegam.com
email: gomegam@megam.co.in
twitter: @megamsystems