Log analytics with ELK stack

AWS User Group Bengaluru
Log Analytics with ELK Stack
(Architecture for aggressive cost optimization and infinite data scale)
Denis D’Souza | 27th July 2019
About me...
● Currently a DevOps engineer at Moonfrog Labs
● 6+ years working as a DevOps Engineer, SRE and Linux administrator
Worked on a variety of technologies in both service-based and product-based organisations
● How do I spend my free time?
Learning new technologies and playing PC games
www.linkedin.com/in/denis-dsouza
• A Mobile Gaming Company making mass market social games
• 5M+ Daily Active Users, 15M+ Weekly Active Users
• Real time, Cross platform games optimized for Primary
Market(s) - India and subcontinent
• Profitable!
Current Scale
Who we are?
1. Our business requirements
2. Choosing the right option
3. ELK Stack overview
4. Our ELK architecture
5. Optimizations we did
6. Cost savings
7. Key takeaways
Our problem statement
● Log analytics platform (Web-Server, Application, Database logs)
● Data Ingestion rate: ~300GB/day
● Frequently accessed data: last 8 days
● Infrequently accessed data: older than 8 days
● Uptime: 99.90%
● Hot Retention period: 90 days
● Cold Retention period: 90 days (with potential to increase)
● Simple and Cost effective solution
● Fairly predictable concurrent user-base
● Not to be used for storing user/business data
Our business requirements
|                | ELK stack           | Splunk                                          | Sumo Logic                                     |
|----------------|---------------------|-------------------------------------------------|------------------------------------------------|
| Product        | Self managed        | Cloud                                           | Professional                                   |
| Pricing        | ~$30 per GB / month | ~$100 per GB / month *                          | ~$108 per GB / month *                         |
| Data Ingestion | ~300 GB / day       | ~100 GB / day * (post-ingestion custom pricing) | ~20 GB / day * (post-ingestion custom pricing) |
| Retention      | ~90 days            | ~90 days *                                      | ~30 days *                                     |
| Cost/GB/day    | ~$0.98 per GB / day | ~$3.33 per GB / day *                           | ~$3.60 per GB / day *                          |
* Values are estimates taken from the respective products' pricing pages; they may not represent actual values and are intended for comparison purposes only.
References:
https://www.splunk.com/en_us/products/pricing/calculator.html#tabs/tab2
https://www.sumologic.com/pricing/apac/
Choosing the right option
ELK Stack overview
● Index
● Shard
○ Primary
○ Replica
● Segment
● Node
References:
https://www.elastic.co/guide/en/elasticsearch/reference/5.6/_basic_concepts.html
ELK Stack overview: Terminologies
Our ELK architecture
Our ELK architecture: Hot-Warm-Cold data storage (infinite scale)
| # | Service       | Number of Nodes | Total CPU Cores | Total RAM | EBS Storage |
|---|---------------|-----------------|-----------------|-----------|-------------|
| 1 | Elasticsearch | 7               | 28              | 141 GB    |             |
| 2 | Logstash      | 3               | 6               | 12 GB     |             |
| 3 | Kibana        | 1               | 1               | 4 GB      |             |
|   | Total         | 11              | 35              | 157 GB    | ~20 TB      |

Data-ingestion per day: ~300 GB
Hot Retention period: 90 days
Docs/sec (at peak load): ~7K
Our ELK architecture: Size and scale
Application Side
● Logstash
● Elasticsearch
Infrastructure Side
● EC2
● EBS
● Data transfer
Optimizations we did
Optimizations we did: Application side
Logstash
Pipeline Workers:
● Adjusted "pipeline.workers" to 4x the number of cores to improve CPU utilisation on the Logstash server, as threads may spend significant time in an I/O wait state (see the monitoring sketch below)
### Core-count: 2 ###
...
pipeline.workers: 8
...
logstash.yml
References:
https://www.elastic.co/guide/en/logstash/current/tuning-logstash.html
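To confirm that the extra workers are actually soaking up the idle CPU, the Logstash monitoring API can be polled while tuning. A minimal sketch, assuming the API is listening on its default port 9600 on the Logstash host:

### Process-level CPU for this Logstash node ###
curl -s "http://localhost:9600/_node/stats/process?pretty"
### Pipeline event throughput (_node/stats/pipeline on Logstash 5.x, _node/stats/pipelines on 6.x+) ###
curl -s "http://localhost:9600/_node/stats/pipelines?pretty"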
Optimizations we did: Logstash
'info' logs:
● Separated application 'info' logs to be stored in a different index with a retention policy of fewer days
if [sourcetype] == "app_logs" and [level] == "info"
{
elasticsearch {
index => "%{sourcetype}-%{level}-%{+YYYY.MM.dd}"
...
Output config
if [sourcetype] == "nginx" and [status] == "200"
{
elasticsearch {
index => "%{sourcetype}-%{status}-%{+YYYY.MM.dd}"
...
References:
https://www.elastic.co/guide/en/logstash/current/event-dependent-configuration.html
'200' response-code logs:
● Separated access logs with a '200' response code to be stored in a different index with a retention policy of fewer days (a Curator retention sketch follows below)
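The shorter retention for these split-out indexes can then be enforced with a Curator delete action. A minimal sketch, assuming a hypothetical 7-day retention and the "app_logs-info-*" index names produced by the output above:

actions:
  1:
    action: delete_indices
    description: "Delete info-level application log indexes older than 7 days"
    options:
      ignore_empty_list: true
      continue_if_exception: false
    filters:
    - filtertype: pattern
      kind: prefix
      value: app_logs-info-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 7
Curator config (sketch)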
Optimizations we did: Logstash
Log ‘message’ field:
● Removed "message" field if there were no
'grok-failures' in logstash while applying grok
patterns
(reduced storage footprint by ~30% per doc)
if "_grokparsefailure" not in [tags] {
mutate {
remove_field => ["message"]
}
}
Filter config
Eg:
Nginx Log-message: 127.0.0.1 - - [26/Mar/2016:19:09:19 -0400] "GET / HTTP/1.1" 401 194 "" "Mozilla/5.0 Gecko" "-"
Grok Pattern: %{IPORHOST:clientip} (?:-|(%{WORD}.%{WORD})) %{USER:ident} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{QS:forwarder}
Optimizations we did: Logstash
Elasticsearch
Optimizations we did: Application side
JVM heap vs non-heap memory:
● Optimised the JVM heap size by monitoring the GC interval; this helped utilise system memory efficiently (33% for the JVM heap, 66% for non-heap) * (see the stats sketch below)
jvm.options
### Total system Memory 15GB ###
-Xms5g
-Xmx5g
(GC monitoring graphs: heap too small, heap too large, optimised heap)
* Recommended heap-size settings by Elastic:
https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
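Heap pressure and GC behaviour can be watched during this tuning via the Elasticsearch nodes stats API. A minimal sketch (the host is a placeholder):

### Per-node JVM heap usage and GC counters ###
curl -s "http://<domain>:9200/_nodes/stats/jvm?pretty"
### Relevant fields: jvm.mem.heap_used_percent, jvm.gc.collectors.old.collection_count ###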
Optimizations we did: Elasticsearch
Shards:
● Created index templates with a number of shards that is a multiple of the number of Elasticsearch nodes
(helps fix shard-distribution imbalance, which had resulted in uneven disk and compute resource usage)
### Number of ES nodes: 5 ###
{
"template": "appserver-*",
"settings": {
"number_of_shards": "5",
"number_of_replicas": "0",
...
}
}
Trade-offs:
● Removing replicas will result in search queries
running slower as replicas are used while
performing search operations
● It is not recommended to run production clusters
without replicas
Replicas:
● Removed replicas for the required indexes
(50% savings on storage cost, ~30% reduction in compute resource utilization; a settings-API sketch for existing indexes follows below)
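The template above only applies to newly created indexes; for indexes that already exist, the replica count can be dropped in place with the index settings API. A minimal sketch (host and index pattern are placeholders):

curl -XPUT "http://<domain>:9200/appserver-*/_settings" -H 'Content-Type: application/json' -d'
{
  "index": { "number_of_replicas": 0 }
}'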
Optimizations we did: Elasticsearch
Template config
AWS
● EC2
● EBS
● Data transfer (Inter AZ)
The Spotinst platform allows users to reliably leverage excess EC2 capacity, simplify cloud operations and save up to 80% on compute costs.
Optimizations we did: Infrastructure side
Optimizations we did: Infrastructure side
EC2
Stateful EC2 Spot instances:
● Moved all ELK nodes to run on spot instances
(Instances maintain IP address, EBS volumes)
Recovery time: < 10 mins
Trade-offs:
● Prefer using previous generation instance
types to reduce frequent spot take-backs
Optimizations we did: EC2 and spot
Auto-Scaling:
● Performance/time-based auto-scaling for Logstash instances
Optimizations we did: EC2 and spot
Optimizations we did: Infrastructure side
EBS
"Hot-Warm" Architecture:
● "Hot" nodes: store active indexes, use GP2
EBS-disks (General purpose SSD)
● "Warm" nodes: store passive indexes, use SC1
EBS-disks (Cold storage)
(~69% savings on storage cost)
node.attr.box_type: hot
...
elasticsearch.yml
"template": "appserver-*",
"settings": {
"index": {
"routing": {
"allocation": {
"require": {
"box_type": "hot"}
}
}
},
...
Template config
Trade-offs:
● Since "Warm" nodes are using SC1 EBS-disks,
they have lower IOPS, throughput this will result
in search operations being comparatively slower
References:
https://cinhtau.net/2017/06/14/hot-warm-architecture/
Optimizations we did: EBS
Moving indexes to "Warm" nodes:
● Reallocated indexes older than 8 days to "Warm"
nodes
● Recommended to perform this operation during
off-peak hours as it is I/O intensive
actions:
  1:
    action: allocation
    description: "Move index to Warm-nodes after 8 days"
    options:
      key: box_type
      value: warm
      allocation_type: require
      timeout_override:
      continue_if_exception: false
    filters:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 8
...
Curator config
References:
https://www.elastic.co/blog/hot-warm-architecture-in-elasticsearch-5-x
Optimizations we did: EBS
Single Availability Zone:
● Migrated all ELK nodes to a single availability zone
(reduces inter-AZ data transfer cost for ELK nodes by 100%)
● Data transfer/day: ~700GB
(Logstash to Elasticsearch: ~300GB,
Elasticsearch inter-communication: ~400GB)
Trade-offs:
● It is not recommended to run production clusters in
a single AZ as it will result in downtime and
potential data loss in case of AZ failures
Optimizations we did: Inter-AZ data transfer
Using S3 for index Snapshots:
● Take snapshots of indexes and store them in S3
curl -XPUT "http://<domain>:9200/_snapshot/s3_repository/snap1?wait_for_completion=true&pretty" -H 'Content-Type: application/json' -d'
{
  "indices": "index_1,index_2",
  "ignore_unavailable": true,
  "include_global_state": false
}'
Backup:
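Before the snapshot call above will work, the S3 repository has to be registered once. A minimal sketch, assuming the repository-s3 plugin is installed; the bucket name is a placeholder, and region/credentials come from the plugin configuration or the instance profile:

curl -XPUT "http://<domain>:9200/_snapshot/s3_repository" -H 'Content-Type: application/json' -d'
{
  "type": "s3",
  "settings": {
    "bucket": "my-elk-snapshots"
  }
}'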
References:
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html
https://medium.com/@federicopanini/elasticsearch-backup-snapshot-and-restore-on-aws-s3-f1fc32fbca7f
Data backup and restore
curl -s -XPOST --url "http://<domain>:9200/_snapshot/s3_repository/snap1/_restore" -H 'Content-Type: application/json' -d'
{
  "indices": "index_1,index_2",
  "ignore_unavailable": true,
  "include_global_state": false
}'
On-demand Elasticsearch cluster:
● Launch an on-demand ES cluster and import the snapshots from S3
Existing cluster:
● Restore the required snapshots to the existing cluster
Restore:
Data backup and restore
Data corruption:
● List indexes with status 'red' (see the sketch below)
● Delete the corrupted indexes
● Restore indexes from S3 snapshots
● Recovery time: depends on size of data
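A quick way to list the affected indexes is the cat indices API; a minimal sketch (the host is a placeholder):

### Only indexes whose health is red ###
curl -s "http://<domain>:9200/_cat/indices?v&health=red"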
Node failure due to AZ going down:
● Launch a new ELK cluster using AWS CloudFormation templates
● Make the necessary config changes in Filebeat, Logstash etc.
● Restore the required indexes from S3 snapshots
● Recovery time: depends on provisioning time and
size of data
Node failures due to underlying hardware issue:
● Recycle node in Spotinst console
(takes an AMI of the root volume, launches a new instance, attaches the EBS volumes, maintains the private IP)
● Recovery time: < 10 mins/node
Snapshot restore time (estimates):
● < 4mins for a 20GB snapshot (test-cluster: 3
nodes, multiple indexes with 3 primary shards
each, no replicas)
Disaster recovery
EC2
| Instance type              | Service       | Daily cost |
|----------------------------|---------------|------------|
| 5 x r5.xlarge (20C, 160GB) | Elasticsearch | 40.80      |
| 3 x c5.large (6C, 12GB)    | Logstash      | 7.17       |
| 1 x t3.medium (2C, 4GB)    | Kibana        | 1.29       |
| Total                      |               | ~$49.26    |

EC2 (optimized): daily cost reflects 65% spot savings + Spotinst charges (20% of savings)
| Instance type             | Service            | Daily cost | Total Savings |
|---------------------------|--------------------|------------|---------------|
| 5 x m4.xlarge (20C, 80GB) | Elasticsearch Hot  | 14.64      |               |
| 2 x r4.xlarge (8C, 61GB)  | Elasticsearch Warm | 7.50       |               |
| 3 x c4.large (6C, 12GB)   | Logstash           | 3.50       |               |
| 1 x t2.medium (2C, 4GB)   | Kibana             | 0.69       |               |
| Total                     |                    | ~$26.33    | ~47%          |
Cost savings: EC2
Storage (Ingesting: 300GB/day, Retention: 90 days, Replica count: 1)
| Storage type | Retention | Daily cost |
|--------------|-----------|------------|
| ~54 TB (GP2) | 90 days   | ~$237.60   |

Storage (optimized) (Ingesting: 300GB/day, Retention: 90 days, Replica count: 0, Backups: daily S3 snapshots)
| Storage type       | Retention | Daily cost | Total Savings |
|--------------------|-----------|------------|---------------|
| ~3 TB (GP2) Hot    | 8 days    | 12.00      |               |
| ~24 TB (SC1) Warm  | 82 days   | 24.00      |               |
| ~27 TB (S3) Backup | 90 days   | 22.50      |               |
| Total              |           | ~$58.50    | ~75%          |
Cost savings: Storage
|                    | ELK stack | ELK stack (optimized) | Savings |
|--------------------|-----------|-----------------------|---------|
| EC2                | 49.40     | 26.33                 | 47%     |
| Storage            | 237.60    | 58.50                 | 75%     |
| Data-transfer      | 7.00      | 0                     | 100%    |
| Total (daily cost) | ~$294.00  | ~$84.83               | ~71% *  |
| Cost/GB (daily)    | ~$0.98    | ~$0.28                |         |
* Total savings are exclusive of some of the application-level optimizations done
Total savings
|                | ELK Stack (optimized) | ELK Stack           | Splunk                                          | Sumo Logic                                     |
|----------------|-----------------------|---------------------|-------------------------------------------------|------------------------------------------------|
| Product        | Self managed          | Self managed        | Cloud                                           | Professional                                   |
| Data Ingestion | ~300 GB/day           | ~300 GB/day         | ~100 GB / day * (post-ingestion custom pricing) | ~20 GB / day * (post-ingestion custom pricing) |
| Retention      | ~90 days              | ~90 days            | ~90 days *                                      | ~30 days *                                     |
| Cost/GB/day    | ~$0.28 per GB / day   | ~$0.98 per GB / day | ~$3.33 per GB / day *                           | ~$3.60 per GB / day *                          |
Savings over traditional ELK stack: 71% *
* Total savings are exclusive of some of the application-level optimizations done
Our Costs vs other Platforms
ELK Stack Scalability:
● Logstash: auto-scaling
● Elasticsearch: overprovisioning (nodes run at 60% capacity during peak load), predictive vertical/horizontal scaling
Handling potential data-loss while AZ is down:
● DR mechanisms in place; daily/hourly backups stored in S3; potential data loss of about 1 hour
● We do not store user data or business metrics in ELK, so users/business will not be impacted
Handling potential data-corruptions in Elasticsearch:
● DR mechanisms in place, recover index from S3 index-snapshots
Managing downtime during spot take-backs:
● Logstash: multiple nodes, minimal impact
● Elasticsearch/Kibana: < 10min downtime per node
● Use previous generation instance types as spot take-back chances are comparatively low
Key Takeaways
Handling back-pressure when a node is down:
● Filebeat: will auto-retry sending old logs
● Logstash: use the 'date' filter for the document timestamp (see the sketch below), auto-scaling
● Elasticsearch: overprovisioning
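The point of the 'date' filter here is that replayed events keep their original event time rather than the ingestion time. A minimal sketch for the Nginx timestamp field captured by the grok pattern shown earlier (field name and format follow that pattern):

filter {
  date {
    # HTTPDATE timestamp produced by the Nginx access-log grok pattern
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
    target => "@timestamp"
  }
}
Filter config (sketch)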
Other log analytics alternatives:
● We have only evaluated ELK, Splunk and Sumo Logic
ELK stack upgrade path:
● Blue Green deployment for major version upgrade
Key Takeaways
● We built a platform tailored to our requirements, yours might be different...
● Building a log analytics platform is not rocket science, but it can be painfully iterative if you
are not aware of the options
● Be aware of the trade-offs you are ‘OK with’ and you can roll out a solution optimised for
your specific requirements
Reflection
Thank you!
Happy to take your questions!
Copyright Disclaimer: All rights to the materials used for this presentation belong to their respective owners.