Latency SLOs Done Right
By Fred Moyer
SCaLE17x
@phredmoye #SCaLE17x
Latency
Is it important?
@phredmoye #SCaLE17x
Latency
For any of your services, how many requests
were served within 500 ms over the last month?
@phredmoye #SCaLE17x
Latency
For any of your services, how many requests
were served within 250ms over the last month?
@phredmoye #SCaLE17x
Latency
How would you answer that question for your
services?
@phredmoye #SCaLE17x
Latency
How accurate would your answer be?
@phredmoye #SCaLE17x
I’m Fred and I like SLOs
- Developer Evangelist @Circonus
- Engineer who talks to people
- Writing code and breaking prod for 20 years
- @phredmoyer on Twitter
- Likes C, Go, Perl, PostgreSQL
@phredmoye #SCaLE17x
Talk Agenda
● SLO Refresher
● A Common Mistake
● Computing SLOs with log data
● Computing SLOs by counting requests
● Computing SLOs with histograms
@phredmoye #SCaLE17x
Service Level Objectives
SLI - Service Level Indicator
SLO - Service Level Objective
SLA - Service Level Agreement
@phredmoye #SCaLE17x
Service Level Objectives
“SLIs drive SLOs which inform SLAs”
@phredmoye
SLI - Service Level Indicator, a
measure of the service that can
be quantified
“99th percentile latency of homepage
requests over the past 5 minutes <
300ms”
Excerpted from “SLIs, SLOs,
SLAs, oh my!”
@sethvargo @lizthegrey
https://youtu.be/tEylFyxbDLE
#SCaLE17x
@phredmoye
SLO - Service Level Objective, a
target for Service Level
Indicators
“99th percentile homepage SLI will
succeed 99.9% over trailing year”
Excerpted from “SLIs, SLOs,
SLAs, oh my!”
@sethvargo @lizthegrey
https://youtu.be/tEylFyxbDLE
#SCaLE17x
“SLIs drive SLOs which inform SLAs”
@phredmoye
SLA - Service Level Agreement,
a legal agreement
“99th percentile homepage SLI will
succeed 99% over trailing year”
Excerpted from “SLIs, SLOs,
SLAs, oh my!”
@sethvargo @lizthegrey
https://youtu.be/tEylFyxbDLE
#SCaLE17x
“SLIs drive SLOs which inform SLAs”
Talk Agenda
● SLO Refresher
● A Common Mistake
● Computing SLOs with log data
● Computing SLOs by counting requests
● Computing SLOs with histograms
@phredmoye #SCaLE17x
A Common Mistake
@phredmoye
Averaging Percentiles
p95(W1 ∪ W2) != (p95(W1) + p95(W2))/2
Works fine when node workload is symmetric
Hides problems when workloads are asymmetric
#SCaLE17x
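A minimal sketch of why the inequality above holds, using numpy and made-up per-node latency samples (the distributions here are hypothetical, chosen only to make the two workloads asymmetric):

import numpy as np

rng = np.random.default_rng(17)
# Hypothetical per-node latency samples in ms; W1 is fast, W2 is slower and skewed
w1 = rng.exponential(scale=80, size=50_000)
w2 = rng.exponential(scale=250, size=5_000)

p95_w1 = np.percentile(w1, 95)
p95_w2 = np.percentile(w2, 95)

averaged = (p95_w1 + p95_w2) / 2                        # the common mistake
combined = np.percentile(np.concatenate([w1, w2]), 95)  # p95 of the merged samples

print(f"average of node p95s:    {averaged:.0f} ms")
print(f"p95 of combined samples: {combined:.0f} ms")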
A Common Mistake
@phredmoye #SCaLE17x
A Common Mistake
@phredmoye #SCaLE17x
99% of requests
served here
@phredmoye
Averaging Percentiles
#SCaLE17x
A Common Mistake
@phredmoye
p95(W1) = 220ms
p95(W2) = 650ms
p95(W1 ∪ W2) = 230ms
(p95(W1)+p95(W2))/2 = 430ms
Nearly a 2x difference
#SCaLE17x
A Common Mistake
@phredmoye
Averaging Percentiles
#SCaLE17x
A Common Mistake
p95 actual (230ms)
p95 average (430ms)
ERROR
A Common Mistake
@phredmoye
Log parser => Metrics (mtail)
What metrics are you storing?
Averages? p50, p90, p95, p99, p99.9?
#SCaLE17x
Talk Agenda
● SLO Refresher
● A Common Mistake
● Computing SLOs with log data
● Computing SLOs by counting requests
● Computing SLOs with histograms
@phredmoye #SCaLE17x
Computing SLOs with log data
@phredmoye
"%{%d/%b/%Y %T}t.%{msec_frac}t %{%z}t"
~100 bytes per log line
~1GB for 10M requests
#SCaLE17x
Computing SLOs with log data
@phredmoye
Logs => HDFS
Logs => ElasticSearch/Splunk
ssh -- `grep ... | awk ... > 550 ... | wc -l`
Then query all the log files
#SCaLE17x
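For illustration only, here is a rough Python equivalent of that grep/awk pipeline. It assumes each access log line ends with the request time in milliseconds; the field position is a placeholder you would adjust for your own log format:

def count_slow_requests(log_path, threshold_ms=550):
    """Count log lines whose last field (request time in ms) exceeds threshold_ms."""
    slow = 0
    with open(log_path) as logfile:
        for line in logfile:
            fields = line.split()
            if not fields:
                continue
            try:
                latency_ms = float(fields[-1])  # assumes latency is the last field
            except ValueError:
                continue                        # skip lines without a numeric latency
            if latency_ms > threshold_ms:
                slow += 1
    return slow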
Computing SLOs with log data
@phredmoye
Calculating p95 SLI
1. Extract samples for time window
2. Sort the samples by value
3. Find the sample 5% count from largest
4. That’s your p95
#SCaLE17x
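Those four steps translate almost directly into Python. This is a sketch, not production code; samples is whatever list of latencies you extracted for the window:

import math

def percentile(samples, q=0.95):
    """Sort the samples and return the value q of the way up the distribution."""
    ordered = sorted(samples)
    # the sample 5% of the count from the largest, for q = 0.95
    idx = min(len(ordered) - 1, math.ceil(q * len(ordered)) - 1)
    return ordered[idx]

p95_sli = percentile([120, 180, 210, 350, 90, 640])  # 640 ms for this toy window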
Computing SLOs with log data
@phredmoye
Calculating p95 SLO
“95th percentile SLI will succeed 99.9% trailing year”
1. Divide 1 year samples into 1,000 slices
2. For each slice, calculate SLI
3. Was p95 SLI met for 999 slices? Met SLO if so
#SCaLE17x
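A sketch of that SLO check, reusing the percentile() helper from the previous sketch; slices is assumed to be a list of per-slice sample lists (for example, 1,000 slices covering the trailing year):

def slo_met(slices, threshold_ms=300, target=0.999, q=0.95):
    """True if the p95 SLI stayed under threshold_ms in at least target of the slices."""
    good = sum(1 for s in slices if s and percentile(s, q) < threshold_ms)
    return good / len(slices) >= target   # e.g. 999 out of 1,000 slices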
Computing SLOs with log data
@phredmoye
Pros:
1. Easy to configure logs to capture latency
2. Easy to roll your own processing code, some open source
options out there
3. Accurate results
#SCaLE17x
Computing SLOs with log data
@phredmoye
Cons:
1. Expensive (see log analysis solution pricing)
2. Sampling possible but skews accuracy
3. Slow
4. Difficult to scale
#SCaLE17x
Talk Agenda
● SLO Refresher
● A Common Mistake
● Computing SLOs with log data
● Computing SLOs by counting requests
● Computing SLOs with histograms
@phredmoye #SCaLE17x
Computing SLOs by counting requests
@phredmoye
1. Count # of requests that violate SLI threshold
2. Count total number of requests
3. % success = 100 - (#failed_reqs/#total_reqs)*100
Similar to Prometheus cumulative ‘le’ histogram
#SCaLE17x
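A sketch of the counting approach; the two counters are the only state you need, very much in the spirit of a single Prometheus 'le' bucket plus the total count:

class LatencySLOCounter:
    """Count total requests and requests slower than a fixed SLI threshold."""
    def __init__(self, threshold_ms):
        self.threshold_ms = threshold_ms
        self.total = 0
        self.failed = 0

    def observe(self, latency_ms):
        self.total += 1
        if latency_ms >= self.threshold_ms:
            self.failed += 1

    def percent_success(self):
        return 100.0 - (self.failed / self.total) * 100.0

# plugging in the numbers from the next slide: 2,262 bad out of 60,124 total
counter = LatencySLOCounter(threshold_ms=30)
counter.total, counter.failed = 60_124, 2_262
print(round(counter.percent_success(), 1))   # 96.2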
Computing SLOs by counting requests
@phredmoye #SCaLE17x
Computing SLOs by counting requests
@phredmoye
SLO = 90% of reqs < 30ms
# bad requests = 2,262
# total requests = 60,124
100-(2262/60124)*100=96.2%
SLO was met
#SCaLE17x
@phredmoye
Pros:
1. Simple to implement
2. Performant
3. Scalable
4. Accurate
Computing SLOs by counting requests
#SCaLE17x
@phredmoye
Cons:
1. Fixed SLO threshold - must reconfigure
2. Look back impossible for other thresholds
Computing SLOs by counting requests
#SCaLE17x
Talk Agenda
● SLO Refresher
● A Common Mistake
● Computing SLOs with log data
● Computing SLOs by counting requests
● Computing SLOs with histograms
@phredmoye #SCaLE17x
@phredmoye
AKA distributions
Sample counts
in bins/buckets
Gil Tene’s hdrhistogram.org
Computing SLOs with histograms
[Histogram figure: x axis = sample value, y axis = # samples; markers show the mode, median q(0.5), mean, q(0.9), and q(1)]
#SCaLE17x
@phredmoye
Some histogram types:
1. Linear
2. Approximate
3. Fixed bin
4. Cumulative
5. Log Linear
Computing SLOs with histograms
#SCaLE17x
@phredmoye
Log Linear Histogram
github.com/circonus-labs/libcircllhist
github.com/circonus-labs/circonusllhist
#SCaLE17x
@phredmoye
Log Linear Histogram
#SCaLE17x
@phredmoye
h(A ∪ B) = h(A) ∪ h(B)
A & B must have identical bin boundaries
Can be aggregated both in space and time
Mergeability
#SCaLE17x
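A sketch of what that mergeability looks like in code, using a plain dict of bin lower bound to sample count; the real libcircllhist/circonusllhist libraries handle this for you, this just shows the idea:

from collections import Counter

def merge(hist_a, hist_b):
    """Merge two histograms with identical bin boundaries by adding bin counts."""
    merged = Counter(hist_a)
    merged.update(hist_b)
    return dict(merged)

node_a = {100: 40, 110: 25, 120: 5}   # bin lower bound (ms) -> sample count
node_b = {100: 10, 120: 30, 200: 2}
cluster = merge(node_a, node_b)       # {100: 50, 110: 25, 120: 35, 200: 2}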
@phredmoye
How many requests are faster than 330ms?
1. Walk the bins lowest to highest until you reach 330ms
2. Sum the counts in those bins
3. Done
Computing SLOs with histograms
#SCaLE17x
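The same walk, sketched over a dict of bin lower bound to count with 10 ms wide bins (a simplification of what a count_below style function does in the real libraries):

def count_below(hist, threshold_ms, bin_width=10):
    """Sum the counts of all bins that sit entirely below threshold_ms."""
    return sum(count for lower, count in hist.items()
               if lower + bin_width <= threshold_ms)

hist = {300: 120, 310: 80, 320: 65, 330: 40, 340: 22}
faster_than_330 = count_below(hist, 330)   # 120 + 80 + 65 = 265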
@phredmoye #SCaLE17x
@phredmoye
For the libcircllhist implementation we have bins at:
... 320, 330, 340, ...
.... And: 10,11,12,13...
.... And: 0.0000010, 0.0000011, 0.0000012,
For every decimal floating point number, with 2
significant digits, we have a bin (within 10^{+/-128}).
So ... where are the bin boundaries?
#SCaLE17x
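A rough sketch of how that two significant digit, log linear binning works; this captures the idea, not the exact libcircllhist code, and ignores zero and negative values:

import math

def bin_lower_bound(value):
    """Lower edge of the log linear bin containing value (value > 0)."""
    exponent = math.floor(math.log10(value))
    width = 10.0 ** (exponent - 1)      # 90 bins between each power of 10
    return math.floor(value / width) * width

print(bin_lower_bound(456))    # 450.0 -- the ... 320, 330, 340 ... scale
print(bin_lower_bound(12.7))   # 12.0  -- the ... 10, 11, 12, 13 ... scale
# the same scheme keeps working down at 0.0000010, 0.0000011, ... and so on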
@phredmoye
Pros:
1. Space Efficient (HH: ~300 bytes / histogram in practice, 10x more efficient than logs)
2. Full Flexibility:
- Thresholds can be chosen as needed and analyzed
- Statistical methods applicable, IQR, count_below, q(1), etc.
3. Mergeability (HH: Aggregate data across nodes)
4. Performance (ns insertions, μs percentile calculations)
5. Bounded error (half the bin size)
6. Several open source libraries available
Computing SLOs with histograms
#SCaLE17x
@phredmoye
Cons:
1. Math is more complex than other methods
2. Some loss of accuracy (bounded at ~5% in the worst case, far less in practice)
Computing SLOs with histograms
#SCaLE17x
@phredmoye
github.com/circonus-labs/libcircllhist
(autoconf && ./configure && make && make install)
github.com/circonus-labs/libcircllhist/tree/master/src/python
(pip install circllhist)
Log Linear histograms with Python
#SCaLE17x
@phredmoye
h = Circllhist() # make a new histogram
h.insert(123) # insert value 123
h.insert(456) # insert value 456
h.insert(789) # insert value 789
print(h.count()) # prints 3
print(h.sum()) # prints 1368
print(h.quantile(0.5)) # prints 456
Log Linear histograms with Python
#SCaLE17x
@phredmoye
from matplotlib import pyplot as plt
from circllhist import Circllhist
H = Circllhist()
… # add latency data to H via insert()
H.plot()
plt.axvline(x=H.quantile(0.95), color='red')
Log Linear histograms with Python
#SCaLE17x
@phredmoye
Averaging Percentiles
Log Linear histograms with Python
#SCaLE17x
@phredmoye
Conclusions
1. Averaging Percentiles is tempting, but misleading
2. Use counters or histograms to calculate SLOs
correctly
3. Histograms give the most flexibility in choosing
latency thresholds, but only a couple libraries
implement them (libcircllhist, hdrhistogram)
4. Full support for sparsely encoded / HDR histograms in TSDBs is still lacking (except IRONdb).
#SCaLE17x
@phredmoye
Thank you!
Tweet me: @phredmoyer
AMA about histograms on: slack.s.circonus.com
More talks about histograms:
slideshare.net/redhotpenguin
https://github.com/HeinrichHartmann/DS4OPS
#SCaLE17x
@phredmoye
DEMO
#SCaLE17x


Editor's Notes

  1. Hello folks. Welcome to Latency SLOs Done Right. I’m Fred Moyer, a Developer Evangelist at Circonus, which means I’m basically an engineer who talks to other engineers. My twitter handle is phredmoyer, that’s with a ph instead of an f. I want to give a shout out to my colleague Heinrich Hartmann who is a data scientist and originally did a blog post on the material I’m about to present. Let’s get started.
  2. How many people here think that latency is an important metric to track for their applications? Please raise your hands if you are already tracking latency in any of your services as a business critical metric.
  3. So now that we’ve seen a few folks here think that latency is an important metric to track, let me ask: for a given service in your infrastructure, how many requests were served faster than 500 milliseconds over the past month? I don’t expect anyone to have an exact answer on hand, this is a fairly specific question.
  4. Now the same question with a tighter threshold: for a given service in your infrastructure, how many requests were served faster than 250 milliseconds over the past month? Again, I don’t expect anyone to have an exact answer on hand, this is a fairly specific question.
  5. But I want you to ask yourself how you could answer that question? Do you have the capabilities to glean this information from your systems? There are many tools out there which everyone here is familiar with, so your answer to that is probably yes.
  6. But I ask you, would your answer be correct? How accurate do you think it would be? As I mentioned, there’s a wide range of tools available to answer these questions, but often you need to question the answers they give.
  7. Today we’re going to be looking at how we can question those answers. First we’ll do a quick SLO refresher. Then we’ll look at a common mistake made when using percentiles. Next we’ll see how to calculate SLOs the right way using three different approaches. With log data, but counting requests, and with histograms. So let’s get started.
  8. Most people here are probably familiar with the Google SRE book. The concept of SLAs has been around for at least a decade, but service level objectives and service level indicators are also becoming ubiquitous amongst site reliability engineers. The amount of online content around these terms has been increasing rapidly. These three terms SLI, SLO, and SLA are now fairly standard lexicon amongst site reliability engineers.
  9. In addition to the Google SRE book, there are two other recent books that talk about service level objectives. The site reliability workbook has a dedicated chapter on service level objectives. Seeking SRE has a chapter on defining SLOs by Theo Schlossnagle, the Circonus CEO who got me into all this stuff five or six years ago. One thing to note that there isn’t one standard definition of SLO. Everyone’s business needs are different, and what you should take away from these books and their discussions of service level objectives is there are many ways to define them, so you should base your definition of a service level objective on the one that makes the most sense for your business.
  10. I’m pulling in a few excerpts from a great youtube video by Seth Vargo and Liz Fong Jones I recently watched which I think explains these concepts really well. I recommend watching the video to get an in depth understanding. The gist of that video is that SLIs drive SLOs which inform SLAs. A service level indicator is basically a metric derived measure of health for a service. For example, I could have an SLI that says my 99th percentile latency of homepage requests over the last 5 minutes should be less than 300 milliseconds.
  11. A service level objective is basically how we take a service level indicator, and extend the scope of it to quantify how we expect our service to perform over a strategic time interval. Drawing on the SLI we talked about in the previous slide, we could say that our SLO is that we want to meet the criteria set by that SLI for three nines over a trailing year window. SLAs take SLOs one step further, but use the same criteria. They are generally crafted by lawyers to limit the possibility of having to give customers money for those times when our service doesn’t perform like we committed to. SLAs are similar to SLOs, but the commitment level is relaxed, as we want our internal facing targets to be more strict than our external facing targets.
  12. A service level agreement is a legal agreement that is generally less restrictive than the SLO which the operations team is accustomed to delivering. It is crafted by lawyers and generally meant to be as risk averse as possible. When SLAs are violated, bad things happen. Customers notice, then they try to get money back from you. Executives call meetings, and folks get called on the carpet about why the SLA couldn’t be met. And here’s the kicker. If you don’t have an SLO, YOUR SLA IS YOUR SLO. So your internal reliability targets are now your external reliability targets. There’s a reason we separate SLOs and SLAs. One is a target we don’t ever want to miss. The other is a realistic measure of what we can achieve, but might not always accomplish. When we bring things like error budgets into the discussion, we want to be able to take risk with deployments and expect downtime so that we can move quickly. It’s not about moving fast and breaking things. It’s about using math to figure out what the risk is if we move a given speed.
  13. So we just did a brief refresher of Service Level Objectives, something folks here are probably somewhat familiar with already. I encourage folks here to read the books I’ve listed, but also to remember that SLOs are tools for the business, and should be tailored appropriately to your use cases. Now let’s look at a common mistake when using percentiles with SLOs - averaging percentiles.
  14. Averaging percentiles is probably the single most common mistake made when working with latency metrics. Why is this? Part of it is that averaging percentiles is actually a reasonable approximation when systems are functioning normally and nodes are exhibiting symmetric workloads. It’s easy to get an idea of aggregate system performance in those situations by just adding up percentiles from nodes and dividing by the number of nodes. The data from most monitoring systems makes this very easy to do. This approach becomes problematic, though, when node workloads are asymmetric. If you’re looking at an average of percentiles when that happens, you’ll rarely know it, because this approach hides those asymmetries.
  15. What if I told you that 90% of the requests occurred during that period where the 5 minute p99 was 300 microseconds? Does that change your answer? This is a good example of why you can’t average percentiles over time. What you don’t see here is request volumes.
  16. This is a graph of 5 minute p99, p95, p90, and p50 over ~24 hours. What is the p99 over the entire time range? We have a peak of ~300 microseconds for 6 hours, and about 180 microseconds for the other 18 hours. So we can guess that the p99 is probably around 200 microseconds, right?
  17. Let’s take a look at averaging percentiles over nodes. Here is the distribution of requests for 2 webservers, along with the corresponding p95s (this is over a constant time window). This distribution shows the sample number on the Y axis and the sample value on the X axis. You can see that the blue webserver had more samples at lower latencies, and the red webserver had a flatter distribution of latencies that were generally higher than the blue webserver’s. What happens if we calculate p95 by averaging them vs calculating it by aggregating the samples?
  18. If we combine the samples and calculate the correct p95, we get 230 milliseconds. If I average the p95 from both sample sets, we get 430 milliseconds. That’s nearly a 2x difference. If these sample distributions matched up exactly, we could average the p95s and get the same answer as taking the p95 of the aggregated samples, but that rarely happens, and it especially doesn’t happen when workloads are asymmetric.
  19. So how do people end up averaging percentiles? Well there are some common workflows that make it very easy. One way to collect latency data without instrumenting your application is to use a log parser like Google’s mtail which extracts latency metrics from logs, and then stores them in a Time Series Database or sends them to a statsd server or some other aggregation point. So what latency metric do you end up storing? Almost none of the open source tooling exposes the raw samples, they provide the average, the median, or any one of the common percentiles captured for analysis. So if you run mtail on each web server, you end up with storing latency percentiles for each node, which results in the graph we just showed. Sure, this is an easy situation to avoid if you have this knowledge, but that’s usually the exception as opposed to the rule. Though I’ve talked to folks who do this and say “yeah we know it’s not correct some of the time, but it’s the best we have right now and we don’t have time to change it”. Well, we’re going to see how we can do better.
  20. So let’s start off with our first of three approaches on how to compute service level objectives the right way.
  21. The first approach is to compute our service level objective by using log data. This is an example Apache log line configuration to log request time as milliseconds. Pretty much anyone running a web service can log the time of the request with just a small configuration change. So for each log line emitted, we have to store about 100 bytes of data. Which means that 10M requests will cost us about one gigabyte. Remember though that if this is a web page, you’ll also be logging the request time to serve images and other static content. So if you are collecting more than just API request data you may need to multiply this number by several dozen.
  22. Once we have our logfile configured to emit latency, collecting it usually goes something like this. You stuff your metrics into a store like HDFS, Logstash, or Splunk. And then you either use Elasticsearch, Splunk, or good old-fashioned grep and awk to query the logs. This is all pretty straightforward. You just need lots of servers.
  23. Once you have those latency metrics available, you can calculate your service level indicator. Just sort the samples and find the sample which is 5% from the top. That latency value is your 95th percentile. Some of the tools I just mentioned like Splunk and ElasticSearch have this capability built in.
  24. Now you can take that SLI and apply it across your SLO timeframe. Again this is a straightforward mathematical calculation and can be easily coded up or done with the tools I previously mentioned. Of course, the devil is in the details with this approach - as I previously mentioned, you’ll need a lot of servers, and this is not an option that you can really implement for realtime analysis.
  25. Let’s take a look at the pros of this approach. It’s easy to get latency out of your logs. The math is fairly easy to implement, you can use some open source tools or build your own. The results are accurate. I’ve taken this approach in the past using Splunk.
  26. Now the downsides. It’s expensive. Like I said, I’ve done this with Splunk, which isn’t cheap. It’s appealing to sample requests to cut down the volume, but that can really skew your accuracy. Sampling should never really be an option to solve a too-much-data problem; there are better ways to deal with that. This approach is slow; you are shipping a lot of bytes around and analyzing them in some pretty inefficient ways. It is difficult to scale; you need a lot of servers and a lot of processing power.
  27. So that covers approach one on computing SLOs the right way. Let’s look at another option of calculating SLOs, this time by counting requests.
  28. The approach to calculating a service level objective by counting requests is fairly simple. First you pick your SLI threshold; let’s choose 30 milliseconds. You instrument your application to count the number of total requests, and the number of requests that violated your SLI threshold. Then you calculate the percentage of successful requests - that’s your SLO. This approach is similar to the cumulative ‘le’ histograms used by Prometheus. Those specify a number of predetermined bins and count up the number of requests that are under those thresholds.
  29. Here’s a visualization of this approach. The requests that violated our SLI are in red at the bottom of the image, and the total count of requests is in grey. You can generate a graph like this with pretty much any monitoring or observability system out there.
  30. Our SLO is 90% of requests in less than 30 milliseconds. Calculating the SLO here is easy: roughly two thousand bad requests divided by sixty thousand total is about 4%, so about 96% of requests succeeded. We met our SLO. Let’s take a look at the pros and cons of this approach.
  31. This is a simple approach to implement, the math is very easy. Any number of tools can be used for this approach. It’s very performant - counting requests is fast. It’s quite scalable. I can keep the two metrics I need in around 128 bits of RAM, one 64 bit int for total requests, and one for unsuccessful requests. The results for this approach are accurate. It’s difficult to screw up the calculations.
  32. Now the bad news - the fixed threshold means that you have to reconfigure your app if you want to adjust your SLI. This makes this approach highly inflexible. In addition, it means that you can only analyze historical data for one threshold. Unless you have a system that is tied very tightly to a specific threshold which you have spent a lot of time ensuring that this is the right threshold for your business, chances are your SLI threshold is not ideally placed. So while this may be an appealing approach on the surface, there are some hidden costs associated with it.
  33. We’ve looked at two approaches for calculating SLOs, so now let’s talk about using histograms.
  34. Who here has seen one of these before? This is a histogram, also known as a distribution. It is one of the seven basic tools of quality. It’s basically a graph that has the number of data samples on the Y axis and, in our case, the latency value on the X axis. It looks like a bar chart; each bar has a low and high boundary, and represents the number of samples between those values. We refer to those as bins, but you’ll also hear them defined as buckets. We can use some numbers to characterize particular histograms. We’ve talked about percentiles, which we can also refer to as quantiles using the q notation. The median is the point at which half the samples are below that value, and half are above. The mean is the average. q(0.9) is the same as the 90th percentile, where 90 percent of the values are below that number. q(1) is just a fancy way of saying the maximum. A mode of a histogram is the value at which there is a local maximum number of samples. Gil Tene has an excellent site explaining histograms, hdrhistogram.org.
  35. There are several different types of histograms. I’m listing a few of some of the most common variations here. Linear, Approximate, Fixed bin, Cumulative, and Log Linear. These different types really represent attributes of certain histogram implementations, such that you could combine these to create a histogram that fits your business needs. For example, you could create a cumulative log linear histogram which has bins in powers of 10s, but each subsequent bin would contain the sum of the bins with lower values. For example, the hdrhistogram reference I mentioned in the previous slide is a log linear type histogram. I won’t go into detail about each of these types here, we’ll be looking at log linear histograms, but I’ve got a presentation up on slideshare which details each of these types if you want to learn more.
  36. This is the log linear histogram type that we've implemented at Circonus. Bin sizes increase by a factor of 10 at every power of 10, and there are 90 bins between each power of 10. The X axis scale is in microseconds, and the Y axis is the number of samples. As an example, there are 90 bins between 100,000 and 1 million, each with a size of 10,000. This sample histogram shows the latency distribution from a web service in microseconds. I've overlaid the average, median, and 90th percentile values. Note how the bin size increases from 10,000 to 100,000 at the 1 million value. I've listed the GitHub repos here that show code implementations of this data structure in both C and Golang. This histogram represents about 50 million data samples - it's relatively cheap to store in this format, and the size needed to store it is essentially independent of the number of samples.
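Here's a rough sketch of the binning rule I just described - 90 linear bins per power of 10 - written as plain Python rather than the actual libcircllhist code:

    import math

    def log_linear_bin(value):
        # Return the (lower, upper) boundaries of the bin containing value.
        # Within each power of 10 the bins are linear, with a width of
        # 10^(exponent - 1), which gives 90 bins per decade.
        if value <= 0:
            raise ValueError("sketch only handles positive values")
        exponent = math.floor(math.log10(value))
        width = 10.0 ** (exponent - 1)
        lower = math.floor(value / width) * width
        return lower, lower + width

    print(log_linear_bin(325))      # (320.0, 330.0)
    print(log_linear_bin(250000))   # (250000.0, 260000.0) - a 10,000-wide bin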
  37. This is another implementation of the log linear histogram. This particular graph shows syscall latencies captured by eBPF for sysread and syswrite calls. Notice that each dataset in the histogram has several modes. We can also clearly see where the bin size changes at the 10 microsecond boundary. This particular histogram has 15 million samples in it.
  38. Mergeability. Histograms have the property of mergeability, which means that they can be merged together as long as they have a common set of bin boundaries. So if I have two histograms, each representing latency distributions, I can merge them together in one histogram. I could also do this for 10,000 histograms, each of those could represent the latency of a web server over a certain time period. I can also merge together histograms across time. I can take a distribution of yesterday’s latencies, and merge it with today’s to get an aggregate distribution for the combined time range.
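As a hedged sketch of what merging means, treat a histogram as nothing more than a mapping from bin boundary to sample count - the real libraries do something equivalent internally:

    from collections import Counter

    def merge_histograms(*hists):
        # Merge histograms that share bin boundaries by summing bin counts.
        merged = Counter()
        for h in hists:
            merged.update(h)
        return merged

    # Latency distributions from two web servers, keyed by bin lower boundary (ms).
    node_a = Counter({10: 400, 20: 900, 30: 150})
    node_b = Counter({10: 350, 20: 800, 30: 500, 40: 60})

    combined = merge_histograms(node_a, node_b)
    print(combined)   # Counter({20: 1700, 10: 750, 30: 650, 40: 60})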
  39. We can generate SLOs from histograms that contain latency data. Say I have a distribution of request latency in a histogram, and I want to ask how many requests were faster than 330 milliseconds. The math is simple: I walk the bins from lowest value to highest until I reach the bin boundary at 330 milliseconds, aggregating the bin counts along the way. The sum of those counts is how many requests were faster than 330 milliseconds. Pretty simple.
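A minimal sketch of that bin walk, again treating the histogram as a dict of bin lower boundaries to counts (fixed 10 ms bins here just to keep the example simple):

    def requests_faster_than(hist, threshold_ms, bin_width_ms=10):
        # Sum the counts of every bin that lies entirely below the threshold.
        return sum(count for lower, count in hist.items()
                   if lower + bin_width_ms <= threshold_ms)

    hist = {300: 1200, 310: 900, 320: 700, 330: 400, 340: 150}
    total = sum(hist.values())

    fast = requests_faster_than(hist, 330)   # bins [300,310), [310,320), [320,330)
    print(fast, fast / total)                # count and fraction under 330 ms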
  40. So I gave a lightning version of this talk a few months ago at NewOpsDays at Splunk, and then put my slides up on Twitter. Liz Fong-Jones apparently read them and brought up the good point of what happens if the value you are interested in falls between histogram bin boundaries. You aren't limited to sample values that lie exactly on bin boundaries: if I had chosen 330 milliseconds and my bin boundaries were 300 and 400 milliseconds, I could interpolate across those boundaries to get an approximate answer. In operational data, we've found the error using the log linear histogram I've shown tops out at about 5 percent. But this brings up the question of what binning algorithm is used with the log linear structure I've shown.
  41. In the implementation I've shown, we have bin boundaries at 320, 330, and 340. Between 10 and 100, the bin boundaries occur at each integer, and we can also represent much smaller values with correspondingly finer precision. This log linear histogram implementation covers a very wide range of values while simultaneously being able to achieve high precision across that range. In practice, we generally see about 300 bins total needed to represent operational latency telemetry. The maximum error with this type of data structure is bounded at a little less than 5%. Say I have a bin bounded at 10 and 11: if I insert several values of 10.99, those are interpolated within the bin to 10.5, which gives me an error of approximately 5%. That's a worst case sample set, which we pretty much never see in practice. So these bin boundaries provide a very good base for calculating SLOs against.
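Coming back to the interpolation question, the math for a threshold that lands inside a bin looks roughly like this, assuming samples are spread uniformly within the bin:

    def interpolated_count(bin_lower, bin_upper, bin_count, threshold):
        # Estimate how many of a bin's samples fall below the threshold,
        # assuming the samples are uniformly distributed within the bin.
        if threshold <= bin_lower:
            return 0
        if threshold >= bin_upper:
            return bin_count
        fraction = (threshold - bin_lower) / (bin_upper - bin_lower)
        return bin_count * fraction

    # A 330 ms threshold inside a [300, 400) ms bin holding 1,000 samples:
    print(interpolated_count(300, 400, 1000, 330))   # estimates 300 samples below 330 ms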
  42. So let's summarize the pros of calculating SLOs using histograms. They are space efficient: in practice we see about 300 bytes per histogram, which is about 1/10th the size of the log data approach. We can choose our SLI thresholds as needed to calculate SLOs. We can also calculate inter-quartile ranges, get counts of samples below any threshold (remember that SLOs are business specific, not one size fits all), compute standard deviations, and do a number of other statistical calculations. Histograms can be aggregated across both space and time. They are computationally efficient: the implementation I've just shown typically has bin insertion latencies on the order of nanoseconds, and percentiles can be calculated in a microsecond or so. If we have some extra time I'll pull up the code and walk through it. Errors are bounded to half a bin size, which works out to a worst case of about 5%. And there are several open source libraries available, as I've shown, to do these kinds of calculations - you don't need to go out and write your own.
  43. There are some downsides to calculating SLOs with histograms. The math is more complex than the other methods I've shown, though still relatively simple compared to t-digest and other quantile approximation methods. There is some loss of accuracy compared to the other two methods, but the 5% figure is a worst case that is essentially never seen in production workloads.
  44. So let's take a look at how we can do some of these calculations with the log linear histogram library I mentioned, and Python. Python bindings to libcircllhist are available to install with the pip utility. You'll need to have the libcircllhist C library installed before running the pip command - there isn't a way to declare a C library dependency through pip, at least not one that I'm aware of.
  45. So here I’m going to create a histogram using this library, and then insert a few values. Each of these insertions should take about a nanosecond since that is handled in the C library. Then I can easily get a count of samples, and generate a quantile from these samples. The percentile calculation happens in the C library and doesn’t take more than a few microseconds. This is a pretty simple example that I think most folks here should be able to accomplish easily. Let’s look at something a little more complex.
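I won't reproduce the slide's code verbatim here, but the usage is roughly the sketch below; the class and method names (Circllhist, insert, count, quantile) are my recollection of the circllhist Python bindings, so treat them as assumptions and check the repo:

    # Sketch only - method names assumed from the circllhist Python bindings.
    from circllhist import Circllhist

    h = Circllhist()
    for latency_ms in (12, 18, 25, 31, 42, 55, 61, 73, 88, 120):
        h.insert(latency_ms)         # each insert is handled in the C library

    print(h.count())                 # total number of samples
    print(h.quantile(0.95))          # 95th percentile, computed in the C library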
  46. If I have a set of latency values for a web service, I can create a histogram from those as I’ve shown, and then generate a visual plot of that histogram using code that looks like this. I can also calculate my 95th percentile and draw a line on the graph. So let’s see what that might look like.
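Again, not the exact slide code, but a rough matplotlib equivalent would look something like this - the bins, counts, and p95 value are illustrative:

    import matplotlib.pyplot as plt

    # Illustrative latency histogram: bin lower boundaries (ms) and sample counts.
    bins   = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
    counts = [50, 420, 900, 1300, 1100, 600, 300, 120, 40, 10]

    plt.bar(bins, counts, width=10, align='edge', edgecolor='black')
    p95 = 78                                    # illustrative; in practice, take this from the histogram library
    plt.axvline(p95, color='red', label='p95')
    plt.xlabel('latency (ms)')
    plt.ylabel('samples')
    plt.legend()
    plt.show()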
  47. This plot should look familiar - I showed it earlier in this presentation. I created it using actual service latency data and the commands I just showed you. There are about 10,000 samples here, and you can scale this to several million samples on commodity hardware using the library I referenced. If we have some time left over after questions I can show this in action.
  48. So let's review what we've seen. Be careful of averaging percentiles: it's very easy to do, but you can easily come up with results that lead you to incorrect conclusions. The practical approaches for calculating SLOs are counting requests or using histograms. The log data approach produces correct results, but it is economically inefficient. Histograms give you the widest flexibility for choosing different latency thresholds, but I'm only aware of two open source implementations: the C and Go log linear implementation from Circonus, and the HdrHistogram implementation in Java. As far as I know, no time series databases other than IRONdb support storing sparsely encoded or high dynamic range histograms yet. Storing serialized histogram data on disk is one option, but you may run into some challenges at large scale.
  49. That's it. It looks like I met my service level objective for talk length, and we have a small error budget for questions. Or, if time allows, I can show some of the Python code in action.