7. I’m Fred and I like SLOs
- Developer Evangelist @Circonus
- Engineer who talks to people
- Writing code and breaking prod for 20 years
- @phredmoyer on Twitter
- Likes C, Go, Perl, PostgreSQL
@phredmoye #SCaLE17x
8. Talk Agenda
● SLO Refresher
● A Common Mistake
● Computing SLOs with log data
● Computing SLOs by counting requests
● Computing SLOs with histograms
9. Service Level Objectives
SLI - Service Level Indicator
SLO - Service Level Objective
SLA - Service Level Agreement
11. “SLIs drive SLOs which inform SLAs”
SLI - Service Level Indicator, a measure of the service that can be quantified
“99th percentile latency of homepage requests over the past 5 minutes < 300ms”
Excerpted from “SLIs, SLOs, SLAs, oh my!”
@sethvargo @lizthegrey
https://youtu.be/tEylFyxbDLE
12.
SLO - Service Level Objective, a target for Service Level Indicators
“99th percentile homepage SLI will succeed 99.9% over trailing year”
Excerpted from “SLIs, SLOs, SLAs, oh my!”
@sethvargo @lizthegrey
https://youtu.be/tEylFyxbDLE
“SLIs drive SLOs which inform SLAs”
13.
SLA - Service Level Agreement, a legal agreement
“99th percentile homepage SLI will succeed 99% over trailing year”
Excerpted from “SLIs, SLOs, SLAs, oh my!”
@sethvargo @lizthegrey
https://youtu.be/tEylFyxbDLE
“SLIs drive SLOs which inform SLAs”
14. Talk Agenda
● SLO Refresher
● A Common Mistake
● Computing SLOs with log data
● Computing SLOs by counting requests
● Computing SLOs with histograms
15. A Common Mistake
Averaging Percentiles
p95(W1 ∪ W2) != (p95(W1)+ p95(W2))/2
Works fine when node workload is symmetric
Hides problems when workloads are asymmetric
21. A Common Mistake
Log parser => Metrics (mtail)
What metrics are you storing?
Averages? p50, p90, p95, p99, p99.9?
22. Talk Agenda
● SLO Refresher
● A Common Mistake
● Computing SLOs with log data
● Computing SLOs by counting requests
● Computing SLOs with histograms
23. Computing SLOs with log data
"%{%d/%b/%Y %T}t.%{msec_frac}t %{%z}t"
~100 bytes per log line
~1GB for 10M requests
24. Computing SLOs with log data
Logs => HDFS
Logs => ElasticSearch/Splunk
ssh -- `grep ... | awk ... > 550 ... | wc -l`
Then query all the log files
25. Computing SLOs with log data
Calculating p95 SLI
1. Extract samples for time window
2. Sort the samples by value
3. Find the sample 5% count from largest
4. That’s your p95
26. Computing SLOs with log data
Calculating p95 SLO
“95th percentile SLI will succeed 99.9% trailing year”
1. Divide 1 year samples into 1,000 slices
2. For each slice, calculate SLI
3. Was p95 SLI met for 999 slices? Met SLO if so
27. Computing SLOs with log data
Pros:
1. Easy to configure logs to capture latency
2. Easy to roll your own processing code; some open source options out there
3. Accurate results
28. Computing SLOs with log data
Cons:
1. Expensive (see log analysis solution pricing)
2. Sampling possible but skews accuracy
3. Slow
4. Difficult to scale
29. Talk Agenda
● SLO Refresher
● A Common Mistake
● Computing SLOs with log data
● Computing SLOs by counting requests
● Computing SLOs with histograms
30. Computing SLOs by counting requests
1. Count # of requests that violate SLI threshold
2. Count total number of requests
3. % success = 100 - (#failed_reqs/#total_reqs)*100
Similar to Prometheus cumulative ‘le’ histogram
32. Computing SLOs by counting requests
SLO = 90% of reqs < 30ms
# bad requests = 2,262
# total requests = 60,124
100-(2262/60124)*100=96.2%
SLO was met
33.
Pros:
1. Simple to implement
2. Performant
3. Scalable
4. Accurate
Computing SLOs by counting requests
34.
Cons:
1. Fixed SLO threshold - must reconfigure
2. Look back impossible for other thresholds
Computing SLOs by counting requests
35. Talk Agenda
● SLO Refresher
● A Common Mistake
● Computing SLOs with log data
● Computing SLOs by counting requests
● Computing SLOs with histograms
36. Computing SLOs with histograms
AKA distributions: sample counts in bins/buckets
Gil Tene’s hdrhistogram.org
[Chart: sample value on the X axis, number of samples on the Y axis, annotated with the median q(0.5), the mode, the mean, q(0.9), and q(1)]
37.
Some histogram types:
1. Linear
2. Approximate
3. Fixed bin
4. Cumulative
5. Log Linear
Computing SLOs with histograms
40.
h(A ∪ B) = h(A) ∪ h(B)
A & B must have identical bin boundaries
Can be aggregated both in space and time
Mergeability
41.
How many requests are faster than 330ms?
1. Walk the bins lowest to highest until you reach 330ms
2. Sum the counts in those bins
3. Done
Computing SLOs with histograms
43.
For the libcircllhist implementation we have bins at:
... 320, 330, 340, ...
... and: 10, 11, 12, 13, ...
... and: 0.0000010, 0.0000011, 0.0000012, ...
For every decimal floating point number, with 2
significant digits, we have a bin (within 10^{+/-128}).
So ... where are the bin boundaries?
44.
Pros:
1. Space Efficient (HH: ~300 bytes per histogram in practice, 10x more efficient than logs)
2. Full Flexibility:
- Thresholds can be chosen as needed and analyzed
- Statistical methods applicable, IQR, count_below, q(1), etc.
3. Mergeability (HH: Aggregate data across nodes)
4. Performance (ns insertions, μs percentile calculations)
5. Bounded error (half the bin size)
6. Several open source libraries available
Computing SLOs with histograms
45.
Cons:
1. Math is more complex than other methods
2. Some loss of accuracy (<<5%) in worst cases
Computing SLOs with histograms
47.
h = Circllhist() # make a new histogram
h.insert(123) # insert value 123
h.insert(456) # insert value 456
h.insert(789) # insert value 789
print(h.count()) # prints 3
print(h.sum()) # prints 1368
print(h.quantile(0.5)) # prints 456
Log Linear histograms with Python
48.
from matplotlib import pyplot as plt
from circllhist import Circllhist
H = Circllhist()
… # add latency data to H via insert()
H.plot()
plt.axvline(x=H.quantile(0.95), color='red')
Log Linear histograms with Python
50.
Conclusions
1. Averaging Percentiles is tempting, but misleading
2. Use counters or histograms to calculate SLOs
correctly
3. Histograms give the most flexibility in choosing
latency thresholds, but only a couple libraries
implement them (libcircllhist, hdrhistogram)
4. Full support for (sparsely encoded-, HDR-)
histograms in TSDBs still lacking (except IRONdb).
51.
Thank you!
Tweet me: @phredmoyer
AMA about histograms on: slack.s.circonus.com
More talks about histograms:
slideshare.net/redhotpenguin
https://github.com/HeinrichHartmann/DS4OPS
Hello folks. Welcome to Latency SLOs Done Right. I’m Fred Moyer, a Developer Evangelist at Circonus, which means I’m basically an engineer who talks to other engineers. My twitter handle is phredmoyer, that’s with a ph instead of an f. I want to give a shout out to my colleague Heinrich Hartmann who is a data scientist and originally did a blog post on the material I’m about to present. Let’s get started.
How many people here think that latency is an important metric to track for their applications? Please raise your hands if you are already tracking latency in any of your services as a business critical metric.
So now that we’ve seen a few folks here think that latency is an important metric to track, let me ask for a given service in your infrastructure, how many requests were served faster than zero point 55 seconds over the past month? I don’t expect anyone to have an exact answer on hand, this is a fairly specific question.
But I want you to ask yourself how you could answer that question? Do you have the capabilities to glean this information from your systems? There are many tools out there which everyone here is familiar with, so your answer to that is probably yes.
But I ask you, would your answer be correct? How accurate do you think it would be? As I mentioned, there’s a wide range of tools available to answer these questions, but often you need to question the answers they give.
Today we’re going to be looking at how we can question those answers.
First we’ll do a quick SLO refresher.
Then we’ll look at a common mistake made when using percentiles.
Next we’ll see how to calculate SLOs the right way using three different approaches: with log data, by counting requests, and with histograms.
So let’s get started.
Most people here are probably familiar with the Google SRE book. The concept of SLAs has been around for at least a decade, but service level objectives and service level indicators are also becoming ubiquitous amongst site reliability engineers. The amount of online content around these terms has been increasing rapidly.
These three terms SLI, SLO, and SLA are now fairly standard lexicon amongst site reliability engineers.
In addition to the Google SRE book, there are two other recent books that talk about service level objectives. The Site Reliability Workbook has a dedicated chapter on service level objectives, and Seeking SRE has a chapter on defining SLOs by Theo Schlossnagle, the Circonus CEO, who got me into all this stuff five or six years ago.
One thing to note is that there isn’t one standard definition of SLO. Everyone’s business needs are different, and what you should take away from these books and their discussions of service level objectives is that there are many ways to define them, so you should base your definition of a service level objective on the one that makes the most sense for your business.
I’m pulling in a few excerpts from a great YouTube video by Seth Vargo and Liz Fong-Jones which I recently watched, and which I think explains these concepts really well. I recommend watching the video to get an in depth understanding.
The gist of that video is that SLIs drive SLOs which inform SLAs.
A service level indicator is basically a metric derived measure of health for a service. For example, I could have an SLI that says my 99th percentile latency of homepage requests over the last 5 minutes should be less than 300 milliseconds.
A service level objective is basically how we take a service level indicator, and extend the scope of it to quantify how we expect our service to perform over a strategic time interval.
Drawing on the SLI we talked about in the previous slide, we could say that our SLO is that we want to meet the criteria set by that SLI for three nines over a trailing year window.
SLAs take SLOs one step further, but use the same criteria. They are generally crafted by lawyers to limit the possibility of having to give customers money for those times when our service doesn’t perform like we committed to. SLAs are similar to SLOs, but the commitment level is relaxed, as we want our internal facing targets to be more strict than our external facing targets.
A service level agreement is a legal agreement that is generally less restrictive than the SLO which the operations team is accustomed to delivering. It is crafted by lawyers and generally meant to be as risk averse as possible.
When SLAs are violated, bad things happen. Customers notice, then they try to get money back from you. Executives call meetings, and folks get called on the carpet about why the SLA couldn’t be met.
And here’s the kicker. If you don’t have an SLO, YOUR SLA IS YOUR SLO. So your internal reliability targets are now your external reliability targets. There’s a reason we separate SLOs and SLAs. One is a target we don’t ever want to miss. The other is a realistic measure of what we can achieve, but might not always accomplish. When we bring things like error budgets into the discussion, we want to be able to take risk with deployments and expect downtime so that we can move quickly.
It’s not about moving fast and breaking things. It’s about using math to figure out what the risk is if we move a given speed.
So we just did a brief refresher of Service Level Objectives, something folks here are probably somewhat familiar with already. I encourage folks here to read the books I’ve listed, but also to remember that SLOs are tools for the business, and should be tailored appropriately to your use cases.
Now let’s look at a common mistake when using percentiles with SLOs - averaging percentiles.
Averaging percentiles is probably the single most common mistake made when working with latency metrics. Why is this? Part of it is that averaging percentiles is actually a reasonable approximation when systems are functioning normally and nodes are exhibiting symmetric workloads.
It’s easy to get an idea of aggregate system performance in those situations by just adding up percentiles from nodes and dividing by the number of nodes. The data from most monitoring systems makes this very easy to do.
This approach becomes problematic, though, when node workloads are asymmetric. If you’re looking at an average of percentiles when that happens, you’ll rarely know it; this approach hides those asymmetries.
This is a graph of 5 minute p99, p95, p90, and p50 over roughly 24 hours. What is the p99 over the entire time range? We have a peak of ~300 microseconds for 6 hours, and about 180 microseconds for the other 18 hours. So we can guess that the p99 is probably around 200 microseconds, right? But what if I told you that 90% of the requests occurred during the period where the 5 minute p99 was 300 microseconds? Does that change your answer? This is a good example of why you can’t average percentiles over time: what you don’t see here is request volume.
Let’s take a look at averaging percentiles over nodes. Here is the distribution of requests for 2 webservers, along with the corresponding p95s (this is over constant time). This distribution shows the sample number on the Y axis and the sample value on the X axis.
You can see that the blue webserver had more samples at lower latencies, and the red webserver a flatter distribution of latencies that were generally higher than the blue webserver.
What happens if we calculate p95 by averaging the per-node p95s versus calculating it by aggregating the samples?
If we combine the samples and calculate the correct p95, we get 230 milliseconds. If I average the p95 from both sample sets, we get 430 milliseconds.
That’s nearly a 2x difference. If these sample distributions matched up exactly, we could average the p95s and get the same answer as taking the p95 of the aggregated samples, but that rarely happens, and it especially doesn’t happen when workloads are asymmetric.
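To make this concrete, here is a small sketch with synthetic, hypothetical data (not the numbers from the slide): two nodes with asymmetric workloads, comparing the average of the per-node p95s against the p95 of the merged samples.

```python
# Sketch with synthetic data: averaging per-node p95s vs. the p95 of the
# combined samples. The workload shapes here are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
# "Blue" node: many fast requests; "red" node: fewer, flatter, slower (ms).
blue = rng.exponential(scale=50, size=100_000)
red = rng.uniform(low=100, high=600, size=5_000)

avg_of_p95s = (np.percentile(blue, 95) + np.percentile(red, 95)) / 2
true_p95 = np.percentile(np.concatenate([blue, red]), 95)

print(f"average of per-node p95s: {avg_of_p95s:.0f} ms")
print(f"p95 of merged samples:    {true_p95:.0f} ms")
```

With asymmetric workloads like these, the two answers diverge badly; only merging the underlying samples (or mergeable histograms of them) gives the correct percentile.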
So how do people end up averaging percentiles? Well there are some common workflows that make it very easy.
One way to collect latency data without instrumenting your application is to use a log parser like Google’s mtail which extracts latency metrics from logs, and then stores them in a Time Series Database or sends them to a statsd server or some other aggregation point.
So what latency metric do you end up storing? Almost none of the open source tooling exposes the raw samples; it provides the average, the median, or any one of the common percentiles captured for analysis. So if you run mtail on each web server, you end up storing latency percentiles for each node, which results in the graph we just showed.
Sure, this is an easy situation to avoid if you have this knowledge, but that’s usually the exception as opposed to the rule. Though I’ve talked to folks who do this and say “yeah we know it’s not correct some of the time, but it’s the best we have right now and we don’t have time to change it”.
Well, we’re going to see how we can do better.
So let’s start off with our first of three approaches on how to compute service level objects the right way.
The first approach is to compute our service level objective by using log data. This is an example Apache log line configuration to log request time as milliseconds. Pretty much anyone running a web service can log the time of the request with just a small configuration change.
So for each log line emitted, we have to store about 100 bytes of data, which means that 10M requests will cost us about one gigabyte.
Remember though that if this is a web page, you’ll also be logging the request time to serve images and other static content. So if you are collecting more than just API request data you may need to multiply this number by several dozen.
Once we have our logfile configured to emit latency, collecting it usually goes something like this. You stuff your logs into a store like HDFS, Logstash, or Splunk. And then you either use Elasticsearch, Splunk, or good old fashioned grep and awk to query the logs.
This is all pretty straightforward. You just need lots of servers.
Once you have those latency metrics available, you can calculate your service level indicator. Just sort the samples and find the sample which is 5% from the top. That latency value is your 95th percentile. Some of the tools I just mentioned like Splunk and ElasticSearch have this capability built in.
Now you can take that SLI and apply it across your SLO timeframe. Again this is a straightforward mathematical calculation and can be easily coded up or done with the tools I previously mentioned.
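Both calculations are straightforward to sketch, assuming you have already extracted latency samples (in milliseconds) from your logs; the data and function names here are illustrative, not from any particular tool.

```python
# Sketch of the two log-based calculations: a p95 SLI from raw samples,
# and an SLO check over many time slices. Data below is a toy example.

def p95(samples):
    """Sort the samples and take the one 5% from the top."""
    ordered = sorted(samples)
    idx = max(0, int(len(ordered) * 0.95) - 1)
    return ordered[idx]

def slo_met(slices, threshold_ms, target=0.999):
    """SLO check: did enough time slices meet the p95 SLI threshold?"""
    good = sum(1 for s in slices if p95(s) < threshold_ms)
    return good / len(slices) >= target

# Toy example: 1000 slices; all but one stay under a 300ms p95 SLI.
fast = [10, 20, 30, 40, 50] * 20      # p95 = 50ms
slow = [10, 20, 400, 500, 600] * 20   # p95 = 600ms
slices = [fast] * 999 + [slow]
print(slo_met(slices, threshold_ms=300))  # 999 of 1000 slices met the SLI
```

In practice the expensive part is step one, extracting the samples, which is exactly why this approach needs so much storage and compute.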
Of course, the devil is in the details with this approach - as I previously mentioned, you’ll need a lot of servers, and this is not an option that you can really implement for realtime analysis.
Let’s take a look at the pros of this approach.
It’s easy to get latency out of your logs.
The math is fairly easy to implement, you can use some open source tools or build your own.
The results are accurate. I’ve taken this approach in the past using Splunk.
Now the downsides.
It’s expensive. Like I said, I’ve done this with Splunk, which isn’t cheap.
It’s appealing to sample requests to cut down the volume, but that can really skew your accuracy. Sampling should never really be an option to solve a too much data problem, there are better ways to deal with that.
This approach is slow; you are shipping a lot of bytes around and analyzing them in fairly inefficient ways.
It is difficult to scale; you need a lot of servers and a lot of processing power.
So that covers approach one on computing SLOs the right way. Let’s look at another option of calculating SLOs, this time by counting requests.
The approach to calculating a service level objective by counting requests is fairly simple. First you pick your SLI threshold, let’s choose 30 milliseconds.
You instrument your application to count the number of total requests, and the number of requests that violated your SLI threshold. Then you calculate the percentage of successful requests - that’s your SLO.
This approach is similar to the cumulative ‘le’ histograms used by Prometheus. Those specify a number of predetermined buckets and count the number of requests that fall under those thresholds.
Here’s a visualization of this approach. The requests that violated our SLI are in red at the bottom of the image, the total count of requests are in grey. You can generate a graph like this with pretty much any monitoring or observability system out there.
Our SLO is 90% of requests in less than 30 milliseconds. Calculating the SLO here is easy: about two thousand failing requests out of sixty thousand total is a failure rate under 4%, so roughly 96% of requests succeeded.
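The counting approach is small enough to sketch in a few lines. The counter and function names are illustrative, and the loop simply replays the numbers from the slide:

```python
# Minimal sketch of the request-counting approach with two counters.

SLI_THRESHOLD_MS = 30

total_requests = 0
bad_requests = 0

def record(latency_ms):
    """Instrumentation hook: bump both counters for each request."""
    global total_requests, bad_requests
    total_requests += 1
    if latency_ms >= SLI_THRESHOLD_MS:
        bad_requests += 1

# Replay the slide's numbers: 2,262 bad requests out of 60,124 total.
for _ in range(60_124 - 2_262):
    record(10)   # under the threshold
for _ in range(2_262):
    record(45)   # over the threshold

success_pct = 100 - (bad_requests / total_requests) * 100
print(f"{success_pct:.1f}% of requests under {SLI_THRESHOLD_MS}ms")  # 96.2%
```

Note that the threshold is baked into the instrumentation, which is exactly the inflexibility discussed next.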
We met our SLO. Let’s take a look at the pros and cons of this approach.
This is a simple approach to implement, the math is very easy. Any number of tools can be used for this approach.
It’s very performant - counting requests is fast.
It’s quite scalable. I can keep the two metrics I need in 128 bits of RAM: one 64 bit int for total requests, and one for unsuccessful requests.
The results for this approach are accurate. It’s difficult to screw up the calculations.
Now the bad news - the fixed threshold means that you have to reconfigure your app if you want to adjust your SLI. This makes this approach highly inflexible.
In addition, it means that you can only analyze historical data for one threshold. Unless you have a system that is tied very tightly to a specific threshold which you have spent a lot of time ensuring that this is the right threshold for your business, chances are your SLI threshold is not ideally placed.
So while this may be an appealing approach on the surface, there are some hidden costs associated with it.
We’ve looked at two approaches for calculating SLOs, so now let’s talk about using histograms.
Who here has seen one of these before? This is a histogram, also known as a distribution. It is one of the seven basic tools of quality.
It’s basically a graph that has the number of data samples on the Y axis and, in our case, the latency value on the X axis.
It looks like a bar chart; each bar has a low and high boundary, and represents the number of samples between those values. We refer to those as bins, but you’ll also hear them defined as buckets.
We can use some numbers to characterize particular histograms. We’ve talked about percentiles, which we can also refer to as quantiles using the q notation. The median is the point at which half the samples are below that value, and half are above. The mean is the average. q(0.9) is the same as the 90th percentile, where 90 percent of the values are below that number. q(1) is just a fancy way of saying the maximum. A mode of a histogram is the value at which there is a local maximum number of samples.
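Python’s standard library can illustrate these summary statistics on a toy sample set (the numbers here are invented for illustration):

```python
# Summary statistics for a distribution, on a toy sample set.
import statistics

samples = [1, 2, 2, 2, 3, 4, 5, 6, 7, 100]

print(statistics.median(samples))  # q(0.5): half the samples below, half above
print(statistics.mode(samples))    # the most frequent value: 2
print(statistics.mean(samples))    # the average, pulled up by the outlier
print(max(samples))                # q(1), the maximum: 100
```

Notice how a single outlier drags the mean well above the median; that skew is exactly why percentiles, not averages, are used for latency SLIs.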
Gil Tene has an excellent site explaining histograms, hdrhistogram.org.
There are several different types of histograms. I’m listing a few of some of the most common variations here.
Linear, Approximate, Fixed bin, Cumulative, and Log Linear. These different types really represent attributes of certain histogram implementations, such that you could combine these to create a histogram that fits your business needs.
For example, you could create a cumulative log linear histogram which has bins in powers of 10s, but each subsequent bin would contain the sum of the bins with lower values.
For example, the hdrhistogram reference I mentioned in the previous slide is a log linear type histogram. I won’t go into detail about each of these types here, we’ll be looking at log linear histograms, but I’ve got a presentation up on slideshare which details each of these types if you want to learn more.
This is the log linear histogram type that we’ve implemented at Circonus. Bin sizes increase by a factor of 10 every power of 10, and there are 90 bins between each power of 10. The X axis scale is in microseconds, the Y axis is the number of samples.
As an example, there are 90 bins between 100,000 and 1 million, each with a size of 10,000. This sample histogram shows latency distribution from a web service in microseconds. I’ve overlayed the average, median, and 90th percentile values.
Note how the bin size increases from 10,000 to 100,000 at the 1 million value.
I’ve listed the github repos here that show code implementations of this data structure in both C and Golang.
This histogram represents about 50 million or so data samples - it’s relatively cheap to store in this format. The size needed to store it is invariant to the number of data samples it contains.
This is another implementation of the log linear histogram. This particular graph shows syscall latencies captured by eBPF for sysread and syswrite calls.
Notice that each dataset in the histogram has several modes. We can also clearly see where the bin size changes at the 10 microsecond boundary.
This particular histogram has 15 million samples in it.
Mergeability. Histograms have the property of mergeability, which means that they can be merged together as long as they have a common set of bin boundaries.
So if I have two histograms, each representing latency distributions, I can merge them together in one histogram. I could also do this for 10,000 histograms, each of those could represent the latency of a web server over a certain time period.
I can also merge together histograms across time. I can take a distribution of yesterday’s latencies, and merge it with today’s to get an aggregate distribution for the combined time range.
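Merging is simple enough to sketch with plain bin-count maps, a stand-in for a real histogram library, assuming identical bin boundaries (the bin values and counts below are hypothetical):

```python
# Sketch: merging bin-indexed histograms with identical bin boundaries.
# Bins are keyed by their lower boundary; merging just adds the counts.
from collections import Counter

def merge(*hists):
    """h(A ∪ B): sum per-bin counts; valid only for identical boundaries."""
    out = Counter()
    for h in hists:
        out.update(h)
    return out

node_a = Counter({100: 40, 110: 25, 120: 5})   # latency ms -> sample count
node_b = Counter({100: 10, 110: 30, 130: 8})

combined = merge(node_a, node_b)
print(combined[100], combined[110])  # 50 55
```

The same operation works across time: yesterday’s histogram plus today’s gives the distribution for the combined window.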
We can generate SLOs from histograms that contain latency data.
Say I have a distribution of request latency in a histogram, and I want to ask how many requests are faster than 330 milliseconds.
The math is simple, I walk the bins from lowest value to highest until I reach the bin that has a value of 330 milliseconds, aggregating the bin counts along the way.
The sum of those samples is how many requests were faster than 330 milliseconds. Pretty simple.
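That bin walk can be sketched in a few lines, using a plain dict keyed by each bin’s lower boundary (hypothetical data, with the threshold landing exactly on a bin boundary so no interpolation is needed):

```python
# Sketch: "how many requests were faster than 330ms?" by walking the bins
# from lowest to highest and summing counts. Bins keyed by lower bound (ms).

def count_below(hist, threshold):
    """Sum counts for every bin that lies entirely below the threshold."""
    return sum(count for lower, count in hist.items() if lower < threshold)

# Hypothetical latency histogram with 10ms-wide bins.
hist = {300: 120, 310: 80, 320: 60, 330: 40, 340: 10}
print(count_below(hist, 330))  # 260: the 300, 310, and 320 bins
```

If the threshold fell inside a bin rather than on a boundary, you would interpolate within that bin, which is where the bounded error discussed below comes in.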
So I gave a lightning version of this talk a few months ago at NewOpsDays at Splunk, and then put my slides up on Twitter. Liz Fong-Jones apparently read them and brought up the good point about what happens if the value you are interested in falls between histogram bin boundaries.
You don’t need to be interested only in sample values that lie on bin boundaries. If I had chosen 330 milliseconds and my bin boundaries were 300 and 400 milliseconds, I could interpolate across those boundaries to get an approximate answer. For operational data, we’ve found the error using the log linear histogram I’ve shown has a maximum value of about 5 percent. But this brings up the question of which binning algorithm the log linear structure uses.
In the implementation I’ve shown, we have bin boundaries at 320, 330, and 340.
At the scale of 10, the bin boundaries occur at each integer.
We can also represent much smaller values, which have increased precision.
This log linear histogram implementation provides a very wide range of values while simultaneously being able to achieve high precision across those ranges. In practice, we generally see about 300 bins total needed to represent operational latency telemetry.
The maximum error experienced with this type of data structure is bounded at a little less than 5%. Say I have a bin bounded at 10 and 11. If I insert several values of 10.99, those are represented within the bin as 10.5, which gives me an error of approximately 5%. That’s for a worst case sample set, which we pretty much never see in practice.
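Here is a rough sketch of the two-significant-digit binning idea. This is an approximation of the scheme described above, not libcircllhist’s actual code, and it only handles positive values:

```python
# Sketch: which bin does a value fall into, when every bin keeps two
# significant decimal digits? (Approximation of the scheme described above.)
import math

def bin_bounds(value):
    """Lower/upper boundary of the bin holding `value` (positive values)."""
    exp = math.floor(math.log10(value))  # decimal exponent of the value
    width = 10 ** (exp - 1)              # bin width: 1/10 of the decade
    lower = math.floor(value / width) * width
    return lower, lower + width

print(bin_bounds(456))    # (450, 460)
print(bin_bounds(10.99))  # (10, 11): midpoint 10.5, worst-case error ~4.5%
```

This gives 90 bins per decade, matching the boundaries on the slide (..., 320, 330, 340, ... and 10, 11, 12, 13, ...).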
So these bin boundaries provide a very good base for calculating SLOs against.
So let’s summarize the pros of calculating SLOs using histograms.
They are space efficient. In practice we see about 300 bytes per histogram, which is about 1/10th the size of the log data approach.
We can choose our SLI thresholds as needed to calculate SLOs. We can also calculate inter-quartile ranges, get counts of samples below a certain threshold (remember that SLOs are business specific, not one size fits all), do standard deviation, and a number of other statistical calculations.
They can be aggregated across both space and time.
They are computationally efficient. The implementation I have just shown has nanosecond bin insertion latency, and percentiles can be calculated in a microsecond or so. If we have some extra time I’ll pull up the code and walk through it.
Errors are bounded to half a bin size, which is typically worst case 5%.
There are several open source libraries available as I’ve shown to do these kinds of calculations, you don’t need to go out there and write your own.
So there are some downsides to calculating SLOs with histograms.
The math is more complex than the other methods I’ve shown, but it’s still relatively simple when compared to t-digest and other quantile approximation methods.
There is some loss of accuracy when compared to the other two methods, but in practice 5% is a worst case scenario which is never really seen in production workloads.
So let’s take a look at how we can do some of these calculations with the log linear histogram library I mentioned, and Python. Python bindings to libcircllhist are available to install with the pip utility.
You’ll need to have the libcircllhist C library installed before running the pip command. There isn’t a way for pip to declare a C library dependency, at least not one that I’m aware of.
So here I’m going to create a histogram using this library, and then insert a few values. Each of these insertions should take about a nanosecond since that is handled in the C library.
Then I can easily get a count of samples, and generate a quantile from these samples. The percentile calculation happens in the C library and doesn’t take more than a few microseconds.
This is a pretty simple example that I think most folks here should be able to accomplish easily. Let’s look at something a little more complex.
If I have a set of latency values for a web service, I can create a histogram from those as I’ve shown, and then generate a visual plot of that histogram using code that looks like this.
I can also calculate my 95th percentile and draw a line on the graph. So let’s see what that might look like.
This plot should look familiar - I showed it earlier in this presentation. I created this plot using actual service latency data and the commands that I just showed you. There are about 10,000 samples here.
You can scale this to several million samples with commodity hardware using the library I referenced. If we have some time leftover after questions I can show this in action.
So let’s review what we’ve seen.
Be careful of averaging percentiles; it’s very easy to do, but you can easily come up with results that will lead you to incorrect conclusions.
The best approach for calculating SLOs is using counters or histograms. The approach using log data produces the correct results, but is economically inefficient.
Histograms give you the widest range of flexibility for choosing different latency thresholds, but I’m only aware of two open source implementations, the Go and C log linear implementation from Circonus, and the hdrhistogram implementation in Java.
There are not any time series databases that support storing either sparsely encoded or high dynamic range histograms yet except for IRONdb. Storing histogram data serialized on disk is one option, but you may run into some challenges at large scale.
That’s it. It looks like I met my service level objective for talk length, and we have a small error budget for questions. Or if time allows, I can show some of the Python code in action.