Predicting response times for the Spotify backend
Rerngvit Yanggratoke1, Gunnar Kreitz 2,
Mikael Goldmann 2, Rolf Stadler 1
1ACCESS Linnaeus Center, KTH Royal Institute of Technology, Sweden
2Spotify, Sweden
8th International Conference on
Network and Service Management (CNSM 2012), Las Vegas
24 October, 2012
1 / 24
What is Spotify?
On-demand music streaming service, similar to MOG or Rhapsody.
Large catalogue: over 15 million tracks.
Over 15M active users and 4M subscribers around the world.
“Spotify: large-scale distributed system with real users.”
2 / 24
Overview
Spotify is a peer-assisted system.
Key metrics: response time and latency.
Goal
An analytical model for the distribution of the response time for the Spotify backend.
The model should be tractable and accurate.
Motivation
Low latency is key to the Spotify service.
Related works either lack a performance model or model only the average response time.
Steps
Study the Spotify backend and derive a simplified system model.
Replicate the Spotify backend implementation on the KTH testbed.
Develop and validate the model for
the KTH testbed with small and large servers.
the Spotify operational infrastructure.
3 / 24
Outline
1 The Spotify backend architecture
2 Analytical model for estimating response time distribution
3 Validation of the model
Validation on the KTH testbed
Validation on the Spotify operational environment
4 Applications of the model
5 Conclusions and future work
4 / 24
The Spotify backend architecture
Backend sites: Stockholm, London, Ashburn.
Master Storage stores all songs.
Production Storage acts as a caching layer.
5 / 24
Simplified architecture for a particular site
Model only Production Storage.
The AP (access point) selects a storage server uniformly at random.
Ignore network delays.
Consider steady-state conditions and Poisson arrivals.
7 / 24
Model of a single storage server
A request is served from memory with probability q, or from one of the nd identical disks with probability (1 − q)/nd.
Model memory and each disk as an M/M/1 queue.
µm: service rate of memory; µd: service rate of a disk. µm ≫ µd holds.
Model for a single storage server
The probability that a request to the server is served within latency t is
Pr(T ≤ t) = q + (1 − q)(1 − e^(−µd(1 − (1−q)λ/(µd·nd))·t)).   (1)
8 / 24
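As a quick sanity check, equation (1) can be evaluated directly. The sketch below assumes the stated M/M/1 setting and treats memory hits as effectively instantaneous (since µm ≫ µd), so only the disk term contributes latency; the example parameter values are illustrative.

```python
import math

def server_latency_cdf(t, q, mu_d, n_d, lam):
    """Pr(T <= t) for a single storage server, per equation (1).

    t: latency bound (seconds), q: probability of a memory hit,
    mu_d: service rate of one disk (requests/sec),
    n_d: number of identical disks, lam: arrival rate to the server.
    """
    # Each disk sees rate (1 - q) * lam / n_d, giving utilization rho.
    rho = (1.0 - q) * lam / (mu_d * n_d)
    # Memory hits (probability q) complete "immediately"; disk hits follow
    # the M/M/1 response-time distribution with rate mu_d * (1 - rho).
    return q + (1.0 - q) * (1.0 - math.exp(-mu_d * (1.0 - rho) * t))

# Example: large-server disk rate (mu_d = 120) at 50 requests/sec.
p = server_latency_cdf(0.1, 0.9, 120.0, 1, 50.0)
```

At t = 0 the expression reduces to q (only memory hits have completed), and it increases monotonically towards 1 as t grows.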
Model of a cluster of storage servers
Set of storage servers S.
Arrival process: Poisson process with rate λc.
A request is forwarded to a server s ∈ S independently and uniformly at random.
For server s
µd,s: the service rate of a disk.
nd,s: number of identical disks.
qs: probability that the request is served from memory.
f(t, nd, µd, λ, q) = Pr(T ≤ t).
Tc: latency of a request for the cluster.
Model of a cluster of storage servers
The probability that a request to the cluster is served within latency t is
Pr(Tc ≤ t) = (1/|S|) Σ_{s∈S} f(t, nd,s, µd,s, λc/|S|, qs).   (2)
9 / 24
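Equation (2) simply averages the per-server distribution f over the cluster, with each server receiving rate λc/|S|. A minimal sketch, with f from equation (1) inlined so the snippet is self-contained; the example cluster is illustrative:

```python
import math

def cluster_latency_cdf(t, servers, lam_c):
    """Pr(T_c <= t) for a cluster of storage servers, per equation (2).

    servers: list of (q_s, mu_d_s, n_d_s) tuples, one per server s in S.
    lam_c: Poisson arrival rate to the cluster; under uniform random
    forwarding each server receives rate lam_c / |S|.
    """
    lam = lam_c / len(servers)
    total = 0.0
    for q, mu_d, n_d in servers:
        # f(t, n_d, mu_d, lam, q), i.e. the single-server CDF of equation (1)
        rho = (1.0 - q) * lam / (mu_d * n_d)
        total += q + (1.0 - q) * (1.0 - math.exp(-mu_d * (1.0 - rho) * t))
    return total / len(servers)

# Illustrative heterogeneous cluster of three servers.
p = cluster_latency_cdf(0.1, [(0.9, 120.0, 1), (0.85, 120.0, 1), (0.8, 150.0, 6)], 300.0)
```

For a homogeneous cluster this reduces to the single-server CDF evaluated at the per-server rate, which is a handy consistency check.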
Setup and measuring of request latency for a server
Validation compares measurements at the load injector with the model (equation 1).
11 / 24
Estimating model parameters for a storage server
q: benchmark the server over a range of request rates, then approximate q through least-squares regression.
µd: run iostat while the server is serving a Spotify load.
Parameter   Small server   Large server   Spotify server
µd          93             120            150
nd          1              1              6
α           0.0137         0.00580        0.000501
q0          0.946          1.15           0.815
12 / 24
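The least-squares step can be sketched as follows. The benchmark numbers here are hypothetical, and the linear form q(λ) = q0 − αλ is inferred from how the table's α and q0 enter the confidence-limit quadratic on slide 13; it is a sketch of the fitting procedure, not the authors' exact code.

```python
# Hypothetical benchmark measurements: request rate (req/s) vs. the
# measured memory-hit fraction q at that rate.
rates = [100.0, 200.0, 300.0, 400.0]
q_meas = [0.93, 0.87, 0.82, 0.76]

# Closed-form least-squares fit of the assumed linear model
# q(lam) = q0 - alpha * lam, yielding the table's alpha and q0.
n = len(rates)
mean_x = sum(rates) / n
mean_y = sum(q_meas) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(rates, q_meas)) / \
        sum((x - mean_x) ** 2 for x in rates)
alpha = -slope                  # q decreases with load, so slope < 0
q0 = mean_y + alpha * mean_x    # intercept at lam = 0
```

Note that q0 is a regression intercept, which explains how the table can report q0 = 1.15 (> 1) for the large server even though q itself is a probability.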
Model confidence limit for a single storage server
Extensive testing reveals that model predictions are close to the measurements when the average length of the disk queue is at most one.
The model confidence limit is the maximum request rate below which the above condition holds.
Model confidence limit for a single storage server
The limit λL is the positive root of
αλL² + (1 − q0)λL − µd·nd/2 = 0.
13 / 24
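Solving this quadratic is straightforward: since α > 0 and the constant term is negative, the discriminant is always positive and exactly one root is positive. A sketch, evaluated with the large-server parameters from the table (α = 0.00580, q0 = 1.15, µd = 120, nd = 1):

```python
import math

def confidence_limit(alpha, q0, mu_d, n_d):
    """Positive root of alpha*lam^2 + (1 - q0)*lam - mu_d*n_d/2 = 0."""
    a, b, c = alpha, 1.0 - q0, -mu_d * n_d / 2.0
    disc = b * b - 4.0 * a * c  # positive, since a > 0 and c < 0
    return (-b + math.sqrt(disc)) / (2.0 * a)

# Large-server parameters from the table; roughly 115 requests/sec.
lam_L = confidence_limit(0.00580, 1.15, 120.0, 1)
```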
Validation for a single large server
The prediction accuracy decreases with increasing request rate.
The maximum error is within 5%.
14 / 24
Setup and measuring of request latency for a cluster
Validation compares measurements at the access point with the model (equation 2).
15 / 24
Model confidence limit for a storage cluster
The confidence limit is the maximum request rate to the cluster such that the rate to any server does not exceed the confidence limit for a single server with high probability.
To compute the limit, we estimate the load of the most heavily loaded server with high probability by applying the balls-and-bins model.
Model confidence limit for a storage cluster
The limit λL,c is the smaller root of
(1/|S|²)·λL,c² − (2λL/|S| + 2 log|S|·Kβ,|S|/|S|)·λL,c + λL² = 0,
whereby β = 2 and Kβ,|S| = 1 − (1/β)·log log|S| / (2 log|S|).
16 / 24
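The cluster limit can then be computed numerically. In the sketch below the signs of the middle coefficient are written so that the quadratic has two real positive roots, as the instruction to take the smaller root requires (the extracted slide text is ambiguous on this point); the 31-server, λL = 500 req/s example is illustrative.

```python
import math

def cluster_confidence_limit(lam_L, n_servers, beta=2.0):
    """Smaller root of the cluster confidence-limit quadratic (slide 16).

    lam_L: single-server confidence limit; n_servers: |S|.
    Signs are chosen so the quadratic has two real positive roots,
    matching the slide's instruction to take the smaller one.
    """
    S = n_servers
    K = 1.0 - (1.0 / beta) * math.log(math.log(S)) / (2.0 * math.log(S))
    a = 1.0 / S**2
    b = -(2.0 * lam_L / S + 2.0 * math.log(S) * K / S)
    c = lam_L**2
    disc = b * b - 4.0 * a * c
    return (-b - math.sqrt(disc)) / (2.0 * a)

# Illustrative: 31 servers (as at the Stockholm site), lam_L = 500 req/s.
lam_Lc = cluster_confidence_limit(500.0, 31)
```

By construction the resulting per-server rate λL,c/|S| stays below λL, with the gap accounting for the imbalance predicted by the balls-and-bins bound.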
Result for a cluster of three large servers
The prediction accuracy decreases with increasing request rate.
The maximum error is within 4.6%.
17 / 24
Result for a single Spotify operational server
The prediction accuracy decreases with increasing request rate.
The maximum error is within 8.5%.
18 / 24
Result for the Spotify operational service - Stockholm site
31 operational Spotify storage servers.
24 hours of measurement data.
Arrival rate and response time distribution for the requests (five-minute averages).
The maximum error is within 9.7%.
19 / 24
Applications of the model
Varying the number of servers for a load of 12,000 requests/sec (peak load).
Varying the load for 25 storage servers.
21 / 24
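A server sweep of this kind can be sketched by combining equations (1) and (2) for identical servers. Here q varies with the per-server rate via the assumed linear fit q(λ) = q0 − αλ, using the Spotify-server parameters from slide 12; the 50 ms threshold is an illustrative choice, not taken from the slides.

```python
import math

# Spotify-server parameters from slide 12; the linear memory-hit model
# q(lam) = q0 - alpha*lam is inferred from how alpha and q0 enter the
# confidence-limit quadratic on slide 13.
Q0, ALPHA, MU_D, N_D = 0.815, 0.000501, 150.0, 6

def frac_within(t, n_servers, lam_total):
    """Fraction of requests served within t seconds by n identical servers."""
    lam = lam_total / n_servers      # per-server rate (equation (2))
    q = Q0 - ALPHA * lam             # memory-hit probability at this rate
    rho = (1.0 - q) * lam / (MU_D * N_D)
    return q + (1.0 - q) * (1.0 - math.exp(-MU_D * (1.0 - rho) * t))

# Varying the number of servers at the 12,000 req/s peak load.
for n in (20, 25, 31, 40):
    print(f"{n} servers: Pr(T <= 50 ms) = {frac_within(0.050, n, 12000.0):.4f}")
```

As expected, adding servers lowers the per-server rate, raises the memory-hit probability, and pushes the latency distribution towards 1.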
Conclusions
Developed a tractable model for the distribution of the response time of the Spotify backend.
Extensive validation of the model on (1) the KTH testbed and (2) the Spotify operational infrastructure.
The model predictions are accurate for a lightly-loaded storage system, with errors of up to 11%.
The confidence range of our model covers the entire operational range of the load on the Spotify storage system.
For instance, the model confirms that the Stockholm site could handle a significantly higher peak load, or handle the current load with fewer servers.
23 / 24
Current / Future work
An online performance management system for a distributed
key-value store like the Spotify storage system.
Evaluation of object allocation policies.
24 / 24
