1. Selective Placement of Caches for
Hash-Based Off-Path Caching in ICN*
Anshuman Kalla
*Indian Journal of Science and Technology, Vol 9 (37), October 2016
DOI: 10.17485/ijst/2016/v9i37/100323
2. In-Network Caching in ICN
• The research-driven resurgence in the field of networking seems to have culminated in ICN
– since it has the capability to address, in a unified manner, both existing & anticipated issues
• In-network caching has evidently emerged as an indispensable core functionality of ICN
– expected to play a vital role in improving performance
• The advantages of in-network caching, related issues, influencing factors, performance metrics, etc.
– have been discussed in detail *
* A. Kalla, S. K. Sharma, “A Constructive Review of In-Network Caching: A Core Functionality of ICN,” Proc. IEEE International Conference (ICCCA), pp. 567–574, 2016
2/24/2017 Anshuman Kalla
6. Background & Motivation
• Off-path caching has preliminarily shown better results
– in terms of hit ratio, retrieval delay, cache diversity, server load, etc.
– as compared to on-path and edge caching [see references in the paper]
• One of the practical & popular ways of implementing pervasive* off-path caching is
– hash-based off-path caching, which uses a hash function to map contents to caches
* Pervasive means all the nodes are enabled with in-network caching capability
• Nevertheless, one downside of off-path caching is
– an increase in intra-domain link load to achieve the required gain
• Moreover, internal link stress and retrieval delay depend on
– the network topology
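The hash-based mapping mentioned above can be sketched roughly as follows. This is a minimal illustration, not the exact scheme evaluated in the paper; the function and node names are hypothetical.

```python
import hashlib

def designated_cache(content_name: str, caching_nodes: list) -> str:
    """Map a content name to one off-path caching node.

    A stable hash of the name, taken modulo the number of
    caching-enabled nodes, picks the designated cache, so every
    router can derive the same mapping independently.
    """
    digest = hashlib.sha256(content_name.encode()).hexdigest()
    index = int(digest, 16) % len(caching_nodes)
    return caching_nodes[index]

nodes = ["n1", "n2", "n3", "n4"]
# The same content name always maps to the same designated node.
assert designated_cache("/videos/clip42", nodes) == designated_cache("/videos/clip42", nodes)
```

Because the mapping is deterministic, a request for a content can be forwarded toward its designated (possibly off-path) cache without any per-content state in the routers.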
11. Aim
• With this background, the question raised is:
Does there exist a selective cache allocation map* which could be conducive to achieving
• a smaller ‘average retrieval delay’ and, simultaneously,
• a lower ‘maximum intra-domain link-stress’,
while keeping the other relevant metrics unaltered as compared to pervasive cache allocation?
If so, what could be a simple way to find such a selective placement of caches for hash-based off-path caching?
* Cache allocation or placement implies configuring the nodes with content storage capacity
14. Aim
• To ensure profitable, economic & selective placement of caches such that the placement
– tends to reduce, simultaneously, average retrieval delay & maximum internal link-stress
• We propose a simple yet effective heuristic algorithm to find a selective cache allocation map
– by strategically identifying & excluding bad node-positions
• Rationale:
– there are a few nodes in the network which, if selected for caching, cause an adverse effect
• To the best of our knowledge, this work is the first attempt
– to explore selective placement of caches for hash-based off-path caching
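The debar-and-evaluate idea above can be sketched as a greedy loop. Here `cost_of` and `evaluate` are hypothetical stand-ins for the paper's cost computation and simulation runs, respectively; this is an illustration of the strategy, not the paper's exact Algorithm 1.

```python
def select_placement(nodes, cost_of, evaluate, max_debarred):
    """Greedy heuristic sketch: debar an increasing number of the
    costliest nodes from caching and keep the placement that
    minimises the evaluation objective (e.g. average retrieval delay).
    """
    ranked = sorted(nodes, key=cost_of, reverse=True)  # most costly first
    best_placement, best_score = list(nodes), evaluate(nodes)
    for k in range(1, max_debarred + 1):
        # Exclude the k costliest node positions from caching.
        placement = [n for n in nodes if n not in ranked[:k]]
        score = evaluate(placement)
        if score < best_score:
            best_placement, best_score = placement, score
    return best_placement
```

Each pass through the loop corresponds to one "iteration" in the paper's terminology; each call to `evaluate` would correspond to averaging over multiple simulation runs.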
18. Cost Computation of Nodes
• A cost is calculated for every node in the network
• Since a uniform rate of request arrival at the ingress nodes is considered, the cost expression simplifies accordingly
– [cost equation for C_r rendered as an image in the original slides]
H = average hit ratio for the case of pervasive hash-based off-path caching
d_xy = delay encountered to reach destination y from source x using the shortest path
I = set of ingress nodes, V = set of nodes in the network topology
e ∈ E denotes the egress node that falls over the shortest path & is connected to the server
• Using these values, an ordered list of nodes is prepared
– in descending order of cost C_r, i.e., from most costly to least costly
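A minimal sketch of this step, assuming one plausible form of the cost: the expected delay to serve requests when node r is the designated cache (hit served at r with probability H; miss forwarded via r's egress node to the origin server). This assumed formula is illustrative only, not the paper's exact expression.

```python
def node_cost(r, ingress, delay, egress_of, hit_ratio, server_delay=100.0):
    """Assumed cost of node r: expected retrieval delay summed over
    ingress nodes (uniform request rate makes a plain sum sufficient).

    delay[(x, y)] : shortest-path delay from x to y
    egress_of[r]  : egress node on r's shortest path toward the server
    """
    cost = 0.0
    for i in ingress:
        # On a miss (prob. 1 - H) the request continues to the server.
        miss_extra = delay[(r, egress_of[r])] + server_delay
        cost += delay[(i, r)] + (1 - hit_ratio) * miss_extra
    return cost

def rank_by_cost(nodes, **kw):
    """Ordered list of nodes, from most costly to least costly."""
    return sorted(nodes, key=lambda r: node_cost(r, **kw), reverse=True)
```

Whatever the exact cost expression, the algorithm only needs the resulting descending-cost ordering, which `rank_by_cost` produces.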
22. Performance Metrics
• Average Retrieval Delay
– the delay encountered, on average, to fetch the requested content, irrespective of source-of-request & point-of-retrieval
– the smaller the delay, the better the user’s QoE
• Link Stress
– for an intra-domain link, this metric indicates the volume of traffic passed through it
– the value of link stress for the most heavily loaded internal link is termed the maximum internal link stress
24. Performance Metrics
• Hit Ratio
– the fraction of requests satisfied by contents residing at in-network caches, out of the total requests received
– the higher the hit ratio, the lower the traffic delegated over the expensive external links, and the lower the load on the origin server
• Unique Contents Cached
– the total number of distinct contents cached by all the in-network caches collectively
– the more unique contents cached, the better the cache diversity
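The two counting metrics above can be computed from a simulation trace roughly as follows; the trace representations (per-request hit flags, per-traversal link ids) are assumptions for illustration.

```python
from collections import Counter

def hit_ratio(hits):
    """hits: iterable of booleans, one per request
    (True = served from an in-network cache)."""
    hits = list(hits)
    return sum(hits) / len(hits)

def max_link_stress(traversals):
    """traversals: iterable of internal-link ids, one entry per unit
    of traffic crossing that link; the metric is the load on the
    most heavily used link."""
    return max(Counter(traversals).values())

assert hit_ratio([True, False, True, True]) == 0.75
assert max_link_stress(["l1", "l2", "l1"]) == 2
```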
26. Environment Setup
• Exodus (US), a PoP-level topology from Rocketfuel [Reference #22 in the paper], is used
• About 8% of the nodes were randomly appointed as egress nodes (connected to the origin server)
• Approx. 55% of the nodes were randomly selected as ingress nodes (connected to clients, i.e., users)
• The aggregate network cache size is chosen to be 10% of the content population
• The rest of the parameters used are as per the table, next
31. Parameters Used For Evaluation
Parameter: Value
Total number of nodes: 79
Number of servers: one
Number of egress nodes (connected to server): 6
Number of ingress nodes (connected to users): 44
Content population: 79,000
Aggregate network cache size: 7,900
Cache size per node: 100
Popularity distribution of content requests: Zipf (α = 0.8)
Delay to reach origin server from egress nodes: 100 ms
Cache size across nodes: Uniform
Rate of request arrival at ingress nodes: Uniform
Content size: Homogeneous
Cache replacement policy: LRU
Number of iterations carried out for the selected topology: 39
Number of runs per iteration of Algorithm 1: 20
Number of requests simulated per run (i.e., after launch of hash-based off-path caching): 500,000
Routing: Shortest Path
Links between users and ingress nodes: Bi-directional
Network regime: Congestion Free
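The Zipf (α = 0.8) request popularity in the table can be sampled, for instance, as below. This is a generic sketch; the paper does not specify its simulator at this level, and the function name is hypothetical.

```python
import random
from itertools import accumulate

def zipf_sampler(population, alpha=0.8, seed=0):
    """Return a sampler drawing content ranks 1..population with
    Zipf popularity: P(rank k) proportional to 1 / k**alpha."""
    rng = random.Random(seed)
    ids = list(range(1, population + 1))
    weights = [1.0 / k ** alpha for k in ids]
    cum = list(accumulate(weights))  # precompute for fast draws
    def sample():
        return rng.choices(ids, cum_weights=cum, k=1)[0]
    return sample

draw = zipf_sampler(1000, alpha=0.8)
requests = [draw() for _ in range(10_000)]
# Rank-1 content is requested far more often than rank-1000 content.
```

With α = 0.8 the popularity skew is moderate, so a cache holding the most popular contents captures a sizeable but not overwhelming share of requests, which is why hit ratios near 0.46 (as in the results, next) are plausible.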
32. Points To Note
• First
– the evaluation takes into account only the link propagation delay
– factors like queuing delay, packet loss, cross traffic, etc. have not been incorporated in this preliminary evaluation
• Second
– an iteration denotes the repetitive process of debarring an increasing number of nodes from caching, whereas
– a run indicates an act of deploying hash-based off-path caching with a selective placement of caches
– as per the last table, 20 runs per iteration have been carried out during the simulations
34. Points To Note
• Third
– depending on the hash function being used, the mapping of contents to designated off-path caches could be entirely different
– thus, for better evaluation, the content-to-cache mapping is changed for every run
– this explains the reason behind multiple runs per iteration
36. Results and Discussion
[Figures: average retrieval delay and maximum link stress, each plotted against the number of nodes debarred from caching]
• Selective placement: average retrieval delay 78.9465, maximum link stress 77410
• Pervasive caching: average retrieval delay 84.5661, maximum link stress 81784
• Gain: 5.6196 (i.e., 6.65%) in average retrieval delay; 4374 (i.e., 5.35%) in maximum link stress
42. Results and Discussion
[Figures: content-wise average retrieval delay and content-wise hit ratio, each plotted against contents in descending order of popularity]
43. Results and Discussion
Placement of Caches | Hit Ratio | Average Retrieval Delay | Cache Diversity | Maximum Link Stress
Pervasive | 0.4635 | 84.5661 | 7900 | 81784
Selective | 0.4638 | 78.9465 | 7900 | 77410
• The proposed algorithm is able to achieve
– 6.65% and 5.35% reductions in average retrieval delay and maximum internal link stress, respectively
• Quantitatively the gain might not be high, but qualitatively the gain comes at no trade-off
– that is, without sacrificing any other relevant performance metric
• The debarred nodes need not be upgraded with ICN functionalities, which implies lower cost
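The reported percentage gains follow directly from the table's values; a quick arithmetic check:

```python
pervasive = {"delay": 84.5661, "stress": 81784}
selective = {"delay": 78.9465, "stress": 77410}

def pct_reduction(base, new):
    """Relative reduction of `new` with respect to `base`, in percent."""
    return 100 * (base - new) / base

delay_gain = pct_reduction(pervasive["delay"], selective["delay"])
stress_gain = pct_reduction(pervasive["stress"], selective["stress"])
# Rounds to 6.65% (delay) and 5.35% (stress), matching the slides.
```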