Exploring Off-Path Caching with Edge Caching in Information Centric Networking*
Anshuman Kalla, Sudhir Sharma
* Proc. IEEE International Conference on Computational Techniques in Information and
Communication Technologies (ICCTICT), New Delhi, India, March 11, 2016.
Introduction
• The foundation of current (TCP/IP) networking was laid down in the early 1970s, when
– networking resources were scarce,
– multiple access to shared resources was of prime importance.
– This implies years of experience and a mature networking facility.
• Additional support came from numerous growth boosters such as
– the emergence of high-speed data communication links,
– refinements in multi-core processor technology,
– a consistent, exponential drop in the cost of data storage,
– a flood of affordable hand-held networking devices,
– multiple simultaneous connectivities.
• Thus we expect flawless evolution and a networking facility at its best.
Introduction
• In spite of years of maturity and technological advancements,
– the networking facility falls short of users' expectations,
– growth seems to be slowing down.
• The issues that have, in a way, plagued current TCP/IP networking are:
– Data dissemination & service accessing (the prominent usage)
– Named hosts (DNS maps names to hosts, not to contents)
– Mobility (a change in IP address forces ongoing applications to restart)
– Availability (of content or services, preferably close to users)
– Security (absence of data-level security)
– Flash crowds (leading to congestion, DoS, poor QoS, etc.)
• The trend is to deploy a dedicated fix for every issue encountered.
The Facts
• First fact: the growing number of add-on patches for various issues
– has transformed TCP/IP into a complex and delicate architecture.
• Second fact: today resources are no longer limited
– in fact, there are several network-enabled devices per person.
• Third fact: there has been a shift in the primary usage of the networking facility
– instead of sharing network resources, the prime usage is content-centric.
Thus the radical change in its usage is the crux of the various issues.
Information Centric Networking
• Lately, researchers have felt the need for a clean-slate approach
– to reconcile all the issues and the shift in usage in a unified manner.
– This marks the birth of Information Centric Networking (ICN).
• Various proposals exist: CCN, PSIRP, DONA, PURSUIT, etc.
• Albeit different in design details, all aim
– to retire the host-centric model and bring a content-centric model in its place.
• Content Centric Networking (CCN) has gained significant popularity,
– thus the present work uses CCN and its related terminology.
Salient Features of ICN
• Named content
• In-network caching → a secondary point-of-service
• Name-based routing
• Data-level security
• Multi-path routing
• Hop-by-hop flow control
• Pull-based communication
• Adaptability to multiple simultaneous connectivities
Types of In-Network Caching
• In-network caching in ICN can be classified into on-path caching, off-path caching and edge caching (and combinations of these, i.e. hybrid caching).
[Figure: example topology of routers R1–R8; a client behind R6 sends an Interest packet towards the server and the Data packet returns along the reverse path.]
• On-Path Caching – the returning Data packet may be cached at every node it traverses; in the example, the nodes that could cache the data are R1, R2, R3 and R6.
• Off-Path Caching – a designated node that need not lie on the delivery path holds the content; in the example, node R4 is the designated off-path cache.
• Edge Caching – only the edge node through which the Interest entered the network caches the content; in the example, node R6 is the edge cache.
A minimal sketch of these caching decisions is given below.
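To make the three techniques concrete, here is a minimal illustrative sketch (Python; not the authors' simulator) of which nodes may cache the returning Data packet in the example topology above. The path, node roles and names are assumptions for illustration only.

# Minimal sketch, assuming the Interest from the client behind R6 travels
# R6 -> R3 -> R2 -> R1 towards the server and the Data packet returns along
# the reverse path (as in the slides' example).
REVERSE_PATH = ["R1", "R2", "R3", "R6"]   # nodes traversed by the returning Data packet
EDGE_NODE = "R6"                          # node where the Interest entered the network
OFF_PATH_NODE = "R4"                      # designated off-path cache (need not be on the path)

def caching_nodes(technique):
    """Return the nodes allowed to cache the Data packet under each technique."""
    if technique == "on-path":
        return list(REVERSE_PATH)         # every traversed node may cache (R1, R2, R3, R6)
    if technique == "edge":
        return [EDGE_NODE]                # only the ingress edge node caches (R6)
    if technique == "off-path":
        return [OFF_PATH_NODE]            # content is placed at the designated node (R4)
    raise ValueError(technique)

for t in ("on-path", "edge", "off-path"):
    print(t, "->", caching_nodes(t))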
Aim - First
• To empirically compare the performance of on-path, off-path and edge caching [all three]
– Researchers have already compared the performance of on-path and edge caching:
• if a marginal performance gap is affordable, edge caching is preferable as it involves only the edge nodes (see the paper for references).
– However, a comparison of all three would answer the questions:
• Which of the three caching techniques performs the best?
• Is pervasive caching (i.e. caching at all nodes) really beneficial?
Performance Metrics Used
• Hit Ratio
– Indicates the availability of contents
– To be maximized
• Average Retrieval Delay
– The smaller the metric, the better the QoS perceived by users
– To be minimized
• Unique Contents Cached
– Implies cache diversity
– To be maximized
• Percentage of External Traffic
– Signifies use of expensive external links and load on the server
– To be minimized
A sketch of how these metrics can be computed from a simulation trace follows.
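As a rough illustration of how these metrics could be computed, here is a minimal sketch assuming a simple per-request trace format; the record fields and helper names are assumptions, not taken from the paper's tooling.

from dataclasses import dataclass

@dataclass
class Request:
    content: int              # content identifier (popularity rank)
    served_from_cache: bool   # True -> hit at some in-network cache, False -> served by the origin server
    delay_ms: float           # retrieval delay experienced by the requesting client

def hit_ratio(reqs):
    return sum(r.served_from_cache for r in reqs) / len(reqs)           # to be maximized

def average_retrieval_delay(reqs):
    return sum(r.delay_ms for r in reqs) / len(reqs)                    # to be minimized

def external_traffic_percentage(reqs):
    # assumed here: every request missed by all caches crosses the external link to the server
    return 100.0 * sum(not r.served_from_cache for r in reqs) / len(reqs)

def unique_contents_cached(caches):
    # 'caches' maps node id -> set of content identifiers currently stored (cache diversity)
    return len(set().union(*caches.values()))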
Environment Set-up & Parameters Used
• Six real network topologies were considered:
– Abilene (12 core nodes), Geant (22), Germany50 (50), India35 (35), Exodus US (79) & Ebone Europe (87)
• Number of servers – one
• Nodes randomly connected to the server – 7% to 8%
• Nodes randomly connected to clients – 50% to 55%
• Size of content population – 1000 × number of core nodes
• Cache size per node – 100 contents
• Network cache budget – 10% of content population
• Popularity distribution – Zipfian (α = 0.8)
• Latency from edge nodes to server – 100 ms
• Content size – homogeneous (unit size)
• Network regime – congestion free
• Replacement policy – LRU
• Forwarding over the shortest path based on link latency
• Total number of requests simulated – 500,000
A sketch of a workload generator matching these parameters is shown below.
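Below is a sketch of a workload generator matching the stated parameters (Zipfian popularity with α = 0.8, homogeneous unit-size contents, 1000 contents per core node). The Exodus US topology is used as an example; the names and structure are assumptions, not the authors' simulator.

import random

ALPHA = 0.8                          # Zipf skew parameter from the set-up
NUM_CORE_NODES = 79                  # e.g. Exodus US
CATALOGUE = 1000 * NUM_CORE_NODES    # size of the content population
NUM_REQUESTS = 500_000               # total requests per simulation run

# Zipfian popularity: probability of requesting the content of rank k is proportional to 1 / k^alpha
weights = [1.0 / (k ** ALPHA) for k in range(1, CATALOGUE + 1)]

def generate_requests(n=NUM_REQUESTS, seed=0):
    rng = random.Random(seed)
    # draw content ranks with replacement according to the (unnormalised) Zipf weights
    return rng.choices(range(1, CATALOGUE + 1), weights=weights, k=n)

sample = generate_requests(n=1_000)  # small sample for illustration

In a full run, each drawn rank would be issued as an Interest from one of the client-facing nodes (50% to 55% of the nodes, per the set-up).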
Result of Performance Evaluation
• Six network topologies and three caching techniques result in eighteen different scenarios.
• Ten simulations were run per scenario; the results depict mean values with standard deviations.
• Overall values of hit ratio and average retrieval delay are computed over all the requests for all the contents.
Though edge caching performs better than on-path caching, off-path caching performs the best.
[Figures: cumulative external traffic (Exodus US), content-wise hit ratio (Exodus US), and content-wise average retrieval delay.]
Conclusion & Motivation
• Off-path caching performs the best as compared to on-path and edge caching.
• However, let us review the content-wise average retrieval delay plot.
Conclusion & Motivation
[Figure: content-wise average retrieval delay; zooming in on the top-most popular contents reveals a clear gap in delay between the caching techniques.]
Problem Targeted
[Figure: content-wise average retrieval delay]
Is it possible to devise a caching technique that
– achieves the minimum content-wise average retrieval delay for the top-most popular contents, like edge caching, while
– maintaining overall performance very close to that of off-path caching?
Aim - Second
• To couple off-path caching with edge caching → a hybrid
– that could reduce the average retrieval delay for the top-most popular contents while
– marginally sacrificing the other relevant performance metrics.
We propose hybrid caching → coupling off-path caching with edge caching.
EDOP (EDge Off-Path) Caching
• Simple coupling results in two debilitating issues:
– reduction in cache diversity due to content duplication,
– blind (edge) caching at boundary nodes → hogs the limited cache space.
• A flavor of edge caching is therefore introduced into off-path caching:
– the content stores (CS) at the edge nodes are partitioned,
– a tuning parameter T → the percentage of storage reserved for edge caching.
[Figure: partitioning of the content store at edge nodes using the tuning parameter 'T', shown on the R1–R8 example topology.]
• Content selection is made before edge caching:
– a FIFO queue is used for reference counting, i.e. popularity estimation.
• Popular contents, as estimated by the FIFO queue, are pre-fetched.
A conceptual sketch of the edge-node content store follows.
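The following is a conceptual sketch of the EDOP content store at an edge node as we read it from the description above (partition controlled by T, FIFO-based reference counting, admission only for popular contents); the class and method names are assumptions, not the authors' implementation.

from collections import deque, Counter, OrderedDict

class EdgeContentStore:
    """Edge-node content store split into an edge partition and an off-path partition."""
    def __init__(self, capacity=100, T=0.2, fifo_len=1000):
        self.edge_cap = int(capacity * T)        # fraction T reserved for edge caching
        self.offpath_cap = capacity - self.edge_cap
        self.edge_part = OrderedDict()           # LRU store for locally popular contents
        self.offpath_part = OrderedDict()        # LRU store serving the off-path role
        self.fifo = deque(maxlen=fifo_len)       # sliding window of requested names (reference counting)

    def note_request(self, name):
        self.fifo.append(name)                   # popularity estimation via the FIFO queue

    def popular_names(self, k):
        return {n for n, _ in Counter(self.fifo).most_common(k)}

    def admit_edge(self, name, data):
        # Admit at the edge only contents currently estimated as popular,
        # so that blind caching does not hog the limited space.
        if name not in self.popular_names(self.edge_cap):
            return False
        self._insert(self.edge_part, self.edge_cap, name, data)
        return True

    def admit_offpath(self, name, data):
        # Contents assigned to this node by the off-path scheme use the other partition.
        self._insert(self.offpath_part, self.offpath_cap, name, data)

    @staticmethod
    def _insert(store, cap, name, data):
        store[name] = data
        store.move_to_end(name)
        while len(store) > cap:
            store.popitem(last=False)            # LRU eviction, matching the set-up's policy

Pre-fetching could then be realised by issuing Interests for names returned by popular_names() that are not yet present in the edge partition.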
Results and Discussion
Caching  | Hit Ratio        | Average Retrieval Delay | Unique Contents Cached
On-Path  | 0.0944 (±0.0003) | 100.7412 (±0.0352)      | 2527 (±13)
Edge     | 0.1027 (±0.0003) | 99.6784 (±0.0267)       | 5810 (±12)
Off-Path | 0.4637 (±0.0001) | 84.4653 (±0.1751)       | 7900 (±0)
EDOP     | 0.4545 (±0.0003) | 84.0432 (±0.1914)       | 7465 (±2)
• Compared with off-path caching, EDOP loses less than 1 percentage point of hit ratio and fewer than 6% of the unique cached contents, while its average retrieval delay is marginally lower.
• The gain in content-wise average retrieval delay for the top-most popular contents ranges from 88% down to 3%,
• at the cost of at most 6% deterioration in the other relevant metrics.
A quick numerical check of these gaps is shown below.
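As a quick check of the annotated gaps (reading the hit-ratio gap in percentage points), using the values from the table above:

offpath_hit, edop_hit = 0.4637, 0.4545
offpath_unique, edop_unique = 7900, 7465

hit_drop_pp = (offpath_hit - edop_hit) * 100                              # ~0.92 percentage points, i.e. < 1
unique_drop_pct = 100 * (offpath_unique - edop_unique) / offpath_unique   # ~5.5 %, i.e. < 6

print(f"hit ratio drop: {hit_drop_pp:.2f} percentage points")
print(f"unique cached contents drop: {unique_drop_pct:.2f} %")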
Conclusion and Future Scope
• The twofold contribution of the paper is as follows:
– Empirically, it has been shown that off-path caching outperforms the on-path and edge caching techniques.
– Hybrid caching such as EDOP has the potential to improve the performance of in-network caching.
• Issues to be targeted in future work:
– What is the optimum value of T, and how should it be determined?
– How can edge caches be made to retain the most popular contents?
Thank You
