Some Interesting Directions in Network Coding – Presentation Transcript
1.
Some interesting directions in network coding Muriel Médard Electrical Engineering and Computer Science Department Massachusetts Institute of Technology
[Figure: butterfly network with source s and sinks y and z; bits b1 and b2 are mixed as b1 + b2 on the shared middle link]
Must we consider the optimization of codes and network usage jointly?
7.
Randomized network coding - multicast
To recover symbols at the receivers, we require sufficient degrees of freedom – an invertible matrix in the coefficients of all nodes
The realization of the determinant of the matrix will be non-zero with high probability if the coefficients are chosen independently and randomly
Probability of success over a field F of size q is at least (1 − d/q)^η, where d is the number of receivers and η is the number of links carrying random coefficients
Randomized network coding can use any multicast subgraph which satisfies min-cut max-flow bound [Ho et al. 03] any number of sources, even when correlated [Ho et al. 04]
[Figure: coding at node j, which combines its endogenous inputs with an exogenous input]
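The invertibility condition above can be checked empirically. A minimal sketch (assuming arithmetic over the small prime field GF(257); practical systems often use GF(2^8) instead) draws i.i.d. uniform coefficients and measures how often the resulting matrix has full rank:

```python
import random

Q = 257  # small prime field for illustration; practice often uses GF(2^8)

def rank_mod_q(matrix, q=Q):
    """Rank of a matrix over GF(q) via Gaussian elimination (q prime)."""
    m = [row[:] for row in matrix]
    rank = 0
    for col in range(len(m[0])):
        pivot = next((r for r in range(rank, len(m)) if m[r][col] % q != 0), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        inv = pow(m[rank][col], q - 2, q)       # inverse by Fermat's little theorem
        m[rank] = [(x * inv) % q for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col] % q != 0:
                f = m[r][col]
                m[r] = [(a - f * b) % q for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# Fraction of k x k matrices of i.i.d. uniform coefficients with full rank;
# for a large field this is close to 1, matching the high-probability claim.
k, trials = 4, 2000
full = sum(rank_mod_q([[random.randrange(Q) for _ in range(k)]
                       for _ in range(k)]) == k
           for _ in range(trials))
print(full / trials)
```

The observed fraction approaches the product (1 − q⁻¹)(1 − q⁻²)···(1 − q⁻ᵏ), which is near 1 for large q.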
8.
Erasure reliability – single flow
End-to-end erasure coding: capacity is (1 − ε1)(1 − ε2) packets per unit time, where ε1 and ε2 are the erasure probabilities of the two links
As two separate channels: capacity is min(1 − ε1, 1 − ε2) packets per unit time
- Can use block erasure coding on each channel, but delay is a problem
Network coding: minimum cut is capacity
- For erasures, correlated or not, we can in the multicast case deal with average flows uniquely [Lun et al. 04, 05], [Dana et al. 04]:
- Nodes store received packets in memory
Random linear combinations of memory contents sent out
Delay expressions generalize Jackson networks to the innovative packets
Can be used in a rateless fashion
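The store-and-recode behavior above can be sketched as follows (hypothetical parameters: generation size K = 4, per-link erasure probability 0.3, field GF(257)); each node keeps received coded packets in memory and emits fresh random linear combinations of everything it holds, and the relay never decodes:

```python
import random

random.seed(0)
Q, K, ERASE = 257, 4, 0.3     # illustrative field size, generation size, loss rate

# Source packets: (coefficient vector over the K source symbols, payload symbols)
src = [([int(i == j) for j in range(K)],
        [random.randrange(Q) for _ in range(8)]) for i in range(K)]

def recode(memory):
    """Send a fresh random linear combination of a node's entire memory."""
    coeffs, data = [0] * K, [0] * len(memory[0][1])
    for cv, d in memory:
        a = random.randrange(Q)
        coeffs = [(x + a * y) % Q for x, y in zip(coeffs, cv)]
        data = [(x + a * y) % Q for x, y in zip(data, d)]
    return coeffs, data

mem_B, mem_C = [], []
for _ in range(60):                        # each slot: one attempt on AB and on BC
    if random.random() > ERASE:
        mem_B.append(recode(src))          # A emits a combination of the generation
    if mem_B and random.random() > ERASE:
        mem_C.append(recode(mem_B))        # B recodes from memory -- no decoding

# C can decode as soon as K of its coefficient vectors are linearly independent,
# with no per-packet feedback needed: the scheme is rateless.
print(len(mem_B), len(mem_C))
```

Because B forwards combinations of its memory rather than individual packets, losses on the two links do not compound, which is how the min-cut rate becomes achievable.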
9.
Feedback for reliability
Parameters we consider:
delay incurred at B: excess time, relative to the theoretical minimum, that it takes for k packets to be communicated, disregarding any delay due to the use of the feedback channel
block size
feedback: number of feedback packets used
(feedback rate R f = number of feedback messages / number of received packets)
memory requirement at B
achievable rate from A to C
10.
Feedback for reliability
Follow the approach of [Pakzad et al. 05], [Lun et al. 06]
Scheme V allows us to achieve the min-cut rate while keeping the average memory requirement at node B finite
Note that the feedback delay for Scheme V is smaller than that of usual ARQ (with R f = 1) by a factor of R f
Feedback is required only on link BC [Fragouli et al. 07]
11.
Interesting directions
Practical code design:
Using small generation sizes may reduce the throughput and erasure-correcting benefits of mixing information packets
Large generation sizes may incur unacceptable decoding delay at the receivers
Can we consider issues of delay, memory and feedback overhead for interesting code designs?
How do we take these issues into account when we use multicast rather than single flow approaches?
Parameter adaptation for delay-sensitive applications:
Feedback from the receivers to the source can be used to adjust adaptively the generation size and maximize the number of packets successfully decoded within the delay specifications.
The source response to this type of feedback is similar to TCP windows
Can we build an entire TCP-style suite for single network coded flows?
Errors – see Ralf’s talk!
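The TCP-window analogy can be pictured with a purely hypothetical controller (not a scheme from the talk): adjust the generation size additively when receivers decode within the delay budget, and multiplicatively when they miss it, AIMD-style:

```python
def adapt_generation_size(gen, decoded_in_time, g_min=4, g_max=128):
    """AIMD update: additive increase while receivers decode within the delay
    budget, multiplicative decrease when they report a miss."""
    if decoded_in_time:
        return min(gen + 1, g_max)
    return max(gen // 2, g_min)

gen = 32
for ok in (True, True, False, True):      # feedback reports from the receivers
    gen = adapt_generation_size(gen, ok)
print(gen)   # 32 -> 33 -> 34 -> 17 -> 18
```

The parameters g_min and g_max are illustrative guards; a real controller would also weigh decoding complexity against the erasure-correction benefit of larger generations.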
12.
Limited network coding with multicast
Difficulty of not allowing coding everywhere:
Finding a minimal set of coding nodes or links is NP-hard
Finding multicast codes when some nodes are not able to code is difficult
We associate a binary variable with each coefficient at a merging node:
0 means the coefficient is zeroed, 1 means it remains indeterminate
For each assignment of binary values to the variables, we can verify the achievability of the target rate R and determine whether coding is required
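For intuition, here is a toy version of this search on the butterfly network (an illustrative two-source, two-receiver example, not the general procedure): each coefficient at the merging node carries a binary mask, and we test which assignments still support rate R = 2.

```python
from itertools import product

def achieves_rate_2(a1, a2):
    """Butterfly network: the merging node sends a1*b1 + a2*b2. A masked
    coefficient (0) is zeroed; an unmasked one (1) remains indeterminate,
    treated here as a generic nonzero value."""
    m1 = [[1, 0], [a1, a2]]   # what receiver 1 sees: b1 and the mixture
    m2 = [[0, 1], [a1, a2]]   # what receiver 2 sees: b2 and the mixture
    det = lambda m: m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return det(m1) != 0 and det(m2) != 0

for a1, a2 in product((0, 1), repeat=2):
    print((a1, a2), achieves_rate_2(a1, a2))
# Only (1, 1) -- genuine coding at the merging node -- achieves R = 2.
```

The exhaustive enumeration over masks is what makes the general problem hard: the number of assignments grows exponentially in the number of merging-node coefficients.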
Network coding and distributed compression are intimately linked [Ho et al. 04] – we may envisage:
Network coding for correlated sources, making use of naturally occurring correlation
Designing sources with correlation, rather than the straightforward replication currently done in mirrors
Coding and decoding that meld erasure coding, multicast coding and compression
Rather than only shedding redundancy in networks, network coding points to using it and designing it intelligently
17.
Optimization for multicast network coding
[Figure: source and sinks, with indicator vectors on the links showing which receivers' flows use each link]
Index on receivers rather than on processes [Lun et al. 04]
The Steiner-tree problem can be seen as this problem with extra integrality constraints
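A minimal sketch of the receiver-indexed formulation on the butterfly network (assumptions: unit edge costs and capacities, rate R = 2, SciPy's linprog as the solver): each sink gets its own flow, and the rate z_e bought on an edge only has to cover the maximum, not the sum, of the per-sink flows, since coding lets the receivers share capacity.

```python
from scipy.optimize import linprog

# Variables: z_e = rate bought on edge e; x_e^t = flow toward sink t on edge e.
edges = [("s","a"), ("s","b"), ("a","t1"), ("a","c"), ("b","t2"),
         ("b","c"), ("c","d"), ("d","t1"), ("d","t2")]
sinks, R, E = ["t1", "t2"], 2, len(edges)

c = [1.0] * E + [0.0] * (2 * E)          # minimize total cost of z; x is free

A_ub, b_ub = [], []                      # x_e^t <= z_e for every edge and sink
for ti in range(len(sinks)):
    for e in range(E):
        row = [0.0] * (3 * E)
        row[E + ti * E + e], row[e] = 1.0, -1.0
        A_ub.append(row); b_ub.append(0.0)

A_eq, b_eq = [], []                      # per-sink flow conservation
nodes = ["a", "b", "c", "d", "t1", "t2"]
for ti, t in enumerate(sinks):
    for n in nodes:
        row = [0.0] * (3 * E)
        for e, (u, v) in enumerate(edges):
            if v == n: row[E + ti * E + e] += 1.0
            if u == n: row[E + ti * E + e] -= 1.0
        A_eq.append(row); b_eq.append(float(R) if n == t else 0.0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * (3 * E))
print(res.fun)    # 9.0: every edge carries rate 1, shared by both receivers
```

Adding integrality constraints on z would turn this relaxation back into a Steiner-tree-style packing problem, which is the sense in which the LP is a relaxation.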
18.
Joint versus separate coding for each link (R = 3): joint cost 9, separate cost 10.5 [Lee et al. 07]
19.
Interesting directions
Making use of the joint coding:
Complexity goes up with the number of sources
How much better does this perform than doing Slepian-Wolf first, followed by routing or network coding?
How dependent is the design on knowing actual correlation parameters?
Practical code design for such schemes:
Achievability comes from random code construction, uses minimum-entropy decoding
Can we use the practical techniques that have yielded good results in Slepian-Wolf in this type of network coding?
Generalize mirror site design:
Do not copy a whole site, but just certain portions
How does this affect the storage in and operation of networks?
20.
Going beyond multicast
Can create algebraic setting for linear non-multicast connections [Koetter Medard 02,03]
In the non-multicast case, linear codes do not suffice [Dougherty et al. 05]
Limited code approaches: ability to use XOR
Opportunistic XORs that are undone immediately (COPE) [Katabi et al. 05, 06]
End-to-end XOR codes on 2 flows [Traskov et al. 06] using cycle approaches
These approaches perform at least as well as routing, since they trivially subsume it
Generalizations to codes including more flows, intermediate decoding points, or codes beyond XORs can be envisaged
A plethora of elaborations can be developed, leading to increased complexity with further benefits – trade-off unclear
[Figure: net throughput (KB/s) versus number of flows, comparing no coding against the XOR scheme]
24.
A principled optimization approach to match or outperform routing
An optimization that yields a solution that is no worse than multicommodity flow
The optimization is in effect a relaxation of multicommodity flow – akin to Steiner tree relaxation for the multicast case
A solution of the problem implies the existence of a network code to accommodate the arbitrary demands – the types of codes subsume routing
All decoding is performed at the receivers
We can provide an optimization, with a linear code construction, that is guaranteed to perform as well as routing [Lun et al. 04]
25.
Optimization
Optimization for arbitrary demands with decoding at the receivers gives a set partition of {1, . . . , M} representing the sources that can be mixed (combined linearly) on the links going into node j
[Figure: demands, subsets of {1, . . . , M}, at sink t]
26.
Coding and optimization
Sinks that receive a source process in C by way of link (j, i) either receive all the source processes in C or none at all
Hence the source processes in C can be mixed on link (j, i), as the sinks that receive the mixture will also receive the source processes (or mixtures thereof) necessary for decoding
We step through the nodes in topological order, examining the outgoing links and defining global coding vectors on them (akin to [Jaggi et al. 03])
We can build the code over an ever-expanding front
We can go to coding over time by considering several flows for the different times – we let the coding delay be arbitrarily large
The optimization and the coding are done separately as for the multicast case, but the coding is not distributed
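The front-based construction can be sketched on the butterfly network (illustrative; coefficients are drawn nonzero from GF(257) so that this particular example always succeeds, whereas drawing from the whole field succeeds only with high probability): nodes are visited in topological order, and each outgoing edge receives a global coding vector that is a random combination of the vectors entering its tail node.

```python
import random

Q = 257                                   # prime field for illustration
K = 2                                     # two source processes
edges = [("s","a"), ("s","b"), ("a","t1"), ("a","c"), ("b","t2"),
         ("b","c"), ("c","d"), ("d","t1"), ("d","t2")]
order = ["s", "a", "b", "c", "d"]         # topological order of tail nodes

gvec, unit = {}, iter(([1, 0], [0, 1]))
for n in order:                           # expand the coded front node by node
    incoming = [v for e, v in gvec.items() if e[1] == n]
    for e in edges:
        if e[0] != n:
            continue
        if n == "s":
            gvec[e] = next(unit)          # source edges carry the raw processes
        else:                             # random combination of incoming vectors
            coeffs = [random.randrange(1, Q) for _ in incoming]
            gvec[e] = [sum(a * v[i] for a, v in zip(coeffs, incoming)) % Q
                       for i in range(K)]

def det2(m):                              # 2x2 determinant over GF(Q)
    return (m[0][0] * m[1][1] - m[0][1] * m[1][0]) % Q

for t in ("t1", "t2"):
    mat = [gvec[e] for e in edges if e[1] == t]
    print(t, "decodable:", det2(mat) != 0)
```

The dictionary of global coding vectors is exactly the "ever-expanding front": an edge's vector depends only on edges already processed, so the construction never revisits a node.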
27.
Fix the code approach – conflict hypergraph
There may be occasions when we are not willing to go to infinite code lengths, or when the types of codes are pre-determined in our network, with different codes at different nodes
In that case, we can adopt a conflict hypergraph representation of the effects of coding and allowable rate regions together
Recent developments consider intrinsic multicast in switches [Sundarajan et al. 04] and special fabrics [Caramanis et al. 04]
This provides a systematic approach to representing the capacity region of a coded system for arbitrary codes
Vertices:
Define one vertex for each possible “composition of information” on every link
The composition of information on a link is the net transfer function from the source messages to the symbol sent on the link
Edges:
In a valid code, at most one vertex can be chosen for each link
If the composition on an outgoing link at a node is incompatible with a set of incoming input compositions, then the corresponding vertices are connected by a hyperedge
Natural extension of switching approaches in networks
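As a toy instance of the construction (assumption: a single relay with two incoming links and one outgoing link, two source messages over GF(2); far simpler than the switch constructions cited above): vertices are (link, composition) pairs, and a hyperedge marks an outgoing composition that cannot be formed from a given pair of incoming compositions.

```python
from itertools import product

# A "composition" is the transfer vector from the source messages (m1, m2)
# to the symbol carried on a link; we consider the nonzero GF(2) vectors.
comps = [(1, 0), (0, 1), (1, 1)]

def span_gf2(vectors):
    """All GF(2) linear combinations of the given transfer vectors."""
    out = {(0, 0)}
    for v in vectors:
        out |= {(a ^ v[0], b ^ v[1]) for a, b in out}
    return out

# Hyperedge: the outgoing composition o is NOT producible from inputs (i1, i2).
hyperedges = [(i1, i2, o)
              for i1, i2, o in product(comps, repeat=3)
              if o not in span_gf2([i1, i2])]

# A valid code chooses one composition per link while avoiding every hyperedge.
valid = [(i1, i2, o) for i1, i2, o in product(comps, repeat=3)
         if (i1, i2, o) not in hyperedges]
print(len(hyperedges), len(valid))
```

Only when the two incoming compositions coincide is the relay constrained; enumerating independent sets of the hypergraph is what yields the capacity region of the coded system.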
28.
Interesting directions
Design of codes:
How far should we go?
What are the advantages and disadvantages of fixing the lengths and fields ahead of time?
Should we be looking at non-linear codes?
Can we find some distributed approaches?
Performance evaluation:
Can we use properties of certain conflict graphs to obtain capacity regions?
Can we generalize the optimization approach, for instance when certain nodes can do intermediate decoding?