Presented at 2010 IEEE Conference on Decision and Control.
We investigate the continuum limits of a class of Markov chains. The investigation of such limits is motivated by the desire to model very large networks. We show that, under some conditions, a sequence of Markov chains converges in some sense to the solution of a partial differential equation. Based on this convergence, we approximate Markov chains modeling networks with a large number of components by partial differential equations. While traditional numerical simulation of very large networks is practically infeasible, partial differential equations can be solved with reasonable computational overhead using well-established mathematical tools.
On Continuum Limits of Markov Chains and Network Modeling
2010 IEEE Conference on Decision and Control
Yang Zhang (Colorado State University)
Edwin K. P. Chong (Colorado State University)
Jan Hannig (University of North Carolina)
Don Estep (Colorado State University)
2010 CDC (Atlanta, GA), December 17, 2010
Motivation
Networks modeled by Markov chains (discrete).
Traditional approach: Monte Carlo simulation.
For very large networks (large number of components): practically infeasible.
Motivation
ScienceDaily, Aug. 12, 2003
Researchers Create The World's Fastest Detailed Computer Simulations Of The Internet
"The Georgia Tech researchers have demonstrated the ability to simulate network traffic from over 1 million web browsers in near real time. This feat means that the simulators could model a minute of such large-scale network operations in only a few minutes of clock time. Using the high-performance computers at the Pittsburgh Supercomputing Center, the Georgia Tech simulators used as many as 1,534 processors to simultaneously work on the simulation computation, enabling them to model more than 106 million packet transmissions in one second of clock time – two to three orders of magnitude faster than simulators commonly used today."
What if slower computers or larger networks?
Motivation
Continuum limits of Markov chains: solutions of PDEs.
Available mathematical tools for PDEs: great reduction in computation time.
Example: Wireless Sensor Network
Sensor nodes: generate data messages and relay them to destination nodes at the boundary.
All nodes have message queues.
Nodes only communicate with immediate neighbors.
Nodes transmit or receive at each time step.
Channel Model
All nodes share the same wireless channel.
A transmission from a transmitter to a neighboring receiver is successful if and only if none of the other neighbors of the receiver is a transmitter.
Reception at a node fails when more than one of its neighbors transmit.
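This success condition can be written as a small predicate (a minimal sketch; the node identifiers in the example are hypothetical):

```python
def transmission_success(transmitter, transmitting_neighbors):
    """A transmission to a receiver succeeds iff the intended transmitter
    is the only transmitting node among the receiver's neighbors.

    transmitter: id of the intended transmitter (a neighbor of the receiver)
    transmitting_neighbors: set of the receiver's neighbors currently transmitting
    """
    return transmitting_neighbors == {transmitter}

# A lone transmitting neighbor gets through; any concurrent neighbor collides.
print(transmission_success("n1", {"n1"}))        # True
print(transmission_success("n1", {"n1", "n2"}))  # False
```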
Quantities of Interest
Interested in calculating message queue lengths at nodes.
Index nodes by (i, j); distances between neighboring nodes dx and dy.
Notation:
◮ Q(k, i, j): queue length of node (i, j) at time k
◮ M: maximum queue length
◮ F(i, j, Q(k, i, j)): probability that node (i, j) tries to transmit
◮ Set F(i, j, Q(k, i, j)) = Q(k, i, j): nodes transmit with probability proportional to queue length
◮ Pn(k, i, j), Ps(k, i, j), Pe(k, i, j), Pw(k, i, j): probability of transmitting to north, south, east, west (resp.):
\[
P_e(k,i,j) = \tfrac{1}{4} + c_e(k,i,j)\,dx, \qquad P_w(k,i,j) = \tfrac{1}{4} + c_w(k,i,j)\,dx,
\]
\[
P_n(k,i,j) = \tfrac{1}{4} + c_n(k,i,j)\,dy, \qquad P_s(k,i,j) = \tfrac{1}{4} + c_s(k,i,j)\,dy.
\]
◮ U(k, i, j): number of messages generated at node (i, j) at time k
Stochastic Difference Equation
\[
Q(k+1,i,j) - Q(k,i,j) =
\begin{cases}
\dfrac{1 + U(k,i,j)}{M} & \text{with probability } p_{\mathrm{rx}}(k,i,j), \\[6pt]
\dfrac{-1 + U(k,i,j)}{M} & \text{with probability } p_{\mathrm{tx}}(k,i,j), \\[6pt]
\dfrac{U(k,i,j)}{M} & \text{otherwise,}
\end{cases}
\]
where
\begin{align*}
p_{\mathrm{rx}}(k,i,j) = {} & (1 - Q(k,i,j)) \\
& \times [\, P_w(k,i-1,j)\,Q(k,i-1,j)\,(1-Q(k,i+1,j))(1-Q(k,i,j+1))(1-Q(k,i,j-1)) \\
& \quad + P_e(k,i+1,j)\,Q(k,i+1,j)\,(1-Q(k,i-1,j))(1-Q(k,i,j+1))(1-Q(k,i,j-1)) \\
& \quad + P_n(k,i,j-1)\,Q(k,i,j-1)\,(1-Q(k,i+1,j))(1-Q(k,i-1,j))(1-Q(k,i,j+1)) \\
& \quad + P_s(k,i,j+1)\,Q(k,i,j+1)\,(1-Q(k,i+1,j))(1-Q(k,i-1,j))(1-Q(k,i,j-1)) \,], \\
p_{\mathrm{tx}}(k,i,j) = {} & Q(k,i,j) \\
& \times [\, P_w(k,i,j)\,(1-Q(k,i-1,j))(1-Q(k,i-1,j+1))(1-Q(k,i-1,j-1))(1-Q(k,i-2,j)) \\
& \quad + P_e(k,i,j)\,(1-Q(k,i+1,j))(1-Q(k,i+1,j+1))(1-Q(k,i+1,j-1))(1-Q(k,i+2,j)) \\
& \quad + P_s(k,i,j)\,(1-Q(k,i,j-1))(1-Q(k,i+1,j-1))(1-Q(k,i-1,j-1))(1-Q(k,i,j-2)) \\
& \quad + P_n(k,i,j)\,(1-Q(k,i,j+1))(1-Q(k,i+1,j+1))(1-Q(k,i-1,j+1))(1-Q(k,i,j+2)) \,].
\end{align*}
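As a sanity check on the first two branches, the sketch below (not from the paper) evaluates the receive and send probabilities at an interior node in the symmetric case where every directional probability equals 1/4 (the c terms set to zero). For a spatially uniform profile q, both probabilities reduce to q(1 − q)⁴, so the mean drift then comes only from message generation.

```python
import numpy as np

def branch_probs(Q, i, j, P=0.25):
    """Receive/send branch probabilities of the stochastic difference
    equation at interior node (i, j), with all directional transmit
    probabilities equal to the constant P (c terms set to zero)."""
    q = Q  # normalized queue lengths, values in [0, 1]
    p_recv = (1 - q[i, j]) * (
        P * q[i-1, j] * (1-q[i+1, j]) * (1-q[i, j+1]) * (1-q[i, j-1])
      + P * q[i+1, j] * (1-q[i-1, j]) * (1-q[i, j+1]) * (1-q[i, j-1])
      + P * q[i, j-1] * (1-q[i+1, j]) * (1-q[i-1, j]) * (1-q[i, j+1])
      + P * q[i, j+1] * (1-q[i+1, j]) * (1-q[i-1, j]) * (1-q[i, j-1])
    )
    p_send = q[i, j] * (
        P * (1-q[i-1, j]) * (1-q[i-1, j+1]) * (1-q[i-1, j-1]) * (1-q[i-2, j])
      + P * (1-q[i+1, j]) * (1-q[i+1, j+1]) * (1-q[i+1, j-1]) * (1-q[i+2, j])
      + P * (1-q[i, j-1]) * (1-q[i+1, j-1]) * (1-q[i-1, j-1]) * (1-q[i, j-2])
      + P * (1-q[i, j+1]) * (1-q[i+1, j+1]) * (1-q[i-1, j+1]) * (1-q[i, j+2])
    )
    return p_recv, p_send

# Uniform profile: both branch probabilities equal q(1-q)^4 (about 0.08192 here).
Q = np.full((7, 7), 0.2)
pr, ps = branch_probs(Q, 3, 3)
print(pr, ps)
```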
Monte Carlo Simulation Result
Continuum Limit: PDE Solution
\[
\frac{\partial q}{\partial t}(t,x,y) = \nabla \cdot \left[\, \frac{1}{4}\,(1 - q)^{3}(1 + 5q)\,\nabla q \;+\; \begin{pmatrix} c_w - c_e \\ c_s - c_n \end{pmatrix} q\,(1 - q)^{4} \right] + u(x,y),
\]
with q and the c's evaluated at (t, x, y).
PDE Solution
Monte Carlo Simulation Result
Close? Why?
In what sense, and under what conditions, are they close?
These questions are answered in this paper.
First, look at a simpler example of a continuum limit.
Single Random Walk
[Diagram: a particle on points 0, 1, …, n − 1, n, n + 1, …, N, N + 1; it steps left or right with probability 1/2 each]
ds ∝ 1/N: distance between neighboring points
dt: length of a time step
Scaling law: dt = ds²
As N → ∞, ds, dt → 0.
Deriving Diffusion Equation
P(k, n): probability that the particle is at point n at time k
\[
P(k+1, n) = \tfrac{1}{2} P(k, n-1) + \tfrac{1}{2} P(k, n+1)
\]
\[
P(k+1, n) - P(k, n) = \tfrac{1}{2}\big(P(k, n-1) - P(k, n)\big) + \tfrac{1}{2}\big(P(k, n+1) - P(k, n)\big)
\]
Change notation: s = n · ds, t = k · dt, p(t, s) = P(k, n):
\[
p(t+dt, s) - p(t, s) = \tfrac{1}{2}\big(p(t, s-ds) - p(t, s)\big) + \tfrac{1}{2}\big(p(t, s+ds) - p(t, s)\big)
\]
Taylor expansions:
\[
p(t, s \pm ds) = p(t, s) \pm \frac{\partial p}{\partial s}(t, s)\,ds + \frac{1}{2}\frac{\partial^2 p}{\partial s^2}(t, s)\,ds^2 + o(ds^2)
\]
\[
p(t+dt, s) = p(t, s) + \frac{\partial p}{\partial t}(t, s)\,dt + o(dt)
\]
Substituting, the first-order spatial terms cancel; divide by dt = ds² and take the limit as N → ∞ (i.e., ds, dt → 0):
Diffusion equation:
\[
\frac{\partial p}{\partial t}(t, s) = \frac{1}{2}\frac{\partial^2 p}{\partial s^2}(t, s)
\]
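The derivation can be checked numerically (a minimal sketch): iterating the recursion P(k + 1, n) = ½P(k, n − 1) + ½P(k, n + 1) from a point mass, the variance of P grows exactly as k, i.e., as k · ds² = t under the scaling dt = ds², which matches the point-source solution of the diffusion equation. The domain is taken wide enough that the boundary is never reached.

```python
import numpy as np

N = 401                      # lattice points, large enough to ignore boundaries
K = 100                      # number of time steps (K < N // 2, so no wrap-around)
P = np.zeros(N)
P[N // 2] = 1.0              # particle starts at the center

for _ in range(K):
    # P(k+1, n) = 1/2 P(k, n-1) + 1/2 P(k, n+1)
    P = 0.5 * np.roll(P, 1) + 0.5 * np.roll(P, -1)

n = np.arange(N)
mean = (n * P).sum()
var = ((n - mean) ** 2 * P).sum()
# Variance after k steps is k in lattice units, i.e. k * ds^2 = t under dt = ds^2.
print(var)
```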
Multiple Random Walks
[Diagram: multiple particles on points 0, 1, …, N, N + 1]
Consider M i.i.d. random walks on the same domain.
X(k, n): number of particles at point n at time k
X(k) = [X(k, 1), . . . , X(k, N)]: Markov chain
X(k, n)/M → P(k, n) a.s. as M → ∞ (SLLN).
As M, N → ∞, X(k) converges to the solution of the diffusion equation.
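The SLLN step can be illustrated directly (a sketch, not the paper's construction): simulate M independent walks for k steps and compare the empirical occupancy fractions X(k, n)/M with the exact law P(k, n), which after k steps from the start is a shifted binomial distribution.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
M, k = 200_000, 10                       # number of walks, number of steps
steps = rng.choice([-1, 1], size=(M, k))
pos = steps.sum(axis=1)                  # positions relative to the start

# Exact law: pos = 2 * Binomial(k, 1/2) - k, supported on {-k, -k+2, ..., k}
support = np.arange(-k, k + 1, 2)
exact = np.array([comb(k, (k + m) // 2) / 2 ** k for m in support])

empirical = np.array([(pos == m).mean() for m in support])
# Maximum deviation shrinks as M grows: X(k, n)/M -> P(k, n) a.s.
print(np.abs(empirical - exact).max())
```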
Compare
[Figure: distributions over [−1, 1], vertical scale 0 to 0.05; ◦ Monte Carlo simulation, —— PDE solution]
Convergence Analysis
Back to our problem: much more complicated than the random walk.
Explanation of the "look-alike" phenomenon: convergence of a sequence of Markov chains.
General Setting
D: fixed subset of R^d
N nodes at n = 1, . . . , N uniformly placed over D.
X(k, n) ∈ R: network state of node n at time k
X(k) = [X(k, 1), . . . , X(k, N)] ∈ R^N: Markov chain
Stochastic difference equation (system behavior):
◮ X(k + 1) = X(k) + FN(X(k)/M, U(k))
◮ FN: random transition function
◮ M: “normalizing” parameter; M → ∞ as N → ∞
◮ U(k): i.i.d. random variables independent of X(k)
General Setting
Wireless Sensor Network: Stochastic Difference Equation
X: message queue length
M: maximum queue length
U: number of messages generated by nodes
(The stochastic difference equation for Q(k + 1, i, j) − Q(k, i, j) as given earlier.)
General Setting
fN(x) = E[FN(x, U(k))]: mean transition function
Define the nonrandom sequence x(k) by the deterministic difference equation (mean behavior):
x(k + 1) = x(k) + fN(x(k)/M)
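A toy illustration of the system/mean pair (the transition function here is hypothetical, not the sensor-network one): with FN(x, u) = α(u − x), u ∼ Bernoulli(p), and M = 1, the mean transition function is fN(x) = α(p − x), and because this FN is linear in x, E[X(k)] satisfies the deterministic recursion exactly, so sample averages of the chain track x(k).

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, p, K, paths = 0.1, 0.7, 200, 20_000

# System behavior: X(k+1) = X(k) + F(X(k), U(k)), with F(x, u) = alpha * (u - x)
X = np.zeros(paths)
for _ in range(K):
    U = rng.binomial(1, p, size=paths)   # i.i.d. inputs U(k)
    X = X + alpha * (U - X)

# Mean behavior: x(k+1) = x(k) + f(x(k)), with f(x) = E F(x, U) = alpha * (p - x)
x = 0.0
for _ in range(K):
    x = x + alpha * (p - x)

# The sample mean of the chain tracks the deterministic sequence (both near p).
print(X.mean(), x)
```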
General Setting
Wireless Sensor Network: Deterministic Difference Equation
\begin{align*}
q(k+1,i,j) - q(k,i,j) = \frac{1}{M}\Big\{ & (1 - q(k,i,j)) \\
& \times [\, P_w(k,i-1,j)\,q(k,i-1,j)\,(1-q(k,i+1,j))(1-q(k,i,j+1))(1-q(k,i,j-1)) \\
& \quad + P_e(k,i+1,j)\,q(k,i+1,j)\,(1-q(k,i-1,j))(1-q(k,i,j+1))(1-q(k,i,j-1)) \\
& \quad + P_n(k,i,j-1)\,q(k,i,j-1)\,(1-q(k,i+1,j))(1-q(k,i-1,j))(1-q(k,i,j+1)) \\
& \quad + P_s(k,i,j+1)\,q(k,i,j+1)\,(1-q(k,i+1,j))(1-q(k,i-1,j))(1-q(k,i,j-1)) \,] \\
& - q(k,i,j) \\
& \times [\, P_w(k,i,j)\,(1-q(k,i-1,j))(1-q(k,i-1,j+1))(1-q(k,i-1,j-1))(1-q(k,i-2,j)) \\
& \quad + P_e(k,i,j)\,(1-q(k,i+1,j))(1-q(k,i+1,j+1))(1-q(k,i+1,j-1))(1-q(k,i+2,j)) \\
& \quad + P_s(k,i,j)\,(1-q(k,i,j-1))(1-q(k,i+1,j-1))(1-q(k,i-1,j-1))(1-q(k,i,j-2)) \\
& \quad + P_n(k,i,j)\,(1-q(k,i,j+1))(1-q(k,i+1,j+1))(1-q(k,i-1,j+1))(1-q(k,i,j+2)) \,] \\
& + u(k,i,j) \Big\}.
\end{align*}
Continuous Time-Space Extension of the Markov Chain
Step 1:
Xo(t̃) = X(⌊M t̃⌋)/M
Piecewise-constant time extension of X(k) with constant piece length dt = 1/M. As M → ∞, dt → 0.
Step 2:
Xp(t, s): piecewise-constant space extension of Xo(t̃) on D with constant piece length ds = distance between neighboring nodes. As N → ∞, ds → 0.
Xp: continuous time-space extension of X(k).
In the same manner, define xo(t̃) and xp(t, s) for x(k).
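Step 1 is a simple rescaling and reindexing; a minimal sketch of the piecewise-constant time extension:

```python
import numpy as np

def time_extension(X, M):
    """Piecewise-constant time extension Xo(t) = X(floor(M*t)) / M of a
    sample path X(0), X(1), ..., with piece length dt = 1/M."""
    X = np.asarray(X, dtype=float)
    def Xo(t):
        k = int(np.floor(M * t))
        return X[k] / M
    return Xo

# Example: M = 4, path X = [0, 2, 3, 1]; Xo is constant on each [k/4, (k+1)/4).
Xo = time_extension([0, 2, 3, 1], M=4)
print(Xo(0.0), Xo(0.26), Xo(0.9))   # 0.0 0.5 0.25
```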
An Illustration of the Time-Space Extension
[Figure: the discrete values x(k, n) on the (n, k) grid and their piecewise-constant time-space extension xp(t, s)]
Main Result
Theorem
Under some conditions, if fN/ds² → f as N → ∞ and z solves the PDE
\[
\frac{\partial z}{\partial t}(s, t) = f\!\left(s,\; z(s,t),\; \frac{\partial z}{\partial s}(s,t),\; \frac{\partial^2 z}{\partial s^2}(s,t)\right),
\]
then Xp, the continuous time-space extension of the Markov chain X(k), converges uniformly to z, the solution to the PDE, as N → ∞ (M → ∞).
Main Result
Wireless Sensor Network: Continuum Limit
\[
\frac{\partial q}{\partial t}(t,x,y) = \nabla \cdot \left[\, \frac{1}{4}\,(1 - q)^{3}(1 + 5q)\,\nabla q \;+\; \begin{pmatrix} c_w - c_e \\ c_s - c_n \end{pmatrix} q\,(1 - q)^{4} \right] + u(x,y),
\]
with q and the c's evaluated at (t, x, y).
Outline of Proof
Xo close to xo for big M ⇒ Xp close to xp for big M.
xp close to z for big N.
Hence Xp close to z for big M, N.
(The theorem is as stated on the previous slide.)
Greater M and N: Increasing Accuracy
[Surface plot over [−1, 1] × [−1, 1]; vertical scale 0 to 0.012]
N = 50, M = 1000
[Surface plot; vertical scale 0 to 0.012]
N = 60, M = 5000
[Surface plot; vertical scale 0 to 0.01]
N = 70, M = 10000
[Surface plot; vertical scale 0 to 8 × 10⁻³]
N = 80, M = 50000
[Surface plot; vertical scale 0 to 8 × 10⁻³]
N = 90, M = 100000
[Surface plot; vertical scale 0 to 8 × 10⁻³]
N = 100, M = 1000000
N = 200, M = 5000000
This took weeks!
PDE Solution
This took seconds.