Distributed Algorithm
for Network Size Estimation
Donggil Lee, Seungjoon Lee, Taekyoo Kim, and Hyungbo Shim
Control & Dynamic Systems Lab. Seoul National University
57th IEEE Conference on Decision and Control
December 18, 2018
We propose a Distributed Algorithm for Network Size Estimation
Network size: total number N of nodes in a given network
Goal: design node dynamics whose individual solutions all
converge to N. We pursue
decentralized design: design of node dynamics doesn't use
much information about the network
distributed algorithm: each node exchanges information only
with its neighbors
2 / 20
Distributed estimation of N is useful in many applications
For example,
Distributed Optimization [Nedic, Ozdaglar (2009)]¹ requires
N to obtain the convergence rate.
Distributed Kalman Filter [Kim, Shim, Wu (2016)]² requires
that N be known to all nodes.
¹ Nedic, Ozdaglar, Distributed subgradient methods for multi-agent
optimization, IEEE TAC, 2009
² Kim, Shim, Wu, On distributed optimal Kalman-Bucy filtering by
averaging dynamics of heterogeneous agents, IEEE CDC, 2016
3 / 20
Distributed estimation of N is not trivial
N is a property of a network and so is a global parameter.
Each node is able to see only its neighbors
4 / 20
Previous results for network size estimation
(Baquero, et al., IEEE Trans. Parallel and Distrib. Sys., 2012)
(Lucchese, Varagnolo, ACC, 2015)
obtain the estimate N̂ in a statistical manner by exchanging M
pieces of information with neighbors
⇒ the estimate is not deterministic:
E[N̂] = N with Var[N̂] = N²/(M − 2)
(Kempe et al., IEEE Symp. Foundations of Comp. Sci., 2003)
(Shames et al., ACC, 2012)
obtain 1/N asymptotically by average consensus
They require an initialization step that must be carried out over the whole network
⇒ not ready for plug-and-play.
5 / 20
The proposed algorithm
Assumptions
1. The communication graph is undirected and connected, with unit
edge weights.
2. ∃ one special node that always belongs to the network; say node 1.
node 1:  ẋ1(t) = 1 − x1(t) + k Σ_{j∈N1} (xj(t) − x1(t))
all other nodes:  ẋi(t) = 1 + k Σ_{j∈Ni} (xj(t) − xi(t))
the gain k will be designed later
the algorithm is simple: only the scalar xi(t) ∈ R is exchanged
the initial condition xi(0) is arbitrary
6 / 20
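The update rule above is easy to test numerically. The following is a minimal sketch (not part of the slides), integrating the proposed dynamics with forward Euler; the 4-node path graph, the gain k = 130, the horizon, and the step size are illustrative assumptions.

```python
import numpy as np

# Illustrative example: N = 4 nodes on a path graph 1-2-3-4 (undirected, unit weights).
neighbors = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
N = len(neighbors)

k = 130.0      # coupling gain; slide 8 will require k > Nbar^3 for a known bound Nbar >= N (130 > 5^3)
dt = 1e-3      # forward-Euler step, small enough for the fast coupling term
x = {i: float(np.random.uniform(0.0, 5.0)) for i in neighbors}   # arbitrary initial conditions

for _ in range(int(50 / dt)):                      # integrate over t in [0, 50]
    xdot = {}
    for i, Ni in neighbors.items():
        coupling = k * sum(x[j] - x[i] for j in Ni)
        # node 1 runs x1' = 1 - x1 + coupling; every other node runs xi' = 1 + coupling
        xdot[i] = (1.0 - x[i] if i == 1 else 1.0) + coupling
    x = {i: x[i] + dt * xdot[i] for i in x}

print({i: round(x[i]) for i in x})                 # each rounded state equals N = 4
```

With these numbers every state settles close to N = 4, so rounding recovers the exact network size regardless of the initial conditions.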
How does the proposed algorithm work?
Overall dynamics (stacking x = [x1, x2, …, xN]ᵀ):

ẋ = −(kL + J11) x + 1N

where L = [lij] is the Laplacian matrix of the graph, J11 = diag(1, 0, …, 0), and 1N is the vector of all ones.
Lemma: If k > 0, then the matrix −(kL + J11) is Hurwitz.
Therefore, x(t) converges to the equilibrium
x∗ = x∗(k) = (kL + J11)⁻¹ 1N.
Lemma: x∗_1(k) = N, ∀k > 0, and x∗_i(k) → N as k → ∞ for i ≥ 2.
Therefore, if k is large enough such that |x∗_i(k) − N| < 0.5, ∀i,
lim_{t→∞} round(xi(t)) = round( lim_{t→∞} xi(t) ) = N.
7 / 20
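Both lemmas can be checked numerically. The sketch below is an illustrative check (not from the slides); the 5-node cycle graph and the sweep of gains are assumptions made here.

```python
import numpy as np

# Illustrative 5-node cycle graph; L is its Laplacian, J11 = diag(1, 0, ..., 0).
N = 5
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
J11 = np.zeros((N, N)); J11[0, 0] = 1.0

for k in (1.0, 10.0, 100.0, 1000.0):
    assert np.all(np.linalg.eigvals(-(k * L + J11)).real < 0)   # -(kL + J11) is Hurwitz for k > 0
    x_star = np.linalg.solve(k * L + J11, np.ones(N))           # x* = (kL + J11)^{-1} 1_N
    print(k, x_star[0], np.max(np.abs(x_star - N)))             # x*_1 = N exactly; the rest approach N as k grows
```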
How large k should be?
To compute the minimal value of k, we impose
One more assumption
3. ∃ upper bound of network size ¯N ( ¯N  N), and ¯N is known
to every node (this is the only global information needed.)
Theorem: If
k > N̄³,
then the proposed algorithm
node 1:  ẋ1 = 1 − x1 + k Σ_{j∈N1} (xj − x1)
all other nodes:  ẋi = 1 + k Σ_{j∈Ni} (xj − xi)
with arbitrary initial conditions yields an estimate of N, because
lim_{t→∞} |xi(t) − N| < 0.5, ∀i ∈ N.
8 / 20
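To see the gain rule in use, here is a small illustrative check (the bound N̄ = 10 and the two 7-node test graphs are assumptions made here): setting k just above N̄³ keeps every equilibrium entry within 0.5 of N, so rounding recovers the network size.

```python
import numpy as np

def equilibrium(A, k):
    """x* = (kL + J11)^{-1} 1_N for an adjacency matrix A (undirected, unit weights)."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A
    J11 = np.zeros((n, n)); J11[0, 0] = 1.0
    return np.linalg.solve(k * L + J11, np.ones(n))

Nbar = 10                        # assumed known upper bound on the network size
k = Nbar**3 + 1                  # any k > Nbar^3 satisfies the theorem's condition

# Two illustrative connected graphs with N = 7 <= Nbar: a path and a star.
N = 7
path = np.diag(np.ones(N - 1), 1); path = path + path.T
star = np.zeros((N, N)); star[0, 1:] = star[1:, 0] = 1.0

for A in (path, star):
    err = np.max(np.abs(equilibrium(A, k) - N))
    print(err)                   # stays below 0.5, the rounding threshold
```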
How to obtain N in finite time?
The problem is to find the minimal value T such that
|xi(t) − N| < 0.5, ∀t ≥ T
To find the time T, we need
a convergence rate
a bounded initial condition
9 / 20
Convergence rate of the proposed algorithm
Recall the overall dynamics:

ẋ = −(kL + J11) x + 1N
Lemma: If k > N̄³, then
λmax( −(kL + J11) ) ≤ −1/(4N̄).
the guaranteed convergence rate 1/(4N̄) is a decreasing function of N̄
10 / 20
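A direct numerical probe of this bound (an illustrative sketch; the 6-node cycle and N̄ = 8 are assumptions made here): compute the eigenvalues of −(kL + J11) and compare the largest real part with −1/(4N̄).

```python
import numpy as np

Nbar = 8
k = Nbar**3 + 1                               # gain satisfying k > Nbar^3

# Illustrative connected graph with N = 6 <= Nbar: a cycle.
N = 6
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
J11 = np.zeros((N, N)); J11[0, 0] = 1.0

lam_max = np.max(np.linalg.eigvals(-(k * L + J11)).real)
print(lam_max, -1.0 / (4 * Nbar))             # lam_max sits at or below -1/(4*Nbar) = -0.03125
```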
Main Result
Our last assumption (Bounded initial condition)
Suppose xi(0) ∈ [0, N̄] for all i.
a reasonable initial guess, since N ≤ N̄
Theorem (Finite-time Estimation of N)
Under all the assumptions, if k > N̄³, then the proposed algorithm
guarantees
round(xi(t)) = N, ∀t ≥ T(k), ∀i
where the settling time T(k) is given by
T(k) = 4N̄ ln( 2N̄^1.5 k / (k − N̄³) ).
11 / 20
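For a feel of the numbers, the settling-time formula can be evaluated directly (the value N̄ = 10 and the gains below are assumptions for illustration); it shrinks only mildly once k is well above N̄³.

```python
import math

def settling_time(Nbar, k):
    """T(k) = 4*Nbar*ln(2*Nbar**1.5*k / (k - Nbar**3)), defined for k > Nbar**3."""
    assert k > Nbar**3
    return 4 * Nbar * math.log(2 * Nbar**1.5 * k / (k - Nbar**3))

for k in (1001, 2000, 10_000):
    print(k, settling_time(10, k))    # with Nbar = 10: roughly 442, 194, and 170 time units
```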
Advantages of the proposed algorithm
node 1:  ẋ1 = 1 − x1 + k Σ_{j∈N1} (xj − x1)
all other nodes:  ẋi = 1 + k Σ_{j∈Ni} (xj − xi)
1. simple first-order dynamics
2. exchanges a single scalar variable with neighbors
3. obtains N directly within finite time
4. independent of initialization
→ while the algorithm is running, a new node can join or an
existing node can leave the network
This property is often called
`plug-and-play ready' or
`open MAS (multi-agent system)' or
`initialization-free algorithm'
12 / 20
A Remark for Practical Application
1. To obtain a correct estimate of N, it takes T(k) time after a
network change.
2. However, not every node can detect the changes.
3. Possible solution: allow the changes only at specified times,
i.e., nodes can join or leave the network only at t = j · T,
where T > T(k), assuming every node has the same clock.
Example scenario:
there is a unit length of time T > T(k)
1. two nodes 1 and 2 belong to the network
from time T0 = 0
2. node 3 joins the network at T1 = T
3. node 3 leaves the network at T2 = 2T
13 / 20
(two nodes belong to the network from T0 = 0)
Every node initializes its state within [0, N̄]
Estimation is guaranteed for t ≥ T(k)
14 / 20
(node 3 joins the network at T1 = T )
x3(T1) is initialized within [0, N̄]
both x1(T1) and x2(T1) are within [0, N̄]
15 / 20
(node 3 leaves the network at T2 = 2T )
both x1(T2) and x2(T2) are within [0, N̄]
correct estimation is always available for t ≥ Tj + T(k)
16 / 20
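Putting slides 13 to 16 together, the example scenario can be reproduced with a short simulation. This is a sketch under assumed parameters (line graphs, N̄ = 5, k = 126 > N̄³ = 125, phase length 160 > T(k) ≈ 159, forward-Euler integration); the dynamics and the join/leave protocol follow the slides.

```python
def simulate(neighbors, x0, k, T, dt=2e-3):
    """Forward-Euler run of the proposed dynamics over [0, T]; node 1 is the special node."""
    x = dict(x0)
    for _ in range(int(T / dt)):
        xdot = {i: (1.0 - x[i] if i == 1 else 1.0) + k * sum(x[j] - x[i] for j in Ni)
                for i, Ni in neighbors.items()}
        x = {i: x[i] + dt * xdot[i] for i in x}
    return x

Nbar, k = 5, 126        # Nbar bounds the network size at all times; k = 126 > Nbar^3 = 125
T = 160                 # phase length, chosen above the settling time T(k) of about 159

# Phase 1 (T0 = 0): only nodes 1 and 2, initialized inside [0, Nbar].
x = simulate({1: [2], 2: [1]}, {1: 0.0, 2: float(Nbar)}, k, T)
print({i: round(v) for i, v in x.items()})      # both read 2

# Phase 2 (T1 = T): node 3 joins via edge 2-3; only the newcomer is initialized.
x[3] = 1.0
x = simulate({1: [2], 2: [1, 3], 3: [2]}, x, k, T)
print({i: round(v) for i, v in x.items()})      # all three read 3

# Phase 3 (T2 = 2T): node 3 leaves; the remaining states simply carry over.
del x[3]
x = simulate({1: [2], 2: [1]}, x, k, T)
print({i: round(v) for i, v in x.items()})      # back to 2
```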
Behind the scenes
How did we come up with the proposed algorithm?
17 / 20
Blended dynamics approach
A tool for the analysis of heterogeneous multi-agent systems
Node dynamics:
ẋi = fi(xi) + k Σ_{j∈Ni} (xj − xi),   i ∈ {1, 2, …, N}
Blended dynamics (average of the vector fields fi):
ṡ = (1/N) Σ_{i=1}^{N} fi(s)   with   s(0) = (1/N) Σ_{i=1}^{N} xi(0)
Theorem³
Suppose the blended dynamics is stable. Then, ∀ε > 0, ∃k∗ such that
for all k ≥ k∗,
lim sup_{t→∞} |xi(t) − s(t)| < ε, ∀i.
³ Kim, Yang, Shim, Kim, Seo, Robustness of synchronization of
heterogeneous agents by strong coupling and a large number of agents, IEEE
TAC, 2016
18 / 20
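To see the theorem at work on a toy case that is not from the slides, one can take fi(x) = ai − x with different constants ai (an assumption made here for illustration); the blended dynamics ṡ = mean(ai) − s is stable, and for a large gain every agent tracks it.

```python
import numpy as np

# Illustrative heterogeneous agents: f_i(x) = a_i - x on a 4-node cycle.
a = np.array([1.0, 3.0, 5.0, 7.0])
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
k, dt = 50.0, 1e-3

x = np.array([0.0, 2.0, 4.0, 6.0])       # arbitrary initial conditions
s = x.mean()                              # blended state, s(0) = average of the x_i(0)

for _ in range(int(30 / dt)):
    coupling = np.array([sum(x[j] - x[i] for j in neighbors[i]) for i in range(4)])
    x = x + dt * ((a - x) + k * coupling)      # agent dynamics  x_i' = f_i(x_i) + k * coupling
    s = s + dt * (a.mean() - s)                # blended dynamics  s' = (1/N) sum_i f_i(s) = mean(a) - s

print(x, s)   # every x_i(t) ends up close to s(t), which converges to mean(a) = 4
```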
We designed the node dynamics so that their blended dynamics
has the desired property.
The proposed node dynamics:
ẋ1 = 1 − x1 + k Σ_{j∈N1} (xj − x1)
ẋi = 1 + k Σ_{j∈Ni} (xj − xi),   ∀i ∈ {2, …, N}
Their blended dynamics:
ṡ = (1/N) Σ_{i=1}^{N} fi(s) = (1/N)(N − s) = −(1/N) s + 1
Therefore, with sufficiently large k, we have
lim sup_{t→∞} |xi(t) − s(t)| = lim sup_{t→∞} |xi(t) − N| < ε, ∀i ∈ N
the information about N is embedded in the vector fields (not in
the initial conditions) → this is the key to `plug-and-play'.
19 / 20
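For completeness (a standard computation, not spelled out on the slide), the blended dynamics can be solved in closed form, which makes the convergence to N explicit:

$$
\dot{s} = \frac{1}{N}\big((1-s) + (N-1)\cdot 1\big) = -\frac{1}{N}s + 1,
\qquad
s(t) = N + \big(s(0)-N\big)e^{-t/N} \;\to\; N \quad \text{as } t \to \infty .
$$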
Summary
the design of the proposed algorithm is based on the blended dynamics
ṡ = −(1/N) s + 1
each node obtains the network size exactly, with an arbitrary initial
condition
⇒ the algorithm supports plug-and-play operation
the estimation is guaranteed within finite time
Thank you!
Donggil Lee (dglee@cdsl.kr)
20 / 20
Simulation for 30 nodes
Black dotted line: N(t) ± 0.5