Distributed Algorithm
for Network Size Estimation
Donggil Lee, Seungjoon Lee, Taekyoo Kim, and Hyungbo Shim
Control & Dynamic Systems Lab. Seoul National University
57th IEEE Conference on Decision and Control
December 18, 2018
We propose a Distributed Algorithm for Network Size Estimation
Network size: total number N of nodes in a given network
Goal: design a node dynamics whose individual solutions all
converge to N. We pursue
decentralized design: design of node dynamics doesn't use
much information about the network
distributed algorithm: each node exchanges information only
with its neighbors
2 / 20
Distributed estimation of N is useful in many applications
For example,
Distributed Optimization [Nedic, Ozdaglar (2009)]¹ requires
N to obtain the convergence rate.
Distributed Kalman Filter [Kim, Shim, Wu (2016)]² requires
that N be known to all nodes.
¹ Nedic, Ozdaglar, Distributed subgradient methods for multi-agent
optimization, IEEE TAC, 2009
² Kim, Shim, Wu, On distributed optimal Kalman-Bucy filtering by
averaging dynamics of heterogeneous agents, IEEE CDC, 2016
3 / 20
Distributed estimation of N is not trivial
N is a property of a network and so is a global parameter.
Each node is able to see only its neighbors
4 / 20
Previous results for network size estimation
(Baquero et al., IEEE Trans. Parallel and Distrib. Sys., 2012)
(Lucchese, Varagnolo, ACC, 2015)
obtain the estimate N̂ in a statistical manner by exchanging M
pieces of information with neighbors
⇒ the estimate is not deterministic:
E[N̂] = N with Var[N̂] = N²/(M − 2)
(Kempe et al., IEEE Symp. Foundations of Comp. Sci., 2003)
(Shames et al., ACC, 2012)
obtain 1/N asymptotically by average consensus
They require an initialization which must be done over the whole network
⇒ not ready for plug-and-play.
5 / 20
The proposed algorithm
Assumptions
1. Communication graph is undirected and connected with unit
weight.
2. ∃ one special node always belonging to the network; say node 1.

node 1:           ẋ1(t) = 1 − x1(t) + k Σ_{j∈N1} (xj(t) − x1(t))
all other nodes:  ẋi(t) = 1 + k Σ_{j∈Ni} (xj(t) − xi(t))

gain k will be designed
the algorithm is simple; only the scalar xi(t) ∈ R is exchanged
initial condition xi(0) is arbitrary
6 / 20
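To make the update rule concrete, here is a minimal Python sketch (not part of the original slides) that integrates the proposed node dynamics with forward Euler on a small example network; the 5-node ring, the bound N̄ = 6, the gain k, and the step size are all assumptions chosen for the demo.

import numpy as np

# Assumed demo setup: N = 5 nodes on a ring; index 0 plays the role of the special node 1.
N = 5
N_bar = 6                        # assumed known upper bound on the network size (N_bar >= N)
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}   # undirected ring, unit weights

k = 250.0                        # satisfies k > N_bar**3 = 216
dt, T_end = 5e-4, 30.0           # forward-Euler step and horizon (demo choices)
x = np.random.uniform(0.0, N_bar, N)    # arbitrary initial condition in [0, N_bar]

for _ in range(int(T_end / dt)):
    x_new = x.copy()
    for i in range(N):
        coupling = k * sum(x[j] - x[i] for j in neighbors[i])
        # node 1:  dx1 = 1 - x1 + coupling;   all other nodes:  dxi = 1 + coupling
        x_new[i] += dt * ((1.0 - x[i] if i == 0 else 1.0) + coupling)
    x = x_new

print(np.round(x))               # every entry should print 5, the network size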
How does the proposed algorithm work?
Overall dynamics (stacking x = [x1, x2, . . . , xN]ᵀ):

ẋ = −(kL + J11) x + 1N

where L = [lij] is the Laplacian matrix of the graph, J11 = diag(1, 0, . . . , 0)
picks out node 1, and 1N is the all-ones vector.

Lemma: If k > 0, then the matrix −(kL + J11) is Hurwitz.
Therefore, x(t) converges to the equilibrium

x∗ = x∗(k) = (kL + J11)⁻¹ 1N.

Lemma: x∗1(k) = N, ∀k > 0, and x∗i(k) → N as k → ∞ for i ≥ 2.
Therefore, if k is large enough such that |x∗i(k) − N| < 0.5, ∀i,

lim_{t→∞} round(xi(t)) = round(x∗i(k)) = N, ∀i.

7 / 20
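As a quick numerical sanity check (not from the slides), the sketch below evaluates the equilibrium x∗ = (kL + J11)⁻¹ 1N on a small path graph: the first component equals N for every k > 0, and the remaining components approach N as k grows. The path topology and the gain values are arbitrary assumptions.

import numpy as np

N = 6
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)   # path-graph adjacency (assumed example)
L = np.diag(A.sum(axis=1)) - A                                  # graph Laplacian
J11 = np.zeros((N, N)); J11[0, 0] = 1.0

for k in (1.0, 10.0, 1000.0):
    x_star = np.linalg.solve(k * L + J11, np.ones(N))
    print(k, x_star[0], np.abs(x_star - N).max())
# x_star[0] equals N for every k > 0, while the worst-case gap to N shrinks as k grows.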
How large should k be?
To compute the minimal value of k, we impose
One more assumption
3. ∃ an upper bound N̄ of the network size (N̄ ≥ N), and N̄ is known
to every node (this is the only global information needed).
Theorem: If

k > N̄³

then the proposed algorithm

node 1:           ẋ1 = 1 − x1 + k Σ_{j∈N1} (xj − x1)
all other nodes:  ẋi = 1 + k Σ_{j∈Ni} (xj − xi)

with arbitrary initial conditions yields an estimate of N because

lim_{t→∞} |xi(t) − N| < 0.5, ∀i.

8 / 20
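The following sketch (my addition, not from the paper) numerically spot-checks this gain condition on an assumed small connected graph: with k just above N̄³, every component of the equilibrium x∗ lies within 0.5 of N.

import numpy as np

rng = np.random.default_rng(0)
N, N_bar = 7, 10
A = np.zeros((N, N))
for i in range(N):                            # ring backbone guarantees connectivity
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
for _ in range(3):                            # a few extra random edges (assumption)
    i, j = rng.choice(N, size=2, replace=False)
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
J11 = np.zeros((N, N)); J11[0, 0] = 1.0

k = N_bar ** 3 + 1.0                          # just above the theorem's threshold
x_star = np.linalg.solve(k * L + J11, np.ones(N))
print(np.abs(x_star - N).max() < 0.5)         # expected: True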
How to obtain N in finite time?
The problem is to find the minimal value T such that

|xi(t) − N| < 0.5, ∀t > T

To find the time T, we need
convergence rate
bounded initial condition
9 / 20
Convergence rate of the proposed algorithm
Recall the overall dynamics:

ẋ = −(kL + J11) x + 1N

Lemma: If k > N̄³, then

λmax(−(kL + J11)) ≤ − 1/(4N̄).

the guaranteed convergence rate 1/(4N̄) decreases as N̄ grows
10 / 20
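A short numerical check of this spectral bound (my addition; the path graph and sizes are assumptions): for k > N̄³, the largest eigenvalue of −(kL + J11) should sit at or below −1/(4N̄).

import numpy as np

N, N_bar = 8, 10
A = np.zeros((N, N))
for i in range(N - 1):                        # path graph (a sparsely connected assumed example)
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
J11 = np.zeros((N, N)); J11[0, 0] = 1.0

k = N_bar ** 3 + 1.0                          # k > N_bar**3
lam_max = np.linalg.eigvalsh(-(k * L + J11)).max()   # the matrix is symmetric
print(lam_max, -1.0 / (4.0 * N_bar))          # expect lam_max <= -1/(4*N_bar)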
Main Result
Our last assumption (Bounded initial condition)
Suppose xi(0) ∈ [0, N̄] for all i.
a reasonable initial guess (since N ≤ N̄)
Theorem (Finite-time Estimation of N)
Under all the assumptions, if k > N̄³, then the proposed algorithm
guarantees

round(xi(t)) = N, ∀t > T(k), ∀i

where the settling time T(k) is given by

T(k) = 4N̄ ln( 2N̄^1.5 k / (k − N̄³) ).

11 / 20
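To get a feel for the settling-time bound, the snippet below (my addition) evaluates T(k) for a few gains with an assumed N̄ = 10; as k grows, the bound shrinks toward 4N̄ ln(2N̄^1.5).

import numpy as np

def settling_time(k, N_bar):
    """Settling-time bound T(k) from the theorem above."""
    assert k > N_bar ** 3, "the theorem requires k > N_bar**3"
    return 4.0 * N_bar * np.log(2.0 * N_bar ** 1.5 * k / (k - N_bar ** 3))

N_bar = 10                                    # assumed upper bound on the network size
for k in (1.01e3, 2e3, 1e4, 1e6):
    print(k, settling_time(k, N_bar))
# T(k) decreases with k and approaches 4*N_bar*ln(2*N_bar**1.5) as k -> infinity.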
Advantages of the proposed algorithm

node 1:           ẋ1 = 1 − x1 + k Σ_{j∈N1} (xj − x1)
all other nodes:  ẋi = 1 + k Σ_{j∈Ni} (xj − xi)

1. simple first-order dynamics
2. exchanges a single variable with neighbors
3. obtains N directly within finite time
4. independent of initialization
→ while the algorithm is running, a new node can join or an existing
node can leave the network
This property is often called
`plug-and-play ready' or
`open MAS (multi-agent system)' or
`initialization-free algorithm'
12 / 20
A Remark for Practical Application
1. To obtain a correct estimate of N, it takes T(k) time from a
network change
2. However, not every node can detect the changes.
3. Possible solution: allow the changes only at specified times,
i.e., some nodes can join or leave the network at t = j · T
where T > T(k), assuming every node has the same clock.
Example scenario:
there is a unit length of time T > T(k)
1. nodes 1 and 2 belong to the network
from time T0 = 0
2. node 3 joins the network at T1 = T
3. node 3 leaves the network at T2 = 2T
13 / 20
(two nodes belong to the network from T0 = 0)
Every node initializes its state within [0, N̄]
Estimation is guaranteed for t > T(k)
14 / 20
(node 3 joins the network at T1 = T)
x3(T1) is initialized within [0, N̄]
both x1(T1) and x2(T1) are within [0, N̄]
15 / 20
(node 3 leaves the network at T2 = 2T)
both x1(T2) and x2(T2) are within [0, N̄]
correct estimation is always available for t > Tj + T(k)
16 / 20
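Below is a hypothetical end-to-end run of this scenario in Python (my sketch, not from the slides): nodes 1 and 2 start at t = 0, node 3 joins at T1 = T and leaves at T2 = 2T. The complete-graph topology after the join, N̄ = 4, the gain, and the Euler step are assumptions; T is chosen just above the settling-time bound T(k) ≈ 76 for these numbers.

import numpy as np

N_bar = 4
k = N_bar ** 3 + 10.0            # k > N_bar**3 = 64
dt = 1e-3                        # forward-Euler step (demo choice)
T_unit = 80.0                    # unit time between allowed changes; > T(k) ~= 76 here

def run(x0, A, duration):
    """Integrate the proposed dynamics; index 0 is the special node 1."""
    L = np.diag(A.sum(axis=1)) - A
    J11 = np.zeros_like(L); J11[0, 0] = 1.0
    x = x0.astype(float)
    for _ in range(int(duration / dt)):
        x = x + dt * (-(k * L + J11) @ x + np.ones(len(x)))
    return x

A2 = np.array([[0., 1.], [1., 0.]])                          # nodes 1 and 2 only
A3 = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])    # node 3 joined (assumed complete graph)

x = run(np.random.uniform(0, N_bar, 2), A2, T_unit)              # phase 1: N = 2
print("t = T :", np.round(x))
x = run(np.append(x, np.random.uniform(0, N_bar)), A3, T_unit)   # phase 2: node 3 joins, N = 3
print("t = 2T:", np.round(x))
x = run(x[:2], A2, T_unit)                                       # phase 3: node 3 leaves, N = 2
print("t = 3T:", np.round(x))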
Behind the scenes
How did we come up with the proposed algorithm?
17 / 20
Blended dynamics approach
A tool for analysis of heterogeneous multi-agent systems
Node dynamics:

ẋi = fi(xi) + k Σ_{j∈Ni} (xj − xi),   i ∈ {1, 2, . . . , N}

Blended dynamics (average of the vector fields fi):

ṡ = (1/N) Σ_{i=1}^{N} fi(s)   with   s(0) = (1/N) Σ_{i=1}^{N} xi(0)

Theorem³
Suppose the blended dynamics is stable. Then, ∀ε > 0, ∃k∗ such that
for all k ≥ k∗,

lim sup_{t→∞} |xi(t) − s(t)| < ε, ∀i.

³ Kim, Yang, Shim, Kim, Seo, Robustness of synchronization of
heterogeneous agents by strong coupling and a large number of agents, IEEE
TAC, 2016
18 / 20
We designed the node dynamics so that their blended dynamics
has the desired property.
The proposed node dynamics:

ẋ1 = 1 − x1 + k Σ_{j∈N1} (xj − x1)
ẋi = 1 + k Σ_{j∈Ni} (xj − xi),   ∀i ∈ {2, . . . , N}

Their blended dynamics:

ṡ = (1/N) Σ_{i=1}^{N} fi(s) = (1/N)(N − s) = −(1/N) s + 1

which converges to s = N, so tracking s means tracking N.
Therefore, with sufficiently large k, we have

lim sup_{t→∞} |xi(t) − s(t)| = lim sup_{t→∞} |xi(t) − N| < ε, ∀i

information about N is embedded in the vector fields (not in
the initial conditions) → key to the `plug-and-play' property.
19 / 20
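To visualise this, here is a hypothetical comparison (my addition): the networked states xi(t) are integrated alongside the scalar blended dynamics ṡ = −(1/N)s + 1 started from the average initial state; with a gain above N³ both the synchronisation gap and the distance to N become small. The 6-node ring and integration settings are assumptions.

import numpy as np

# Assumed demo: N = 6 nodes on a ring; compare the network with its blended dynamics.
N = 6
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
J11 = np.zeros((N, N)); J11[0, 0] = 1.0

k = 250.0                          # > N**3 = 216 (taking N_bar = N here)
dt, T_end = 5e-4, 40.0
x = np.random.uniform(0, N, N)
s = x.mean()                       # blended dynamics starts from the average of the states

for _ in range(int(T_end / dt)):
    x = x + dt * (-(k * L + J11) @ x + np.ones(N))
    s = s + dt * (1.0 - s / N)     # blended dynamics  s' = -(1/N) s + 1

print(np.abs(x - s).max(), abs(s - N))   # both gaps end up small (within the 0.5 band)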
Summary
the design of the proposed algorithm is based on the blended dynamics

ṡ = −(1/N) s + 1

each node obtains the network size exactly with an arbitrary initial
condition
⇒ the algorithm supports plug-and-play operation
the estimation is guaranteed within finite time
Thank you!
Donggil Lee (dglee@cdsl.kr)
20 / 20
Simulation for 30 nodes
Black dotted line: N(t) ± 0.5
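The slide above shows only the figure caption; a hypothetical script along the following lines (my reconstruction, with an assumed random graph since the slide does not specify the topology) reproduces its flavour: on a 30-node network all states enter and remain in the band N ± 0.5.

import numpy as np

rng = np.random.default_rng(1)
N = N_bar = 30
A = np.zeros((N, N))
for i in range(N):                            # ring backbone keeps the graph connected
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
for i, j in rng.integers(0, N, size=(40, 2)): # sprinkle extra undirected edges (assumption)
    if i != j:
        A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
J11 = np.zeros((N, N)); J11[0, 0] = 1.0

k = float(N_bar ** 3 + 1)                     # satisfies k > N_bar**3
M = k * L + J11                               # symmetric, so use its eigendecomposition
eigvals, V = np.linalg.eigh(M)
x_star = np.linalg.solve(M, np.ones(N))
x0 = rng.uniform(0, N_bar, N)
c = V.T @ (x0 - x_star)

for t in (0.0, 50.0, 100.0, 200.0, 400.0):
    x_t = x_star + V @ (np.exp(-eigvals * t) * c)   # exact solution x(t) of the linear dynamics
    print(t, np.abs(x_t - N).max())
# the maximum deviation from N decays and settles below 0.5; T(k) is a conservative bound.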