Centrality in Time-Dependent Networks
Mason A. Porter (@masonporter)
Department of Mathematics, UCLA
Outline
◦ Introduction
◦ Motivation
◦ Centrality in Time-Independent Networks
◦ Centrality in Temporal Networks
◦ Eigenvector-Based Centralities in Multilayer Representations of
Time-Dependent Networks
◦ “Tie-Decay” Networks in Continuous Time
◦ Example: Generalization of PageRank Centrality
◦ Conclusions
The Top-5 Hobbies of Network Scientists
◦ 5. Citing papers based on preferential attachment (and/or
possibly about preferential attachment)
◦ 4. Arguing about power laws
◦ 3. Community detection
◦ 2. Finding ways to get the Zachary Karate Club network into
their talks
◦ 1. Developing new centrality measures
A Couple of Longstanding Questions in the
Study of Social Networks
◦ I. de Sola Pool & M. Kochen [1978/79], “Contacts and Influence”, Social Networks, 1: 5–51
(though a preprint for 2 decades)
◦ Inspired work of Milgram
They run into people they know everywhere they go. The experience of
casual contact and the practice of influence are not unrelated. A common
theory of human contact nets might help clarify them both.
No such theory exists at present. Sociologists talk of social stratification;
political scientists of influence. These quantitative concepts ought to lend
themselves to a rigorous metric based upon the elementary social events of
man-to-man contact. “Stratification” expresses the probability of two people
in the same stratum meeting and the improbability of two people from
different strata meeting. Political access may be expressed as the probability
that there exists an easy chain of contacts leading to the power holder. Yet
such measures of stratification and influence as functions of contacts do not
exist.
What is it that we should like to know about human contact nets?
- For any individual we should like to know how many other people he
knows, i.e. his acquaintance volume.
- For a population we want to know the distribution of acquaintance
volumes, the mean and the range between the extremes.
- We want to know what kinds of people they are who have many con-
tacts and whether those people are also the influentials.
- We want to know how the lines of contact are stratified; what is the
structure of the network?
If we know the answers to these questions about individuals and about the
whole population, we can pose questions about the implications for paths
between pairs of individuals.
- How great is the probability that two persons chosen at random from
the population will know each other?
- How great is the chance that they will have a friend in common?
- How great is the chance that the shortest chain between them requires
two intermediaries; i.e., a friend of a friend?
Some classical notions of centrality
◦ Degree
◦ Closeness centrality
◦ Betweenness centrality and its many variants
◦ Eigenvector-based centralities (solutions of eigenvalue problems)
◦ Eigenvector centrality
◦ PageRank
◦ Hubs and authorities
◦ Katz centrality, communicability, etc.
◦ …
Example: Betweenness Centrality
◦ Which nodes (or edges, or perhaps other substructures) are on a lot of short paths between
nodes?
◦ Example: shortest (“geodesic”) paths
◦ Geodesic node betweenness centrality is the number of shortest (“geodesic”) paths through
node i divided by the total number of geodesic paths: $BC_i = \sum_{j,m} \sigma_{jm}(i)/\sigma_{jm}$, where $\sigma_{jm}$ counts the geodesic paths between j and m, and $\sigma_{jm}(i)$ counts those that pass through i (common convention: i, j, m distinct)
◦ Similar formula for geodesic edge betweenness
◦ One can also define notions of betweenness based on ideas like random walks (or by
restricting to particular paths in useful ways).
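For a small unweighted network, one can compute geodesic betweenness directly by counting shortest paths with breadth-first search. A minimal pure-Python sketch (my illustration, not code from the talk; it sums over ordered distinct pairs j, m, so each unordered pair is counted twice):

```python
from collections import deque
from itertools import permutations

def bfs_paths(adj, s):
    """BFS from s: distances and counts of geodesic paths from s to every node."""
    dist = {s: 0}
    sigma = {v: 0 for v in adj}
    sigma[s] = 1
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
            if dist[v] == dist[u] + 1:
                sigma[v] += sigma[u]
    return dist, sigma

def betweenness(adj):
    """For each node i: sum over ordered pairs (j, m), with i, j, m distinct, of
    (geodesic j-m paths through i) / (geodesic j-m paths)."""
    info = {s: bfs_paths(adj, s) for s in adj}
    bc = {v: 0.0 for v in adj}
    for j, m in permutations(adj, 2):
        dist_j, sigma_j = info[j]
        if m not in dist_j:
            continue  # m is unreachable from j
        dist_m, sigma_m = info[m]
        for i in adj:
            if i == j or i == m:
                continue
            # i lies on a geodesic j-m path iff d(j,i) + d(i,m) = d(j,m)
            if i in dist_j and i in dist_m and dist_j[i] + dist_m[i] == dist_j[m]:
                bc[i] += sigma_j[i] * sigma_m[i] / sigma_j[m]
    return bc

# Path graph 0-1-2-3: nodes 1 and 2 lie between the others
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
bc = betweenness(adj)
```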
Example: Eigenvector Centrality
◦ Leading eigenvector v of the adjacency matrix A. For a connected network, v
has strictly positive entries by the Perron–Frobenius theorem, and its entries
give the eigenvector centralities of the nodes.
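One can approximate the leading eigenvector with power iteration. A minimal pure-Python sketch (my illustration; the toy “paw” graph, a triangle with a pendant node, and the node ordering are assumptions):

```python
def eigenvector_centrality(A, iters=200):
    """Power iteration for the leading eigenvector of a connected adjacency matrix A."""
    n = len(A)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# "Paw" graph: triangle 0-1-2 plus pendant node 3 attached to node 0
A = [[0, 1, 1, 1],
     [1, 0, 1, 0],
     [1, 1, 0, 0],
     [1, 0, 0, 0]]
v = eigenvector_centrality(A)
```

By symmetry, nodes 1 and 2 get equal centrality; the triangle's hub (node 0) is most central and the pendant node least.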
Example: PageRank
◦ Review article: D. F. Gleich, “PageRank Beyond the Web”, SIAM Review, 2015
◦ B : adjacency matrix
Example: Hubs and Authorities
◦ J. M. Kleinberg, Journal of the ACM, Vol. 46: 604–632 (1999)
◦ Intuition: A Web page (node) is a good hub if it has many hyperlinks (out-edges) to
important nodes, and a node is a good authority if many important nodes have
hyperlinks to it (in-edges)
◦ Imagine a random walker surfing the Web. It should spend a lot of time on
important Web pages. Steady-state populations of an ensemble of walkers satisfy
the eigenvalue problem:
◦ x = aAy ; y = bAᵀx ⟹ AᵀAy = λy & AAᵀx = λx, where λ = 1/(ab)
◦ Leading eigenvalue λ₁ (strictly positive) gives a strictly positive hub vector x and authority
vector y (the leading eigenvectors of AAᵀ and AᵀA, respectively)
◦ Node i has hub centrality xi and authority centrality yi
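The alternating update can be sketched in a few lines of pure Python (my illustration; the three-node example and function names are assumptions, following the slide's convention that x = aAy are hub scores and y = bAᵀx are authority scores):

```python
def normalize(vec):
    s = sum(t * t for t in vec) ** 0.5
    return [t / s for t in vec] if s > 0 else vec

def hits(A, iters=100):
    """Alternate x <- A y (hub update) and y <- A^T x (authority update),
    normalizing after each step."""
    n = len(A)
    x = [1.0] * n  # hub scores
    y = [1.0] * n  # authority scores
    for _ in range(iters):
        x = normalize([sum(A[i][j] * y[j] for j in range(n)) for i in range(n)])
        y = normalize([sum(A[i][j] * x[i] for i in range(n)) for j in range(n)])
    return x, y

# Nodes 0 and 1 each hyperlink to node 2, so 0 and 1 are hubs and 2 is an authority
A = [[0, 0, 1],
     [0, 0, 1],
     [0, 0, 0]]
x, y = hits(A)
```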
Example application of Hubs and Authorities:
Measuring the Quality of Programs in the Mathematical Sciences
◦ Apply the same idea to mathematics departments based on
the flow of Ph.D. students
◦ S. A. Meyer, P. J. Mucha, & MAP, “Mathematical genealogy
and department prestige”, Chaos, Vol. 21: 041104 (2011)
◦ One-page paper in Gallery of Nonlinear Images
◦ Data from Mathematics Genealogy Project (MGP)
MGP with hubs and authorities
• We consider MGP data in the US from
1973–2010 (data from 10/09)
• Example: Peter Mucha earned a PhD from
Princeton and later supervised students at
Georgia Tech and UNC Chapel Hill.
• → Directed edge of unit weight from
Princeton to UNC (and also from Princeton
to Georgia Tech)
• (Note: several additional students not
listed)
• A university is a good authority if it hires
students from good hubs, and a university
is a good hub if its students are hired by
good authorities.
• Caveats
• Our measurement has a time delay (we only
have the Princeton→UNC edge after Peter
supervises a PhD student there)
• Hubs and authorities should change in time
Geographically-Inspired Visualization
Mathematical genealogy and department prestige
Sean A. Myers¹, Peter J. Mucha¹, and Mason A. Porter²
¹Department of Mathematics, University of North Carolina, Chapel Hill, North Carolina 27599, USA
²Mathematical Institute, University of Oxford, OX1 3LB, UK
CHAOS 21, 041104 (2011)
FIG. 1. (Color) Visualizations of a mathematics genealogy network.
Hubs: node size
Authorities: node color
We use a “geographically inspired” layout to balance node locations and node overlap.
How do our rankings do?
FIG. 2. (Color) Rankings versus authority scores.
Generalizing Centralities to Time-
Dependent Networks
◦ There have been numerous efforts to generalize centrality measures to time-dependent networks
using various approaches.
◦ For some discussions, see the review articles on temporal networks by P. Holme & J. Saramäki
(2012) and P. Holme (2015)
◦ A very small selection of examples (using references cited in Taylor et al. 2017)
◦ Calculate centrality from networks constructed from independent time windows: D. Braha & Y. Bar-Yam,
2006 (and many others)
◦ Calculate centrality in a time-independent network constructed from time-respecting paths in a temporal
network: G. Kossinets, J. M. Kleinberg, and D. J. Watts (2008)
◦ Calculate (for PageRank) in a way that counteracts age bias: D. Walker, X. Xie, K.-K. Yan, and S. Maslov (2007)
◦ Generalizations, from numerous perspectives, of many common centrality measures: betweenness,
eigenvector, PageRank, Katz, communicability, etc. [See Taylor et al. for many references.]
◦ Continuous-time approach for Katz centrality (where the generalization was devised specifically for
Katz centrality): P. Grindrod and D. J. Higham, A Dynamical Systems View of Network Centrality,
Proc. Royal Soc. A (2014), Vol. 470, 20130835
◦ Helped inspire our work in Ahmad et al., where our goal was to come up with a flexible formulation for
studying temporal networks in continuous time, using PageRank centrality as an example.
Using a Multilayer Framework for Eigenvector-
Based Centralities for Time-Dependent Networks
◦ D. Taylor, S. A. Myers, A.
Clauset, MAP, & P. J. Mucha,
“Eigenvector-based
Centrality Measures for
Temporal Networks”,
Multiscale Modeling and
Simulation: A SIAM
Interdisciplinary Journal,
2017
◦ Uses a multilayer
representation of time-
dependent networks to
study important “central”
entities and their dynamics
over time
M. Kivelä et al., 2014 (schematic: Zachary Karate Club network)
Supra-adjacency Matrix
(‘flattened’ linear-algebraic representation)
• Schematic from M. Bazzi, MAP, S. Williams, M. McDonald, D. J. Fenn, &
S. D. Howison [2016] Multiscale Modeling and Simulation: A SIAM Interdisciplinary Journal,
14(1): 1–41
Layer 1 contains node-layer pairs 11, 21, 31; Layer 2 contains 12, 22, 32; Layer 3 contains 13, 23, 33. Inter-layer edges have uniform weight $\omega$:
\[
\mathcal{A} =
\begin{pmatrix}
0 & 1 & 1 & \omega & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & \omega & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & \omega & 0 & 0 & 0 \\
\omega & 0 & 0 & 0 & 1 & 1 & \omega & 0 & 0 \\
0 & \omega & 0 & 1 & 0 & 1 & 0 & \omega & 0 \\
0 & 0 & \omega & 1 & 1 & 0 & 0 & 0 & \omega \\
0 & 0 & 0 & \omega & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & \omega & 0 & 1 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & \omega & 0 & 1 & 0
\end{pmatrix}
\]
Fig. 3.1. Example of (left) a multilayer network with unweighted intra-layer connections (solid
lines) and uniformly weighted inter-layer connections (dashed curves) and (right) its corresponding
adjacency matrix. (The adjacency matrix that corresponds to a multilayer network is sometimes
called a “supra-adjacency matrix” in the network-science literature [39].)
Multilayer Construction to Examine
Temporal Eigenvector-Based Centralities
◦ Math department rankings change in time, so we want to consider
centrality measures that change in time
◦ E.g. via a multilayer representation of a temporal network
◦ “Multislice” network with adjacency tensor elements Aijt
◦ Directed intralayer edge from university i to university j in layer t when a specific
person earned a PhD from j at time t and later advised a student at i
(multi-edges give weights)
◦ E.g. Peter Mucha yields a Georgia Tech→Princeton edge and a UNC Chapel
Hill→Princeton edge for t = 1998
◦ Use a multilayer network with “diagonal” and ordinal interlayer coupling
◦ 231 US universities, T = 65 time layers (1946–2010)
Construct a Supra-centrality Matrix
◦ E.g. M(t) = A(t)[A(t)]T to examine hubs
and authorities
◦ A different choice of M(t) gives a
temporal generalization of a different
eigenvector-based centrality measure
◦ ε = 1/ω ; t indexes the layers
◦ A singular perturbation from the ε → 0
(strong-coupling) limit yields an
averaged (time-independent)
centrality, then a “first-mover”
perturbation term, and then higher-
order corrections
2.2. Inter-Layer Coupling of Centrality Matrices. To avoid the issues
that arise from ignoring the distinction between inter-layer edges and intra-layer edges,
we define a somewhat more nuanced generalization of eigenvector-based centralities.
To preserve the special role of inter-layer edges, we directly couple the matrices that
define the eigenvector-based centrality measure within each temporal layer (e.g., or-
dinary adjacency matrices for eigenvector centrality). That is, one defines an
eigenvector-based centrality in terms of some matrix $M$ that is a function of the
adjacency matrix $A$. For example, hub and authority scores are the leading eigenvec-
tors of the matrices $AA^{\mathsf{T}}$ and $A^{\mathsf{T}}A$, respectively (using the convention that entries
$A_{ij}$ indicate $i \to j$ edges). Letting $M^{(t)}$ denote the centrality matrix for layer $t$, we
couple these centrality matrices with inter-layer couplings of strength $\omega$ to form the
supra-centrality matrix
\[
\mathbb{M}(\epsilon) =
\begin{pmatrix}
\epsilon M^{(1)} & I & 0 & \cdots \\
I & \epsilon M^{(2)} & I & \\
0 & I & \epsilon M^{(3)} & \ddots \\
\vdots & & \ddots & \ddots
\end{pmatrix} \,. \tag{2.4}
\]
For notational convenience, we define the supra-centrality matrix using the scaling
factor $\epsilon = 1/\omega$ rather than the coupling weight $\omega$. [That is, we rescale
by a factor $\epsilon$ to obtain Eq. (2.4).] We study the dominant eigenvector $v(\epsilon)$, which
corresponds to the largest eigenvalue $\lambda_{\max}(\epsilon)$ [i.e., $\mathbb{M}(\epsilon)\,v(\epsilon) = \lambda_{\max}(\epsilon)\,v(\epsilon)$]. The
entries of the dominant eigenvector give the centralities of the node-layer pairs $(i, t)$;
this represents the centrality of physical node $i$ at time $t$. As we discuss in
Sec. 2.3, we refer to this type of centrality as a “joint node-layer centrality,” as
it reflects the centrality of both the physical node $i$ and the time layer $t$.
One can interpret the parameter $\epsilon > 0$ as a tuning parameter that controls how
strongly a given physical node’s centrality is coupled to itself between neighboring time layers.
Singular Perturbation Expansion
Singular Perturbation Expansion
node-layer pairs $\{(i, t)\}$ for the same physical node $i$ across the $T$ network layers.
The identity edges of weight $\omega$ attempt to weight the “persistence” of a physical
node through time by enforcing an identification with itself at consecutive times. We
restrict our attention to nonnegative inter-layer coupling $\omega \geq 0$. (One could consider
$\omega < 0$ to drive negative coupling between layers, but we do not consider such values
for our applications.) One can construe $\omega$ as a parameter to tune interactions between
network layers [6, 9, 81]: in the limit $\omega \to 0^+$, the layers become uncoupled; in the
limit $\omega \to \infty$, the layers are so strongly coupled that inter-layer weights dominate the
intra-layer connections.
The case of inter-layer edges only between different instances of the same physical
node is sometimes called “diagonal coupling,” and using a constant $\omega$ across all
such inter-layer edges is sometimes known as “layer-coupling.” Here we also restrict
ourselves to nearest-neighbor coupling of temporal layers, as we place the identity
inter-layer edges only between node-layer pairs that are adjacent in time, $(i, t)$ and
$(i, t \pm 1)$, which results in the block structure in Eq. (2.1). Equivalently, we write
\[
\mathcal{A} = \mathrm{diag}\left[ A^{(1)}, \ldots, A^{(T)} \right] + A^{(\mathrm{chain})} \otimes \omega I \,, \tag{2.2}
\]
where $\otimes$ denotes the Kronecker product and $A^{(\mathrm{chain})}$ is the $T \times T$ adjacency matrix
of an undirected “bucket brigade” (or “chain”) network whose $T$ nodes are each
adjacent to their nearest neighbors along an undirected chain. In this bucket brigade,
$A^{(\mathrm{chain})}_{ij} = 1$ for $j = i \pm 1$ and $A^{(\mathrm{chain})}_{ij} = 0$ otherwise. Although one can choose inter-
layer coupling matrices other than $A^{(\mathrm{chain})}$ for the inter-layer couplings [70] (and
much of our approach can be generalized to other choices of coupling), we restrict our
attention to nearest-neighbor coupling of layers.
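Equation (2.2) translates directly into code. A minimal pure-Python sketch (my illustration, not the paper's implementation): build the supra-adjacency matrix with diagonal, ordinal inter-layer coupling of weight ω, using a three-layer, three-node example like the one in Fig. 3.1. The function name and the value of ω are assumptions.

```python
def supra_adjacency(layers, omega):
    """A = diag[A(1), ..., A(T)] + A_chain (x) omega*I :
    intra-layer blocks on the diagonal, plus identity inter-layer
    edges of weight omega between temporally adjacent layers."""
    T, N = len(layers), len(layers[0])
    A = [[0.0] * (N * T) for _ in range(N * T)]
    # intra-layer adjacency blocks on the diagonal
    for t, At in enumerate(layers):
        for i in range(N):
            for j in range(N):
                A[t * N + i][t * N + j] = float(At[i][j])
    # diagonal, ordinal (nearest-neighbor-in-time) inter-layer coupling
    for t in range(T - 1):
        for i in range(N):
            A[t * N + i][(t + 1) * N + i] = omega
            A[(t + 1) * N + i][t * N + i] = omega
    return A

# Three 3-node layers (as in the schematic figure), coupled with omega = 0.5
layer1 = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
layer2 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
layer3 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
A = supra_adjacency([layer1, layer2, layer3], omega=0.5)
```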
It is tempting to directly apply a standard eigenvector-based centrality formu-
lation to the supra-adjacency matrix $\mathcal{A}$ by treating it just like any other adjacency
matrix despite its structure. However, such an approach neglects to respect the funda-
mental distinction between intra-layer edges and inter-layer edges that results from the
block-diagonal structure of $\mathcal{A}$. That is, in such an approach, one treats the inter-layer
couplings (i.e., identity arcs) just like any other edge. In general, however, one needs
to be careful when studying a temporal network using the supra-adjacency-matrix
formalism, because many basic network properties (some of which carry strong impli-
cations about a static network, e.g., its spectra and connectedness properties) do
not naturally carry over without modification to the supra-adjacency matrix. This
issue was discussed for multilayer networks more generally in Refs. [24, 27, 70].
D. TAYLOR et al.
$\epsilon \to 0^+$. In Sec. 3.1, we further explore the singularity that arises in the strong-
coupling limit. In Secs. 3.2 and 3.3, we give zeroth- and first-order perturbation
expansions, which lead to principled expressions for time-averaged centralities and
first-order-mover scores, respectively. We give higher-order expansions in an appendix.
3.1. Singularity at Infinite Inter-Layer Coupling. To understand the dom-
inant eigenspace (i.e., the eigenspace of the largest eigenvalue) of $\mathbb{M}(\epsilon)$ in the limit
$\epsilon \to 0^+$, we first study the system for $\epsilon = 0$. We write Eq. (2.4) as
\[
\mathbb{M}(\epsilon) = B + \epsilon G \,, \tag{3.1}
\]
where $B = A^{(\mathrm{chain})} \otimes I$ and $G = \mathrm{diag}[M^{(1)}, \ldots, M^{(T)}]$. It follows that $\mathbb{M}(0) =
A^{(\mathrm{chain})} \otimes I$. To facilitate our discussion of the eigenspace and our subsequent calcu-
lations, we introduce the $NT \times NT$ permutation matrix $P$ with entries $P_{kl} = 1$ for
$l = \lceil k/N \rceil + T[(k-1) \bmod N]$ and $P_{kl} = 0$ otherwise, where $\lceil \cdot \rceil$ is the ceiling operator.
Note that $P$ permutes node-layer indices so that we can easily go back-and-forth be-
tween ordering the node-layer pairs by time and then by physical node index, or vice
versa (i.e., ordering them by physical node index and then by time). In particular,
$A^{(\mathrm{chain})} \otimes I = P\left(I \otimes A^{(\mathrm{chain})}\right)P^{\mathsf{T}}$. Additionally, because $P$ is a unitary operator, one
can understand the spectral properties of $\mathbb{M}(0)$ via the spectral properties of
\[
I \otimes A^{(\mathrm{chain})} =
\begin{pmatrix}
A^{(\mathrm{chain})} & 0 & 0 & \cdots \\
0 & A^{(\mathrm{chain})} & 0 & \\
0 & 0 & A^{(\mathrm{chain})} & \ddots \\
\vdots & & \ddots & \ddots
\end{pmatrix} \,. \tag{3.2}
\]
Eigenvector-Based Centrality Measures for Temporal Networks
where we impose $u_0 = u_{T+1} = 0$ so that Eq. (3.4) holds for all $t \in \{1, \ldots, T\}$. We
now let $u_t \propto \sin\left(\frac{n\pi t}{T+1}\right)$ for $n \in \{1, \ldots, T\}$ to ensure that these boundary conditions are
satisfied. Equation (3.4) then reduces to
\[
\left[ \lambda^{(\mathrm{chain})} - 2\cos\left(\frac{n\pi}{T+1}\right) \right] \sin\left(\frac{n\pi t}{T+1}\right) = 0 \,. \tag{3.5}
\]
For the solution of Eq. (3.5) to be consistent for all $t$, the term in the square brackets
must be identically 0. Consequently,
\[
\lambda^{(\mathrm{chain})} = 2\cos\left(\frac{n\pi}{T+1}\right) \,, \tag{3.6}
\]
\[
u^{(\mathrm{chain})} = \frac{1}{\sqrt{\eta_n}} \left[ \sin\left(\frac{n\pi}{T+1}\right), \sin\left(\frac{2n\pi}{T+1}\right), \ldots, \sin\left(\frac{Tn\pi}{T+1}\right) \right]^{\mathsf{T}} \,, \tag{3.7}
\]
where the normalization constant is $\eta_n = \sum_{t=1}^{T} \sin^2[n\pi t/(T+1)]$. Setting $n = 1$
then gives the dominant eigenvalue and its corresponding eigenvector for the matrix
$A^{(\mathrm{chain})}$.
Returning to the dominant eigenvalue and eigenspace of Eq. (3.2), we see that the
dominant eigenvalue of both $I \otimes A^{(\mathrm{chain})}$ and $A^{(\mathrm{chain})} \otimes I$ is $\lambda_{\max} = 2\cos[\pi/(T+1)]$.
The general solution for the dominant eigenvector of $I \otimes A^{(\mathrm{chain})}$ that spans this
eigenspace is $\sum_j \alpha_j v_j$. The $N$ dominant eigenvectors of $\mathbb{M}(0)$ therefore have the
general form $\sum_j \alpha_j P v_j$, where the constants $\{\alpha_i\}$ must satisfy $\sum_i \alpha_i^2 = 1$ to ensure
that the vector is normalized.
3.2. Zeroth-Order Expansion and Time-Averaged Centrality. In this
section, we study the zeroth-order expansion of the dominant eigenvector $v(\epsilon)$ in
the limit $\epsilon \to 0^+$. As we shall now show, the conditional node-layer centralities of
$\{(i, t)\}$ corresponding to a given physical node $i$ become constant across time in this
limit. We refer to these values as the physical nodes’ time-averaged centralities. By
examining the first-order expansion, we show in Eq. (3.14) that one can obtain these
values as the solution to a dominant eigenvalue equation for a matrix of size $N \times N$.
We consider the dominant eigenvalue equation
\[
\lambda_{\max}(\epsilon)\, v(\epsilon) = \mathbb{M}(\epsilon)\, v(\epsilon) = B\, v(\epsilon) + \epsilon\, G\, v(\epsilon) \,. \tag{3.8}
\]
We expand $\lambda_{\max}(\epsilon)$ and $v(\epsilon)$ for small $\epsilon$ by writing $\lambda_{\max}(\epsilon) = \lambda_0 + \epsilon \lambda_1 + \cdots$ and
$v(\epsilon) = v_0 + \epsilon v_1 + \cdots$ to obtain $k$th-order approximations: $\lambda_{\max}(\epsilon) \approx \sum_{j=0}^{k} \epsilon^j \lambda_j$ and
$v(\epsilon) \approx \sum_{j=0}^{k} \epsilon^j v_j$. Note that we use subscripts to indicate the orders in $\epsilon$ of the
terms in the expansion. Our strategy is to develop consistent solutions to Eq. (3.8)
for increasing values of $k$.
Starting with the first-order approximations, we substitute $\lambda_{\max}(\epsilon) \approx \lambda_0 + \epsilon \lambda_1$
and $v(\epsilon) \approx v_0 + \epsilon v_1$ into Eq. (3.8) and collect the zeroth- and first-order terms in $\epsilon$
to obtain
0th Order Expansion and Time-Averaged Centrality
Using the solution of $v_0$ in Eq. (3.11), we obtain
\[
\sum_j \alpha_j\, v_i^{\mathsf{T}} P^{\mathsf{T}} G P v_j = \lambda_1 \sum_j \alpha_j\, v_i^{\mathsf{T}} P^{\mathsf{T}} P v_j = \lambda_1 \alpha_i \,, \tag{3.13}
\]
because $P^{\mathsf{T}}P = PP^{\mathsf{T}} = I$ and $v_i^{\mathsf{T}} v_j = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta. Letting
$\alpha = [\alpha_1, \ldots, \alpha_N]^{\mathsf{T}}$, Eq. (3.13) corresponds to an $N$-dimensional eigenvalue equation,
\[
X^{(1)} \alpha = \lambda_1 \alpha \,, \tag{3.14}
\]
where the matrix $X^{(1)}$ has elements
\[
X^{(1)}_{ij} = v_i^{\mathsf{T}} P^{\mathsf{T}} G P v_j = \frac{1}{\eta_1} \sum_t M^{(t)}_{ij} \sin^2\left(\frac{\pi t}{T+1}\right) \,, \tag{3.15}
\]
and $\eta_1 = \sum_{t=1}^{T} \sin^2(\pi t/(T+1))$ is the normalization constant for the dominant eigen-
vector $u^{(\mathrm{chain})}$ given by $n = 1$ in Eq. (3.7). Our assumption that $\mathbb{M}(\epsilon)$ is nonnegative
and irreducible for any $\epsilon > 0$ ensures that $X^{(1)}$ is also nonnegative and irreducible.
By the Perron-Frobenius theorem for nonnegative matrices [77], the largest eigenvalue
$\lambda_1$ of $X^{(1)}$ has a multiplicity of one, and its eigenvector $\alpha$ is unique with nonnegative
entries (see Sec. 2.2 and footnote 3 therein). We normalize the solution $\alpha$ to
Eq. (3.14) by $\sum_i \alpha_i^2 = 1$ and substitute the normalized solution into Eq. (3.11) to
obtain the zeroth-order term $v_0$.
Recall that the vector $v_0$ is the dominant eigenvector of $\mathbb{M}(\epsilon)$ in the limit $\epsilon \to 0^+$ and that it gives
the joint node-layer centralities in that limit. By inspection, the elements
of $v_0$ are $\alpha_i \sin(\pi t/(T+1))$ for node-layer pair $(i, t)$. It follows that the conditional
centrality of node-layer pair $(i, t)$ is $\alpha_i$, independent of the layer $t$. Importantly, these
$\{\alpha_i\}$ values arise naturally from our perturbative expansion in the supra-centrality
framework, independently of the value of $\epsilon$. By contrast, recall that the marginal
node centralities (MNC) reflect averaging the joint centralities across time layers for
a specific choice of $\epsilon$. Accordingly, we hereafter refer to the entry $\alpha_i$ in the vector $\alpha$
as the time-averaged centrality of physical node $i$.
We can see from our above discussion that the base problem at $\epsilon = 0$ de-
couples into $N$ identical chains, where each chain corresponds to a physical node
coupled across the $T$ time layers. Because of the block-diagonal and repeated na-
ture of Eq. (3.2), determining its spectral properties is relatively straightforward:
we obtain them from the eigenvalues and eigenvectors of $A^{(\mathrm{chain})}$. The eigenval-
ues of $I \otimes A^{(\mathrm{chain})}$ are given by the eigenvalues of $A^{(\mathrm{chain})}$, where each eigenvalue
has a multiplicity of $N$ and a corresponding $N$-dimensional eigenspace spanned by
vectors based on the eigenvectors of $A^{(\mathrm{chain})}$ (with appended 0 values in ap-
propriate entries). This yields
\[
\lambda_0 = 2\cos\left(\frac{\pi}{T+1}\right) \,, \qquad v_0 = \sum_j \alpha_j P v_j \,, \tag{3.11}
\]
where $\{\alpha_i\}$ are constants that satisfy a constraint that $v_0$ has a magnitude of 1 (i.e.,
$\sum_i \alpha_i^2 = 1$). We defined $v_i$ just below Eq. (3.2).
To find the set $\{\alpha_i\}$ of unique constants that define $v_0$, we need a solvability
condition in the first-order terms. Using the fact that the null space of $\lambda_0 I - B$ is
$\mathrm{span}(P v_1, \ldots, P v_N)$, it follows that $(P v_i)^{\mathsf{T}} (\lambda_0 I - B)\, v_1 = 0$ for any physical node $i$,
and left-multiplying Eq. (3.10) by $(P v_i)^{\mathsf{T}}$ leads to
\[
v_i^{\mathsf{T}} P^{\mathsf{T}} G v_0 = \lambda_1\, v_i^{\mathsf{T}} P^{\mathsf{T}} v_0 \,. \tag{3.12}
\]
One derives expressions for higher-order corrections in a similar way. For example, the coefficient of
the first correction gives a first-mover score.
αi is the time-averaged centrality for entity i
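The zeroth-order calculation can be sketched numerically. A minimal pure-Python sketch (my illustration, not code from the paper): build $X^{(1)}$ from the per-layer centrality matrices $M^{(t)}$ following Eq. (3.15), then extract its dominant eigenvector $\alpha$ by power iteration. The function name and the toy two-layer matrices are assumptions.

```python
import math

def time_averaged_centrality(Ms, iters=200):
    """Given per-layer centrality matrices Ms = [M(1), ..., M(T)], build
    X(1)_ij = (1/eta_1) * sum_t M(t)_ij * sin^2(pi*t/(T+1))
    and return its dominant eigenvector alpha (the time-averaged centralities)."""
    T, N = len(Ms), len(Ms[0])
    eta1 = sum(math.sin(math.pi * t / (T + 1)) ** 2 for t in range(1, T + 1))
    X = [[sum(Ms[t - 1][i][j] * math.sin(math.pi * t / (T + 1)) ** 2
              for t in range(1, T + 1)) / eta1
          for j in range(N)] for i in range(N)]
    # power iteration for the dominant eigenvector, normalized so sum(alpha^2) = 1
    alpha = [1.0 / N] * N
    for _ in range(iters):
        w = [sum(X[i][j] * alpha[j] for j in range(N)) for i in range(N)]
        norm = sum(x * x for x in w) ** 0.5
        alpha = [x / norm for x in w]
    return alpha

# Toy example: two layers, two nodes (hypothetical centrality matrices)
alpha = time_averaged_centrality([[[1, 1], [0, 1]],
                                  [[1, 0], [1, 1]]])
```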
Math Departments: Best Authorities
Table 4.1
Top centralities and first-order movers for universities in the MGP [4].

Top Time-Averaged Centralities      Top First-Order Mover Scores
Rank  University   αi               Rank  University     mi
1     MIT          0.6685           1     MIT            688.62
2     Berkeley     0.2722           2     Berkeley       299.07
3     Stanford     0.2295           3     Princeton      248.72
4     Princeton    0.1803           4     Stanford       241.71
5     Illinois     0.1645           5     Georgia Tech   189.34
6     Cornell      0.1642           6     Maryland       186.65
7     Harvard      0.1628           7     Harvard        185.34
8     UW           0.1590           8     CUNY           182.59
9     Michigan     0.1521           9     Cornell        180.50
10    UCLA         0.1456           10    Yale           159.11
We extend our previous consideration of this data [71] by keeping the year that
each faculty member graduated with his/her Ph.D. degree. We thus construct a
multilayer network of the MGP Ph.D. exchange using elements Aijt that indicate a
Tie-Decay Centrality in Continuous Time
◦ W. Ahmad, MAP, & M. Beguerisse-Díaz [2018], arXiv:1805.00193
◦ It’s desirable to develop
methods that allow
consideration of continuous
time.
◦ Chopping up data into time
windows is a major issue.
◦ Very important for modeling
◦ Important: we now distinguish
between interactions and ties
Ties, Interactions, and Time-Dependent
Networks
◦ Interaction: an interaction between two agents is an event that takes
place at a specific point in time (or during a specific time interval)
◦ A phone call, text message, tweet, etc.
◦ This work: consider only instantaneous interactions
◦ Tie: a tie between two agents is a relationship between them
◦ It can have a weight to represent its strength
◦ Ties strengthen (or, more generally, change in strength) with repeated interactions,
but they deteriorate in the absence of interactions
Mathematical Setup
◦ n interacting agents
◦ B(t) = n x n time-dependent, real, non-negative matrix
◦ Entries bij(t) represent the connection strength between agents i and j at time t
◦ To construct a continuous-time temporal network of ties, we make two assumptions:
how ties decay in the absence of interactions and how interactions strengthen ties
Yields a Time-Dependent Network
◦ A(t) = instantaneous adjacency matrix (i.e. of interactions) at time t
◦ Technical point: B(t) is a solution of an ODE (not shown)
◦ E.g. we get delta functions from instantaneous interactions, and we know how to solve the ODE. One can
generalize the framework and then analyze a more complicated ODE.
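Because instantaneous interactions contribute delta functions to the ODE, the tie strengths admit a closed form: each interaction adds a unit impulse that then decays exponentially. A minimal sketch, assuming a hypothetical decay-rate parameter `alpha` and unit-strength impulses (the paper's exact parametrization may differ):

```python
import numpy as np

def tie_strengths(interactions, n, alpha, t):
    """Tie-strength matrix B(t) for exponentially decaying ties.

    Each interaction (time s, source i, target j) with s <= t adds a unit
    impulse to b_ij that subsequently decays as exp(-alpha * (t - s)).
    This is the closed-form solution of the tie-decay ODE for a stream of
    instantaneous interactions.
    """
    B = np.zeros((n, n))
    for s, i, j in interactions:
        if s <= t:
            B[i, j] += np.exp(-alpha * (t - s))
    return B

# Three interactions among n = 3 agents: agent 0 contacts agent 1 twice,
# and agent 1 contacts agent 2 once.
events = [(0.0, 0, 1), (1.0, 0, 1), (1.0, 1, 2)]
B = tie_strengths(events, n=3, alpha=0.5, t=2.0)
```

Note that repeated interactions accumulate (the two 0→1 events reinforce b₀₁), while pairs with no interactions keep a tie strength of zero, matching the interactions-vs-ties distinction above.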
Illustration
Tie-Decay Generalization of PageRank
◦ = Moore–Penrose pseudo-inverse
◦ c(t) = an n x 1 vector of ‘dangling nodes’ (which have 0 out-degree) at time t
◦ v = vector of teleportation probabilities (time-independent)
Updating the Transition Matrix P(t)
◦ No interactions → P + ∆P ≡ P(t + ∆t) = P(t)
◦ If there is a single new interaction during interval ∆t, we get the following expression for ∆P:
◦ Efficient computation of the time-dependent PageRank vector using power iteration: we use π(t)
as the initial value when computing π(t + ∆t)
◦ → Allows efficient computation with data streams (not just data sets)
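The warm-started power iteration above can be sketched as follows. This is a generic PageRank power iteration (not the paper's exact update for ∆P), assuming a row-stochastic transition matrix `P`, teleportation parameter `theta`, and teleportation vector `v`:

```python
import numpy as np

def pagerank(P, v, theta=0.85, pi0=None, tol=1e-10, max_iter=1000):
    """Power iteration for the PageRank vector pi of a row-stochastic
    transition matrix P with teleportation vector v.

    pi0 is an optional warm start; when P changes only slightly between
    t and t + dt (few new interactions), initializing with pi(t) makes
    computing pi(t + dt) converge in far fewer iterations.
    """
    n = P.shape[0]
    pi = np.full(n, 1.0 / n) if pi0 is None else pi0.copy()
    for _ in range(max_iter):
        new = theta * P.T @ pi + (1.0 - theta) * v
        if np.abs(new - pi).sum() < tol:
            return new
        pi = new
    return pi

# Toy example: two nodes that point at each other.
P = np.array([[0.0, 1.0], [1.0, 0.0]])
v = np.array([0.5, 0.5])
pi = pagerank(P, v)
```

With a stream of interactions, one repeatedly applies a small update to P and reruns this iteration from the previous π, which is what makes the approach suitable for data streams rather than only static data sets.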
Example: National Health Service (NHS)
Retweet Network
◦ Data set: 5 months of
tweeting activity about NHS
after the controversial Health
and Social Care Act of 2012
◦ We consider tweets in English with the term
‘NHS’ and examine the 10,000
most-active users
◦ Interactions are retweets
Conclusions
◦ Clearly, we need to develop more centrality measures, right?
◦ Right?
◦ More seriously: it’s important to generalize ideas that we know (and presumably love) to
time-dependent networks.
◦ This talk: time-dependent generalizations of eigenvector-based centrality measures
◦ Multilayer representation
◦ Continuous time using “tie-decay network” approach
◦ Distinguishes between interactions and ties
◦ Examining continuous time is important for both data and modeling considerations
◦ Ideas for tie-decay continuous-time networks
◦ Include duration of ties, general personalized PageRank and associated community-detection
methods, etc. (There is a ton of stuff to do.)
◦ Key message: Need more studies of networks in continuous time
Advertisement: New Journal:
SIAM Journal on Mathematics
of Data Science (SIMODS)
◦ Now accepting submissions!
◦ SIMODS publishes work that advances mathematical,
statistical, and computational methods in the context
of data and information sciences. We invite papers that
present significant advances in this context, including
applications to science, engineering, business, and
medicine.

Centrality in Time- Dependent Networks

  • 1. Centrality in Time- Dependent Networks Mason A. Porter (@masonporter) Department of Mathematics, UCLA
  • 2. Outline ◦ Introduction ◦ Motivation ◦ Centrality in Time-Independent Networks ◦ Centrality in Temporal Networks ◦ Eigenvector-Based Centralities in Multilayer Representations of Time-Dependent Networks ◦ “Tie-Decay” Networks in Continuous Time ◦ Example: Generalization of PageRank Centrality ◦ Conclusions
  • 3. The Top-5 Hobbies of Network Scientists ◦ 5. Citing papers based on preferential attachment (and/or possibly about preferential attachment) ◦ 4. Arguing about power laws ◦ 3. Community detection ◦ 2. Finding ways to get the Zachary Karate Club network into their talks ◦ 1. Developing new centrality measures
  • 4. A Couple of Longstanding Questions in the Study of Social Networks ◦ I. de Sola Pool & M. Kochen [1978/79], “Contacts and Influence”, Social Networks, 1: 5–51 (though a preprint for 2 decades) ◦ Inspired work of Milgram theory of human contact nets might help clarify them both. No such theory exists at present. Sociologists talk of social stratification; political scientists of influence. These quantitative concepts ought to lend themselves to a rigorous metric based upon the elementary social events of man-to-man contact. “Stratification” expresses the probability of two people in the same stratum meeting and the improbability of two people from dif- ferent strata meeting. Political access may be expressed as the probability that there exists an easy chain of contacts leading to the power holder. Yet such measures of stratification and influence as functions of contacts do not exist. What is it that we should like to know about human contact nets? -~-For any individual we should like to know how many other people he knows, i.c. his acquaintance volume. - For a popnfatiorl we want to know the distribution of acquaintance volumes, the mean and the range between the extremes. _ We want to know what kinds of people they are who have many con- tacts and whether those people are also the influentials. ,.- We want to know how the lines of contact are stratified; what is the structure of the network? If we know the answers to these questions about individuals and about the whole population, we can pose questions about the implications for paths between pairs of individuals. - How great is the probability that two persons chosen at random from the population will know each other? - How great is the chance that they will have a friend in common? - How great is the chance that the shortest chain between them requires two intermediaries; i.e., a friend of a friend? 
The mere existence of such a minimum chain does not mean, however, They run into people they know everywhere they go. The experience of casual contact and the practice of influence are not unrelated. A common theory of human contact nets might help clarify them both. No such theory exists at present. Sociologists talk of social stratification; political scientists of influence. These quantitative concepts ought to lend themselves to a rigorous metric based upon the elementary social events of man-to-man contact. “Stratification” expresses the probability of two people in the same stratum meeting and the improbability of two people from dif- ferent strata meeting. Political access may be expressed as the probability that there exists an easy chain of contacts leading to the power holder. Yet such measures of stratification and influence as functions of contacts do not exist. What is it that we should like to know about human contact nets? -~-For any individual we should like to know how many other people he knows, i.c. his acquaintance volume. - For a popnfatiorl we want to know the distribution of acquaintance volumes, the mean and the range between the extremes. _ We want to know what kinds of people they are who have many con- tacts and whether those people are also the influentials. ,.- We want to know how the lines of contact are stratified; what is the structure of the network? If we know the answers to these questions about individuals and about the whole population, we can pose questions about the implications for paths between pairs of individuals. - How great is the probability that two persons chosen at random from the population will know each other? - How great is the chance that they will have a friend in common? - How great is the chance that the shortest chain between them requires
  • 5. Some classical notions of centrality ◦ Degree ◦ Closeness centrality ◦ Betweenness centrality and its many variants ◦ Eigenvector-based ◦ Solutions of eigenvalue problems ◦ Eigenvector centrality ◦ PageRank ◦ Hubs and authorities ◦ … ◦ Katz centrality, communicability, etc. ◦ … ◦ …
  • 6. Example: Betweenness Centrality ◦ Which nodes (or edges, or perhaps other substructures) are on a lot of short paths between nodes? ◦ Example: shortest (“geodesic”) paths ◦ Geodesic node betweenness centrality is the number of shortest (“geodesic”) paths through node i divided by the total number of geodesic paths (common convention: i, j, m distinct): ◦ Similar formula for geodesic edge betweenness ◦ One can also define notions of betweenness based on ideas like random walks (or by restricting to particular paths in useful ways).
  • 7. Example: Eigenvector Centrality ◦ Leading eigenvector v of the adjacency matrix A. The entries of v, which has strictly positive entries by the Perron–Frobenius theorem, give the eigenvector centralities of the nodes.
  • 8. Example: PageRank ◦ Review article: D. F. Gleich, SIAM Review, 2015 B : adjacency matrix
  • 9. Example: Hubs and Authorities ◦ J. M. Kleinberg, Journal of the ACM, Vol. 46: 604–632 (1999) ◦ Intuition: A Web page (node) is a good hub if it has many hyperlinks (out-edges) to important nodes, and a node is a good authority if many important nodes have hyperlinks to it (in-edges) ◦ Imagine a random walker surfing the Web. It should spend a lot of time on important Web pages. Steady-state populations of an ensemble of walkers satisfy the eigenvalue problem: ◦ x = aAy ; y = bATx è ATAy = λy & AATx = λx, where λ = 1/(ab) ◦ Leading eigenvalue λ1 (strictly positive) gives strictly positive authority vector x and hub vector y (leading eigenvectors) ◦ Node i has hub centrality xi and authority centrality yi
  • 10. Example application of Hubs and Authorities: Measuring the Quality of Programs in the Mathematical Sciences ◦ Apply the same idea to mathematics departments based on the flow of Ph.D. students ◦ S. A. Meyer, P. J. Mucha, & MAP, “Mathematical genealogy and department prestige”, Chaos, Vol. 21: 041104 (2011) ◦ One-page paper in Gallery of Nonlinear Images ◦ Data from Mathematics Genealogy Project (MGP)
  • 11. MGP with hubs and authorities • We consider MPG data in the US from 1973–2010 (data from 10/09) • Example: Peter Mucha earned a PhD from Princeton and later supervised students at Georgia Tech and UNC Chapel Hill. • è Directed edge of unit weight from Princeton to UNC (and also from Princeton to Georgia Tech) • (Note: several additional students not listed) • A university is a good authority if it hires students from good hubs, and a university is good hub if its students are hired by good authorities. • Caveats • Our measurement has a time delay (only have the PrincetonèUNC edge after Peter supervises a PhD student there) • Hubs and authorities should change in time
  • 12. Geographically-Inspired Visualization Mathematical genealogy and department prestige Sean A. Myers,1 Peter J. Mucha,1 and Mason A. Porter2 1 Department of Mathematics, University of North Carolina, Chapel Hill, North Carolina 27599, USA 2 Mathematical Institute, University of Oxford, OX1 3LB, UK FIG. 1. (Color) Visualizations of a mathematics genealogy network. CHAOS 21, 041104 (2011) Hubs: node size Authorities: node color
  • 13. How do our rankings do? artment son A. Porter2 arolina, 3LB, UK ber 2011) (http://www. 50 000 scholars related fields. s), graduation ees. The MGP be used to trace ourant, Hilbert, s Gauss, Euler, We use a “geographically inspired” layout to balance node locations and node overlap. A Kamada-Kawai visualization4 or) Visualizations of a mathematics genealogy network. FIG. 2. (Color) Rankings versus authority scores.
  • 14. Generalizing Centralities to Time- Dependent Networks ◦ There have been numerous efforts to generalize centrality measures to time-dependent networks using various approaches. ◦ For some discussions, see the review articles on temporal networks by P. Holme & J. Saramaki (2012) and P. Holme (2015) ◦ A very small selection of examples (using references cited in Taylor et al. 2017) ◦ Calculate centrality from networks constructed from independent time windows: D. Braha & Y. Bar-Yam, 2006 (and many others) ◦ Calculate centrality in a time-independent network constructed from time-respecting paths in a temporal network: G. Kossinets, J. M. Kleinberg, and D. J. Watts (2008) ◦ Calculate (for PageRank) in a way that counteracts age bias: D. Walker, X. Xie, K.-K. Yan, and S. Maslov (2007) ◦ Generalizations, from numerous perspective, of many common centrality measures: betweenness, eigenvector, PageRank, Katz, communicability, etc. [See Taylor et al. for many references.] ◦ Continuous-time approach for Katz centrality (where the generalization was devised specifically for Katz centrality): P. Grindrod and D. J. Higham, A Dynamical Systems View of Network Centrality, Proc. Royal Soc. A (2014), Vol. 470, 20130835 ◦ Helped inspire our work in Ahmad et al., where our goal was to come up with a flexible formulation for studying temporal networks in continuous time, using PageRank centrality as an example.
  • 15. Using a Multilayer Framework for Eigenvector- Based Centralities for Time-Dependent Networks ◦ D. Taylor, S. A. Myers, A. Clauset, MAP, & P. J. Mucha, “Eigenvector-based Centrality Measures for Temporal Networks”, Multiscale Modeling and Simulation: A SIAM Interdisciplinary Journal, 2017 ◦ Uses a multilayer representation of time- dependent networks to study important “central” entities and their dynamics over time M. Kivelä et al., 2014ZKCC Network
  • 16. Supra-adjacency Matrix (‘flattened’ linear-algebraic representation) • Schematic from M. Bazzi, MAP, S. Williams, M. McDonald, D. J. Fenn, & S. D. Howison [2016] Multiscale Modeling and Simulation: A SIAM Interdisciplinary Journal, 14(1): 1–41 13 Layer 1 11 21 31 Layer 2 12 22 32 Layer 3 13 23 33 ! 2 6 6 6 6 6 6 6 6 6 6 6 6 4 0 1 1 ! 0 0 0 0 0 1 0 0 0 ! 0 0 0 0 1 0 0 0 0 ! 0 0 0 ! 0 0 0 1 1 ! 0 0 0 ! 0 1 0 1 0 ! 0 0 0 ! 1 1 0 0 0 ! 0 0 0 ! 0 0 0 1 0 0 0 0 0 ! 0 1 0 1 0 0 0 0 0 ! 0 1 0 3 7 7 7 7 7 7 7 7 7 7 7 7 5 Fig. 3.1. Example of (left) a multilayer network with unweighted intra-layer connections (solid lines) and uniformly weighted inter-layer connections (dashed curves) and (right) its corresponding adjacency matrix. (The adjacency matrix that corresponds to a multilayer network is sometimes called a “supra-adjacency matrix” in the network-science literature [39].) or an adjacency matrix to represent a multilayer network.) The generalization in [49] consists of applying the function in (2.16) to the N|T |-node multilayer network:
  • 17. Multilayer Construction to Examine Temporal Eigenvector-Based Centralities ◦ Math department rankings change in time, so we want to consider centrality measures that change in time ◦ E.g., via a multilayer representation of a temporal network ◦ "Multislice" network with adjacency tensor elements Aijt ◦ Directed intralayer edge from university i to university j at time t when a person received a PhD from j at time t and later advised a student at i (multi-edges give weights) ◦ E.g., Peter Mucha (PhD from Princeton in 1998) yields a Georgia Tech → Princeton edge and a UNC Chapel Hill → Princeton edge for t = 1998 ◦ Use a multilayer network with "diagonal" and ordinal interlayer coupling ◦ 231 US universities, T = 65 time layers (1946–2010)
  • 18. Construct a Supra-centrality Matrix ◦ E.g., M(t) = A(t)[A(t)]^T to examine hubs and authorities ◦ A different choice of M(t) gives a temporal generalization of a different eigenvector-based centrality measure ◦ ε = 1/ω; t indexes the layers ◦ A singular perturbation from the ε → 0 (strong-coupling) limit yields an averaged (time-independent) centrality, then a "first mover" perturbation term, and then higher-order corrections ◦ Cleaned excerpt from Taylor et al. (2017), Sec. 2.2 (Inter-Layer Coupling of Centrality Matrices): To avoid the problems that arise from ignoring the distinction between inter-layer edges and intra-layer edges, we define a somewhat more nuanced generalization of eigenvector-based centralities. To preserve the special role of inter-layer edges, we directly couple the matrices that define the eigenvector-based centrality measure within each temporal layer (the ordinary adjacency matrices for eigenvector centrality). That is, one defines an eigenvector-based centrality in terms of some matrix M that is a function of the adjacency matrix A. For example, hub and authority scores are the leading eigenvectors of the matrices AA^T and A^T A, respectively (using the convention that entries A_ij indicate i → j edges). Letting M^(t) denote the centrality matrix for time layer t, we couple these centrality matrices with inter-layer couplings of strength ω to form the supra-centrality matrix

M(ε) =
[ εM^(1)   I        0       ⋯ ]
[ I        εM^(2)   I       ⋱ ]
[ 0        I        εM^(3)  ⋱ ]
[ ⋮        ⋱        ⋱       ⋱ ] .

For notational convenience, we define the supra-centrality matrix using the factor ε = 1/ω rather than the coupling weight ω (i.e., we rescale by a factor ε). We study the dominant eigenvector v(ε), which corresponds to the largest eigenvalue λ_max(ε) [i.e., M(ε)v(ε) = λ_max(ε)v(ε)]. The entries of the dominant eigenvector give the centralities of the node-layer pairs (i, t); each entry represents the centrality of physical node i at time t. We refer to this type of centrality as a "joint node-layer centrality," as it reflects the centrality of both the physical node i and the time layer t. One can interpret the parameter ε > 0 as a tuning parameter that controls how strongly a given physical node's centrality is coupled to itself between neighboring time layers.
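For small examples, one can assemble the supra-centrality matrix and compute its dominant eigenvector directly. A minimal sketch, assuming each layer's centrality matrix M^(t) is symmetric (true, e.g., for the hub matrix A^(t)[A^(t)]^T), so that a symmetric eigensolver applies; the function names are my own:

```python
import numpy as np

def supra_centrality_matrix(centrality_mats, eps):
    """Build M(eps): eps*M^(t) in the diagonal blocks and identity blocks
    coupling consecutive time layers (nearest-neighbor, ordinal coupling)."""
    T = len(centrality_mats)
    N = centrality_mats[0].shape[0]
    M = np.zeros((N * T, N * T))
    for t, Mt in enumerate(centrality_mats):
        M[t * N:(t + 1) * N, t * N:(t + 1) * N] = eps * Mt
        if t + 1 < T:  # identity coupling to the next layer in time
            M[t * N:(t + 1) * N, (t + 1) * N:(t + 2) * N] = np.eye(N)
            M[(t + 1) * N:(t + 2) * N, t * N:(t + 1) * N] = np.eye(N)
    return M

def joint_centralities(M):
    """Dominant eigenvector of M(eps); its entries are the joint
    node-layer centralities. Assumes M is symmetric so eigh applies."""
    vals, vecs = np.linalg.eigh(M)
    v = np.abs(vecs[:, np.argmax(vals)])  # Perron vector is nonnegative
    return v / v.sum()

# Tiny example: 2 physical nodes, 3 layers.
mats = [np.array([[0., 1.], [1., 0.]])] * 3
v = joint_centralities(supra_centrality_matrix(mats, eps=0.5))
print(v.shape)  # (6,)
```

Entry (i, t) of the returned vector is the joint node-layer centrality of physical node i in time layer t, normalized to sum to 1.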
  • 19. Singular Perturbation Expansion ◦ Cleaned excerpts from Taylor et al. (2017):

◦ Supra-adjacency matrix [Eq. (2.2)]:

A = diag[A^(1), …, A^(T)] + A^(chain) ⊗ ωI ,

where ⊗ denotes the Kronecker product and A^(chain) is the T × T adjacency matrix of an undirected "bucket brigade" (or "chain") network whose T nodes are each adjacent to their nearest neighbors along an undirected chain: A^(chain)_ij = 1 for j = i ± 1 and A^(chain)_ij = 0 otherwise. The identity inter-layer edges of weight ω ≥ 0 weight the "persistence" of a physical node through time by enforcing an identification with itself at consecutive times; we place them only between node-layer pairs (i, t) and (i, t ± 1) that are adjacent in time. (One could consider ω < 0 to drive negative coupling between layers, but we do not consider such values for our applications.) One can construe ω as a parameter to tune interactions between network layers: in the limit ω → 0+, the layers become uncoupled; in the limit ω → ∞, the layers are so strongly coupled that inter-layer weights dominate the intra-layer connections. The case of inter-layer edges only between different instances of the same physical node is sometimes called "diagonal coupling," and using a constant ω across all such inter-layer edges is sometimes known as "layer coupling." Although one can choose inter-layer coupling matrices other than A^(chain) (and much of the approach can be generalized to other choices), we restrict our attention to nearest-neighbor coupling of temporal layers.

◦ It is tempting to directly apply a standard eigenvector-based centrality formulation to the supra-adjacency matrix A by treating it just like any other adjacency matrix despite its structure. However, such an approach neglects the fundamental distinction between intra-layer and inter-layer edges: it treats the inter-layer couplings (i.e., identity arcs) just like any other edge. In general, one needs to be careful when studying a temporal network using the supra-adjacency formalism, because many basic network properties, some of which carry strong implications about a static network (e.g., its spectra and connectedness properties), do not naturally carry over without modification. This issue was discussed for multilayer networks more generally in Refs. [24, 27, 70] of Taylor et al.

◦ Singularity at infinite inter-layer coupling: to understand the dominant eigenspace of M(ε) in the limit ε → 0+, first study the system for ε = 0 by writing

M(ε) = B + εG ,  where B = A^(chain) ⊗ I and G = diag[M^(1), …, M^(T)] ,

so that M(0) = A^(chain) ⊗ I. Introduce the NT × NT permutation matrix P with entries P_kl = 1 for l = ⌈k/N⌉ + T[(k − 1) mod N] and P_kl = 0 otherwise, where ⌈·⌉ is the ceiling operator; P permutes node-layer indices between time-major and node-major orderings, so A^(chain) ⊗ I = P(I ⊗ A^(chain))P^T. Because P is a unitary operator, one can understand the spectral properties of M(0) via those of the block-diagonal matrix I ⊗ A^(chain) = diag[A^(chain), …, A^(chain)], which decouples into N identical chains, one for each physical node coupled across the T time layers.

◦ Imposing the boundary conditions u_0 = u_{T+1} = 0 and the ansatz u_t ∝ sin(nπt/(T + 1)) for n ∈ {1, …, T} yields the spectrum of A^(chain):

λ^(chain)_n = 2 cos(nπ/(T + 1)) ,
u^(chain)_n = κ_n^{−1/2} [sin(nπ/(T + 1)), sin(2nπ/(T + 1)), …, sin(Tnπ/(T + 1))]^T ,

with normalization constant κ_n = Σ_{t=1}^T sin²(nπt/(T + 1)). Setting n = 1 gives the dominant eigenvalue, so the dominant eigenvalue of both I ⊗ A^(chain) and A^(chain) ⊗ I is λ_max = 2 cos[π/(T + 1)]. The N dominant eigenvectors of M(0) have the general form Σ_j α_j P ṽ_j (where ṽ_j places u^(chain)_1 in the block of physical node j and zeros elsewhere), and the constants {α_i} must satisfy Σ_i α_i² = 1 so that the vector is normalized.

◦ Zeroth- and first-order expansion: in the dominant eigenvalue equation

λ_max(ε) v(ε) = M(ε) v(ε) = B v(ε) + εG v(ε) ,

expand λ_max(ε) = λ_0 + ελ_1 + ⋯ and v(ε) = v_0 + εv_1 + ⋯ (subscripts indicate orders in ε) and collect terms order by order to develop consistent solutions. In the limit ε → 0+, the conditional node-layer centralities of a given physical node i become constant across time; these values are the physical nodes' time-averaged centralities, and the first-order correction yields first-order-mover scores. Higher-order expansions appear in an appendix of Taylor et al.
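The chain spectrum that underlies the expansion (eigenvalues 2 cos(nπ/(T + 1)) with sinusoidal eigenvectors) can be checked numerically in a few lines; the script below is my own verification sketch:

```python
import numpy as np

# Chain ("bucket brigade") adjacency matrix for T time layers.
T = 6
chain = np.diag(np.ones(T - 1), 1) + np.diag(np.ones(T - 1), -1)

# Analytical spectrum: lambda_n = 2*cos(n*pi/(T+1)), n = 1..T, decreasing in n.
analytical = 2 * np.cos(np.arange(1, T + 1) * np.pi / (T + 1))
vals = np.sort(np.linalg.eigvalsh(chain))[::-1]
assert np.allclose(vals, analytical)

# Dominant eigenvector (n = 1) is proportional to sin(pi*t/(T+1)), t = 1..T.
v = np.linalg.eigh(chain)[1][:, -1]          # eigh sorts eigenvalues ascending
u = np.sin(np.arange(1, T + 1) * np.pi / (T + 1))
u /= np.linalg.norm(u)
assert np.allclose(np.abs(v), u)
print("chain spectrum matches 2*cos(n*pi/(T+1)) with sin eigenvectors")
```

The `np.abs` call only fixes the arbitrary overall sign that the eigensolver may return.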
  • 20. 0th-Order Expansion and Time-Averaged Centrality ◦ Cleaned excerpt from Taylor et al. (2017):

◦ Zeroth order gives

λ_0 = 2 cos[π/(T + 1)] ,  v_0 = Σ_j α_j P ṽ_j ,

where the constants {α_i} satisfy Σ_i α_i² = 1 so that v_0 has a magnitude of 1. To find the unique set {α_i} that defines v_0, one needs a solvability condition from the first-order terms. Because the null space of λ_0 I − B is span(P ṽ_1, …, P ṽ_N), it follows that (P ṽ_i)^T (λ_0 I − B) = 0 for any physical node i, and left-multiplying the first-order equation by (P ṽ_i)^T leads to

ṽ_i^T P^T G v_0 = λ_1 ṽ_i^T P^T v_0 .

Using the solution for v_0 and the facts that P^T P = PP^T = I and ṽ_i^T ṽ_j = δ_ij (the Kronecker delta), we obtain

Σ_j α_j ṽ_i^T P^T G P ṽ_j = λ_1 α_i .

Letting α = [α_1, …, α_N]^T, this corresponds to an N-dimensional eigenvalue equation,

X^(1) α = λ_1 α ,

where the matrix X^(1) has elements

X^(1)_ij = ṽ_i^T P^T G P ṽ_j = κ_1^{−1} Σ_t M^(t)_ij sin²(πt/(T + 1)) ,

and κ_1 = Σ_{t=1}^T sin²(πt/(T + 1)) is the normalization constant for the dominant eigenvector u^(chain) (n = 1). The assumption that M(ε) is nonnegative and irreducible for any ε > 0 ensures that X^(1) is also nonnegative and irreducible. By the Perron–Frobenius theorem for nonnegative matrices, the largest eigenvalue λ_1 of X^(1) has a multiplicity of one, and its eigenvector α is unique with nonnegative entries. One normalizes the solution α by Σ_i α_i² = 1 and substitutes it into the expression for v_0 to obtain the zeroth-order term.

◦ Recall that v_0 is the limit of the dominant eigenvector of M(ε) as ε → 0+ and that it gives the joint node-layer centralities in that limit. By inspection, the elements of v_0 are α_i sin(πt/(T + 1)) for node-layer pair (i, t), so the conditional centrality of node-layer pair (i, t) is α_i, independent of the layer t. Importantly, these {α_i} values arise naturally from the perturbative expansion in the supra-centrality framework, independently of the value of ε; by contrast, the marginal node centralities reflect averaging the joint centralities across time layers for a specific choice of ε.

◦ One derives expressions for higher-order corrections in a similar way. For example, the coefficient of the first correction gives a first-order-mover score. ◦ α_i is the time-averaged centrality for entity i.
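The zeroth-order result reduces to an N × N dominant eigenvalue problem, which is cheap to solve directly. A minimal sketch of X^(1) and α (the function name is my own):

```python
import numpy as np

def time_averaged_centralities(centrality_mats):
    """Time-averaged centralities alpha from the strong-coupling limit:
    X^(1)_ij = kappa_1^{-1} * sum_t M^(t)_ij * sin^2(pi*t/(T+1)),
    with alpha the dominant eigenvector of X^(1), normalized so that
    sum_i alpha_i^2 = 1."""
    T = len(centrality_mats)
    weights = np.sin(np.pi * np.arange(1, T + 1) / (T + 1)) ** 2
    X = sum(w * M for w, M in zip(weights, centrality_mats)) / weights.sum()
    vals, vecs = np.linalg.eig(X)
    # Perron vector of a nonnegative irreducible X is nonnegative; take abs
    # to fix the eigensolver's arbitrary sign.
    alpha = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return alpha / np.linalg.norm(alpha)

# If every layer has the same symmetric centrality matrix, the weighted
# average is that matrix and alpha is its dominant eigenvector.
mats = [np.array([[0., 1.], [1., 0.]])] * 5
alpha = time_averaged_centralities(mats)
print(alpha)  # both entries equal, by symmetry
```

The sin² weights emphasize the middle layers, which is a direct consequence of the chain coupling rather than a modeling choice.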
  • 21.
  • 22. Math Departments: Best Authorities ◦ Table 4.1 of Taylor et al.: top centralities and first-order movers for universities in the Mathematics Genealogy Project (MGP).

Top Time-Averaged Centralities (rank, university, α_i):
1. MIT, 0.6685
2. Berkeley, 0.2722
3. Stanford, 0.2295
4. Princeton, 0.1803
5. Illinois, 0.1645
6. Cornell, 0.1642
7. Harvard, 0.1628
8. UW, 0.1590
9. Michigan, 0.1521
10. UCLA, 0.1456

Top First-Order Mover Scores (rank, university, m_i):
1. MIT, 688.62
2. Berkeley, 299.07
3. Princeton, 248.72
4. Stanford, 241.71
5. Georgia Tech, 189.34
6. Maryland, 186.65
7. Harvard, 185.34
8. CUNY, 182.59
9. Cornell, 180.50
10. Yale, 159.11

◦ From Taylor et al.: "We extend our previous consideration of this data by keeping the year that each faculty member graduated with his/her Ph.D. degree. We thus construct a multilayer network of the MGP Ph.D. exchange using elements Aijt."
  • 23. Tie-Decay Centrality in Continuous Time ◦ W. Ahmad, MAP, & M. Beguerisse-Díaz [2018], arXiv: 1805.00193 ◦ It’s desirable to develop methods that allow consideration of continuous time. ◦ Chopping up data into time windows is a major issue. ◦ Very important for modeling ◦ Important: we now distinguish between interactions and ties
  • 24. Ties, Interactions, and Time-Dependent Networks ◦ Interaction: an interaction between two agents is an event that takes place at a specific point in time (or during a specific time interval) ◦ A phone call, text message, tweet, etc. ◦ This work: consider only instantaneous interactions ◦ Tie: a tie between two agents is a relationship between them ◦ It can have a weight to represent its strength ◦ Ties strengthen (or, more generally, change in strength) with repeated interactions, but they deteriorate in their absence
  • 25. Mathematical Setup ◦ n interacting agents ◦ B(t) = n × n time-dependent, real, non-negative matrix ◦ Entries bij(t) represent the connection strength between agents i and j at time t ◦ To construct a continuous-time temporal network of ties, we make two assumptions: how ties decay in the absence of interactions and how interactions strengthen them
  • 26. Yields a Time-Dependent Network ◦ A(t) = instantaneous adjacency matrix (i.e. of interactions) at time t ◦ Technical point: B(t) is a solution of an ODE (not shown) ◦ E.g. we get delta functions from instantaneous interactions, and we know how to solve the ODE. One can generalize the framework and then analyze a more complicated ODE.
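The tie-decay dynamics described on these slides (exponential decay between instantaneous interactions, with jumps when interactions occur) can be illustrated as follows. This is a sketch of the idea, not the paper's exact formulation: the decay rate `lam` and the unit increment per interaction are assumptions for illustration.

```python
import numpy as np

def evolve_ties(B, interactions, dt, lam):
    """Advance the tie-strength matrix B(t) by dt: exponential decay at
    rate lam (the ODE dB/dt = -lam*B between interactions), then add the
    instantaneous interactions occurring at the end of the interval.
    Each interaction (i, j) bumps b_ij by 1 (illustrative assumption)."""
    B = np.exp(-lam * dt) * B
    for i, j in interactions:
        B[i, j] += 1.0
    return B

# A tie created at t = 0 halves in strength after dt = ln(2)/lam.
lam = 0.1
B = np.zeros((3, 3))
B = evolve_ties(B, [(0, 1)], dt=0.0, lam=lam)        # interaction at t = 0
B = evolve_ties(B, [], dt=np.log(2) / lam, lam=lam)  # decay, no interactions
print(B[0, 1])  # ~0.5
```

Because the decay is exponential, the update over any interval only needs the current B(t) and the interactions in that interval, which is what makes the continuous-time (and streaming) treatment tractable.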
  • 28. Tie-Decay Generalization of PageRank ◦ † = Moore–Penrose pseudo-inverse ◦ c(t) = an n x 1 vector of 'dangling nodes' (which have 0 out-degree) at time t ◦ v = vector of teleportation probabilities (time-independent)
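A hedged sketch of PageRank computed from the instantaneous tie matrix B(t), with dangling-node rows replaced by the teleportation distribution v. This is a standard PageRank formulation in the spirit of the slide, not the paper's exact expression (which writes the normalization with a Moore–Penrose pseudo-inverse); the function name and example matrices are my own.

```python
import numpy as np

def tie_decay_pagerank(B, v, alpha=0.85, tol=1e-12):
    """PageRank on a nonnegative tie-strength matrix B(t).

    Row-normalize B to get the transition matrix P(t); rows of dangling
    nodes (zero out-strength, flagged by the indicator vector c) are
    replaced by the teleportation distribution v."""
    n = B.shape[0]
    s = B.sum(axis=1)
    c = (s == 0).astype(float)  # indicator of dangling nodes
    P = np.divide(B, s[:, None], out=np.zeros_like(B), where=s[:, None] > 0)
    P = P + np.outer(c, v)      # dangling rows teleport according to v
    G = alpha * P + (1 - alpha) * np.outer(np.ones(n), v)
    pi = np.full(n, 1.0 / n)
    for _ in range(10_000):     # power iteration
        new = pi @ G
        if np.abs(new - pi).sum() < tol:
            return new
        pi = new
    return pi

B = np.array([[0., 2., 0.], [1., 0., 1.], [0., 0., 0.]])  # node 2 is dangling
v = np.full(3, 1.0 / 3)
pi = tie_decay_pagerank(B, v)
print(pi.sum())  # sums to 1 (up to floating-point error)
```

Because B(t) changes continuously under tie decay, π(t) is defined at every instant t rather than only at window boundaries.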
  • 29. Updating the Transition Matrix P(t) ◦ No interactions → P + ∆P ≡ P(t + ∆t) = P(t) ◦ If there is a single new interaction during the interval ∆t, we get an explicit expression for ∆P [equation shown on slide] ◦ Efficient computation of the time-dependent PageRank vector using power iteration: we use π(t) as the initial value to obtain π(t + ∆t) ◦ → Allows efficient computation with data streams (not just data sets)
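The warm-start idea on this slide can be demonstrated with synthetic matrices (everything below is illustrative, not the NHS data): after a small update ∆P from one new interaction, power iteration started from π(t) typically reaches π(t + ∆t) in far fewer iterations than a cold start from the uniform vector.

```python
import numpy as np

def pagerank_power(G, pi0, tol=1e-12, max_iter=10_000):
    """Power iteration for the stationary vector of a Google matrix G,
    started from pi0; returns (pi, number of iterations used)."""
    pi = pi0
    for k in range(1, max_iter + 1):
        new = pi @ G
        if np.abs(new - pi).sum() < tol:
            return new, k
        pi = new
    return pi, max_iter

rng = np.random.default_rng(0)
n = 50
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
G = 0.85 * P + 0.15 / n            # uniform teleportation
pi_cold, k_cold = pagerank_power(G, np.full(n, 1.0 / n))

P2 = P.copy()
P2[0, 1] += 0.1                    # one new interaction updates one row
P2[0] /= P2[0].sum()
G2 = 0.85 * P2 + 0.15 / n
pi_warm, k_warm = pagerank_power(G2, pi_cold)  # warm start from pi(t)
print(k_warm, k_cold)
```

Since the update touches a single row, π(t) is already close to π(t + ∆t) and the warm-started iteration begins with a much smaller error, which is what makes the streaming computation cheap.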
  • 30. Example: National Health Service (NHS) Retweet Network ◦ Data set: 5 months of tweeting activity about the NHS after the controversial Health and Social Care Act of 2012 ◦ Tweets in English with the term 'NHS'; we look at the 10,000 most-active users ◦ Interactions are retweets
  • 31.
  • 32. Conclusions ◦ Clearly, we need to develop more centrality measures, right? ◦ Right? ◦ More seriously: it’s important to generalize ideas that we know (and presumably love) to time-dependent networks. ◦ This talk: time-dependent generalizations of eigenvector-based centrality measures ◦ Multilayer representation ◦ Continuous time using “tie-decay network” approach ◦ Distinguishes between interactions and ties ◦ Examining continuous time is important for both data and modeling considerations ◦ Ideas for tie-decay continuous-time networks ◦ Include duration of ties, general personalized PageRank and associated community-detection methods, etc. (There is a ton of stuff to do.) ◦ Key message: Need more studies of networks in continuous time
  • 33. Advertisement: New Journal: SIAM Journal on Mathematics of Data Science (SIMODS) ◦ Now accepting submissions! ◦ SIMODS publishes work that advances mathematical, statistical, and computational methods in the context of data and information sciences. We invite papers that present significant advances in this context, including applications to science, engineering, business, and medicine.