
Ds15 minitute-v2



These are slides for my tutorial talk on network dynamics. (The colors are fine in the downloaded version, though there seem to be color issues if you view the slides directly in slideshare.)



  1. 1. Mason A. Porter University of Oxford @masonporter (go to Twitter for link to slides on slideshare)
  2. 2. •  MAP & J. P. Gleeson, “Dynamical Systems on Networks: A Tutorial”, arXiv:1403.7663 (2015) •  Also see the section where we point to several other survey, review, and tutorial articles. •  P. Holme & J. Saramäki, “Temporal Networks”, Phys. Rep., Vol. 519, No. 3: 97–125 (2012). •  M. Kivelä, A. Arenas, M. Barthelemy, J. P. Gleeson, Y. Moreno, & MAP, “Multilayer Networks”, J. Cplx. Net., Vol. 2, No. 3: 203–271 (2014). •  Accompanying tutorial slides that go through this article: tutorialnetsci2014slightlyupdated
  3. 3. •  Introduction •  Dynamical Systems on Networks •  Example: Watts threshold model and the helpfulness of random-graph ensembles •  Temporal Networks •  Dynamics on temporal networks •  Random walks on temporal networks •  Adaptive voter model •  Multilayer representation of temporal networks •  Eigenvector-based centralities on temporal networks •  Community structure in temporal networks •  Multilayer Networks •  Multiplex networks, networks of networks, and all that jazz •  Dynamical systems on multilayer networks •  Conclusions
  4. 4. •  From structure to dynamics, but also “from structure to function” •  Networks are ubiquitous, and numerous different types of dynamics occur on networks. •  The structure of networks can have a major influence on dynamical processes that occur on networks. •  What should we be measuring to study network structure and dynamical systems on networks? •  Random-graph ensembles are very useful for helping to achieve a better understanding of dynamical processes. •  Mean-field theories and their generalizations yield lower-dimensional systems. When are such lower-dimensional systems good approximations for dynamical processes on networks?
  5. 5. •  Has been prominent in all recent Snowbird dynamics conferences •  Many sessions in the NS15 workshop (including Bassett’s IP) •  Invited talks at DS15 with some relation to network dynamics: •  Motter (IP1, this morning): control of dynamics on networks •  Bertozzi (IP2): mathematics of crime •  Ermentrout (IP4): large-scale activity in the brain •  Gore (IP5): cooperation, cheating, and collapse in biological populations •  Moehlis (IP6): brain control •  Many minisymposia. A subset: MS5, MS25, MS27, MS30 (all at the same time), MS51, MS66, MS68, MS74, MS76, MS84, MS88, MS89, MS102, MS115, MS117, MS127
  6. 6. •  States of the nodes (or edges) change much faster than the network structure ⇒ dynamics on static networks •  Structure of the network changes much faster than the states of nodes (or edges) ⇒ dynamics of networks (“temporal networks”) •  Comparable timescales ⇒ dynamics of the network coupled to dynamics on the network •  Also: maybe the network structure changes so fast that only some properties (but not the “microscopic” connections between individual entities) can be measured reliably, so consider a dynamical system on a random-graph ensemble that preserves those properties ⇒ also dynamics on networks (but mean properties over an ensemble of graphs)
  7. 7. How does network structure affect dynamics (and vice versa)?
  8. 8. •  Toy (percolation-like) model for social influence •  D. J. Watts, PNAS, 2002 •  Each node j has a (frozen) threshold Rj drawn from some distribution and can be in one of two states (0 or 1) •  Choose a seed fraction ρ(0) of nodes (e.g., uniformly at random) to initially be in state 1 (“infected”, “active”, etc.) •  Updating can be either: •  Synchronous: discrete time; update all nodes at once •  Asynchronous: “continuous” time; update some fraction of nodes in time step dt •  Update rule: Compare the fraction of infected neighbors (m/kj) to Rj. Node j becomes infected if m/kj ≥ Rj. Otherwise no change. •  Variant (Centola–Macy): Compare the number of active neighbors (m) rather than the fraction of active neighbors •  Monotonicity: Nodes in state 1 stay there forever.
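As a concrete aside, the synchronous update rule above fits in a few lines of Python. This is a minimal sketch; the graph representation and the function name are ours, not from the tutorial.

```python
def watts_threshold(adj, thresholds, seeds):
    """Synchronous updating of the Watts threshold model sketched above.

    adj: dict node -> list of neighbors; thresholds: dict node -> R_j;
    seeds: iterable of initially infected (state-1) nodes.
    Returns the final set of infected nodes.
    """
    infected = set(seeds)
    while True:
        # Synchronous step: evaluate every node against the current state.
        newly = {j for j, nbrs in adj.items()
                 if j not in infected and nbrs
                 and sum(v in infected for v in nbrs) / len(nbrs)
                     >= thresholds[j]}
        if not newly:  # monotone dynamics: stop when nothing changes
            return infected
        infected |= newly
```

For example, on a 4-node path with all thresholds equal to 0.5, a single seed at one end cascades through the whole path.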
  9. 9. Response functions for many types of dynamical processes: J. P. Gleeson, PRX, Vol. 3, 021004 (2013)
  10. 10. (see our tutorial article and references therein) Response functions for many types of dynamical processes: J. P. Gleeson, PRX, Vol. 3, 021004 (2013)
  11. 11. S. Melnik, J. A. Ward, J. P. Gleeson, & MAP, “Multi-Stage Complex Contagions”, Chaos, Vol. 23, No. 1: 013124 (2013)
  12. 12. Passive (S0): no influence. Active (S1): influences neighbors. Hyper-active (S2): influences neighbors, but with bonus influence compared to Active nodes. Note: S2 ⊆ S1 but Si ⊈ S0 for i > 0
  13. 13. •  Peer pressure = total influence experienced by a degree-k node •  P = (m1 + βm2)/k •  m1 = number of neighbors in S1 •  m2 = number of neighbors in S2 •  β = bonus influence (β = 0 ⇒ only S0 and S1; no S2 state) •  Update step: Node j becomes Si-active if Pj ≥ Rj,i •  2 different thresholds for each node, chosen from some distributions
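A one-step sketch of this multi-stage update rule; the function name and the convention of returning the highest reached state are ours, for illustration.

```python
def multistage_update(m1, m2, k, beta, R1, R2):
    """One update of the multi-stage threshold rule sketched above.

    m1, m2: numbers of neighbors in states S1 and S2; k: node degree;
    beta: bonus influence of S2 neighbors; R1 <= R2: the node's two
    thresholds. Returns the highest state i with P >= R_i (0, 1, or 2),
    consistent with S2-active nodes also being S1-active.
    """
    P = (m1 + beta * m2) / k  # peer pressure
    if P >= R2:
        return 2
    if P >= R1:
        return 1
    return 0
```

For instance, with k = 10, β = 1, and thresholds (0.2, 0.8), two S1 neighbors give P = 0.2 and push the node to S1, while four S1 and four S2 neighbors give P = 0.8 and push it to S2.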
  14. 14. •  Recall: one can use a convenient random-graph ensemble to gain a better understanding of a dynamical process on a network •  (z1,z2)-regular random graphs •  Have precise knowledge of when nodes have state changes •  Fix the degree distribution P(k) and possibly also the joint degree–degree distribution P(k,k’) •  Otherwise connect uniformly at random •  Example: •  Half of the nodes have degree z1 = 4 and the other half have degree z2 = 24 •  Ensemble in which each node has on average all but one neighbor from its own degree class •  Assume all nodes have identical thresholds R1 = 0.2 and R2 = 0.8 •  Consider the case in which S2 activations drive S1 activations.
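A configuration-model-style sketch of a two-degree-class ensemble like the one above. This is our own minimal illustration: self-loops and multi-edges are simply discarded (a common simplification), and the degree-class mixing constraint from the slide is not enforced here.

```python
import random

def two_class_random_graph(n, z1=4, z2=24, seed=0):
    """Half of the n nodes get degree z1 and half get degree z2;
    edge 'stubs' are then matched uniformly at random."""
    random.seed(seed)
    stubs = []
    for v in range(n):
        stubs += [v] * (z1 if v < n // 2 else z2)
    random.shuffle(stubs)
    adj = {v: set() for v in range(n)}
    for u, v in zip(stubs[::2], stubs[1::2]):
        if u != v:                    # discard self-loops (and repeats, via sets)
            adj[u].add(v)
            adj[v].add(u)
    return adj
```

Discarding repeats can only lower a node's realized degree below its target, which is negligible for large n.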
  15. 15. D. Taylor et al., arXiv:1408.1168 (gave a talk this morning)
  16. 16. Holme & Saramäki, Phys. Rep. 2012
  17. 17. Holme & Saramäki, Phys. Rep. 2012
  18. 18. •  Activity times of nodes •  Interevent times of edges •  Naïve aggregation to obtain a static network assumes Poisson statistics, but the time series are bursty (see Bertozzi’s talk) •  You are making major assumptions when aggregating to construct a static network, so we need to keep the temporal event statistics in mind •  How do the temporal dynamics of nodes and edges affect dynamical processes (e.g., disease spread) on temporal networks? •  Ordering of contacts, concurrency of contacts, etc. •  Continuous versus discrete time •  How do we generalize ideas like measures of node and edge importance (“centrality” measures), community structure, and so on?
  19. 19. •  T. Hoffmann, MAP, & R. Lambiotte, Phys. Rev. E, Vol. 86, No. 4: 046102 (2012) •  Let’s consider a 3-node example with different waiting-time statistics for the three edges. [Figure from the paper: an undirected network with N = 3 nodes and no self-loops; the waiting-time distributions for edges a, b, and c are exponential, uniform, and Rayleigh, respectively, with means ta = 1, tb = 1/2, and tc = 1/3.]
  20. 20. [Figure from the paper: random-walker density on each node as a function of time for Poisson (upper panel) and non-Poisson dynamics. For Poisson dynamics, the walker density remains at its steady-state value; for non-Poisson dynamics, it temporarily departs from its steady-state value. In general, the value of the walker density is not sufficient to define the state of the system; the distribution of resting times (the times that a walker spends on a node before making a step) also needs to be specified.]
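A minimal simulation sketch of a random walk with edge-specific waiting times on a 3-node example like the one above. The edge labeling on the triangle and the competing-clocks move rule are our assumptions for illustration, not the paper's generalized-master-equation formalism.

```python
import math
import random

def sample_a():
    return random.expovariate(1.0)              # exponential, mean 1

def sample_b():
    return random.uniform(0.0, 1.0)             # uniform on [0, 1], mean 1/2

def sample_c():                                 # Rayleigh with mean 1/3
    sigma = (1.0 / 3.0) / math.sqrt(math.pi / 2.0)
    return sigma * math.sqrt(-2.0 * math.log(1.0 - random.random()))

incident = {1: [("a", 2), ("c", 3)],            # assumed: a = {1,2},
            2: [("a", 1), ("b", 3)],            # b = {2,3}, c = {1,3}
            3: [("b", 2), ("c", 1)]}
samplers = {"a": sample_a, "b": sample_b, "c": sample_c}

def walk(steps, start=1, seed=0):
    """At each node the incident edges 'compete': the walker traverses
    the edge whose freshly sampled waiting time is smallest."""
    random.seed(seed)
    node, visits = start, {1: 0, 2: 0, 3: 0}
    for _ in range(steps):
        _, node = min(((samplers[e](), v) for e, v in incident[node]),
                      key=lambda t: t[0])
        visits[node] += 1
    return visits
```

Tallying `visits` over many steps gives an empirical occupation distribution that one can compare against the Poisson (memoryless) case.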
  21. 21. •  P. Holme & M. E. J. Newman, Phys. Rev. E, Vol. 74, No. 5: 056108 (2006); R. Durrett et al., PNAS, Vol. 109: 3682–3687 (2012). •  Note: see survey and review articles by Thilo Gross and his various collaborators on adaptive networks (part of the compiled list of surveys in my tutorial article) •  The mechanism in Durrett et al. (modified version of the Holme–Newman model): •  Nodes with opinion 0 and 1 •  Edges are picked randomly •  If the opinions of the incident nodes are different (“discordant” edge), then one (picked randomly) imitates the other with probability 1 – α. Otherwise, the edge is broken and one of the nodes connects to some other randomly chosen individual •  Case (i): connects to a random individual with the same opinion •  Case (ii): connects to any random individual •  Evolution stops when there are no more discordant edges. •  Of interest: ρ = fraction of minority individuals at steady state •  How does ρ depend on α and on the initial fraction (u) of nodes with opinion 1?
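The mechanism above can be sketched directly. This is a hedged illustration of the Durrett et al.-style dynamics, not their exact simulation code; the function and argument names are ours.

```python
import random

def adaptive_voter(edges, opinion, alpha, rewire_same=False,
                   seed=0, max_iter=100000):
    """Adaptive voter dynamics: pick a discordant edge; with probability
    1 - alpha one endpoint imitates the other, otherwise the edge is
    rewired. rewire_same=True is case (i) (rewire to a node with the
    same opinion); False is case (ii) (rewire to any other node).

    edges: list of 2-element lists [u, v] (mutated in place by rewiring);
    opinion: dict node -> 0 or 1 (mutated by imitation).
    Runs until no discordant edges remain; returns the minority fraction.
    """
    random.seed(seed)
    nodes = list(opinion)
    for _ in range(max_iter):
        discordant = [e for e in edges if opinion[e[0]] != opinion[e[1]]]
        if not discordant:
            break
        e = random.choice(discordant)
        i = random.randrange(2)                  # endpoint that acts
        u, other = e[i], e[1 - i]
        if random.random() < 1.0 - alpha:
            opinion[u] = opinion[other]          # imitate the neighbor
        else:                                    # break the edge; u rewires
            pool = [w for w in nodes if w != u and w != other and
                    (not rewire_same or opinion[w] == opinion[u])]
            if pool:
                e[1 - i] = random.choice(pool)
    ones = sum(opinion.values())
    return min(ones, len(nodes) - ones) / len(nodes)
```

With α = 0 (pure imitation, no rewiring) the dynamics reduce to the classical voter model, which reaches consensus on a connected graph, so ρ = 0.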
  22. 22. •  The phase-transition structure of these two models differs dramatically! •  See paper for analytics (some mathematically rigorous) and numerics to illustrate the contrasting situations •  Case (i): rewire to random individual with same opinion •  There is a critical value α = αc, which is independent of u (initial fraction of 1s), such that ρ ≈ u for α > αc and ρ ≈ 0 for α < αc •  Case (ii): rewire to random individual •  Now αc = αc(u) in the phase transition •  Useful: Derive approximate equations for quantities such as the fraction of 0–1 edges versus time [Excerpt from Durrett et al.: simulations of the voter model are done on a finite set, typically the torus (Z mod L)^d; in this setting the model is a finite Markov chain with two absorbing states (all ones and all zeros), but it has interesting behavior along the road to absorption (Cox and Greven, ref. 45). Their Fig. 4 plots N10/M versus N1/N for α = 0.5 in the rewire-to-random case: five simulations starting from u = 0.2, 0.35, 0.5, 0.65, and 0.8, on graphs with N = 10,000 vertices, plotted every 1,000 steps.]
  23. 23. Any source of time series that you want: neuroscience, finance, epidemics, output of a dynamical system, etc.
  24. 24. •  Simple idea: Glue common nodes across “slices” (i.e., “layers”) •  E.g., consecutive layers only, as in upper right •  P. J. Mucha, T. Richardson, K. Macon, MAP, & J.-P. Onnela, “Community Structure in Time-Dependent, Multiscale, and Multiplex Networks”, Science, Vol. 328, No. 5980: 876–878 (2010)
  25. 25. •  Interlayer edge strength ω represents the strength of persistence in a node’s “trajectory” through time •  Schematic from M. Bazzi, MAP, S. Williams, M. McDonald, D. J. Fenn, & S. D. Howison, “Community Detection in Temporal Multilayer Networks, and its Application to Correlation Networks”, arXiv:1501.00040
  26. 26. •  One can measure the relative importance of a network’s nodes, edges, or other substructures by calculating “centrality” measures. •  Numerous centrality measures: degree, betweenness (on many geodesic paths), closeness (short distance to many nodes), PageRank, etc. •  Fun fact: developing new centrality measures is among the top 10 most popular activities of network scientists (or at least it seems like it)
  27. 27. •  J. M. Kleinberg, “Authoritative sources in a hyperlinked environment”, Journal of the ACM, Vol. 46: 604–632 (1999) •  Intuition: A node (e.g., a Web page) is a good hub if it has many hyperlinks (out-edges) to important nodes, and a node is a good authority if many important nodes have hyperlinks to it (in-edges) •  A good hub points to good authorities, and a good authority is pointed to by good hubs •  Imagine a random walker surfing the Web. It should spend a lot of time on important Web pages. Equilibrium populations of an ensemble of walkers satisfy the eigenvalue problem: •  x = aAy; y = bA^T x ⇒ A^T A y = λy & A A^T x = λx, where λ = 1/(ab) •  The leading eigenvalue λ1 (strictly positive) gives a strictly positive hub vector x and authority vector y (leading eigenvectors) •  Node i has hub score xi and authority score yi
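The hub/authority fixed point can be computed by simple power iteration on the two coupled equations. A minimal sketch (the dict-based graph representation and the sum-to-one normalization are our choices):

```python
def hits(out_edges, iters=100):
    """Power iteration for Kleinberg-style hub and authority scores.

    out_edges: dict node -> list of nodes it points to.
    Returns (hub, authority) dicts, each normalized to sum to 1.
    """
    nodes = set(out_edges) | {v for vs in out_edges.values() for v in vs}
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iters):
        # Good authorities are pointed to by good hubs: y = A^T x.
        auth = {n: 0.0 for n in nodes}
        for u, vs in out_edges.items():
            for v in vs:
                auth[v] += hub[u]
        # Good hubs point to good authorities: x = A y.
        hub = {u: sum(auth[v] for v in out_edges.get(u, [])) for u in nodes}
        for d in (hub, auth):
            s = sum(d.values()) or 1.0
            for n in d:
                d[n] /= s
    return hub, auth
```

On a toy graph where page "a" has two in-links and "b" has one, "a" ends up with the larger authority score, and the page linking to both ends up the better hub.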
  28. 28. •  It works not just for Web pages but also for mathematics departments •  S. A. Myers, P. J. Mucha, & MAP, “Mathematical genealogy and department prestige”, Chaos, Vol. 21: 041104 (2011); one-page paper in the Gallery of Nonlinear Images •  Data from the Mathematics Genealogy Project
  29. 29. •  We consider MGP data in the US from 1973–2010 (data from 10/09) •  Example: Danny Abrams earned a PhD from Cornell and later supervised a student at Northwestern. ⇒ Directed edge of unit weight from Cornell University to Northwestern University •  A school is a good authority if it hires students from good hubs, and a university is a good hub if its students are hired by good authorities. •  Caveats: •  Our measurement has a time delay (we only have the Cornell ⇒ Northwestern edge after Abrams supervises a PhD student) •  Hubs and authorities should change in time
  30. 30. [Title slide of the paper: “Mathematical genealogy and department prestige”, Sean A. Myers,1 Peter J. Mucha,1 and Mason A. Porter2; 1 Department of Mathematics, University of North Carolina, Chapel Hill, North Carolina 27599, USA; 2 Mathematical Institute, University of Oxford, OX1 3LB, UK; Chaos 21, 041104 (2011). FIG. 1. (Color) Visualizations of a mathematics genealogy network. Hubs: node size; authorities: node color.]
  31. 31. [Figure from the paper: FIG. 2. (Color) Rankings versus authority scores.]
  32. 32. •  Let’s now try to do this with a temporal network. •  S. A. Myers, D. Taylor, E. A. Leicht, A. Clauset, MAP, & P. J. Mucha, “Eigenvector-based Centrality Measures for Temporal Networks”, in preparation (coming soon) •  Use a multilayer representation (c = 1/ω); we show only 4 layers for illustration •  More generally: One can use any eigenvector-based centrality in the diagonal blocks. [Slide inset: MGP 1946–2010 (“Take 2”): a better definition of temporal authority via eigenvector centrality]
  33. 33. •  Multilayer network with adjacency-tensor elements Aijt •  Directed edge from university i to university j for a specific person’s PhD granted at time t (multi-edges give weights). •  231 US universities, T = 65 time layers (1946–2010) •  Use perturbation theory (in c) to derive time-averaged centralities (coefficient at leading order) and first movers (next order) •  We can talk privately if you want to see precise definitions. [Table 4.1 of S. A. Myers et al.: top centralities and first-order movers for universities in the MGP. Top time-averaged centralities (rank, university, αi): 1. MIT, 0.6685; 2. Berkeley, 0.2722; 3. Stanford, 0.2295; 4. Princeton, 0.1803; 5. Illinois, 0.1645; 6. Cornell, 0.1642; 7. Harvard, 0.1628; 8. UW, 0.1590; 9. Michigan, 0.1521; 10. UCLA, 0.1456. Top first-order mover scores (rank, university, mi): 1. MIT, 688.62; 2. Berkeley, 299.07; 3. Princeton, 248.72; 4. Stanford, 241.71; 5. Georgia Tech, 189.34; 6. Maryland, 186.65; 7. Harvard, 185.34; 8. CUNY, 182.59; 9. Cornell, 180.50; 10. Yale, 159.11.]
  34. 34. [Figures from the paper: Fig. 4.1 shows time-averaged (i.e., conditional) node authority scores for the Mathematics Genealogy Project data. (a) First-order mover ranking (ranked by {mi}) versus time-averaged ranking (ranked by {αi}): nodes with large time-averaged rank tend to also have large first-order mover rank (e.g., MIT ranks first in both), but some nodes (e.g., Georgia Tech and CUNY) have a much higher first-order mover rank than time-averaged rank. (b) Conditional rankings of nodes versus time (the ranking of nodes versus time, normalized by the centrality of each time layer) track university centrality trajectories across time for the seven top-ranked first-order movers; Georgia Tech and CUNY rank in the top six of the first-order movers but in the lower reaches of the top 40 for the time-averaged ranking. As expected, this ranking difference reflects centrality trajectories that change significantly over time: Georgia Tech rises in rank during t ∈ [1965, 1985], whereas CUNY drops during this period. Fig. 4.2 shows centrality trajectories for Georgia Tech, illustrating that one can construe ε as a tuning parameter that controls how much centrality can vary between neighboring time layers.]
  35. 35. •  Generalization of modularity maximization to “multislice” networks (multilayer networks with “diagonal coupling”) •  P. J. Mucha, T. Richardson, K. Macon, MAP, & J.-P. Onnela, “Community Structure in Time-Dependent, Multiscale, and Multiplex Networks”, Science, Vol. 328, No. 5980: 876–878 (2010) •  “Diagonal coupling”: interlayer edges only between corresponding entities in different layers
  36. 36. Puck Rombach
  37. 37. •  Find communities algorithmically by optimizing “multislice modularity” –  We derived this function in Mucha et al., 2010 •  Laplacian dynamics: find communities based on how long random walkers are trapped there. Exponentiate and then linearize to derive modularity. •  Generalizes the derivation of monoplex modularity from R. Lambiotte, J.-C. Delvenne, & M. Barahona, arXiv:0812.1770 (now published, with updates, in TSNE, 2015) •  Different spreading weights on different types of edges –  Node x in layer r is a different node-layer from node x in layer s
  38. 38. •  Aijs = number of times i and j voted the same in Congress s divided by the total number of bills on which they both voted in layer s (one layer = one 2-year Congress) •  Can get insights into party realignments (gray areas)
  39. 39. P. J. Mucha & M. A. Porter, Chaos, Vol. 20, No. 4, 041108 (2010)
  40. 40. •  Dani Bassett’s IP in the NS15 meeting •  fMRI data: network from correlated time series •  Examine the role of modularity in human learning by identifying dynamic changes in modular organization over multiple time scales •  Main result: flexibility, as measured by allegiance of nodes to communities, in one session predicts the amount of learning in a subsequent session
  41. 41. S. H. Lee, M. Farazmand, G. Haller, & MAP, “Finding Lagrangian Coherent Structures Using Community Detection”, in preparation. Adjacencies constructed using, for example, nearest-neighbor interactions in relative dispersion of different fluid elements. [Excerpt from the draft: drifters are released at different times and their trajectories are recorded for different time intervals, so a common interval [tinit, tfinal] over which the trajectories are known for all drifters is needed; the weight between two drifters A and B is based on the relative dispersion W(1)_AB, where |ri(A,B)| and |rf(A,B)| give the initial and final Euclidean distances in the 2D (longitude, latitude) plane. To study time-dependent community structure, the full temporal dynamics are split into a series of intervals: the interval [tinit = 0.1, tfinal = 3.1] (days) is divided uniformly into S = 30 non-overlapping layers with temporal resolution tres = 0.1, and multilayer community detection is applied to the resulting time-dependent networks by maximizing a generalized (multilayer) modularity Q_multi = (1/2μ) Σ_{ABsr} [ (W(1)_{ABs} − γ_s k_{As} k_{Bs}/(2 m_s)) δ_{sr} + δ_{AB} T_{Bsr} ] δ(g_{As}, g_{Br}) (Eq. 7), where A and B index nodes (i.e., fluid elements), s and r index time layers, k_{As} = Σ_B W(1)_{ABs}, m_s = Σ_{AB} W(1)_{ABs} normalizes each layer separately, γ_s is the resolution parameter in layer s, and the interlayer interactions T_{Bsr} ≠ 0 connect fluid elements to themselves when they are present in multiple layers. FIG. 2 of the draft shows ten communities, detected algorithmically from a network constructed from nearest-neighbor interactions in simulated data, at the initial and final times (using the relative dispersion W(1)_AB and the modularity Q_NG, with resolution-parameter value 0.005); FIG. 3 shows the multilayer (time-dependent) community structure of the drifters for tinit = 0.1 day, tfinal = 3.1 day, and tres = 0.1, with resolution parameter γ = 1 and interlayer coupling strength ω = 25.]
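For concreteness, here is a minimal sketch of evaluating a multilayer modularity of this general form, with a uniform resolution parameter γ and uniform interlayer coupling ω between the same node in consecutive layers. All names and simplifications are ours; this is an illustration of the quality function, not the GenLouvain optimizer used in the work.

```python
def multilayer_modularity(W, omega, gamma, g):
    """Evaluate a multislice modularity for a given partition.

    W: list of layers, each a symmetric dict {node: {neighbor: weight}}.
    omega: interlayer coupling between the same node in adjacent layers.
    g: dict {(node, layer_index): community label}.
    """
    nodes = set()
    for layer in W:
        nodes |= set(layer)
    two_m = [sum(w for nbrs in layer.values() for w in nbrs.values())
             for layer in W]
    # 2*mu = total intralayer weight plus total interlayer coupling.
    coupled = sum(1 for s in range(len(W) - 1) for A in nodes
                  if (A, s) in g and (A, s + 1) in g)
    two_mu = sum(two_m) + 2.0 * omega * coupled
    k = [{A: sum(layer.get(A, {}).values()) for A in nodes} for layer in W]
    Q = 0.0
    for s, layer in enumerate(W):
        for A in nodes:
            for B in nodes:
                if g.get((A, s)) is not None and g.get((A, s)) == g.get((B, s)):
                    # Intralayer term: weight minus null-model expectation.
                    Q += layer.get(A, {}).get(B, 0.0) \
                         - gamma * k[s][A] * k[s][B] / two_m[s]
        # Interlayer terms: same node, adjacent layers, same community.
        if s + 1 < len(W):
            for A in nodes:
                if g.get((A, s)) is not None and g.get((A, s)) == g.get((A, s + 1)):
                    Q += 2.0 * omega
    return Q / two_mu
```

On two identical two-node layers with a single unit-weight edge, putting everything in one community with γ = ω = 1 gives Q = 0.5.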
  42. 42. •  M. Sarzynska, E. A. Leicht, G. Chowell, & MAP, “Null Models for Community Detection in Spatially-Embedded, Temporal Networks”, arXiv:1407.6297 (2015) •  Networks constructed from correlations in time series •  Generalizing null models to incorporate spatial information and exploring the results for different null models •  Below: aggregated community structure versus consensus communities for provinces from multilayer community structure [FIG. 15 of the paper: province-level algorithmic community structure, obtained by maximizing modularity, for the static and multilayer dengue fever correlation networks; provinces are colored according to their community assignments.]
  43. 43. Flora Meng
  44. 44. •  M. Kivelä, A. Arenas, M. Barthelemy, J. P. Gleeson, Y. Moreno, & MAP, “Multilayer Networks”, Journal of Complex Networks, 2(3): 203–271, 2014. •  S. Boccaletti et al., “The Structure and Dynamics of Multilayer Networks”, Physics Reports, Vol. 544: 1–122 (2014).
  45. 45. •  Definition of a multilayer network M –  M = (VM, EM, V, L) •  V: set of nodes –  As in ordinary graphs •  L: sequence of sets of possible layers –  One set for each additional “aspect” d ≥ 0 beyond an ordinary network (examples: d = 1 in the schematic on this page; d = 2 on the last page) •  VM: set of tuples that represent node-layers •  EM: multilayer edge set that connects these tuples •  Note 1: allow weighted multilayer networks by mapping edges to real numbers with w: EM → R •  Note 2: d = 0 yields the usual single-layer (“monoplex”) networks
  46. 46. •  Adjacency tensor for the unweighted case: •  Elements of the adjacency tensor: –  A_{uvαβ} = A_{uv α1β1…αdβd} = 1 iff ((u,α), (v,β)) is an element of EM (else A_{uvαβ} = 0) •  Important note: ‘padding’ layers with empty nodes –  One needs to distinguish between a node not being present in a layer and a node existing but edges not being present (use a supplementary tensor with labels for edges that could exist), as this is important for normalization in many quantities.
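A tiny data-structure sketch of the (VM, EM, V, L) formalism for one aspect (d = 1), including the absent-versus-isolated distinction from the note above. The example node-layers, weights, and function name are illustrative, not from the paper.

```python
# V_M: the node-layer tuples that actually exist.
# ("v", 2) is deliberately absent from V_M, not merely isolated.
node_layers = {("u", 1), ("u", 2), ("v", 1)}

# Weighted E_M via w: E_M -> R, keyed by node-layer pairs.
edges = {
    (("u", 1), ("v", 1)): 1.0,   # intralayer edge in layer 1
    (("u", 1), ("u", 2)): 0.5,   # interlayer edge coupling u to itself
}

def weight(a, b):
    """Adjacency-tensor element A_{uv,alpha beta} for node-layers a and b
    (undirected here, so we check both orientations)."""
    if a not in node_layers or b not in node_layers:
        # Distinguish 'node-layer not present' from 'edge not present'.
        raise KeyError("node-layer not present in V_M")
    return edges.get((a, b), edges.get((b, a), 0.0))
```

Raising on absent node-layers (rather than returning 0) is one way to keep the padding distinction explicit when normalizing quantities layer by layer.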
  47. 47. •  Example: multilayer clustering coefficient •  Our approach: Cozzo et al. •  Use the idea of multilayer walks. Keep track of returning to entity i (possibly in a different layer from where we started) separately for 1 total layer, 2 total layers, 3 total layers (and in principle more). •  Insight: Need different types of transitivity for different types of multiplex networks. •  Example: transportation versus social networks •  There are several different clustering coefficients for monoplex weighted networks, and this situation is even more extreme for multilayer networks.
  48. 48. •  Need to separately keep track of different types of “elementary cycles”, which all traverse three intralayer edges but have different numbers of interlayer edges •  Again: Keep track of particular types of multilayer walks
  49. 49. •  Basic question: How do multilayer structures affect dynamical systems on networks? •  Effects of multiplexity? (colored edges) •  Effects of interconnectedness? (colored nodes) •  Important goal: Find new phenomena that cannot occur without multilayer structures. •  Example: Speeding up versus slowing down spreading? •  Example: Multiplexity-induced correlations in dynamics? •  Example: Effect of different costs for changing layers? •  We need to keep track of different types of network ties simultaneously •  Example: spread of Ebola coupled to spread of fear of Ebola (only one of these can occur online)
  50. 50. •  Network structure is really interesting, but it’s important to emphasize dynamics •  Dynamical systems on networks •  Dynamics of the networks themselves •  Even when studying network structure, it is useful to think about what structural considerations can potentially say about dynamics (e.g., random graphs, random walks, etc.) •  Multilayer networks and multilayer representations of temporal networks offer new avenues of exploration •  E.g., time-dependent measures of node importance (centralities), time-dependent community structure, effects of multilayer structures (e.g., correlations between properties of different layers) on dynamical processes •  In summary: You say you want some evolution? You know it’s gonna be alright!
  51. 51. •  Mathematical Biosciences Institute, The Ohio State University, USA •  Semester program on “Dynamics of Biologically Inspired Networks” –  spring-2016-dynamics-biologically-inspired-networks/ –  Focuses on theoretical questions on networks that arise from biology •  Four awesome workshops: (1) dynamics of networks with special properties, (2) interplay of stochastics and deterministic dynamics, (3) generalized network structures and dynamics, (4) control and observability of network dynamics
  52. 52. •  March 21–25, 2016