This document is the slide deck of a thesis defense on convolution of graph signals and deep learning on graph domains. It covers motivations, related work, definitions of graph signals and convolution, and approaches to extending convolution to non-Euclidean graph domains: spectral approaches that define convolution in the graph spectral domain, vertex-domain approaches that define it as a sum over neighborhoods, and a characterization of convolution operators by their equivariance properties. It then turns to deep learning architectures built on these notions of graph convolution, together with experiments.
On Convolution of Graph Signals and Deep Learning on Graph Domains
1. 1/50
On Convolution of Graph Signals
And Deep Learning on Graph Domains
December 13th 2018
Candidate: Jean-Charles Vialatte
Advisors: Vincent Gripon, Mathias Herberts
Supervisor: Gilles Coppin
Jury: Pierre Borgnat∗, Matthias Löwe∗,
Paulo Goncalves, Juliette Mattioli
∗: examiners
December 13th 2018 1 / 50
2. 2/50
Outline
1 Motivation and problem statement
2 Literature overview
3 Convolution of graph signals
4 Deep learning on graph domains
5 Experiments
6 Conclusion
December 13th 2018 2 / 50
3. 2/50
Outline
1 Motivation and problem statement
2 Literature overview
3 Convolution of graph signals
4 Deep learning on graph domains
5 Experiments
6 Conclusion
December 13th 2018 2 / 50
5. 3/50
Definitions: Signals and Graphs
Signal: values attached to the vertices of the graph (figure).
Graph: 5 vertices (1-5) with edges weighted a, b, c, d, e (figure).
A: adjacency matrix
0 a 0 d e
a 0 0 b 0
0 0 0 c 0
d b c 0 0
e 0 0 0 0
December 13th 2018 3 / 50
6. 3/50
Definitions: Signals and Graphs
Signal: values attached to the vertices of the graph (figure).
Graph: 5 vertices (1-5) with edges weighted a, b, c, d, e (figure).
A: adjacency matrix
0 a 0 d e
a 0 0 b 0
0 0 0 c 0
d b c 0 0
e 0 0 0 0
December 13th 2018 3 / 50
8. 4/50
Examples of signals
Most domains can be represented with a graph:
Euclidean domains
Non-Euclidean domains
December 13th 2018 4 / 50
9. 5/50
Deep learning performances
On Euclidean domains:
Figure: classification error (in %) on digits and on tiny images, without vs. with convolutions (roughly 1% vs. 0.5% on digits and 22% vs. 3% on tiny images).
December 13th 2018 5 / 50
10. 6/50
A key factor: convolution
Defined on Euclidean domains:
December 13th 2018 6 / 50
11. 7/50
Connectivity pattern: MLP vs CNN
Dense layer
fully connected
no tied weights
ex: 81 different weights here
December 13th 2018 7 / 50
12. 7/50
Connectivity pattern: MLP vs CNN
Dense layer
fully connected
no tied weights
ex: 81 different weights here
Convolutional layer
locally connected
with weight sharing
ex: 3 different weights here
December 13th 2018 7 / 50
13. 8/50
Problem: How to extend convolutions?
Figure: a Euclidean structure, on which convolution is defined, and a non-Euclidean structure.
14. 8/50
Problem: How to extend convolutions?
Figure: a Euclidean structure, on which convolution is defined, and a non-Euclidean structure; how to define convolution on the latter?
December 13th 2018 8 / 50
15. 9/50
Supervised vs semi-supervised application
Let X be a dataset whose rows we classify. Two different kinds of graph structure:
Supervised classification of graph-structured data: X is b × n, and the graph connects the n columns (each row is a graph signal).
Semi-supervised classification of nodes: X is n × p, and the graph connects the n rows (one row per node).
December 13th 2018 9 / 50
16. 9/50
Outline
1 Motivation and problem statement
2 Literature overview
3 Convolution of graph signals
4 Deep learning on graph domains
5 Experiments
6 Conclusion
December 13th 2018 9 / 50
17. 10/50
Spectral approaches
Convolutions are defined in the graph spectral domain.
L = D − A = UΛUᵀ
gft(X) = UX
Figure: Example of signals of the Laplacian eigenbasis
Using the GFT, convolution amounts to a pointwise multiplication in the spectral domain:
X ⊗ Θ = Uᵀ(UX ⊙ UΘ)
December 13th 2018 10 / 50
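To make the spectral definition concrete, here is a minimal NumPy sketch (my own illustration, not taken from the thesis): it builds the combinatorial Laplacian, takes its eigendecomposition, and convolves two graph signals by pointwise multiplication in the spectral domain. The function name and toy path graph are hypothetical.

```python
# Minimal sketch (assumption: NumPy; not from the slides) of spectral graph convolution.
import numpy as np

def spectral_conv(A, x, theta):
    """Convolve graph signals x and theta on the graph with adjacency matrix A."""
    D = np.diag(A.sum(axis=1))
    L = D - A                            # combinatorial Laplacian L = D - A
    lam, V = np.linalg.eigh(L)           # L = V diag(lam) V^T, eigenvectors as columns
    U = V.T                              # GFT matrix, so that gft(x) = U @ x
    return U.T @ ((U @ x) * (U @ theta)) # inverse GFT of the pointwise product

# Toy usage on a path graph with 4 vertices.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0, 0.0])
theta = np.array([0.5, 0.25, 0.0, 0.25])
print(spectral_conv(A, x, theta))
```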
18. 11/50
Spectral approaches
X ⊗ Θ = Uᵀ(UX ⊙ UΘ)
Pros
Elegant and fast under some approximations
Can be used off the shelf: no need to specify any weight sharing
Cons
Introduce isotropic symmetries
Do not match Euclidean convolutions on grid graphs
Figure: Example of translation defined as convolution with a dirac (courtesy of Pasdeloup, B.)
December 13th 2018 11 / 50
19. 12/50
Vertex-domain approaches
Convolutions are defined as a sum over a neighborhood, usually a sum of dot products
(cf references in thesis manuscript).
(X ⊗ Θ)(vi) = Σ_{j∈Nvi} θij X(vj)
December 13th 2018 12 / 50
20. 13/50
Vertex-domain approaches
(X ⊗ Θ)(vi) = Σ_{j∈Nvi} θij X(vj)
Pros
Match Euclidean convolutions on grid graphs
Locally connected
Cons
Weight sharing is not always explicit
December 13th 2018 13 / 50
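As a companion to the formula above, a small sketch (my own illustration; the function name is hypothetical) of vertex-domain convolution as a sum over neighborhoods, with one weight θij per edge:

```python
# Minimal sketch (assumption: NumPy) of (X ⊗ Θ)(v_i) = Σ_{j ∈ N(v_i)} θ_ij X(v_j).
import numpy as np

def vertex_domain_conv(A, X, Theta):
    """A: n x n adjacency (0/1), X: signal of length n, Theta: n x n weight matrix."""
    n = A.shape[0]
    out = np.zeros(n)
    for i in range(n):
        for j in np.nonzero(A[i])[0]:      # neighbors N(v_i)
            out[i] += Theta[i, j] * X[j]
    return out
```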
21. 14/50
A few important references
Bruna et al., 2013: spectral filters with O(1) weights, a smoother matrix K used to interpolate more weights.
gθ(X) = Uᵀ(UX ⊙ Kθ)
Defferrard et al., 2016: filters based on Chebyshev polynomials (Ti)i.
gθ(L) = Σ_{i=0}^{k} θi Ti(L)
Kipf et al., 2016: application to semi-supervised settings.
Y = AXΘ
Velickovic et al., 2017: introduction of attention coefficients (Ak)k.
Y = Σ_{k=1}^{K} Ak X Θk
Du et al., 2017: convolution from the GSP field (Sandryhaila et al., 2013).
Y = Σ_{k=1}^{K} Aᵏ X Θk
December 13th 2018 14 / 50
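For illustration, a hedged sketch of the layer of Kipf et al. written as Y = AXΘ. The symmetric renormalization of the adjacency matrix is an assumption on my part (the slide only shows the unnormalized form), and the function name is hypothetical.

```python
# Minimal sketch (assumption: NumPy) of a GCN-style propagation Y = Â X Θ.
import numpy as np

def gcn_layer(A, X, Theta):
    A_hat = A + np.eye(A.shape[0])               # add self-loops (assumed renormalization trick)
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt     # symmetric normalization
    return np.maximum(A_norm @ X @ Theta, 0.0)   # propagate, mix channels, ReLU
```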
22. 14/50
Outline
1 Motivation and problem statement
2 Literature overview
3 Convolution of graph signals
4 Deep learning on graph domains
5 Experiments
6 Conclusion
December 13th 2018 14 / 50
23. 15/50
Recall: Euclidean convolution
Definition
Convolution on S(Z²)
The (discrete) convolution s1 ∗ s2 is a binary operation on S(Z²) defined as:
∀(a, b) ∈ Z², (s1 ∗ s2)[a, b] = Σ_i Σ_j s1[i, j] s2[a − i, b − j]
A convolution operator f is a function parameterized by a signal w ∈ S(Z²) s.t.:
f = · ∗ w (right operator)
f = w ∗ · (left operator)
Some notable properties:
Linearity
Locality and weight sharing
Commutativity (optional)
Equivariance to translations (i.e. commutes with them)
December 13th 2018 15 / 50
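A direct transcription of the definition above into code (my own illustration), for finitely supported signals on Z² stored as dictionaries; all names are hypothetical.

```python
# Minimal sketch of (s1 ∗ s2)[a, b] = Σ_i Σ_j s1[i, j] s2[a − i, b − j] on S(Z²),
# for finitely supported signals represented as {(i, j): value} dictionaries.
def conv_z2(s1, s2):
    out = {}
    for (i, j), v1 in s1.items():
        for (k, l), v2 in s2.items():
            a, b = i + k, j + l                 # contributes to index (a, b) = (i + k, j + l)
            out[(a, b)] = out.get((a, b), 0.0) + v1 * v2
    return out

# Usage: convolving an impulse at (1, 1) with a 2x2 box filter shifts the filter to (1, 1).
impulse = {(1, 1): 1.0}
box = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
print(conv_z2(impulse, box))
```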
24. 16/50
Characterization by translational equivariance
Theorem
Characterization of convolution operators on S(Z²)
A linear transformation f is equivariant to translations ⇔ it is a convolution operator.
(Commutative diagram: translating then applying f gives the same result as applying f then translating.)
December 13th 2018 16 / 50
25. 17/50
Convolution of signals with graph domains
Can use the Euclidean convolution
How to extend the convolution here ?
December 13th 2018 17 / 50
26. 18/50
A few notions of representation theory
A group is a set equipped with an associative operation, an identity element and inverses; it can act on other sets.
Let Γ be a group, g ∈ Γ, and V be a set. Examples of group actions:
Lg : Γ → Γ, h ↦ gh (auto-action of Γ on itself by left multiplication)
g(.) : V → V, v ↦ g(v) (action on V)
December 13th 2018 18 / 50
27. 19/50
Group convolution
Definition
Group convolution
Let Γ be a group; the group convolution between two signals s1 and s2 ∈ S(Γ) is defined as:
∀h ∈ Γ, (s1 ∗ s2)[h] = Σ_{g∈Γ} s1[g] s2[g⁻¹h]
provided at least one of the signals has finite support if Γ is not finite.
Theorem
Characterization of group convolution operators
Let a group Γ, let f ∈ L(S(Γ)),
1 f is a group convolution right operator ⇔ f is equivariant to left multiplications,
2 f is a group convolution left operator ⇔ f is equivariant to right multiplications,
3 f is a group convolution commutative operator ⇔ f is equivariant to multiplications.
December 13th 2018 19 / 50
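To see the definition in action, a sketch (my own illustration) on the cyclic group Z_n, where g⁻¹h = (h − g) mod n and the group convolution reduces to the usual circular convolution:

```python
# Minimal sketch of (s1 ∗ s2)[h] = Σ_{g∈Γ} s1[g] s2[g⁻¹h] for Γ = Z_n.
import numpy as np

def cyclic_group_conv(s1, s2):
    n = len(s1)
    out = np.zeros(n)
    for h in range(n):
        out[h] = sum(s1[g] * s2[(h - g) % n] for g in range(n))   # g⁻¹h = (h − g) mod n
    return out

s1 = np.array([1.0, 2.0, 0.0, 0.0])
s2 = np.array([0.5, 0.5, 0.0, 0.0])
print(cyclic_group_conv(s1, s2))   # equals the circular convolution of s1 and s2
```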
28. 20/50
Goal: convolution on the vertex set
Let a graph G = ⟨V, E⟩ with E ⊂ V².
Starting point: a bijective map ϕ between a group Γ and G (or a subgraph).
(Diagram: ϕ maps Γ to V and extends linearly to a map from S(Γ) to S(V).)
December 13th 2018 20 / 50
29. 21/50
Convolution on the vertex set from group convolutions
(Goal: convolution on the vertex set)
ϕ: bijective map
(Diagram: a group convolution operator f on S(Γ) is transported to the operator ϕ ◦ f ◦ ϕ⁻¹ on S(V).)
The equivariance theorem holds with respect to operators of the form ϕ ◦ Lg ◦ ϕ⁻¹,
but not necessarily with respect to actions of Γ on V (i.e. of the form g(.)).
December 13th 2018 21 / 50
30. 22/50
Needed condition: equivariant map
(Goal: equivariance theorem holds)
Condition: ϕ is a bijective equivariant map.
Denote gv = ϕ⁻¹(v).
(Diagram: the action gv(.) on V must correspond, through ϕ, to the left multiplication Lgv on Γ.)
We need gv(.) = ϕ ◦ Lgv ◦ ϕ⁻¹, i.e. ∀u ∈ V, gv(u) = ϕ(gv gu).
December 13th 2018 22 / 50
31. 23/50
ϕ-convolution
ϕ: bijective equivariant map i.e. gv(u) = ϕ(gvgu).
Definition
ϕ-convolution
∀s1, s2 ∈ S(V):
s1 ∗ϕ s2 = Σ_{v∈V} s1[v] gv(s2)   (1)
         = Σ_{g∈Γ} s1[ϕ(g)] g(s2)   (2)
Theorem
Characterization of ϕ-convolution right operators
f is a ϕ-convolution right operator ⇔ f is equivariant to Γ
December 13th 2018 23 / 50
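A toy instantiation of the ϕ-convolution (my own illustration, assuming V = {0, ..., n−1}, Γ = Z_n, ϕ the identity, and gv acting by circular shift); in this special case the ϕ-convolution on the ring graph is the circular convolution:

```python
# Minimal sketch of s1 ∗ϕ s2 = Σ_{v∈V} s1[v] g_v(s2) with g_v = circular shift by v.
import numpy as np

def phi_conv_ring(s1, s2):
    n = len(s1)
    out = np.zeros(n)
    for v in range(n):
        out += s1[v] * np.roll(s2, v)   # g_v(s2)[u] = s2[(u − v) mod n]
    return out
```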
32. 24/50
Mixed domain formulation
Let Γ be an abelian group. No need to exhibit ϕ in this case:
Definition
Mixed domain convolution
∀r ∈ S(Γ) and ∀s ∈ S(V):
r ∗m s = Σ_{g∈Γ} r[g] g(s) ∈ S(V)
Equivariance theorem holds:
Corollary
f is a m-convolution left operator ⇔ f is equivariant to Γ
(Converse sense still requires bijectivity between Γ and V ).
December 13th 2018 24 / 50
33. 25/50
Inclusion and role of the edge set
Edge constrained (EC): the action follows the edges, e.g. g(a) ∈ {c, d} (neighbors of a) while g(a) ∉ {b, e} (figure).
Locality preserving (LP): g(.) maps a neighborhood {a, b, c} onto a neighborhood {a', b', c'} (figure).
December 13th 2018 25 / 50
34. 26/50
Cayley graphs
(Goal: description of EC and LP convolutions)
Definition
Cayley graph and subgraph
Let Γ be a group and U one of its generating sets. The Cayley graph generated by U is the
digraph G = ⟨V, E⟩ such that V = Γ and E is such that, either:
∀a, b ∈ Γ, a → b ⇔ ∃g ∈ U, ga = b (left Cayley graph)
∀a, b ∈ Γ, a → b ⇔ ∃g ∈ U, ag = b (right Cayley graph)
both points above (abelian Cayley graph)
A Cayley subgraph is a subgraph that is isomorphic to a Cayley graph.
Figure: an edge a → b of a Cayley graph, labeled by a generator g
December 13th 2018 26 / 50
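A small sketch (my own illustration) that builds the left Cayley graph of a finite group from its multiplication law, following the definition above; for Z_n with generating set {1} it produces the directed ring.

```python
# Minimal sketch: left Cayley graph, with an edge a → b whenever b = g·a for some generator g.
def left_cayley_graph(elements, multiply, generators):
    edges = set()
    for a in elements:
        for g in generators:
            edges.add((a, multiply(g, a)))
    return edges

# Usage: the cyclic group Z_5 with generating set {1} gives a directed 5-cycle.
print(sorted(left_cayley_graph(range(5), lambda g, a: (g + a) % 5, [1])))
# [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
```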
35. 27/50
Characterization of EC and LP convolutions
Theorem
Characterization by Cayley subgraphs
Let a graph G = ⟨V, E⟩; then:
1 its left Cayley subgraphs characterize its EC ϕ-convolutions,
2 its right Cayley subgraphs characterize its LP ϕ-convolutions,
3 its abelian Cayley subgraphs characterize its EC and LP m-convolutions.
Corollary
Properties of convolutions that are both EC and LP
1 If a ϕ-convolution of group Γ is EC and LP then Γ is abelian;
2 an m-convolution is EC if, and only if, it is also LP.
December 13th 2018 27 / 50
36. 28/50
Other results in the manuscript
Description with smaller kernels
The weight sharing is preserved
More detailed results depending on laterality of operator and equivariance
Analysis of limitations due to algebraic structure of the Cayley subgraphs
Above theorems hold for groupoids of partial transformation under mild conditions
They also hold for groupoids based on paths under restrictive conditions
December 13th 2018 28 / 50
37. 28/50
Outline
1 Motivation and problem statement
2 Literature overview
3 Convolution of graph signals
4 Deep learning on graph domains
5 Experiments
6 Conclusion
December 13th 2018 28 / 50
38. 29/50
Propagational representation of a layer
(Figure: connections between layer Ld and layer Ld+1.)
Definition
Edge-constrained layer
Connections (dotted lines) are constrained by edges (red lines) in a local receptive field.
December 13th 2018 29 / 50
39. 29/50
Propagational representation of a layer
(Figure: connections between layer Ld and layer Ld+1.)
Definition
Edge-constrained layer
Connections (dotted lines) are constrained by edges (red lines) in a local receptive field.
Theorem
Characterization by local receptive fields (LRF)
There is a graph for which a layer is EC ⇔ its LRF are intertwined.
December 13th 2018 29 / 50
42. 32/50
Neural contraction
˘ΘSX[j, q, b] = Σ_{k=1}^{ω} Σ_{p=1}^{N} Σ_{i=1}^{n} Θ[k, p, q] S[k, i, j] X[i, p, b]
g(X) = ˘ΘSX, where W[i, j, p, q] = Σ_k Θ[k, p, q] S[k, i, j] and g(X)[j, q, b] = Σ_{i, p} W[i, j, p, q] X[i, p, b]
index size description
i n input neuron
j m output neuron
p N input channel
q M feature map
k ω kernel weight
b B batch instance
Table: indices
tensor shape
Θ ω × N × M
S ω × n × m
X n × N × B
ΘS n × m × N × M
SX ω × m × N × B
ΘX ω × n × M × B
˘ΘSX m × M × B
Table: shapes
December 13th 2018 32 / 50
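The triple contraction above maps directly onto a single einsum; a hedged sketch (my own illustration; shapes follow the tables on this slide, the function name is hypothetical):

```python
# Minimal sketch of ˘ΘSX[j, q, b] = Σ_k Σ_p Σ_i Θ[k, p, q] S[k, i, j] X[i, p, b].
import numpy as np

def neural_contraction(Theta, S, X):
    # Theta: ω × N × M, S: ω × n × m, X: n × N × B  ->  output: m × M × B
    return np.einsum('kpq,kij,ipb->jqb', Theta, S, X)

omega, N, M, n, m, B = 3, 2, 4, 5, 5, 7
out = neural_contraction(np.random.randn(omega, N, M),
                         np.random.randn(omega, n, m),
                         np.random.randn(n, N, B))
print(out.shape)   # (5, 4, 7), i.e. m × M × B
```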
43. 33/50
Properties
This formulation is:
Linear
Associative
Commutative
Generic (next slide)
It is explainable as a convolution of graph signals when:
used in a supervised application,
there is either 0 or 1 weight per connection (the sij are one-hot vectors).
In other cases it can be seen as a linear combination of convolutions.
December 13th 2018 33 / 50
44. 34/50
Genericity of ternary representation
Given adequate specification of the weight sharing scheme S, we can obtain, e.g. (see the sketch after this slide):
a dense layer
a partially connected layer
a convolutional layer
a graph convolutional layer (GCN, Kipf et al.)
a graph attention layer (GAT, Velickovic et al.)
a topology-adaptive graph convolution layer (TAGCN, Du et al.)
a mixture model convolutional layer (MOnet, Monti et al.)
a generalized convolution under sparse priors
any partial connectivity pattern, sparse or not
December 13th 2018 34 / 50
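To illustrate the genericity claim (my own illustration, not from the thesis), here are two weight-sharing schemes S that specialize the contraction of the previous slides: a one-weight-per-connection scheme that recovers a dense layer, and a banded scheme on a ring that recovers a circular 1-D convolution. These S tensors can be fed to the neural_contraction sketch shown earlier.

```python
# Minimal sketches of weight-sharing schemes S (ω × n × m) for the ternary representation.
import numpy as np

def dense_scheme(n, m):
    """One kernel weight per (input, output) pair: the contraction becomes a dense layer."""
    S = np.zeros((n * m, n, m))
    for i in range(n):
        for j in range(m):
            S[i * m + j, i, j] = 1.0
    return S

def ring_conv_scheme(n, kernel_size=3):
    """Weight k is shared along the ring: the contraction becomes a circular 1-D convolution."""
    S = np.zeros((kernel_size, n, n))
    for k in range(kernel_size):
        for j in range(n):
            S[k, (j + k - kernel_size // 2) % n, j] = 1.0
    return S
```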
45. 35/50
Discussion
X Y
Y = h(˙ΘSX)
The propagation logic is in the scheme S. For example, it can be either:
given
randomized
learned
inferred
December 13th 2018 35 / 50
46. 35/50
Outline
1 Motivation and problem statement
2 Literature overview
3 Convolution of graph signals
4 Deep learning on graph domains
5 Experiments
6 Conclusion
December 13th 2018 35 / 50
47. 36/50
Outline
Experiments:
Study of influence of symmetries using S
Learning S when masked by an adjacency matrix and its powers
Monte Carlo simulations with random realizations of S
Learning S in semi-supervised applications
Inferring S from translations
December 13th 2018 36 / 50
50. 39/50
Influence of symmetries: results on MNIST
Y = h(˙ΘSX)
(Figure: classification error (in %) as a function of σ, in units of the initial pixel separation, compared against MLP and CNN baselines.)
December 13th 2018 39 / 50
51. 40/50
Learning both S and Θ on MNIST and scrambled MNIST
Y = h(˙ΘSX)
Ordering | Conv5x5 | A1 | A2 | A3 | A4 | A5 | A6
no prior | / | 1.24% | 1.02% | 0.93% | 0.90% | 0.93% | 1.00%
prior | 0.87% | 1.21% | 0.91% | 0.91% | 0.87% | 0.80% | 0.74%
Table: Grid graphs on MNIST
December 13th 2018 40 / 50
52. 40/50
Learning both S and Θ on MNIST and scrambled MNIST
Y = h(˙ΘSX)
Ordering | Conv5x5 | A1 | A2 | A3 | A4 | A5 | A6
no prior | / | 1.24% | 1.02% | 0.93% | 0.90% | 0.93% | 1.00%
prior | 0.87% | 1.21% | 0.91% | 0.91% | 0.87% | 0.80% | 0.74%
Table: Grid graphs on MNIST
MLP | Conv5x5 | Thresholded (p = 3%) | k-NN (k = 25)
1.44% | 1.39% | 1.06% | 0.96%
Table: Covariance graphs on Scrambled MNIST
December 13th 2018 40 / 50
53. 41/50
Experiments on text categorization
Y = h(˙ΘSX)
Figure: diagram of the MCNet architecture used: input → MC64 → MC64 → TrivialConv1 (+) → FC500 → FC20 → output.
MNB | FC2500 | FC2500-500 | ChebNet32 | FC500 | MCNet
68.51 | 64.64 | 65.76 | 68.26 | 71.46 (72.25) | 70.74 (72.62)
Table: Accuracies (in %) on 20NEWS, given as mean (max)
December 13th 2018 41 / 50
54. 42/50
Benchmarks on citation networks
Y = h(˙ΘSX)
Comparison of
Graph Convolution Network (GCN),
Graph Attention Network (GAT),
Topology Adaptive GCN (TAGCN).
With our models:
Addition of graph dropout to GCN (GCN*),
Graph Contraction Network (GCT).
Dataset | MLP | GCN | GAT | TAGCN | GCN* | GCT
Cora | 58.8 ± 0.9 | 81.8 ± 0.9 | 83.3 ± 0.6 | 82.9 ± 0.7 | 83.4 ± 0.7 | 83.3 ± 0.7
Citeseer | 56.7 ± 1.1 | 72.2 ± 0.6 | 72.1 ± 0.6 | 71.7 ± 0.7 | 72.5 ± 0.8 | 72.7 ± 0.5
Pubmed | 72.6 ± 0.9 | 79.0 ± 0.5 | 78.3 ± 0.7 | 78.9 ± 0.5 | 78.2 ± 0.7 | 79.2 ± 0.4
Table: Mean accuracy (in %) and standard deviation after 100 runs
December 13th 2018 42 / 50
55. 43/50
Another approach: finding translations in graphs to construct S
Step 0: infer a graph from the signals x0, x1, ..., xm (figure: signals ⇒ inferred graph).
December 13th 2018 43 / 50
56. 43/50
Another approach: finding translations in graphs to construct S
Step 0: infer a graph from the signals x0, x1, ..., xm (figure).
Step 1: infer translations on the inferred graph (figure).
December 13th 2018 43 / 50
57. 44/50
Another approach: finding translations in graphs to construct S
Step 2: design the convolution weight-sharing (figure: the filter combines the neighborhood with weights w0, w1, w2, w3, w4).
December 13th 2018 44 / 50
58. 44/50
Another approach: finding translations in graphs to construct S
Step 2: design the convolution weight-sharing (figure, as above).
Step 3: design data augmentation (figure: a signal x0 and its translated versions).
December 13th 2018 44 / 50
59. 45/50
Another approach: finding translations in graphs to construct S
Step 4: design graph subsampling and the corresponding convolution weight-sharing (figure: the subsampled graph and a filter with weights w0, w1, w2).
December 13th 2018 45 / 50
60. 46/50
Architecture
We used a variant of deep residual networks (ResNet).
We swap operations (data augmentation, convolutions, subsampling) with their
counterparts.
December 13th 2018 46 / 50
61. 47/50
Results on CIFAR-10, scrambled CIFAR-10 and PINES fMRI
Y = h(˙ΘSX)
Support | — | — | Grid Graph | Grid Graph | Covariance Graph
Method | MLP | CNN | ChebNet^c | Proposed | Proposed
Full Data Augmentation | 78.62%^(a,b) | 93.80% | 85.13% | 93.94% | 92.57%
Data Augmentation w/o Flips | —— | 92.73% | 84.41% | 92.94% | 91.29%
Graph Data Augmentation | —— | 92.10%^d | —— | 92.81% | 91.07%^a
None | 69.62% | 87.78% | —— | 88.83% | 85.88%^a
a: No priors about the structure. b: Lin et al., 2015. c: Defferrard et al., 2016. d: Data augmentation done with the covariance graph.
Table: CIFAR-10 and scrambled CIFAR-10
December 13th 2018 47 / 50
62. 47/50
Results on CIFAR-10, scrambled CIFAR-10 and PINES fMRI
Y = h(˙ΘSX)
Support | — | — | Grid Graph | Grid Graph | Covariance Graph
Method | MLP | CNN | ChebNet^c | Proposed | Proposed
Full Data Augmentation | 78.62%^(a,b) | 93.80% | 85.13% | 93.94% | 92.57%
Data Augmentation w/o Flips | —— | 92.73% | 84.41% | 92.94% | 91.29%
Graph Data Augmentation | —— | 92.10%^d | —— | 92.81% | 91.07%^a
None | 69.62% | 87.78% | —— | 88.83% | 85.88%^a
a: No priors about the structure. b: Lin et al., 2015. c: Defferrard et al., 2016. d: Data augmentation done with the covariance graph.
Table: CIFAR-10 and scrambled CIFAR-10
Support | None | None | Neighborhood Graph | Neighborhood Graph
Method | MLP | CNN (1x1 kernels) | ChebNet^c | Proposed
Accuracy | 82.62% | 84.30% | 82.80% | 85.08%
Table: PINES fMRI
December 13th 2018 47 / 50
63. 47/50
Outline
1 Motivation and problem statement
2 Literature overview
3 Convolution of graph signals
4 Deep learning on graph domains
5 Experiments
6 Conclusion
December 13th 2018 47 / 50
64. 48/50
Summary
We studied convolutions of graph signals and used them to build and understand
extensions of CNN on graph domains.
Convolution of graph signals:
Algebraic description of convolution of graph signals: ϕ- and m-convolutions,
constructed as the class of linear operators that are equivariant to actions of a group.
Strong characterization results for graphs with Cayley subgraphs.
Extension with groupoids.
Deep learning on graphs:
Novel representation based on weight sharing: the neural contraction
Monte-Carlo Neural Networks (MCNN)
Graph Contraction Networks (GCT)
Graph dropout (GCN*)
Translation-Convolutional Neural Network (TCNN)
December 13th 2018 48 / 50
65. 49/50
Final words
Perspectives:
In the literature of this domain: semi-supervised >> supervised.
Both tasks can be abstracted to a more general case.
Y = h(˘ΘSX)
There can be more than one tensor rank whose relations can be represented by a graph.
Y = h(g(X, A1, A2, ..., Ar))
Extended range of applications for deep learning architecture.
Thinking in AI might be about creating connections (captured by S) and not about
updating weights.
Thank you for your attention !
December 13th 2018 49 / 50
66. 50/50
Contributions
Generalizing the convolution operator to extend CNNs to irregular domains, Jean-Charles
Vialatte, Vincent Gripon, Grégoire Mercier, arXiv preprint 2016.
Neighborhood-preserving translations on graphs, Nicolas Grelier, Bastien Pasdeloup,
Jean-Charles Vialatte, Vincent Gripon, IEEE GlobalSIP 2016.
Learning local receptive fields and their weight sharing scheme on graphs, Jean-Charles
Vialatte, Vincent Gripon, Gilles Coppin, IEEE GlobalSIP 2017.
A study of deep learning robustness against computation failures, Jean-Charles Vialatte,
François Leduc-Primeau, ACTA 2017.
Convolutional neural networks on irregular domains through approximate translations on
inferred graphs, Bastien Pasdeloup, Vincent Gripon, Jean-Charles Vialatte, Dominique
Pastor, arXiv preprint 2017.
Translations on graphs with neighborhood preservation, Bastien Pasdeloup, Vincent
Gripon, Nicolas Grelier, Jean-Charles Vialatte, Dominique Pastor, arXiv preprint 2017.
Matching CNNs without Priors about data, Carlos-Eduardo Rosar Kos Lassance,
Jean-Charles Vialatte, Vincent Gripon, IEEE DSW 2018.
On convolution of graph signals and deep learning on graph domains, Jean-Charles
Vialatte, thesis, unpublished.
Convolution of graph signals, Jean-Charles Vialatte, Vincent Gripon, Gilles Coppin,
unpublished.
Graph contraction networks, Graph dropout, Monte-Carlo Networks, Jean-Charles
Vialatte, Vincent Gripon, Gilles Coppin, unpublished.
December 13th 2018 50 / 50