This paper proposes a new technique for top-K graph similarity queries that aims to reduce computational cost. It defines a new distance measure for graphs based on the maximum common subgraph (MCS). It derives several distance lower bounds to prune graphs from consideration without needing to fully compute the MCS. This allows reducing the number of expensive MCS computations. The techniques are evaluated on a real graph dataset to test their performance improvements over existing approaches.
A common fixed point theorem for two random operators using random Mann itera... (Alexander Decker)
This academic article presents a common fixed point theorem for two random operators using a random Mann iteration scheme. It proves that if a sequence defined by the random Mann iteration of two random operators converges, then the limit point is a common fixed point of the two operators. The paper defines relevant concepts such as random operators and random fixed points. It then presents the main theorem and proof that under a contractive condition, the limit of the random Mann iteration is a common fixed point. The proof uses properties of measurable mappings and the convergence of the iterative sequence.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Fuzzy set theory has been applied in many fields, such as management and engineering. In this paper a new operation on hexagonal fuzzy numbers is defined, in which the methods of addition, subtraction, and multiplication have been modified under some conditions. The main aim of this paper is to introduce new operations for addition, subtraction and multiplication of hexagonal fuzzy numbers on the basis of alpha-cut sets of fuzzy numbers.
Symbolic Computation via Gröbner Basis (IJERA Editor)
The purpose of this paper is to find the orthogonal projection of a rational parametric curve onto a rational parametric surface in 3-space. We show that the orthogonal projection problem can be reduced to the problem of finding elimination ideals via Gröbner bases. We provide a computational algorithm to find the orthogonal projection, and include a few illustrative examples. The presented method is effective and potentially useful for many applications related to the design of surfaces and to other industrial and research fields.
Least Square Optimization and Sparse-Linear Solver (Ji-yong Kwon)
The document discusses least-square optimization and sparse linear systems. It introduces least-square optimization as a technique to find approximate solutions when exact solutions do not exist. It provides an example of using least-squares to find the line of best fit through three points. The objective is to minimize the sum of squared distances between the line and points. Solving the optimization problem yields a set of linear equations that can be solved using techniques like pseudo-inverse or conjugate gradient. Sparse linear systems with many zero entries can be solved more efficiently than dense systems.
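The line-fitting example described above can be reproduced in a few lines. This is a minimal sketch: the three points are hypothetical (not taken from the slides), and the pseudo-inverse is used as one of the solution techniques the summary names.

```python
import numpy as np

# Three hypothetical points (not the ones from the slides)
pts = np.array([(0.0, 1.1), (1.0, 1.9), (2.0, 3.2)])
x, y = pts[:, 0], pts[:, 1]

# Model y ≈ a*x + b: build the design matrix and solve via the
# pseudo-inverse, one of the solution techniques the summary names.
A = np.column_stack([x, np.ones_like(x)])
a, b = np.linalg.pinv(A) @ y

# Objective: sum of squared vertical distances between line and points
residual = np.sum((A @ np.array([a, b]) - y) ** 2)
```

Because the three points are not collinear, no exact solution exists, and the least-squares line minimizes the residual instead.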
Pattern-based classification of demographic sequences (Dmitrii Ignatov)
We have proposed prefix-based gapless sequential patterns for classification of demographic sequences. In comparison to black-box machine learning techniques, this one provides interpretable patterns suitable for treatment by professional demographers. As for the language, we have used Pattern Structures as an extension of Formal Concept Analysis for the case of complex data like sequences, graphs, intervals, etc.
Geoid height determination is one of the major problems of geodesy, because the use of satellite techniques in geodesy is increasing. Geoid heights can be determined by different methods according to the available data. Soft computing methods such as fuzzy logic and neural networks have become so popular that they are used to solve many engineering problems. Fuzzy logic theory and later developments in uncertainty assessment have enabled us to develop more precise models for our requirements. In this study, how to construct the best fuzzy model is examined. For this purpose, three different data sets were taken, and two different kinds of fuzzy model (two inputs, one output; and three inputs, one output) were formed for the calculation of geoid heights in Istanbul (Turkey). The results of these fuzzy models were compared with geoid heights obtained by GPS/levelling, and the fuzzy approximation models were evaluated on the test points.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Fractional integration and fractional differentiation of the product of m ser... (Alexander Decker)
This document presents theorems on fractional integrals and derivatives of the product of an M-series and H-function. An M-series is a special case of an H-function, and represents important functions in physics and applied sciences. Theorems are derived for the Riemann-Liouville fractional integral and derivative of the product. Additional theorems provide formulas for fractional integrals of the M-series alone, defined using an H-function operator previously established by Saxena and Khumbhat. The theorems extend previous work and provide new formulas incorporating these important special functions.
This document summarizes a paper that presents new algorithms for solving the cyclic order-preserving assignment problem (COPAP) and related sub-problem, the linear order-preserving assignment problem (LOPAP). It introduces a new point-assignment cost function called the Procrustean local shape distance (PLSD) and explores heuristics for using the A* search algorithm to more efficiently solve COPAP and LOPAP. Experimental results on the MPEG-7 shape dataset are presented and recommendations are made for solving COPAP/LOPAP in practice.
Design and analysis of algorithms question paper 2015 tutorialsduniya.com (TutorialsDuniya.com)
This document contains instructions for an exam on the topic of algorithms. It includes 7 printed pages and contains 35 total marks worth of questions. Question 1 is compulsory and worth 35 marks, while candidates must attempt any 4 questions from questions 2 through 7. The questions cover topics like quicksort analysis, longest common subsequence, red-black trees, Tower of Hanoi recurrence relations, graph algorithms, and string matching.
A New Enhanced Method of Non-Parametric Power Spectrum Estimation (CSCJournals)
The classical approach to the spectral analysis of nonuniformly sampled data sequences is the Fourier periodogram. We explain, from both data-fitting and computational standpoints, why the least-squares periodogram (LSP) is preferable to the "classical" Fourier periodogram and to the frequently used form of the LSP due to Lomb and Scargle. We then present a new method of spectral analysis of nonuniform data sequences that can be interpreted as an iteratively weighted LSP, one which makes use of a data-dependent weighting matrix built from the most recent spectral estimate. Because it is iterative and uses an adaptive (i.e., data-dependent) weighting, we refer to it as the iterative adaptive approach (IAA). The LSP and IAA are nonparametric methods that can be used for the spectral analysis of general data sequences with both continuous and discrete spectra; however, they are most suitable for data sequences with discrete spectra (i.e., sinusoidal data), which is the case we emphasize in this paper. Of the existing methods for nonuniform sinusoidal data, the Welch, MUSIC and ESPRIT methods appear to be the closest in spirit to the IAA proposed here; indeed, all these methods make use of the estimated covariance matrix that is computed in the first iteration of the IAA from the LSP. MUSIC and ESPRIT, on the other hand, are parametric methods that require a guess of the number of sinusoidal components present in the data, without which they cannot be used.
Using several mathematical examples from three different authors, taken from texts used in different courses, this paper illustrates that the easiest way to avoid confusion and always get correct results with the least effort is to use the proposed Excel Gamma function, explained in detail, for the proper use of the Q(z) and erfc(x) functions in most communication courses. The paper serves as a tutorial and introduction to these functions.
This document presents and compares three approximation methods for thin plate spline mappings that reduce the computational complexity from O(p³) to O(m³), where m is the size of a small subset of the p points. Method 1 uses only the subset of points to estimate the mapping. Method 2 uses the subset of basis functions with all target values. Method 3 approximates the full matrix using the Nyström method. Experiments on synthetic grids show Method 3 has the lowest error, followed by Method 2, with Method 1 having the highest error. The three methods trade off accuracy, computation time, and the ability to do principal warp analysis.
Generalized fixed point theorems for compatible mapping in fuzzy 2 metric spa... (Alexander Decker)
This document discusses generalized fixed point theorems for compatible mappings in fuzzy 2-metric spaces. It begins with introductions and preliminaries on fixed point theory, fuzzy metric spaces, and compatible mappings. It then provides new definitions of compatible mappings of types (I) and (II) in fuzzy 2-metric spaces. The main results extend, generalize, and improve previous theorems by proving common fixed point theorems for four mappings under the condition of compatible mappings of types (I) and (II) in complete fuzzy 2-metric spaces.
This document discusses computing canonical labelings of digraphs. It begins by reviewing key concepts like digraphs, adjacency matrices, and isomorphisms. It notes that while many algorithms exist for undirected graphs, computing canonical labelings of digraphs remains challenging. The document then presents several new theoretical concepts for digraph canonical labeling, including mix diffusion degree sequences. It proposes using these concepts to systematically compute canonical labelings and proves several theorems to guide the algorithm. It describes four algorithms for calculating the canonical labeling of a digraph and notes the algorithms have been preliminarily verified through software testing.
Generalized fixed point theorems for compatible mapping in fuzzy 3 metric spa... (Alexander Decker)
This document discusses generalized fixed point theorems for compatible mappings in fuzzy 3-metric spaces. It begins with introductions and preliminaries on fixed point theory, fuzzy metric spaces, and compatible mappings. It then provides new definitions of compatible mappings of types (I) and (II) in fuzzy 3-metric spaces. The main results extend, generalize, and improve previous theorems by proving common fixed point theorems for four mappings under the conditions of compatible mappings of types (I) and (II) in complete fuzzy 3-metric spaces.
A Generalized Sampling Theorem Over Galois Field Domains for Experimental Des... (csandit)
In this paper, the sampling theorem for bandlimited functions over Galois field domains is generalized to one over product (∏) domains. The generalized theorem is applicable to the experimental design model in which each factor has a different number of levels, and it enables us to estimate the parameters in the model by using Fourier transforms. Moreover, the relationship between the proposed sampling theorem and orthogonal arrays is also provided.
Solving Fuzzy Maximal Flow Problem Using Octagonal Fuzzy Number (IJERA Editor)
In this paper a general fuzzy maximal flow problem is discussed. A crisp maximal flow problem can be solved by two methods: linear programming modeling and the maximal flow algorithm. Here I have tried to fuzzify the maximal flow algorithm using the octagonal fuzzy numbers introduced by S. U. Malini and Felbin C. Kennedy [26]. By ranking the octagonal fuzzy numbers it is possible to compare them, and using this we convert the fuzzy-valued maximal flow algorithm into a crisp-valued algorithm. It is shown that a better solution is obtained when the problem is solved using octagonal fuzzy numbers than when it is solved using trapezoidal fuzzy numbers. To illustrate this, a numerical example is solved and the result obtained is compared with existing results. If there is no uncertainty about the flow between source and sink, then the proposed algorithm gives the same result as in crisp maximal flow problems.
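For reference, the crisp maximal flow algorithm that the abstract fuzzifies can be sketched as a standard augmenting-path routine. This is a generic textbook Edmonds-Karp version under our own naming, not the paper's fuzzified algorithm.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths.
    `cap` is a dict-of-dicts of crisp edge capacities."""
    res = {u: dict(vs) for u, vs in cap.items()}   # residual capacities
    for u in cap:                                   # add zero reverse edges
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {s: None}                          # BFS for augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:                         # no path left: done
            return flow
        path, v = [], t                             # recover s -> t path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[u][v] for u, v in path)      # bottleneck capacity
        for u, v in path:                           # push flow, update residuals
            res[u][v] -= push
            res[v][u] += push
        flow += push
```

On a small network such as `{'s': {'a': 3, 'b': 2}, 'a': {'t': 2}, 'b': {'t': 3}}` the routine returns the crisp maximal flow of 4; the paper's contribution is to run this style of algorithm with octagonal fuzzy capacities compared via ranking.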
Rosalina Apriana - Math Compulsory Grade XI - Matrices (RosalinaApriana)
This document discusses matrices and their properties. It begins by defining what a matrix is - a collection of numbers arranged in rows and columns. It notes that matrices were first introduced in 1859 and are now used widely in fields like quantum mechanics. The document then covers various matrix topics in detail over multiple sections, including notation and order, basic operations like addition/subtraction and multiplication, determinants, inverses, similarities and equations. It provides examples for each topic to illustrate the concepts and rules regarding matrices.
This document summarizes research on embedding planar point sets with integral squared distances in lattices corresponding to rings of integers in imaginary quadratic fields. It is shown that if the squared distance between at least one pair of points is integral, and the ring of integers has the property that the square of every ideal is principal, then the point set can be embedded in the ring of integers. Additionally, if the point set is primitive with relatively prime squared distances, and the ring of integers is a principal ideal domain, then the point set embeds in the ring of integers. Examples of embeddings in specific rings of integers are provided.
In many scientific areas, systems can be described as interaction networks where elements correspond to vertices and interactions to edges. A variety of problems in those fields can deal with network comparison and characterization.
The problem of comparing and characterizing networks is the task of measuring their structural similarity and finding characteristics which capture structural information. In order to analyze complex networks, several methods can be combined, such as graph theory, information theory, and statistics.
In this project, we present methods for measuring Shannon’s entropy of graphs.
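One simple instance of a graph entropy in the sense described above is the Shannon entropy of the degree distribution; this is a common choice in the structural-information literature, though the exact measures used in the project may differ.

```python
import math
from collections import Counter

def degree_entropy(edges):
    """Shannon entropy (bits) of a graph's degree distribution.
    Isolated vertices are ignored, since they never appear in an edge list."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    counts = Counter(deg.values())            # how many vertices share each degree
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A perfectly regular graph, e.g. a triangle, has entropy 0, while graphs with more heterogeneous degree sequences score higher — which is exactly the kind of structural information such measures are meant to capture.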
A New Approach to Design a Reduced Order Observer (IJERD Editor)
This document proposes a new method for designing reduced order observers for linear time-invariant systems. The approach is based on inverting matrices of proper dimensions. It reduces the arbitrariness of previous methods by using pole-placement techniques. The method is applied to design a reduced order observer for a 3rd order system. Simulation results show the observer estimates converge to the true system states.
Let G be a simple graph of order n, and let A(G) be its adjacency matrix of order n × n. The matrix A(G) is said to be graphical if all its diagonal entries are zero. The graph Γ is said to be the matrix product (mod 2) of G1 and G2 if A(G1)A(G2) (mod 2) is graphical, and Γ is the realization of A(G1)A(G2) (mod 2). In this paper we study the realization of the cycle graph Cn and of any k-regular subgraph of it, together with some interesting characterizations and properties of the graphs for which the product of adjacency matrices under (mod 2) is graphical.
Gaps between the theory and practice of large-scale matrix-based network comp... (David Gleich)
This document discusses gaps between theory and practice in large scale matrix computations for networks. It provides an overview of representing networks as matrices and canonical problems like PageRank that can be modeled as matrix computations. It then discusses different methods for solving these problems, like Monte Carlo methods, relaxation methods, and Krylov subspace methods. It analyzes the computational complexity of these approaches and identifies open problems, such as developing unified convergence results for different algorithms and handling "top k" convergence. The talk concludes by identifying more structured problems on networks that could leverage matrix computations.
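The summary names PageRank as the canonical network problem modeled as a matrix computation. A minimal power-iteration sketch, under our own naming and matrix convention, and assuming a graph with no dangling nodes:

```python
import numpy as np

def pagerank(adj, alpha=0.85, tol=1e-10):
    """Power-iteration PageRank sketch. adj[i, j] = 1 if node j links to i;
    assumes every node has at least one out-link (no dangling nodes)."""
    n = adj.shape[0]
    P = adj / adj.sum(axis=0)        # column-stochastic transition matrix
    x = np.full(n, 1.0 / n)          # uniform starting rank vector
    while True:
        x_new = alpha * (P @ x) + (1 - alpha) / n   # damped iteration
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
```

On a directed 3-cycle every node ends up with rank 1/3, as symmetry demands; the talk's point is that for large sparse networks the choice between such power/relaxation iterations, Monte Carlo, and Krylov methods is far less settled.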
This document provides an introduction to calculus by discussing pure versus applied mathematics. It then reviews basic mathematical concepts such as exponents, algebraic expressions, solving equations, inequalities, and sets that are used in numerical analysis. Finally, it discusses graphical representations of rectangular and polar coordinate systems and includes examples of converting between the two systems.
Machine learning ppt and presentation code (sharma239172)
Principal Component Analysis (PCA) is a technique for dimensionality reduction that projects high-dimensional data onto a lower-dimensional space in a way that maximizes variance. It works by finding the directions (principal components) along which the variance of the data is highest. These principal components become the new axes of the reduced space. PCA involves computing the covariance matrix of the data, performing eigendecomposition on the covariance matrix to obtain its eigenvectors, and projecting the data onto the top K eigenvectors corresponding to the largest eigenvalues, where K is the target dimensionality. This projection both reduces dimensionality and maximizes retained variance.
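The steps listed above — center, covariance, eigendecomposition, project onto the top-K eigenvectors — can be sketched directly (the function name and toy data are ours):

```python
import numpy as np

def pca(X, k):
    """Project rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)              # center the data
    C = np.cov(Xc, rowvar=False)         # covariance matrix of the features
    vals, vecs = np.linalg.eigh(C)       # eigendecomposition (symmetric matrix)
    order = np.argsort(vals)[::-1][:k]   # indices of the k largest eigenvalues
    W = vecs[:, order]                   # top-k principal directions
    return Xc @ W                        # projected, dimensionality-reduced data
```

For data lying exactly on a line in 2-D, a single component retains all of the variance, which illustrates the "maximize retained variance" claim in the summary.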
maximal flow problems.
Rosalina Apriana - Math Compulsory Grade XI - MatriksRosalinaApriana
This document discusses matrices and their properties. It begins by defining what a matrix is - a collection of numbers arranged in rows and columns. It notes that matrices were first introduced in 1859 and are now used widely in fields like quantum mechanics. The document then covers various matrix topics in detail over multiple sections, including notation and order, basic operations like addition/subtraction and multiplication, determinants, inverses, similarities and equations. It provides examples for each topic to illustrate the concepts and rules regarding matrices.
This document summarizes research on embedding planar point sets with integral squared distances in lattices corresponding to rings of integers in imaginary quadratic fields. It is shown that if the squared distance between at least one pair of points is integral, and the ring of integers has the property that the square of every ideal is principal, then the point set can be embedded in the ring of integers. Additionally, if the point set is primitive with relatively prime squared distances, and the ring of integers is a principal ideal domain, then the point set embeds in the ring of integers. Examples of embeddings in specific rings of integers are provided.
In many scientific areas, systems can be described as interaction networks where elements correspond to vertices and interactions to edges. A variety of problems in those fields can deal with network comparison and characterization.
The problem of comparing and characterizing networks is the task of measuring their structural similarity and finding characteristics which capture structural information. In order to analyze complex networks, several methods can be combined, such as graph theory, information theory, and statistics.
In this project, we present methods for measuring Shannon’s entropy of graphs.
A New Approach to Design a Reduced Order ObserverIJERD Editor
This document proposes a new method for designing reduced order observers for linear time-invariant systems. The approach is based on inverting matrices of proper dimensions. It reduces the arbitrariness of previous methods by using pole-placement techniques. The method is applied to design a reduced order observer for a 3rd order system. Simulation results show the observer estimates converge to the true system states.
Let 퐺 be simple graph of order 푛. 퐴 퐺 is the adjacency matrix of 퐺 of order 푛 × 푛. The matrix 퐴 퐺 is said to graphical if all its diagonal entries should be zero. The graph⎾ is said to be the matrix product (mod-2) of 퐺 and 퐺 푖푓 퐴 퐺 푎푛푑 퐴 퐺 (mod-2) is graphical and ⎾ is the realization of 퐴 퐺 퐴 퐺 (mod-2). In this paper, we are going to study the realization of the Cycle graph 퐺 and any 푘 − regular subgraph of 퐺 . Also some interesting characterizations and properties of the graphs for each the product of adjacency matrix under (mod-2) is graphical.
Gaps between the theory and practice of large-scale matrix-based network comp...David Gleich
This document discusses gaps between theory and practice in large scale matrix computations for networks. It provides an overview of representing networks as matrices and canonical problems like PageRank that can be modeled as matrix computations. It then discusses different methods for solving these problems, like Monte Carlo methods, relaxation methods, and Krylov subspace methods. It analyzes the computational complexity of these approaches and identifies open problems, such as developing unified convergence results for different algorithms and handling "top k" convergence. The talk concludes by identifying more structured problems on networks that could leverage matrix computations.
This document provides an introduction to calculus by discussing pure versus applied mathematics. It then reviews basic mathematical concepts such as exponents, algebraic expressions, solving equations, inequalities, and sets that are used in numerical analysis. Finally, it discusses graphical representations of rectangular and polar coordinate systems and includes examples of converting between the two systems.
Machine learning ppt and presentation codesharma239172
Principal Component Analysis (PCA) is a technique for dimensionality reduction that projects high-dimensional data onto a lower-dimensional space in a way that maximizes variance. It works by finding the directions (principal components) along which the variance of the data is highest. These principal components become the new axes of the reduced space. PCA involves computing the covariance matrix of the data, performing eigendecomposition on the covariance matrix to obtain its eigenvectors, and projecting the data onto the top K eigenvectors corresponding to the largest eigenvalues, where K is the target dimensionality. This projection both reduces dimensionality and maximizes retained variance.
This document discusses implementing various graph algorithms using GraphBLAS kernels. It describes how degree filtered breadth-first search, k-truss detection, calculating the Jaccard index, and non-negative matrix factorization can be expressed using operations like sparse matrix multiplication, element-wise multiplication, scaling and reduction. The goal is to demonstrate how fundamental graph problems can be solved within the GraphBLAS framework using linear algebraic formulations of graph computations.
This document discusses implementing various graph algorithms using GraphBLAS kernels. It describes how degree filtered breadth-first search, k-truss detection, calculating the Jaccard index, and non-negative matrix factorization can be expressed using operations like SpGEMM, SpMV, element-wise multiplication, and scaling. The goal is to demonstrate how common graph analytics can utilize the linear algebra approach of the GraphBLAS framework.
This document summarizes a paper that analyzes compressive sampling (CS) for compressing and reconstructing electrocardiogram (ECG) signals using l1 minimization algorithms. It proposes remodeling the linear program problem into a second order cone program to improve performance metrics like percent root-mean-squared difference, compression ratio, and signal-to-noise ratio when reconstructing ECG signals from the PhysioNet database. The paper provides an overview of CS theory and l1 minimization algorithms, describes the proposed approach of using quadratic constraints, and defines performance metrics for analyzing reconstructed ECG signals.
Theories and Engineering Technics of 2D-to-3D Back-Projection ProblemSeongcheol Baek
The slides introduce mathematical basics of 3d-to-2d image projection, 2d-to-3d back-projection problem, and its engineering technics, such as convex optimization problem, principal component analysis (PCV), singular value decomposition (SVD), etc.
This document contains a sample question paper for Class XII Mathematics. It has 5 sections (A-E). Section A contains 18 multiple choice questions and 2 assertion-reason questions worth 1 mark each. Section B has 5 very short answer questions worth 2 marks each. Section C contains 6 short answer questions worth 3 marks each. Section D has 4 long answer questions worth 5 marks each. Section E contains 3 case study/passage based questions worth 4 marks each with internal subparts. The document provides sample questions on topics including trigonometry, calculus, matrices, probability, linear programming and more.
The document summarizes a method for mining frequent subgraphs from linear graphs. It describes:
1) Representing data like proteins, RNA and texts as linear graphs and the need for algorithms to mine frequent patterns from such graphs.
2) A method called LGM that can efficiently enumerate and mine both connected and disconnected subgraphs from linear graphs using reverse search techniques.
3) Experiments applying LGM to mine motifs from protein structures and phrases from texts, achieving better performance than existing methods.
"Incremental Lossless Graph Summarization", KDD 2020지훈 고
A presentation slides of Jihoon Ko*, Yunbum Kook* and Kijung Shin, "Incremental Lossless Graph Summarization", KDD 2020.
Given a fully dynamic graph, represented as a stream of edge insertions and deletions, how can we obtain and incrementally update a lossless summary of its current snapshot?
As large-scale graphs are prevalent, concisely representing them is inevitable for efficient storage and analysis. Lossless graph summarization is an effective graph-compression technique with many desirable properties. It aims to compactly represent the input graph as (a) a summary graph consisting of supernodes (i.e., sets of nodes) and superedges (i.e., edges between supernodes), which provide a rough description, and (b) edge corrections which fix errors induced by the rough description. While a number of batch algorithms, suited for static graphs, have been developed for rapid and compact graph summarization, they are highly inefficient in terms of time and space for dynamic graphs, which are common in practice.
In this work, we propose MoSSo, the first incremental algorithm for lossless summarization of fully dynamic graphs. In response to each change in the input graph, MoSSo updates the output representation by repeatedly moving nodes among supernodes. MoSSo decides nodes to be moved and their destinations carefully but rapidly based on several novel ideas. Through extensive experiments on 10 real graphs, we show MoSSo is (a) Fast and 'any time': processing each change in near-constant time (less than 0.1 millisecond), up to 7 orders of magnitude faster than running state-of-the-art batch methods, (b) Scalable: summarizing graphs with hundreds of millions of edges, requiring sub-linear memory during the process, and (c) Effective: achieving comparable compression ratios even to state-of-the-art batch methods.
This document discusses linear functions and straight line graphs. It defines key concepts such as the standard form of a linear equation (y=mx+c), where m is the gradient and c is the y-intercept. It explains how to calculate the gradient between two points and interprets positive and negative gradients. The document also covers finding the x-intercept and y-intercept of a line, and defines the domain and range of linear functions.
The document discusses various algorithms related to decrease and conquer, including:
1) Topological sorting, which lists the vertices of a directed acyclic graph such that all edges are oriented from earlier to later vertices in the list. Two methods for topological sorting are described.
2) Insertion sort, which inserts elements into the sorted portion of an array by iterating through the array and placing each element in its proper sorted location.
3) Graph searching algorithms like depth-first search and breadth-first search, which systematically explore the vertices and edges of a graph.
PAGE NUMBER PROBLEM: AN EVALUATION OF HEURISTICS AND ITS SOLUTION USING A HYB...ijcseit
The page number problem is to determine the minimum number of pages in a book in which a graph G can
be embedded with the vertices placed in a sequence along the spine and the edges on the pages of the book
such that no two edges cross each other in any drawing. In this paper we have (a) statistically evaluated
five heuristics for ordering vertices on the spine for minimum number of edge crossings with all the edges
placed in a single page, (b) statistically evaluated four heuristics for distributing edges on a minimum
number of pages with no crossings for a fixed ordering of vertices on the spine and (c) implemented and
experimentally evaluated a hybrid evolutionary algorithm (HEA) for solving the pagenumber problem. In
accordance with the results of (a) and (b) above, in HEA, placement of vertices on the spine is decided
using a random depth first search of the graph and an edge embedding heuristic adapted from Chung et al.
is used to distribute the edges on a minimal number of pages. The results of experiments with HEA on
selected standard and random graphs show that the algorithm achieves the optimal pagenumber for the
standard graphs. HEA performance is also compared with the Genetic Algorithm described by Kapoor et
al. It is observed that HEA gives a better solution for most of the graph instances.
This document contains a summary of a lecture on graph analytics and complexity by Dr. Animesh Chaturvedi. It includes questions and answers on graph algorithms like minimum spanning tree (MST), single-source shortest path (SSSP) problems, and the Agrawal–Kayal–Saxena primality test. Sample algorithms are provided to calculate the average MST and average SSP of multiple graphs by combining the graphs and running standard algorithms. The document is in English and other languages with thank you messages at the end.
The presentation deals with the clustering of trajectories of moving objects. A k-means-like algorithm based on a Euclidean distance between piece-wise linear curves is used. The main novelty of the paper is the opportunity of considering in the clustering procedure a step that automatically weights the importance of sub-trajectories of the original ones. The algorithm uses an adaptive distances approach and a cluster-wise weighting. The proposed algorithm is tested against some workbench trajectory datasets.
Presented at SIS 2019, Milan.
A Probabilistic Algorithm for Computation of Polynomial Greatest Common with ...mathsjournal
- The document presents a probabilistic algorithm for computing the polynomial greatest common divisor (PGCD) with smaller factors.
- It summarizes previous work on the subresultant algorithm for computing PGCD and discusses its limitations, such as not always correctly determining the variant τ.
- The new algorithm aims to determine τ correctly in most cases when given two polynomials f(x) and g(x). It does so by adding a few steps instead of directly computing the polynomial t(x) in the relation s(x)f(x) + t(x)g(x) = r(x).
Solving Fuzzy Maximal Flow Problem Using Octagonal Fuzzy NumberIJERA Editor
In this paper a general fuzzy maximal flow problem is discussed . A crisp maximal flow problem can be solved
in two methods : linear programming modeling and maximal flow algorithm . Here I tried to fuzzify the
maximal flow algorithm using octagonal fuzzy numbers introduced by S.U Malini and Felbin .C. kennedy [26].
By ranking the octagonal fuzzy numbers it is possible to compare them and using this we convert the fuzzy
valued maximal flow algorithm to a crisp valued algorithm . It is proved that a better solution is obtained when
it is solved using fuzzy octagonal number than when it is solved using trapezoidal fuzzy number . To illustrate
this a numerical example is solved and the obtained result is compared with the existing results . If there is no
uncertainty about the flow between source and sink then the proposed algorithm gives the same result as in crisp
maximal flow problems.
Enumeration methods are very important in a variety of settings, both mathematical and applications. For many problems there is actually no real hope to do the enumeration in reasonable time since the number of solutions is so big. This talk is about how to compute at the limit.
The talk is decomposed into:
(a) Regular enumeration procedure where one uses computerized case distinction.
(b) Use of symmetry groups for isomorphism checks.
(c) The augmentation scheme that allows to enumerate object up to isomorphism without keeping the full list in memory.
(d) The homomorphism principle that allows to map a complex problem to a simpler one.
Similar to Finding Top-k Similar Graphs in Graph Database @ ReadingCircle (20)
Software Engineering, Software Consulting, Tech Lead, Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Transaction, Spring MVC, OpenShift Cloud Platform, Kafka, REST, SOAP, LLD & HLD.
UI5con 2024 - Keynote: Latest News about UI5 and it’s EcosystemPeter Muessig
Learn about the latest innovations in and around OpenUI5/SAPUI5: UI5 Tooling, UI5 linter, UI5 Web Components, Web Components Integration, UI5 2.x, UI5 GenAI.
Recording:
https://www.youtube.com/live/MSdGLG2zLy8?si=INxBHTqkwHhxV5Ta&t=0
Measures in SQL (SIGMOD 2024, Santiago, Chile)Julian Hyde
SQL has attained widespread adoption, but Business Intelligence tools still use their own higher level languages based upon a multidimensional paradigm. Composable calculations are what is missing from SQL, and we propose a new kind of column, called a measure, that attaches a calculation to a table. Like regular tables, tables with measures are composable and closed when used in queries.
SQL-with-measures has the power, conciseness and reusability of multidimensional languages but retains SQL semantics. Measure invocations can be expanded in place to simple, clear SQL.
To define the evaluation semantics for measures, we introduce context-sensitive expressions (a way to evaluate multidimensional expressions that is consistent with existing SQL semantics), a concept called evaluation context, and several operations for setting and modifying the evaluation context.
A talk at SIGMOD, June 9–15, 2024, Santiago, Chile
Authors: Julian Hyde (Google) and John Fremlin (Google)
https://doi.org/10.1145/3626246.3653374
Neo4j - Product Vision and Knowledge Graphs - GraphSummit ParisNeo4j
Dr. Jesús Barrasa, Head of Solutions Architecture for EMEA, Neo4j
Découvrez les dernières innovations de Neo4j, et notamment les dernières intégrations cloud et les améliorations produits qui font de Neo4j un choix essentiel pour les développeurs qui créent des applications avec des données interconnectées et de l’IA générative.
Do you want Software for your Business? Visit Deuglo
Deuglo has top Software Developers in India. They are experts in software development and help design and create custom Software solutions.
Deuglo follows seven steps methods for delivering their services to their customers. They called it the Software development life cycle process (SDLC).
Requirement — Collecting the Requirements is the first Phase in the SSLC process.
Feasibility Study — after completing the requirement process they move to the design phase.
Design — in this phase, they start designing the software.
Coding — when designing is completed, the developers start coding for the software.
Testing — in this phase when the coding of the software is done the testing team will start testing.
Installation — after completion of testing, the application opens to the live server and launches!
Maintenance — after completing the software development, customers start using the software.
8 Best Automated Android App Testing Tool and Framework in 2024.pdfkalichargn70th171
Regarding mobile operating systems, two major players dominate our thoughts: Android and iPhone. With Android leading the market, software development companies are focused on delivering apps compatible with this OS. Ensuring an app's functionality across various Android devices, OS versions, and hardware specifications is critical, making Android app testing essential.
Transform Your Communication with Cloud-Based IVR SolutionsTheSMSPoint
Discover the power of Cloud-Based IVR Solutions to streamline communication processes. Embrace scalability and cost-efficiency while enhancing customer experiences with features like automated call routing and voice recognition. Accessible from anywhere, these solutions integrate seamlessly with existing systems, providing real-time analytics for continuous improvement. Revolutionize your communication strategy today with Cloud-Based IVR Solutions. Learn more at: https://thesmspoint.com/channel/cloud-telephony
Neo4j - Product Vision and Knowledge Graphs - GraphSummit ParisNeo4j
Dr. Jesús Barrasa, Head of Solutions Architecture for EMEA, Neo4j
Découvrez les dernières innovations de Neo4j, et notamment les dernières intégrations cloud et les améliorations produits qui font de Neo4j un choix essentiel pour les développeurs qui créent des applications avec des données interconnectées et de l’IA générative.
Artificia Intellicence and XPath Extension FunctionsOctavian Nadolu
The purpose of this presentation is to provide an overview of how you can use AI from XSLT, XQuery, Schematron, or XML Refactoring operations, the potential benefits of using AI, and some of the challenges we face.
Most important New features of Oracle 23c for DBAs and Developers. You can get more idea from my youtube channel video from https://youtu.be/XvL5WtaC20A
WhatsApp offers simple, reliable, and private messaging and calling services for free worldwide. With end-to-end encryption, your personal messages and calls are secure, ensuring only you and the recipient can access them. Enjoy voice and video calls to stay connected with loved ones or colleagues. Express yourself using stickers, GIFs, or by sharing moments on Status. WhatsApp Business enables global customer outreach, facilitating sales growth and relationship building through showcasing products and services. Stay connected effortlessly with group chats for planning outings with friends or staying updated on family conversations.
UI5con 2024 - Boost Your Development Experience with UI5 Tooling ExtensionsPeter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can be easily extended by your needs. This session will showcase various tooling extensions which can boost your development experience by far so that you can really work offline, transpile your code in your project to use even newer versions of EcmaScript (than 2022 which is supported right now by the UI5 tooling), consume any npm package of your choice in your project, using different kind of proxies, and even stitching UI5 projects during development together to mimic your target environment.
Flutter is a popular open source, cross-platform framework developed by Google. In this webinar we'll explore Flutter and its architecture, delve into the Flutter Embedder and Flutter’s Dart language, discover how to leverage Flutter for embedded device development, learn about Automotive Grade Linux (AGL) and its consortium and understand the rationale behind AGL's choice of Flutter for next-gen IVI systems. Don’t miss this opportunity to discover whether Flutter is right for your project.
Unveiling the Advantages of Agile Software Development.pdfbrainerhub1
Learn about Agile Software Development's advantages. Simplify your workflow to spur quicker innovation. Jump right in! We have also discussed the advantages.
E-commerce Development Services- Hornet DynamicsHornet Dynamics
For any business hoping to succeed in the digital age, having a strong online presence is crucial. We offer Ecommerce Development Services that are customized according to your business requirements and client preferences, enabling you to create a dynamic, safe, and user-friendly online store.
DDS Security Version 1.2 was adopted in 2024. This revision strengthens support for long runnings systems adding new cryptographic algorithms, certificate revocation, and hardness against DoS attacks.
2. About this paper
A paper in “graph theory”
About “graph similarity query”
Proposing a new technique for accurate answers and reduced computational cost
Proceedings of the 15th International Conference on Extending Database Technology - EDBT '12
Zhu, Yuanyuan・Qin, Lu・Yu, Jeffrey Xu・Cheng, Hong
3. Outline
1. Background of graph theory
2. Introduction
3. Problem statement
4. The framework
5. Pruning without indexing
6. Pruning with indexing
7. Performance studies
8. Conclusion
4. Outline
1. Background of graph theory
2. Introduction
3. Problem statement
4. The framework
5. Pruning without indexing
6. Pruning with indexing
7. Performance studies
8. Conclusion
5. What is “graph”?
A graph is denoted by 𝑔 = (𝑉, 𝐸, 𝑙)
𝑉 is a set of vertices
𝐸 ⊆ 𝑉 × 𝑉 is the set of edges
𝑙: 𝑉 → Σ is a labeling function, where Σ is the set of labels
In this paper, the edges of a graph have no weight
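As a concrete illustration, the labeled, unweighted graph 𝑔 = (𝑉, 𝐸, 𝑙) defined above can be sketched with plain Python containers (the variable names are illustrative, not from the paper):

```python
# A labeled, unweighted graph g = (V, E, l): a vertex set, an edge set,
# and a labeling function l mapping each vertex to a label.
g = {
    "V": {1, 2, 3, 4},
    "E": {(1, 2), (2, 3), (3, 4)},          # E ⊆ V × V, no edge weights
    "l": {1: "A", 2: "B", 3: "C", 4: "B"},  # l: V → Σ (the label set)
}

# Sanity checks: every edge endpoint is a vertex, every vertex is labeled.
assert all(u in g["V"] and v in g["V"] for u, v in g["E"])
assert set(g["l"]) == g["V"]
```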
7. Maximum Common Subgraph
If 𝑔 is a common subgraph of 𝑔1 and 𝑔2 and there is no other common subgraph 𝑔′ of 𝑔1 and 𝑔2 such that |𝐸(𝑔′)| > |𝐸(𝑔)|, then 𝑔 is a maximum common subgraph (MCS) of the two graphs
This calculation is NP-hard
[Figure: graphs 𝑔1, 𝑔2 and their maximum common subgraph]
8. Bipartite graph
A graph whose vertices can be divided into two disjoint sets 𝑈 and 𝑉
𝑈 and 𝑉 are each independent sets
[Figure: a bipartite graph with parts 𝑈 and 𝑉]
9. Matching of bipartite graph
If no two edges in an edge set 𝑀 share a vertex, 𝑀 is called a matching
[Figure: a matching in a bipartite graph with parts 𝑈 and 𝑉]
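This definition can be sketched directly in Python (the function name is mine, not the paper's): an edge set is a matching exactly when no vertex is covered by more than one of its edges.

```python
def is_matching(edges):
    """Return True if no two edges in `edges` share a vertex."""
    seen = set()
    for u, v in edges:
        if u in seen or v in seen:
            return False            # some vertex is covered twice
        seen.update((u, v))
    return True

# Edges between parts U = {u1, u2} and V = {v1, v2} of a bipartite graph:
assert is_matching([("u1", "v1"), ("u2", "v2")])      # vertex-disjoint
assert not is_matching([("u1", "v1"), ("u1", "v2")])  # u1 used twice
```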
10. Outline
1. Background of graph theory
2. Introduction
3. Problem statement
4. The framework
5. Pruning without indexing
6. Pruning with indexing
7. Performance studies
8. Conclusion
11. Graph query processing(1)
Using a graph as a query to a graph database
This has attracted much attention in recent years
Image retrieval
Chemical compound structure search
[Figure: a query graph is issued against a graph DB, which returns result graphs]
12. Graph query processing(2)
Mainly falls into two categories
Subgraph containment search
Identify a set of graphs that contain a query graph
Supergraph containment search
Identify a set of graphs that are contained by a query graph
Besides exact subgraph/supergraph containment queries, some studies allow a small number of edges or nodes to be missing in the query result
→ graph similarity search is important
13. Graph similarity search
Main theme of this paper
Search for the similarity between a query graph and each graph in the database
“Top-k similar graphs” means the k graphs that are most similar to the query graph
[Figure: a query graph and its top-3 similar graphs, ranked 1–3]
14. Existing graph similarity search(1)
Two kinds of graph similarity search in related works
Subgraph similarity search
H. Shang, X. Lin, Y. Zhang, J. X. Yu, and W. Wang. Connected substructure similarity search. In SIGMOD, pages 903–914, 2010.
X. Yan, P. Yu, and J. Han. Substructure similarity search in graph databases. In SIGMOD, pages 766–777, 2005.
Supergraph similarity search
H. Shang, K. Zhu, X. Lin, Y. Zhang, and R. Ichise. Similarity search on supergraph containment. In ICDE, pages 637–648, 2010.
To calculate similarity, a distance between graphs must be defined: 𝑑𝑖𝑠𝑡(𝑞, 𝑔)
16. Ex:existing similarity search(1)
Query 𝑞 and sample graph database 𝐷 = {𝑔1, 𝑔2, 𝑔3}
Bold edges mark the MCS of 𝑞 and each 𝑔
[Figure: query 𝑞 and graphs 𝑔1, 𝑔2, 𝑔3 ∈ 𝐷 with node labels A–D]
17. Ex:existing similarity search(2)
If we use the subgraph-query distance 𝑑𝑖𝑠𝑡(𝑞, 𝑔) = |𝐸(𝑞)| − |𝐸(𝑚𝑐𝑠(𝑞, 𝑔))|, 𝑔3 will be returned as the answer
𝑑𝑖𝑠𝑡(𝑞, 𝑔3) = 7 − 6 = 1
[Figure: the same query 𝑞 and graphs 𝑔1, 𝑔2, 𝑔3 as before]
18. Ex:existing similarity search(3)
If we use the supergraph-query distance 𝑑𝑖𝑠𝑡(𝑞, 𝑔) = |𝐸(𝑔)| − |𝐸(𝑚𝑐𝑠(𝑞, 𝑔))|, 𝑔1 will be returned as the answer
𝑑𝑖𝑠𝑡(𝑞, 𝑔1) = 3 − 2 = 1
[Figure: the same query 𝑞 and graphs 𝑔1, 𝑔2, 𝑔3 as before]
19. Ex:existing similarity search(4)
But from the user’s perspective, the best answer should be 𝑔2
These ways of calculating 𝑑𝑖𝑠𝑡 are therefore not good
[Figure: the same query 𝑞 and graphs 𝑔1, 𝑔2, 𝑔3 as before]
20. Main contributions of this paper
1. Studying top-k graph similarity query processing based on a new MCS-based similarity measure
2. Deriving several distance lower bounds (without and with an index) to reduce the number of MCS computations
3. Conducting extensive performance studies on a real dataset to test the performance of their algorithms
21. Outline
1. Background of graph theory
2. Introduction
3. Problem statement
4. The framework
5. Pruning without indexing
6. Pruning with indexing
7. Performance studies
8. Conclusion
22. Definitions(1)
In this paper, 𝑑𝑖𝑠𝑡(𝑞, 𝑔) is defined as
𝑑𝑖𝑠𝑡(𝑞, 𝑔) = |𝐸(𝑞)| + |𝐸(𝑔)| − 2 × |𝐸(𝑚𝑐𝑠(𝑞, 𝑔))|
※ This 𝑑𝑖𝑠𝑡(𝑞, 𝑔) satisfies the axioms of a metric space:
𝑥 = 𝑦 ⇔ 𝑑𝑖𝑠𝑡(𝑥, 𝑦) = 0
𝑑𝑖𝑠𝑡(𝑦, 𝑥) = 𝑑𝑖𝑠𝑡(𝑥, 𝑦)
𝑑𝑖𝑠𝑡(𝑥, 𝑦) ≥ 0
𝑑𝑖𝑠𝑡(𝑥, 𝑦) + 𝑑𝑖𝑠𝑡(𝑦, 𝑧) ≥ 𝑑𝑖𝑠𝑡(𝑥, 𝑧)
This metric property will be important later
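Since the distance depends on the two graphs only through three edge counts, it reduces to a one-line function; a minimal sketch (names are mine), checked against the numbers from the example slides later in the deck:

```python
def mcs_dist(e_q, e_g, e_mcs):
    """dist(q, g) = |E(q)| + |E(g)| - 2 * |E(mcs(q, g))|."""
    return e_q + e_g - 2 * e_mcs

# Numbers from the disconnected-MCS example later in this deck:
assert mcs_dist(12, 6, 6) == 6     # dist(q, g1)
assert mcs_dist(12, 12, 10) == 4   # dist(q, g2)
# Identical graphs have distance 0 (the identity axiom of the metric).
assert mcs_dist(5, 5, 5) == 0
```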
23. Definition(2)
In this paper, the MCS of two graphs is allowed to be disconnected
It can potentially capture more common substructures of the two graphs
It can also evaluate the structural similarity of two graphs more globally
24. Ex:𝒅𝒊𝒔𝒕(𝒒, 𝒈) of this paper(1)
Query 𝑞 and sample graph database 𝐷 = {𝑔1, 𝑔2}
Bold edges mark the common edges of 𝑞 and each 𝑔
[Figure: query 𝑞 and graphs 𝑔1, 𝑔2 with node labels A, B, C]
25. Ex:𝒅𝒊𝒔𝒕(𝒒, 𝒈) of this paper(2)
If we require the MCS to be connected, 𝑔1 will be returned as the answer
𝑑𝑖𝑠𝑡(𝑞, 𝑔1) = 12 + 6 − 2 × 6 = 6
𝑑𝑖𝑠𝑡(𝑞, 𝑔2) = 12 + 12 − 2 × 5 = 14
[Figure: the same query 𝑞 and graphs 𝑔1, 𝑔2 as before]
26. Ex:𝒅𝒊𝒔𝒕(𝒒, 𝒈) of this paper(3)
If we allow the MCS to be disconnected, 𝑔2 will be returned as the answer
𝑑𝑖𝑠𝑡(𝑞, 𝑔1) = 12 + 6 − 2 × 6 = 6
𝑑𝑖𝑠𝑡(𝑞, 𝑔2) = 12 + 12 − 2 × 10 = 4
𝑔2 is the desired result for users
[Figure: the same query 𝑞 and graphs 𝑔1, 𝑔2 as before]
27. Outline
1. Background of graph theory
2. Introduction
3. Problem statement
4. The framework
5. Pruning without indexing
6. Pruning with indexing
7. Performance studies
8. Conclusion
28. Pruning strategy
As mentioned previously, computing the MCS is an NP-hard problem
In this paper, they derive lower bounds on 𝑑𝑖𝑠𝑡(𝑞, 𝑔) to reduce the number of MCS computations
They did not make the MCS computation itself faster
If a lower bound on 𝑑𝑖𝑠𝑡(𝑞, 𝑔) is no less than the largest distance among the current top-k answers, 𝑔 is not a top-k answer and can be pruned safely
30. Basic algorithm(2)
If the lower bound of 𝑑𝑖𝑠𝑡(𝑞, 𝑔) is smaller than the largest distance among the current top-k answers, the exact 𝑑𝑖𝑠𝑡(𝑞, 𝑔) is computed and compared against the current top-k again
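The prune-then-verify framework described on these slides can be sketched as follows; `lower_bound` and `exact_dist` stand in for any of the distance lower bounds derived later and the exact MCS-based distance, and all names here are illustrative:

```python
import heapq

def topk_search(query, graphs, k, lower_bound, exact_dist):
    """Keep the current top-k in a max-heap; skip the exact (NP-hard)
    distance whenever the cheap lower bound already rules a graph out."""
    heap = []  # entries (-distance, graph_id); heap[0] is the worst kept answer
    for gid, g in enumerate(graphs):
        kth = -heap[0][0] if len(heap) == k else float("inf")
        if lower_bound(query, g) >= kth:
            continue                      # pruned: cannot enter the top-k
        d = exact_dist(query, g)          # the expensive MCS computation
        if d < kth:
            heapq.heappush(heap, (-d, gid))
            if len(heap) > k:
                heapq.heappop(heap)       # drop the current worst answer
    return sorted((-nd, gid) for nd, gid in heap)

# Toy stand-ins: "graphs" are integers, the distance is |q - g|,
# and half that distance is a valid (cheap) lower bound.
top2 = topk_search(10, [1, 5, 10, 11, 12], 2,
                   lambda q, g: abs(q - g) // 2, lambda q, g: abs(q - g))
assert top2 == [(0, 2), (1, 3)]   # graphs 10 and 11 are the top-2
```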
31. Outline
1. Background of graph theory
2. Introduction
3. Problem statement
4. The framework
5. Pruning without indexing
6. Pruning with indexing
7. Performance studies
8. Conclusion
32. Edge frequency based lower bound
Finding a lower bound of 𝑑𝑖𝑠𝑡(𝑞, 𝑔) is equivalent to finding an upper bound of |𝐸(𝑚𝑐𝑠(𝑞, 𝑔))|
Denote the set of distinct edges in 𝑔 as 𝐸^𝑑(𝑔)
Denote the frequency of edge 𝑒 in 𝑔 as 𝑓(𝑒, 𝑔)
𝑒𝑚𝑐𝑠1(𝑞, 𝑔) = Σ_{𝑒 ∈ 𝐸^𝑑(𝑞) ∪ 𝐸^𝑑(𝑔)} min{𝑓(𝑒, 𝑞), 𝑓(𝑒, 𝑔)}
𝑑𝑖𝑠𝑡1(𝑞, 𝑔) = |𝐸(𝑞)| + |𝐸(𝑔)| − 2 × 𝑒𝑚𝑐𝑠1(𝑞, 𝑔)
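A sketch of 𝑒𝑚𝑐𝑠1/𝑑𝑖𝑠𝑡1 in Python, assuming each graph is simply a list of label-pair edges (a simplification of the paper's graphs):

```python
from collections import Counter

def dist1(q, g):
    """Edge-frequency lower bound: emcs1 sums, over every distinct edge
    label pair, the smaller of its frequencies in q and g."""
    fq = Counter(tuple(sorted(e)) for e in q)  # f(e, q)
    fg = Counter(tuple(sorted(e)) for e in g)  # f(e, g)
    emcs1 = sum(min(fq[e], fg[e]) for e in fq.keys() | fg.keys())
    return len(q) + len(g) - 2 * emcs1

# Slide 33 numbers: in q the frequencies of (A,C), (B,C), (C,C) are 4, 3, 6;
# in g1 they are assumed to be 4, 3, 5, so emcs1 = 12 and dist1 = 1.
q  = [("A", "C")] * 4 + [("B", "C")] * 3 + [("C", "C")] * 6
g1 = [("A", "C")] * 4 + [("B", "C")] * 3 + [("C", "C")] * 5
print(dist1(q, g1))  # 1
```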
33. Ex:using the 𝒅𝒊𝒔𝒕𝟏(𝒒, 𝒈) (1)
In 𝑞, the frequencies of edges (A,C), (B,C), (C,C) are 4, 3, 6
𝑒𝑚𝑐𝑠1(𝑞, 𝑔1) = 4 + 3 + 5 = 12
𝑑𝑖𝑠𝑡1(𝑞, 𝑔1) = 13 + 12 − 2 × 12 = 1
[Figure: 𝑞𝑢𝑒𝑟𝑦 𝑞 and 𝑔𝑟𝑎𝑝ℎ𝑠 𝑔1, 𝑔2 with labeled nodes]
34. Ex:using the 𝒅𝒊𝒔𝒕𝟏(𝒒, 𝒈) (2)
𝑒𝑚𝑐𝑠1(𝑞, 𝑔2) = 3 + 3 + 6 = 12
𝑑𝑖𝑠𝑡1(𝑞, 𝑔2) = 13 + 13 − 2 × 12 = 2
In fact, these lower bounds are not tight compared to the actual 𝑑𝑖𝑠𝑡
[Figure: same 𝑞𝑢𝑒𝑟𝑦 𝑞 and 𝑔𝑟𝑎𝑝ℎ𝑠 𝑔1, 𝑔2 as above]
35. Adjacency List Based Lower Bound(1)
Construct the bipartite graph 𝐵(𝑞, 𝑔)
For each pair of nodes 𝑢 ∈ 𝑉(𝑞) and 𝑣 ∈ 𝑉(𝑔), there is an edge between 𝑏(𝑢) and 𝑏(𝑣) if 𝑙(𝑢) = 𝑙(𝑣)
𝐿(𝑎𝑑𝑗(𝑢)) is the multiset consisting of all labels of the nodes adjacent to 𝑢
[Figure: a node 𝑢 and its three adjacent nodes, labeled A, A, B]
𝐿(𝑎𝑑𝑗(𝑢)) = {𝐴, 𝐴, 𝐵}
36. Adjacency List Based Lower Bound(2)
The weight of an edge is defined as 𝑤(𝑏(𝑢), 𝑏(𝑣)) = |𝐿(𝑎𝑑𝑗(𝑢)) ∩ 𝐿(𝑎𝑑𝑗(𝑣))|
𝑀(𝑞, 𝑔) is the maximum weighted bipartite matching of 𝐵(𝑞, 𝑔)
𝑒𝑚𝑐𝑠2(𝑞, 𝑔) = (1/2) Σ_{(𝑏(𝑢), 𝑏(𝑣)) ∈ 𝑀(𝑞, 𝑔)} 𝑤(𝑏(𝑢), 𝑏(𝑣))
𝑑𝑖𝑠𝑡2(𝑞, 𝑔) = |𝐸(𝑞)| + |𝐸(𝑔)| − 2 × 𝑒𝑚𝑐𝑠2(𝑞, 𝑔)
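A brute-force sketch of 𝑒𝑚𝑐𝑠2/𝑑𝑖𝑠𝑡2. The matching is enumerated exhaustively, so this is only suitable for tiny examples, and the graphs used below are hypothetical (a triangle and a path), not the slide's:

```python
from collections import Counter
from itertools import permutations

def dist2(q_nodes, g_nodes, edges_q, edges_g):
    """q_nodes/g_nodes: list of (label, neighbor_labels) per node.
    Brute-force maximum weighted bipartite matching over B(q, g)."""
    def weight(u, v):
        (lu, adj_u), (lv, adj_v) = u, v
        if lu != lv:
            return 0  # no edge between b(u) and b(v) when labels differ
        return sum((Counter(adj_u) & Counter(adj_v)).values())

    small, large = sorted([q_nodes, g_nodes], key=len)
    best = max(sum(weight(small[i], large[j]) for i, j in enumerate(p))
               for p in permutations(range(len(large)), len(small)))
    return edges_q + edges_g - 2 * (best / 2)  # emcs2 = best / 2

# Triangle q = A-B, B-C, C-A (3 edges) vs. path g = A-B-C (2 edges):
# matched weights are 1 + 2 + 1 = 4, so emcs2 = 2 and dist2 = 3 + 2 - 4 = 1.
q = [("A", ["B", "C"]), ("B", ["A", "C"]), ("C", ["A", "B"])]
g = [("A", ["B"]), ("B", ["A", "C"]), ("C", ["B"])]
print(dist2(q, g, 3, 2))  # 1.0
```

Here the bound happens to equal the exact distance, since the path is a subgraph of the triangle.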
39. Ex:using the 𝒅𝒊𝒔𝒕𝟐(𝒒, 𝒈) (1)
𝑒𝑚𝑐𝑠2(𝑞, 𝑔1) = (2 + 2 + 2 + 1) / 2 = 3.5
𝑑𝑖𝑠𝑡2(𝑞, 𝑔1) = 4 + 5 − 2 × 3.5 = 2
[Figure: 𝑞𝑢𝑒𝑟𝑦 𝑞, 𝑔𝑟𝑎𝑝ℎ 𝑔1, and the bipartite graph 𝐵(𝑞, 𝑔1) with matched edge weights 2, 2, 2, 1]
40. Ex:using the 𝒅𝒊𝒔𝒕𝟐(𝒒, 𝒈) (2)
If we use 𝑒𝑚𝑐𝑠1 instead, 𝑒𝑚𝑐𝑠1 = 1 + 1 + 1 + 1 = 4
𝑑𝑖𝑠𝑡1(𝑞, 𝑔1) = 4 + 5 − 2 × 4 = 1
[Figure: same 𝑞𝑢𝑒𝑟𝑦 𝑞, 𝑔𝑟𝑎𝑝ℎ 𝑔1, and bipartite graph as above]
41. Ex:using the 𝒅𝒊𝒔𝒕𝟐(𝒒, 𝒈) (3)
Given two graphs 𝑞, 𝑔, we have 𝑑𝑖𝑠𝑡2(𝑞, 𝑔) ≥ 𝑑𝑖𝑠𝑡1(𝑞, 𝑔)
[Figure: same 𝑞𝑢𝑒𝑟𝑦 𝑞, 𝑔𝑟𝑎𝑝ℎ 𝑔1, and bipartite graph as above]
42. Algorithm using 𝒅𝒊𝒔𝒕𝟏, 𝒅𝒊𝒔𝒕𝟐
The computational costs satisfy 𝑑𝑖𝑠𝑡 > 𝑑𝑖𝑠𝑡2 > 𝑑𝑖𝑠𝑡1
Use 𝑑𝑖𝑠𝑡1 for pruning whenever possible, falling back to 𝑑𝑖𝑠𝑡2 and the exact 𝑑𝑖𝑠𝑡 only when necessary
43. Outline
1. Background of graph theory
2. Introduction
3. Problem statement
4. The framework
5. Pruning without indexing
6. Pruning with indexing
7. Performance studies
8. Conclusion
44. Triangle property of distance
Given three graphs 𝑔1, 𝑔2, 𝑔3: 𝑑𝑖𝑠𝑡(𝑔1, 𝑔3) ≤ 𝑑𝑖𝑠𝑡(𝑔1, 𝑔2) + 𝑑𝑖𝑠𝑡(𝑔2, 𝑔3)
If 𝑔2 and 𝑔3 are very near, then 𝑑𝑖𝑠𝑡(𝑔1, 𝑔2) ≈ 𝑑𝑖𝑠𝑡(𝑔1, 𝑔3)
If we know 𝑑𝑖𝑠𝑡(𝑔, 𝑔′), we can compute these lower bounds:
𝑑𝑖𝑠𝑡3(𝑞, 𝑔 | 𝑔′) = 𝑑𝑖𝑠𝑡(𝑞, 𝑔′) − 𝑑𝑖𝑠𝑡(𝑔, 𝑔′)
𝑑𝑖𝑠𝑡4(𝑞, 𝑔 | 𝑔′) = 𝑑𝑖𝑠𝑡(𝑔, 𝑔′) − 𝑑𝑖𝑠𝑡(𝑞, 𝑔′)
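Both bounds follow directly from the triangle inequality, and together they give |𝑑𝑖𝑠𝑡(𝑞, 𝑔′) − 𝑑𝑖𝑠𝑡(𝑔, 𝑔′)| ≤ 𝑑𝑖𝑠𝑡(𝑞, 𝑔). A small sketch; the threshold value below is made up for illustration:

```python
def triangle_lower_bound(dist_q_c, dist_g_c):
    """Lower bound on dist(q, g) via a common graph c (a group center):
    dist(q, g) >= |dist(q, c) - dist(g, c)|."""
    return abs(dist_q_c - dist_g_c)

# dist(q, c) = 10 is computed once per group; dist(g, c) = 3 is precomputed.
bound = triangle_lower_bound(10, 3)
print(bound)  # 7
# If the current k-th best distance is 6 (< 7), g is pruned
# without any MCS computation at all.
```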
45. Indexing
The 𝑑𝑖𝑠𝑡(𝑔, 𝑔′) values can be precomputed
But computing all pairs requires 𝑂(|𝐷|²) MCS computations
Define a set of groups 𝐼 = {𝐺1, 𝐺2, … , 𝐺|𝐼|}, where 𝐺𝑖 ⊆ 𝐷 and 𝐺1 ∪ 𝐺2 ∪ ⋯ ∪ 𝐺|𝐼| = 𝐷
Each group has a center graph 𝑐𝑖 ∈ 𝐺𝑖
Precompute 𝑑𝑖𝑠𝑡(𝑔, 𝑐𝑖) for each 𝑔 ∈ 𝐺𝑖
[Figure: graphs 𝑔1–𝑔7 partitioned into two groups 𝐺1 and 𝐺2]
48. Three indexing strategies(1)
DPIndex
Given a number 𝑚, randomly pick 𝑚 graphs as the 𝑚 group centers. For each non-center graph 𝑔 ∈ 𝐷, assign it to the nearest center
Each graph belongs to exactly one group
49. Three indexing strategies(2)
OPIndex
After selecting 𝑚 graphs in 𝐷 as centers, assign each non-center graph 𝑔 ∈ 𝐷 to its 𝑙 nearest centers
This allows each graph to belong to multiple groups
50. Three indexing strategies(3)
GSIndex
Treat each graph in 𝐷 as a center
For each center, find its 𝑙 nearest graphs in 𝐷, and put these 𝑙 + 1 graphs together as a group
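The simplest of the three, DPIndex, can be sketched as follows. This is an illustration, not the paper's code: `dist` is any callable distance, graphs are referred to by index, and ties are broken arbitrarily:

```python
import random

def build_dp_index(database, m, dist, seed=0):
    """DPIndex sketch: pick m random graphs as centers, then assign every
    remaining graph to its nearest center (one group per graph)."""
    rng = random.Random(seed)
    centers = rng.sample(range(len(database)), m)
    groups = {c: [] for c in centers}
    for i, g in enumerate(database):
        if i in groups:
            continue  # a center belongs to its own group
        nearest = min(centers, key=lambda c: dist(g, database[c]))
        groups[nearest].append(i)
    # In the real index, dist(g, c_i) would be cached alongside each member
    return groups

# Toy database: "graphs" are integers, distance is |a - b|.
db = [0, 1, 2, 10, 11, 12]
groups = build_dp_index(db, 2, lambda a, b: abs(a - b))
members = sorted(list(groups) + [i for ms in groups.values() for i in ms])
print(members)  # every index appears exactly once across centers and groups
```

OPIndex would instead append each graph to its 𝑙 nearest centers' groups, and GSIndex would build one group per graph.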
51. Outline
1. Background of graph theory
2. Introduction
3. Problem statement
4. The framework
5. Pruning without indexing
6. Pruning with indexing
7. Performance studies
8. Conclusion
52. Overview of experiments
Similarity measures evaluation
Show why the query results of subgraph/supergraph similarity queries are not good
Query performance evaluation
Compare noIndex and SeqScan, and compare their three indexing techniques
Indexing cost evaluation
Compare the costs of their three indexing techniques
53. Environment
All the algorithms were implemented in Visual C++ 2005
Tested on a PC with a 2.66GHz CPU and 3.43GB of memory, running Windows XP
54. Parameters
They evaluate their approaches by varying five parameters
𝑘: the top-k value
|𝑉(𝑞)|: the size of the query graph
|𝐷|: the number of graphs in the graph database
𝑚: the number of groups used in DPIndex and OPIndex
𝑙: the maximum number of groups a graph can belong to
55. Similarity measures comparison
Experiments on three measures
Subsim: |𝐸(𝑞)| − |𝐸(𝑚𝑐𝑠(𝑞, 𝑔))|
Supersim: |𝐸(𝑔)| − |𝐸(𝑚𝑐𝑠(𝑞, 𝑔))|
Fullsim: |𝐸(𝑞)| + |𝐸(𝑔)| − 2 × |𝐸(𝑚𝑐𝑠(𝑞, 𝑔))|
The nearer the answers are to the query graph in size, the better the answers are
56. Power of the pruning strategy
SeqScan needs around 7000 MCS computations for graphs with size larger than 10
noIndex needs no more than 500
59. Outline
1. Background of graph theory
2. Introduction
3. Problem statement
4. The framework
5. Pruning without indexing
6. Pruning with indexing
7. Performance studies
8. Conclusion
60. Conclusion
Existing solutions for subgraph/supergraph similarity search cannot solve this problem properly
They introduced a new graph distance based on the maximum common subgraph (MCS)
To reduce the number of MCS computations, they proposed two distance lower bounds
They further introduced triangle-property-based lower bounds supported by indexing
They conducted extensive performance studies