This document discusses triangular tree-width, a new measure for directed graphs that is related to efficiently computing matrix permanents. Triangular tree-width is defined from the tree-widths of the increasing and decreasing edge subgraphs of a directed graph with respect to a particular vertex ordering. The hope is that bounding the triangular tree-width of a matrix's incidence graph allows its permanent to be computed efficiently even in cases where the standard tree-width is unbounded. The document outlines the basics of tree-width and motivates the definition of triangular tree-width.
This document discusses clustering random walk time series. It introduces the concept of clustering and discusses challenges in clustering time series, such as how to define a distance between dependent random variables. It proposes using the copula transform and empirical copula transform to map time series to uniform distributions before calculating distances. The hierarchical block model for nested data partitions is presented. Experimental results show the proposed distances perform well in recovering the nested partitions on both synthetic hierarchical block model data and real credit default swap time series data. Consistency of clustering algorithms is defined in relation to recovering the hierarchical block model partitions.
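A minimal sketch (not from the slides themselves) of the empirical copula transform described above: each series is mapped to its normalized ranks, which gives approximately uniform margins. The function name and the Student-t example data are illustrative.

```python
import numpy as np

def empirical_copula_transform(x):
    """Map a 1-D sample to approximately uniform margins via normalized ranks.

    Each observation is replaced by its rank divided by the sample size,
    so the transformed values lie in (0, 1].
    """
    x = np.asarray(x)
    n = len(x)
    # argsort of argsort yields 0-based ranks; +1 makes them 1-based.
    ranks = np.argsort(np.argsort(x)) + 1
    return ranks / n

# Example: heavy-tailed increments are mapped onto a uniform grid of ranks.
rng = np.random.default_rng(7)
increments = rng.standard_t(df=3, size=1000)
u = empirical_copula_transform(increments)
```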
Selection of bin width and bin number in histograms (Kushal Kumar Dey)
This document discusses various statistical methods for determining the optimal number of bins and bin width in histograms. It reviews methods such as Sturges' rule, Scott's rule, Bayesian optimal binning, and others. It presents the concepts behind each method and compares their performance in simulations. The simulations involved generating data from distributions like the chi-square, normal, and uniform, and comparing the methods based on how close their histogram estimates were to the true densities. Scott's rule and Freedman-Diaconis modification generally performed best at minimizing errors. The document also discusses generalizing some methods to multivariate histograms and potential areas for further research.
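For reference, the classical rules reviewed above follow the standard textbook formulas (Sturges: k = ceil(log2 n) + 1 bins; Scott: h = 3.49 s n^(-1/3); Freedman-Diaconis: h = 2 IQR n^(-1/3)). A sketch with illustrative function names:

```python
import numpy as np

def bin_rules(x):
    """Classical histogram binning rules (a sketch).

    Sturges' rule returns a bin count; Scott's rule and the
    Freedman-Diaconis rule return bin widths.
    """
    x = np.asarray(x)
    n = len(x)
    sturges_bins = int(np.ceil(np.log2(n)) + 1)           # k = ceil(log2 n) + 1
    scott_width = 3.49 * np.std(x, ddof=1) * n ** (-1/3)  # h = 3.49 s n^(-1/3)
    q75, q25 = np.percentile(x, [75, 25])
    fd_width = 2.0 * (q75 - q25) * n ** (-1/3)            # h = 2 IQR n^(-1/3)
    return sturges_bins, scott_width, fd_width

rng = np.random.default_rng(0)
k, h_scott, h_fd = bin_rules(rng.normal(size=1000))
```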
Dynamic Programming Over Graphs of Bounded Treewidth (ASPAK 2014)
This document discusses treewidth and algorithms for graphs with bounded treewidth. It contains three parts:
1) Algorithms for bounded treewidth graphs, including showing weighted max independent set and c-coloring can be solved in FPT time parameterized by treewidth using dynamic programming on tree decompositions.
2) Graph-theoretic properties of treewidth such as definitions of treewidth and tree decompositions.
3) Applications of these algorithms to problems like Hamiltonian cycle on general graphs.
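As a toy instance of the dynamic programming in part 1: a tree has treewidth 1, and the tree-decomposition DP for weighted maximum independent set collapses to a two-state recursion per node (best value with the node included vs. excluded). A minimal sketch; the function name and example graph are illustrative.

```python
def max_weight_independent_set(adj, weight, root=0):
    """DP for weighted maximum independent set on a tree (a sketch).

    Returns for each node the best value with it included vs. excluded,
    combining children bottom-up; the answer is the max at the root.
    """
    def solve(v, parent):
        incl, excl = weight[v], 0
        for u in adj[v]:
            if u == parent:
                continue
            c_incl, c_excl = solve(u, v)
            incl += c_excl               # v taken: children must be excluded
            excl += max(c_incl, c_excl)  # v not taken: children are free
        return incl, excl

    return max(solve(root, -1))

# A path 0-1-2-3 with weights 1, 4, 5, 4: the optimum picks nodes 1 and 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
best = max_weight_independent_set(adj, [1, 4, 5, 4])
```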
The document describes a three-parameter generalized inverse Weibull (GIW) distribution that can model failure rates. Key properties of the GIW distribution include:
- It reduces to the inverse Weibull distribution when the shape parameter γ equals 1.
- Its probability density function, survival function, and hazard function are defined.
- Formulas are provided for the moments, moment generating function, and Shannon entropy of the GIW distribution.
- Methods are described for maximum likelihood estimation of the GIW distribution parameters from censored lifetime data.
On clustering financial time series - A need for distances between dependent ... (Gautier Marti)
This document discusses clustering financial time series data using distances between dependent random variables. It notes that traditional clustering based only on correlation can lead to spurious clusters, as correlation does not fully capture dependence. The paper proposes a distance measure that combines information about both the correlation and distribution of random variables. It tests this distance measure on synthetic data from a hierarchical block model and real credit default swap market data, finding it performs better than distances based only on correlation or distribution individually. Some open questions are also discussed, such as how to select the optimal weighting of correlation vs distribution information.
Optimal Transport vs. Fisher-Rao distance between Copulas (Gautier Marti)
How can we compare two dependence structures (represented by copulas)? It depends on the task. For clustering variables with similar dependence, prefer Optimal Transport. For detecting change points in a dynamical dependence structure, prefer Fisher-Rao and its associated f-divergences (for example, an approach a la Frédéric Barbaresco in radar signal processing). This study illustrates these properties with bivariate Gaussian copulas.
Clustering Financial Time Series using their Correlations and their Distribut... (Gautier Marti)
This document discusses methods for clustering random walks. It introduces the GNPR (Generic Non-Parametric Representation) method for defining a distance between two random walks that separates dependence and distribution information. The GNPR method is shown to outperform standard approaches on synthetic datasets containing different clusters based on distribution and dependence. The GNPR method is also used to cluster credit default swaps, identifying a cluster of "Western sovereigns". The document concludes that GNPR is an effective way to deal with dependence and distribution information separately without losing information.
A common fixed point theorem in cone metric spaces (Alexander Decker)
This academic article summarizes a common fixed point theorem for continuous and asymptotically regular self-mappings on complete cone metric spaces. The theorem extends previous results to cone metric spaces, which generalize metric spaces by replacing real numbers with an ordered Banach space. It proves that under certain contractive conditions, the self-mapping has a unique fixed point. The proof constructs a Cauchy sequence that converges to the fixed point.
Irreducible core and equal remaining obligations rule for mcst games (vinnief)
This document presents an algorithm for solving minimum cost spanning extension (MCSE) problems, which generalize minimum cost spanning tree problems. The algorithm finds a minimum cost extension of an existing network to connect all users to a source, and generates an associated set of cost allocations. It works by sequentially adding edges in order of non-decreasing cost, such that each added edge does not introduce new cycles. The algorithm's output is proved to be contained within the core of the associated MCSE game and is independent of the specific extension constructed. The paper also generalizes the definition of the "irreducible core" to MCSE problems and shows the algorithm's output coincides with this.
Beating the Spread: Time-Optimal Point Meshing (Don Sheehy)
We present NetMesh, a new algorithm that produces a conforming Delaunay mesh for point sets in any fixed dimension with guaranteed optimal mesh size and quality.
Our comparison-based algorithm runs in time $O(n\log n + m)$, where $n$ is the input size and $m$ is the output size, with constants depending only on the dimension and the desired element quality bounds.
It can terminate early in $O(n\log n)$ time, returning an $O(n)$-size Voronoi diagram of a superset of $P$ with a relaxed quality bound, which again matches the known lower bounds.
The previous best results in the comparison model depended on the log of the spread of the input, the ratio of the largest to smallest pairwise distance among input points.
We reduce this dependence to $O(\log n)$ by using a sequence of $\epsilon$-nets to determine input insertion order in an incremental Voronoi diagram.
We generate a hierarchy of well-spaced meshes and use these to show that the complexity of the Voronoi diagram stays linear in the number of points throughout the construction.
NEW NON-COPRIME CONJUGATE-PAIR BINARY TO RNS MULTI-MODULI FOR RESIDUE NUMBER ... (csandit)
In this paper, new Binary-to-RNS converters for multi-moduli RNS based on conjugate-pair moduli sets of the form {2^(n1) - 2, 2^(n1) + 2, 2^(n2) - 2, 2^(n2) + 2, ..., 2^(nN) - 2, 2^(nN) + 2} are presented. The moduli 2^n - 2 and 2^n + 2 are called conjugates of each other. Multi-moduli RNS processors relying on sets with pairs of conjugate moduli offer: 1) large dynamic ranges; 2) fast and balanced RNS arithmetic; 3) simple and efficient RNS processing hardware; and 4) efficient weighted-to-RNS and RNS-to-weighted converters [1]. The dynamic range (M) achieved by the set above is given by the least common multiple (LCM) of the moduli. This new non-coprime conjugate pair is unique and, as will be shown, the only one of its shape.
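Since 2^n - 2 and 2^n + 2 are both even, such a set is non-coprime and its dynamic range is the LCM rather than the product of the moduli. A small illustration (the exponents chosen here are arbitrary):

```python
from math import lcm, prod

# Conjugate-pair moduli 2^n - 2 and 2^n + 2 share a factor of 2, so the
# set is non-coprime: the dynamic range M is the LCM, not the product.
n1, n2 = 5, 7
moduli = [2**n1 - 2, 2**n1 + 2, 2**n2 - 2, 2**n2 + 2]  # [30, 34, 126, 130]

M = lcm(*moduli)         # dynamic range of the non-coprime set
assert M < prod(moduli)  # strictly smaller than the coprime-case product
```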
Muon g-2 anomaly
- The measured value of the muon anomalous magnetic moment (g-2) differs from the Standard Model prediction by more than 3 sigma, indicating potential new physics.
- Future experiments at Fermilab and J-PARC aim to measure the muon g-2 with higher precision and could discover a discrepancy exceeding 5 sigma in the next 3-5 years.
- This research focuses on investigating the source of the current muon g-2 anomaly by considering contributions from supersymmetric models, which could accommodate new particles within the mass reach of current accelerators.
The document discusses prospects for slepton searches in future experiments based on previous works. It summarizes the status of the muon anomalous magnetic moment measurement, which shows a 3-4 sigma discrepancy from the Standard Model prediction. This discrepancy could be explained by contributions from new physics, such as supersymmetry. Supersymmetry predicts superpartner particles like sleptons. The author's dissertation will examine the muon g-2 anomaly within the framework of the minimal supersymmetric standard model and study slepton mass bounds and prospects for discovering sleptons at future colliders in a model-independent way to help explain the muon g-2 discrepancy.
This document describes a new algorithm for dual tree kernel conditional density estimation (KCDE) that provides fast and accurate density predictions. The algorithm extends previous work on univariate KCDE to allow for multivariate labels (Y) and conditioning variables (X). It applies Gray's dual tree approach separately to the numerator and denominator of the KCDE formula, and uses error bounds to ensure the quotient estimates have bounded relative error. This new algorithm provides the fastest known method for kernel conditional density estimation for prediction tasks.
The document discusses Bayesian neural networks and related topics. It covers Bayesian neural networks, stochastic neural networks, variational autoencoders, and modeling prediction uncertainty in neural networks. Key points include using Bayesian techniques like MCMC and variational inference to place distributions over the weights of neural networks, modeling both model parameters and predictions as distributions, and how this allows capturing uncertainty in the network's predictions.
Shape restrictions such as monotonicity often naturally arise. In this talk, we consider a Bayesian approach to monotone nonparametric regression with a normal error. We assign a prior through piecewise constant functions and impose a conjugate normal prior on the coefficient. Since the resulting functions need not be monotone, we project samples from the posterior on the allowed parameter space to construct a “projection posterior”. We obtain the limit posterior distribution of a suitably centered and scaled posterior distribution for the function value at a point. The limit distribution has some interesting similarity and difference with the corresponding limit distribution for the maximum likelihood estimator. By comparing the quantiles of these two distributions, we observe an interesting new phenomenon that coverage of a credible interval can be more than the credibility level, the exact opposite of a phenomenon observed by Cox for smooth regression. We describe a recalibration strategy to modify the credible interval to meet the correct level of coverage.
This talk is based on joint work with Moumita Chakraborty, a doctoral student at North Carolina State University.
The document summarizes research on threshold network models, which generate scale-free networks without growth by assigning intrinsic weights to nodes based on a given distribution and connecting nodes based on whether their total weight exceeds a threshold. The model has been extended to spatial networks by incorporating distance between nodes and to include homophily. Analytical results show the degree distribution and other properties depend on the weight distribution and thresholding function used. Several open problems are also discussed.
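A minimal simulation of such a threshold model (the exponential weight distribution, the additive thresholding rule, and the parameter values are illustrative choices, not necessarily those of the summarized work):

```python
import numpy as np

def threshold_network(n, theta, rng):
    """Minimal sketch of a threshold network model.

    Each node gets an intrinsic weight drawn from a fixed distribution
    (exponential here) and two nodes are linked iff the sum of their
    weights meets the threshold theta. No growth is involved.
    """
    w = rng.exponential(scale=1.0, size=n)
    # Adjacency: A[i, j] = 1 iff w[i] + w[j] >= theta (no self-loops).
    A = (w[:, None] + w[None, :] >= theta).astype(int)
    np.fill_diagonal(A, 0)
    return w, A

rng = np.random.default_rng(42)
w, A = threshold_network(200, theta=4.0, rng=rng)
degrees = A.sum(axis=1)
```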
The document discusses deep kernel learning, which combines deep learning and Gaussian processes (GPs). It briefly reviews the predictive equations and marginal likelihood for GPs, noting their computational requirements. GPs assume datasets with input vectors and target values, modeling the values as joint Gaussian distributions based on a mean function and covariance kernel. Predictive distributions for test points are also Gaussian. The goal of deep kernel learning is to leverage recent work on efficiently representing kernel functions to produce scalable deep kernels, allowing outperformance of standalone deep learning and GPs on various datasets.
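The GP predictive equations mentioned above have a compact direct implementation. This sketch uses the standard formulas mean = Ks^T (K + s^2 I)^(-1) y and cov = Kss - Ks^T (K + s^2 I)^(-1) Ks, with an illustrative RBF kernel and toy data:

```python
import numpy as np

def gp_posterior(X, y, Xs, kernel, noise=1e-2):
    """GP predictive mean and covariance at test points (a sketch)."""
    K = kernel(X, X) + noise * np.eye(len(X))  # noisy training covariance
    Ks = kernel(X, Xs)                         # train-test cross-covariance
    Kss = kernel(Xs, Xs)                       # test covariance
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, cov

def rbf(A, B, length=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / length**2)

X = np.linspace(0, 5, 20).reshape(-1, 1)
y = np.sin(X).ravel()
Xs = np.array([[2.5]])
mean, cov = gp_posterior(X, y, Xs, rbf)
```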
Proportional and decentralized rule mcst games (vinnief)
This document presents two cost allocation rules - the proportional rule and the decentralized rule - for minimum cost spanning extension problems. The proportional rule allocates costs proportionally based on an algorithm that constructs a minimum cost spanning extension while simultaneously allocating link costs. The decentralized rule is based on an algorithm that constructs the extension in a decentralized manner, with links being added one by one and their costs immediately allocated. Both rules are shown to be refinements of the irreducible core defined for these problems. The proportional rule is then characterized axiomatically.
Design Method of Directional GenLOT with Trend Vanishing Moments (Shogo Muramatsu)
Proc. of Asia Pacific Signal and Information Proc. Assoc. Annual Summit and Conf. (APSIPA ASC), pp. 692-701, Biopolis, Singapore, Dec. 14-17, 2010
Zero-padding a signal involves appending artificial zeros to increase the length of the signal. This increases the frequency resolution of the discrete Fourier transform (DFT) by changing the implicit periodicity assumption made about the signal. Specifically, zero-padding moves the DFT closer to approximating the true discrete-time Fourier transform (DTFT) by changing the assumption from periodicity to assuming the signal is zero outside the observed range. While zero-padding does not provide new information, it can help reveal features of a signal by modifying the implicit assumptions of the DFT.
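A quick numerical illustration of this effect (the parameter values are arbitrary): padding a 64-sample off-bin tone to 1024 points samples the same underlying DTFT on a finer grid, so the spectral peak lands nearer the true frequency without adding information.

```python
import numpy as np

fs = 64                            # sampling rate (samples per second)
t = np.arange(64) / fs
x = np.sin(2 * np.pi * 10.3 * t)   # 10.3 Hz tone, off the 64-point bin grid

X_raw = np.fft.rfft(x)             # 64-point DFT: bins every 1.0 Hz
X_pad = np.fft.rfft(x, n=1024)     # zero-padded to 1024: bins every fs/1024 Hz

f_raw = np.fft.rfftfreq(64, d=1/fs)
f_pad = np.fft.rfftfreq(1024, d=1/fs)

peak_raw = f_raw[np.argmax(np.abs(X_raw))]   # snaps to a whole-Hz bin
peak_pad = f_pad[np.argmax(np.abs(X_pad))]   # lands much closer to 10.3 Hz
```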
This document summarizes a seminar on kernels and support vector machines. It begins by explaining why kernels are useful for increasing flexibility and speed compared to direct inner product calculations. It then covers definitions of positive definite kernels and how to prove a function is a kernel. Several kernel families are discussed, including translation invariant, polynomial, and non-Mercer kernels. Finally, the document derives the primal and dual problems for support vector machines and explains how the kernel trick allows non-linear classification.
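One way to check the positive-definiteness property discussed above numerically: the Gram matrix of a valid kernel must be positive semidefinite on any point set. A sketch with the Gaussian (RBF) kernel; the function name is illustrative.

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Gram matrix of the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T  # pairwise squared distances
    return np.exp(-gamma * np.maximum(d2, 0))     # clamp tiny negative round-off

# A positive definite kernel must yield a PSD Gram matrix on any point set.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
K = rbf_gram(X, gamma=0.5)
eigs = np.linalg.eigvalsh(K)
```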
On the stability and accuracy of finite difference method for options pricing (Alexander Decker)
1) The document discusses finite difference methods for pricing options, specifically the implicit and Crank-Nicolson methods.
2) It analyzes the stability and accuracy of each method. The Crank-Nicolson method is found to be more accurate and converges faster than the implicit method.
3) Numerical examples are provided to demonstrate the convergence properties of each method compared to the true Black-Scholes value. The results show that the Crank-Nicolson method converges to the solution faster than the implicit method as the time and price steps approach zero.
Flexural analysis of thick beams using single ... (iaemedu)
This document presents a single variable shear deformation theory for flexural analysis of thick isotropic beams. The theory accounts for transverse shear deformation effects using a polynomial displacement field. The governing differential equation and boundary conditions are derived using the principle of virtual work. Results for displacement, stresses, and natural bending frequencies are obtained for simply supported thick beams under various loading cases and compared to exact solutions and other higher-order theories. The theory provides excellent accuracy for transverse shear stresses while avoiding the need for a shear correction factor.
This document provides solutions to logistics management assignment problems. It addresses 6 problems related to network flows, facility location, and vehicle routing. For problem 1, it shows that the difference between the sum of outdegrees and indegrees in a network is always zero. For problem 2, it proves Goldman's majority theorem and provides an algorithm to solve the 1-median problem. The solutions utilize concepts like isthmus edges, node weights, and network separation.
The comparative study of finite difference method and Monte Carlo method f... (Alexander Decker)
This document compares the finite difference method and Monte Carlo method for pricing European options. It provides an overview of these two primary numerical methods used in financial modeling. The Monte Carlo method simulates asset price paths and averages discounted payoffs to estimate option value. It is well-suited for path-dependent options but converges slower than finite difference. The finite difference method solves the Black-Scholes PDE by approximating it on a grid. Specifically, it discusses the Crank-Nicolson scheme, which is unconditionally stable and converges faster than Monte Carlo for standard options.
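A minimal sketch of the Monte Carlo approach described here, compared against the closed-form Black-Scholes price (the parameter values are illustrative):

```python
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def mc_call(S0, K, r, sigma, T, n_paths, seed=0):
    """Monte Carlo price: simulate terminal GBM prices, average payoffs."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    return exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))

exact = bs_call(100, 100, 0.05, 0.2, 1.0)
approx = mc_call(100, 100, 0.05, 0.2, 1.0, 500_000)
```

The Monte Carlo error shrinks only as O(1/sqrt(n_paths)), which is the slow convergence the document contrasts with finite differences.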
We consider a general approach to describing the interaction in multigravity models in a D-dimensional space–time. We present various possibilities for generalizing the invariant volume. We derive the most general form of the interaction potential, which becomes a Pauli–Fierz-type model in the bigravity case. Analyzing this model in detail in the (3+1)-expansion formalism and also requiring the absence of ghosts leads to this bigravity model being completely equivalent to the Pauli–Fierz model. We thus in a concrete example show that introducing an interaction between metrics is equivalent to introducing the graviton mass.
This document discusses techniques for solving eigen problems, including the power method, inverse power method, QR decomposition, and QR algorithm. It provides details on implementing these techniques, such as the steps of the QR algorithm and ways to accelerate its convergence like deflation and ad hoc shifts. References are also included.
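The power method mentioned above is short enough to sketch directly (the example matrix is illustrative):

```python
import numpy as np

def power_method(A, num_iters=500, seed=0):
    """Power iteration (a sketch): repeatedly apply A and renormalize.

    Converges to the dominant eigenpair when the largest-magnitude
    eigenvalue is simple and well separated from the rest.
    """
    rng = np.random.default_rng(seed)
    v = rng.normal(size=A.shape[0])
    for _ in range(num_iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    lam = v @ A @ v  # Rayleigh-quotient estimate of the eigenvalue
    return lam, v

A = np.array([[4.0, 1.0], [2.0, 3.0]])  # eigenvalues 5 and 2
lam, v = power_method(A)
```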
This document discusses several methods for solving systems of linear equations:
- The graphical method involves drawing the lines defined by the equations on a graph and finding their point of intersection.
- Cramer's rule provides an expression to find the solution using determinants of the coefficient matrix and matrices obtained by replacing columns.
- Matrix inverse involves finding the inverse of the coefficient matrix and multiplying it by the constants vector.
- Gauss elimination is a two step method involving eliminating variables in the forward step and back substitution to find the solution.
- LU decomposition writes the matrix as the product of a lower and upper triangular matrix to solve the system.
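The forward-elimination and back-substitution steps above can be sketched as follows (with partial pivoting added for numerical stability; the example system is illustrative):

```python
import numpy as np

def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting (a sketch).

    Forward step: reduce [A | b] to upper-triangular form.
    Back substitution: solve the resulting triangular system.
    """
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):                   # forward elimination
        p = k + np.argmax(np.abs(A[k:, k]))  # partial-pivot row
        A[[k, p]], b[[k, p]] = A[[p, k]].copy(), b[[p, k]].copy()
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # back substitution
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
x = gauss_solve(A, b)  # a classic textbook system with solution (2, 3, -1)
```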
Ireducible core and equal remaining obligations rule for mcst gamesvinnief
This document presents an algorithm for solving minimum cost spanning extension (MCSE) problems, which generalize minimum cost spanning tree problems. The algorithm finds a minimum cost extension of an existing network to connect all users to a source, and generates an associated set of cost allocations. It works by sequentially adding edges in order of non-decreasing cost, such that each added edge does not introduce new cycles. The algorithm's output is proved to be contained within the core of the associated MCSE game and is independent of the specific extension constructed. The paper also generalizes the definition of the "irreducible core" to MCSE problems and shows the algorithm's output coincides with this.
Beating the Spread: Time-Optimal Point MeshingDon Sheehy
We present NetMesh, a new algorithm that produces a conforming Delaunay mesh for point sets in any fixed dimension with guaranteed optimal mesh size and quality.
Our comparison based algorithm runs in time $O(n\log n + m)$, where $n$ is the input size and $m$ is the output size, and with constants depending only on the dimension and the desired element quality bounds.
It can terminate early in $O(n\log n)$ time returning a $O(n)$ size Voronoi diagram of a superset of $P$ with a relaxed quality bound, which again matches the known lower bounds.
The previous best results in the comparison model depended on the log of the <b>spread</b> of the input, the ratio of the largest to smallest pairwise distance among input points.
We reduce this dependence to $O(\log n)$ by using a sequence of $\epsilon$-nets to determine input insertion order in an incremental Voronoi diagram.
We generate a hierarchy of well-spaced meshes and use these to show that the complexity of the Voronoi diagram stays linear in the number of points throughout the construction.
NEW NON-COPRIME CONJUGATE-PAIR BINARY TO RNS MULTI-MODULI FOR RESIDUE NUMBER ...csandit
In this paper a new Binary-to-RNS converters for multi-moduli RNS based on conjugate-pair as
of the set { 2
n1
– 2, 2
n1
+ 2, 2
n2
– 2, 2
n2
+ 2, …, 2
nN
– 2, 2
nN
+ 2 } are presented. 2
n
– 2
and 2
n
+ 2 modulies are called conjugates of each other. Benefits of Multi-moduli RNS
processors are; relying on the sets with pairs of conjugate moduli : 1) Large dynamic ranges. 2)
Fast and balanced RNS arithmetic. 3) Simple and efficient RNS processing hardware. 4)
Efficient weighted-to-RNS and RNS-to-Weighted converters. [1] The dynamic range (M)
achieved by the set above is defined by the least common multiple (LCM) of the moduli. This
new non-coprime conjugate-pair is unique and the only one of its shape as to be shown.
Muon g-2 anomaly
- The measured value of the muon anomalous magnetic moment (g-2) differs from the standard model prediction by more than 3 sigma, indicating potential new physics.
- Future experiments at Fermilab and J-PARC aim to measure the muon g-2 with higher precision and could discover a discrepancy exceeding 5 sigma in the next 3-5 years.
- This research focuses on investigating the source of the current muon g-2 anomaly by considering contributions from supersymmetric models, which could accommodate new particles within the mass reach of current accelerators.
The document discusses prospects for slepton searches in future experiments based on previous works. It summarizes the status of the muon anomalous magnetic moment measurement, which shows a 3-4 sigma discrepancy from the Standard Model prediction. This discrepancy could be explained by contributions from new physics, such as supersymmetry. Supersymmetry predicts superpartner particles like sleptons. The author's dissertation will examine the muon g-2 anomaly within the framework of the minimal supersymmetric standard model and study slepton mass bounds and prospects for discovering sleptons at future colliders in a model-independent way to help explain the muon g-2 discrepancy.
This document describes a new algorithm for dual tree kernel conditional density estimation (KCDE) that provides fast and accurate density predictions. The algorithm extends previous work on univariate KCDE to allow for multivariate labels (Y) and conditioning variables (X). It applies Gray's dual tree approach separately to the numerator and denominator of the KCDE formula, and uses error bounds to ensure the quotient estimates have bounded relative error. This new algorithm provides the fastest known method for kernel conditional density estimation for prediction tasks.
The document discusses Bayesian neural networks and related topics. It covers Bayesian neural networks, stochastic neural networks, variational autoencoders, and modeling prediction uncertainty in neural networks. Key points include using Bayesian techniques like MCMC and variational inference to place distributions over the weights of neural networks, modeling both model parameters and predictions as distributions, and how this allows capturing uncertainty in the network's predictions.
Shape restrictions such as monotonicity often naturally arise. In this talk, we consider a Bayesian approach to monotone nonparametric regression with a normal error. We assign a prior through piecewise constant functions and impose a conjugate normal prior on the coefficient. Since the resulting functions need not be monotone, we project samples from the posterior on the allowed parameter space to construct a “projection posterior”. We obtain the limit posterior distribution of a suitably centered and scaled posterior distribution for the function value at a point. The limit distribution has some interesting similarity and difference with the corresponding limit distribution for the maximum likelihood estimator. By comparing the quantiles of these two distributions, we observe an interesting new phenomenon that coverage of a credible interval can be more than the credibility level, the exact opposite of a phenomenon observed by Cox for smooth regression. We describe a recalibration strategy to modify the credible interval to meet the correct level of coverage.
This talk is based on joint work with Moumita Chakraborty, a doctoral student at North Carolina State University.
The document summarizes research on threshold network models, which generate scale-free networks without growth by assigning intrinsic weights to nodes based on a given distribution and connecting nodes based on whether their total weight exceeds a threshold. The model has been extended to spatial networks by incorporating distance between nodes and to include homophily. Analytical results show the degree distribution and other properties depend on the weight distribution and thresholding function used. Several open problems are also discussed.
The document discusses deep kernel learning, which combines deep learning and Gaussian processes (GPs). It briefly reviews the predictive equations and marginal likelihood for GPs, noting their computational requirements. GPs assume datasets with input vectors and target values, modeling the values as joint Gaussian distributions based on a mean function and covariance kernel. Predictive distributions for test points are also Gaussian. The goal of deep kernel learning is to leverage recent work on efficiently representing kernel functions to produce scalable deep kernels, allowing outperformance of standalone deep learning and GPs on various datasets.
Proportional and decentralized rule mcst games (vinnief)
This document presents two cost allocation rules - the proportional rule and the decentralized rule - for minimum cost spanning extension problems. The proportional rule allocates costs proportionally based on an algorithm that constructs a minimum cost spanning extension while simultaneously allocating link costs. The decentralized rule is based on an algorithm that constructs the extension in a decentralized manner, with links being added one by one and their costs immediately allocated. Both rules are shown to be refinements of the irreducible core defined for these problems. The proportional rule is then characterized axiomatically.
Design Method of Directional GenLOT with Trend Vanishing Moments (Shogo Muramatsu)
Proc. of Asia Pacific Signal and Information Proc. Assoc. Annual Summit and Conf. (APSIPA ASC), pp. 692-701, Biopolis, Singapore, Dec. 14-17, 2010
Zero-padding a signal involves appending artificial zeros to increase the length of the signal. This increases the frequency resolution of the discrete Fourier transform (DFT) by changing the implicit periodicity assumption made about the signal. Specifically, zero-padding moves the DFT closer to approximating the true discrete-time Fourier transform (DTFT) by changing the assumption from periodicity to assuming the signal is zero outside the observed range. While zero-padding does not provide new information, it can help reveal features of a signal by modifying the implicit assumptions of the DFT.
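The effect is easy to demonstrate in a few lines of NumPy (the sample rate, tone frequency, and lengths below are arbitrary choices for the example): a tone that falls between the coarse DFT bins is located much more precisely after zero-padding, even though no new information has been added — the padded DFT merely samples the same underlying DTFT more densely.

```python
import numpy as np

fs = 100.0                           # sample rate in Hz (illustrative)
t = np.arange(64) / fs
x = np.sin(2 * np.pi * 12.3 * t)     # a 12.3 Hz tone, off the 64-point bin grid

# plain 64-point DFT: bin spacing fs/64 ~ 1.56 Hz
X1 = np.fft.rfft(x)
f1 = np.fft.rfftfreq(len(x), 1 / fs)
peak1 = f1[np.argmax(np.abs(X1))]    # snaps to the nearest coarse bin (12.5 Hz)

# zero-pad to 1024 samples: bin spacing fs/1024 ~ 0.1 Hz
xp = np.concatenate([x, np.zeros(1024 - len(x))])
X2 = np.fft.rfft(xp)
f2 = np.fft.rfftfreq(len(xp), 1 / fs)
peak2 = f2[np.argmax(np.abs(X2))]    # lands much closer to the true 12.3 Hz
```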
This document summarizes a seminar on kernels and support vector machines. It begins by explaining why kernels are useful for increasing flexibility and speed compared to direct inner product calculations. It then covers definitions of positive definite kernels and how to prove a function is a kernel. Several kernel families are discussed, including translation invariant, polynomial, and non-Mercer kernels. Finally, the document derives the primal and dual problems for support vector machines and explains how the kernel trick allows non-linear classification.
On the stability and accuracy of finite difference method for options pricing (Alexander Decker)
1) The document discusses finite difference methods for pricing options, specifically the implicit and Crank-Nicolson methods.
2) It analyzes the stability and accuracy of each method. The Crank-Nicolson method is found to be more accurate and converges faster than the implicit method.
3) Numerical examples are provided to demonstrate the convergence properties of each method compared to the true Black-Scholes value. The results show that the Crank-Nicolson method converges to the solution faster than the implicit method as the time and price steps approach zero.
Flexural analysis of thick beams using single variable shear deformation theory (iaemedu)
This document presents a single variable shear deformation theory for flexural analysis of thick isotropic beams. The theory accounts for transverse shear deformation effects using a polynomial displacement field. The governing differential equation and boundary conditions are derived using the principle of virtual work. Results for displacement, stresses, and natural bending frequencies are obtained for simply supported thick beams under various loading cases and compared to exact solutions and other higher-order theories. The theory provides excellent accuracy for transverse shear stresses while avoiding the need for a shear correction factor.
This document provides solutions to logistics management assignment problems. It addresses 6 problems related to network flows, facility location, and vehicle routing. For problem 1, it shows that the difference between the sum of outdegrees and indegrees in a network is always zero. For problem 2, it proves Goldman's majority theorem and provides an algorithm to solve the 1-median problem. The solutions utilize concepts like isthmus edges, node weights, and network separation.
The comparative study of finite difference method and Monte Carlo method for pricing European options (Alexander Decker)
This document compares the finite difference method and the Monte Carlo method for pricing European options. It provides an overview of these two primary numerical methods used in financial modeling. The Monte Carlo method simulates asset price paths and averages discounted payoffs to estimate the option value. It is well suited for path-dependent options but converges more slowly than finite difference methods. The finite difference method solves the Black-Scholes PDE by approximating it on a grid. Specifically, the document discusses the Crank-Nicolson scheme, which is unconditionally stable and converges faster than Monte Carlo for standard options.
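The comparison is easy to reproduce for a vanilla European call, where the Black-Scholes formula gives the exact value a Monte Carlo estimate should approach (the parameter values and function names below are illustrative):

```python
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def mc_call(S0, K, r, sigma, T, n_paths=200_000, seed=0):
    """Monte Carlo price: simulate terminal prices under GBM,
    average the discounted payoffs."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * sqrt(T) * z)
    return exp(-r * T) * np.maximum(ST - K, 0.0).mean()
```

With 200,000 paths the standard error is a few cents; halving it requires four times as many paths, which is the slow O(1/√n) convergence the document refers to.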
We consider a general approach to describing the interaction in multigravity models in a D-dimensional space–time. We present various possibilities for generalizing the invariant volume. We derive the most general form of the interaction potential, which becomes a Pauli–Fierz-type model in the bigravity case. Analyzing this model in detail in the (3+1)-expansion formalism and also requiring the absence of ghosts leads to this bigravity model being completely equivalent to the Pauli–Fierz model. We thus in a concrete example show that introducing an interaction between metrics is equivalent to introducing the graviton mass.
This document discusses techniques for solving eigen problems, including the power method, inverse power method, QR decomposition, and QR algorithm. It provides details on implementing these techniques, such as the steps of the QR algorithm and ways to accelerate its convergence like deflation and ad hoc shifts. References are also included.
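The power method from the list above fits in a few lines (a sketch; the Rayleigh-quotient stopping rule and tolerance are one common choice, not necessarily the document's):

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=1000, seed=0):
    """Dominant eigenpair of A by repeated multiplication and normalization."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        x_new = y / np.linalg.norm(y)
        lam_new = x_new @ A @ x_new        # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:       # stop when the estimate settles
            return lam_new, x_new
        x, lam = x_new, lam_new
    return lam, x
```

The inverse power method applies the same iteration to (A − σI)⁻¹, which is also how the shifts mentioned above accelerate convergence toward a targeted eigenvalue.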
This document discusses several methods for solving systems of linear equations:
- The graphical method involves drawing the lines defined by the equations on a graph and finding their point of intersection.
- Cramer's rule provides an expression to find the solution using determinants of the coefficient matrix and matrices obtained by replacing columns.
- Matrix inverse involves finding the inverse of the coefficient matrix and multiplying it by the constants vector.
- Gauss elimination is a two step method involving eliminating variables in the forward step and back substitution to find the solution.
- LU decomposition writes the matrix as the product of a lower and upper triangular matrix to solve the system.
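Three of the listed methods can be compared on a single small system (the 3×3 example is an illustrative choice; note that NumPy's `np.linalg.solve` is itself an LU-based solver):

```python
import numpy as np

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

# Cramer's rule: x_i = det(A_i) / det(A), A_i = A with column i replaced by b
detA = np.linalg.det(A)
x_cramer = np.empty(3)
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b
    x_cramer[i] = np.linalg.det(Ai) / detA

# Matrix inverse: x = A^{-1} b (fine for tiny systems, avoided in practice)
x_inv = np.linalg.inv(A) @ b

# LU-based solve, i.e. forward elimination plus back substitution
x_lu = np.linalg.solve(A, b)
```

All three agree on the solution (2, 3, −1); Cramer's rule is only practical for very small systems because each determinant is as expensive as a full elimination.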
The document discusses the equivalence between sampling problems and search problems in complexity theory. It shows that for any sampling problem S, there exists a corresponding search problem R_S that is equivalent to S. Specifically, R_S can be solved via the same complexity class if and only if S can be solved via the corresponding sampling complexity class. This establishes a reduction from sampling to search problems. The reduction relies on the notion of Kolmogorov complexity to define an appropriate hard search problem R_S corresponding to a given sampling problem S.
The document describes a method for canonizing graphs of bounded treewidth in AC1 complexity. It presents the following:
1) Existing results showing canonization of bounded treewidth graphs is in P, TC1, TC2, and LogCFL complexity classes.
2) A new algorithm that canonizes bounded treewidth graphs in AC1 complexity by computing a tree decomposition of depth O(log n) and constructing a minimal description circuit of depth O(log n).
3) The algorithm works by computing descriptions for bags of the tree decomposition in parallel, sorting descriptions, and recursively combining descriptions while maintaining a circuit depth of O(log n).
This document discusses computational models for algebraic decision trees and algebraic computation trees. For algebraic decision trees, each node tests a polynomial of degree d or less, and the tree accepts or rejects inputs. Algebraic computation trees are more powerful, allowing nodes to calculate testing polynomials from previous results. The document examines complexity bounds for solving membership problems using these models over different fields F.
This document discusses Maltsev digraphs, which are directed graphs that admit a 3-ary polymorphism called a Maltsev polymorphism. The document proves several theorems characterizing when a digraph admits a Maltsev or conservative Maltsev polymorphism based on forbidden substructures. It explores applications to the complexity of constraint satisfaction problems over such digraphs.
The document discusses locally decodable codes, which allow recovery of individual data symbols from a coded data set even after erasures. Reed-Muller codes and multiplicity codes were early constructions that provided locality but only up to a rate of 0.5. Matching vector codes were later introduced and can achieve locality r for codes of positive rate and length n=O(r^2). However, the optimal tradeoff between rate, length, and locality remains an open problem.
The document discusses creating a complexity theory for randomized search heuristics. It uses the example of the Mastermind problem, where an oracle chooses a secret binary string and an algorithm tries to discover it by querying strings and receiving feedback on matches. This is modeled as a black-box optimization problem. The document proposes analyzing such problems using a ranking-based query complexity model rather than only analyzing specific algorithms on specific problems. It suggests this approach could provide general lower bounds and help develop a complexity theory for randomized search heuristics.
This document summarizes research on the computational complexity of finding surjective homomorphisms between graphs. It introduces the concept of surjective homomorphisms from irreflexive graphs to partially reflexive graphs. The main results are:
1. The problem of finding surjective homomorphisms (Surjective H-Coloring) is polynomial-time solvable if the graph H has reflexive vertices that induce a connected subgraph, and is NP-complete otherwise.
2. For polynomial-time cases, an algorithm runs in time O(n^k) where n is the number of vertices and k is the number of leaves in H.
3. Surjective H-Coloring is fixed-parameter tractable.
This document summarizes research on the combinatorial properties of Burrows-Wheeler Transforms (BWT). It discusses prior work that characterized words with simple BWT image forms. It also introduces two general decision problems about BWT images and claims to provide efficient solutions to these problems. Specifically, it presents a theorem providing a criterion to check whether a given word is a valid BWT image based on analyzing the number of orbits in the word's stable sorting.
This document discusses the relationships between orbits of linear maps and regular languages. It shows that the chamber hitting problem (CHP) and the permutation-filter realizability problem are Turing equivalent. It also shows that the injective-filter and surjective-filter realizability problems are decidable, by reducing them to problems about orbits. However, the regular realizability problem for the track product of the periodic and permutation filters is undecidable: the undecidable zero-in-the-upper-right-corner problem reduces to it.
The document discusses efficient algorithms for performing approximate matching queries on strings that have been grammar-compressed. It introduces the concept of implicit unit-Monge matrices, which can represent permutation matrices in a space-efficient way using a range tree data structure. This representation allows dominance counting queries, needed for string comparison, to be performed in O(log² n) time after an O(n log n) preprocessing step. More advanced data structures can improve these asymptotic time and space bounds further.
Kristoffer Arnsfelt Hansen, Rasmus Ibsen-Jensen and Peter Bro Miltersen. The complexity of solving reachability games using value and strategy iteration
This document summarizes an algorithm for maximizing throughput in online scheduling of equal length jobs. The algorithm aims to schedule incoming jobs with the goal of maximizing total value of completed jobs by their deadlines. It uses a charging scheme and potential function to prove it is (2+√5)-competitive, an improvement over prior algorithms. The algorithm handles jobs arriving online with weights, processing times, deadlines, and considers models where preemption allows restarting or resuming previously completed work. Open questions remain around settling the exact competitive ratio and developing new algorithmic methods.
This document provides tips for living a beautiful life, including taking daily walks while smiling, sitting in silence for 10 minutes each day, spending time with both the elderly and young children, making an effort to make others smile, and accepting that life's problems are temporary lessons. It encourages living with energy, enthusiasm, empathy, faith, family and friends.
The document presents a metaphor comparing one's life to a bank account that is credited with 86,400 seconds (one day) each morning. The time not used each day is lost and cannot be reclaimed, so it is important to make the most of each day. Realizing the value of even small amounts of time, like minutes or seconds, is important by considering what could be missed or lost in that short span. The present is a gift and the only time we have, so it's important to treasure each moment and spend time with people who matter.
The document describes a presentation on bit probe schemes with one-sided error. Specifically, it discusses using pseudo-random graphs to construct a bit probe scheme where querying whether an element is in a set requires reading only one bit from the database. This improves upon prior work that had exponential computation time or two-sided errors. The presentation outlines a scheme using a random processor of queries, a small cached first-level memory, and a larger second-level database memory. It argues that for any set, most graphs will be "suitable" such that elements in the set have mostly blue neighbors, and proposes using a pseudo-random graph with a cached seed to construct the graph as needed.
The document discusses applications of graphs with bounded treewidth. It covers the following key points:
1) Courcelle's theorem shows that many NP-complete graph problems can be solved in linear time for graphs of bounded treewidth using monadic second-order logic. This includes problems like independent set, coloring, and Hamiltonian cycle.
2) The treewidth of a graph is closely related to its largest grid minor - graphs with large treewidth contain large grid minors. There are polynomial relationships between treewidth and largest grid minor for planar graphs.
3) Planar graphs have bounded treewidth if and only if they exclude some grid configuration as a contraction. This helps characterize planar graphs of bounded treewidth.
This document summarizes key concepts about minimum spanning trees from a lecture by Dr. Muhammad Hanif Durad. It discusses motivation for finding minimum spanning trees, properties of MSTs, and algorithms like Kruskal's algorithm and Prim's algorithm for finding an MST in polynomial time. Kruskal's algorithm finds the MST by greedily adding the minimum weight edge that connects two components in a forest. Prim's algorithm finds the MST by growing a tree by greedily adding the minimum weight edge connecting the growing tree to a vertex not yet included.
The document discusses various topics relating to trees in discrete mathematics. It begins by defining different types of trees such as free trees, rooted trees, and binary trees. It then discusses concepts like the level and height of vertices in rooted trees. Other topics covered include minimum spanning trees, tree traversals, decision trees, isomorphism of trees, and algorithms for testing isomorphism between binary trees.
The document discusses greedy algorithms and their applications. It provides examples of problems that greedy algorithms can solve optimally, such as the change making problem and finding minimum spanning trees (MSTs). It also discusses problems where greedy algorithms provide approximations rather than optimal solutions, such as the traveling salesman problem. The document describes Prim's and Kruskal's algorithms for finding MSTs and Dijkstra's algorithm for solving single-source shortest path problems. It explains how these algorithms make locally optimal choices at each step in a greedy manner to build up global solutions.
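Dijkstra's greedy choice — always settle the unvisited vertex with the smallest tentative distance — can be sketched with a binary heap (the adjacency-dict representation and names are illustrative):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths for non-negative edge weights.
    graph maps each node to a list of (neighbor, weight) pairs."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                       # stale queue entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd               # found a shorter route to v
                heapq.heappush(pq, (nd, v))
    return dist
```

The locally optimal pop from the heap is globally correct precisely because edge weights are non-negative — the same property that fails for the traveling salesman problem, where greed only approximates.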
The document discusses algorithms for finding minimum spanning trees (MSTs) in graphs. It describes:
1) The minimum spanning tree problem of finding a subset of edges in a graph that connects all vertices with minimum total weight.
2) Two algorithms - Kruskal's algorithm and Prim's algorithm - for solving this problem by growing an MST one edge at a time.
3) Kruskal's algorithm sorts edges by weight and adds edges between trees until all vertices are connected, running in O(E log V) time.
4) Prim's algorithm grows the MST from an initial vertex by adding minimum weight edges between the current tree and other vertices, using a priority queue.
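Kruskal's algorithm as described in point 3 can be sketched with a union-find structure (the edge-list format and example weights are illustrative):

```python
def kruskal(n, edges):
    """MST of a connected graph on vertices 0..n-1.
    edges is a list of (weight, u, v) tuples."""
    parent = list(range(n))

    def find(x):
        """Union-find root lookup with path halving."""
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):      # consider edges in weight order
        ru, rv = find(u), find(v)
        if ru != rv:                   # edge joins two different trees: keep it
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total
```

The sort dominates the running time, giving the O(E log V) bound stated above; the union-find operations are nearly constant amortized.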
An Altering Distance Function in Fuzzy Metric Fixed Point Theorems (ijtsrd)
The aim of this paper is to improve conditions proposed in recently published fixed point results for complete and compact fuzzy metric spaces. For this purpose, altering distance functions are used. Moreover, in some of the results presented, the class of t-norms is extended by using the theory of countable extensions of t-norms. Dr. C. Vijender, "An Altering Distance Function in Fuzzy Metric Fixed Point Theorems," International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN 2456-6470, Volume 1, Issue 5, August 2017. URL: http://www.ijtsrd.com/papers/ijtsrd2293.pdf http://www.ijtsrd.com/mathemetics/other/2293/an-altering-distance-function-in-fuzzy-metric-fixed-point-theorems/dr-c-vijender
A total dominating set D of a graph G = (V, E) is a total strong split dominating set if the induced subgraph < V-D > is totally disconnected with at least two vertices. The total strong split domination number γtss(G) is the minimum cardinality of a total strong split dominating set. In this paper, we characterize total strong split dominating sets and obtain the exact values of γtss(G) for some graphs. Some inequalities for γtss(G) are also established.
This document discusses the derivation of the formula for wave speed on a string. It begins by isolating a small segment of string at the peak of a pulse. The formula for wave speed is then given as v=√(Ts/μ), where Ts is the tension force on the segment and μ is the linear mass density of the string. The document then defines linear mass density as the mass per unit length and tension force as the restoring force pulling the string segment inward. Through applying Newton's second law to the segment, the formula is derived. Examples are then given to show how wave speed is affected by changes in tension or mass density.
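The formula is a one-liner to evaluate (the string's numbers below are made-up example values):

```python
from math import sqrt

def wave_speed(tension, mass, length):
    """v = sqrt(T / mu), where mu = mass / length is the linear mass density."""
    mu = mass / length
    return sqrt(tension / mu)

# an 8-gram, 2-meter string under 80 N of tension
v = wave_speed(80.0, 0.008, 2.0)   # mu = 0.004 kg/m, v ~ 141 m/s
```

Because v scales as the square root, quadrupling the tension doubles the speed, while doubling the mass per unit length divides it by √2.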
EDGE-TENACITY IN CYCLES AND COMPLETE GRAPHS (ijfcstjournal)
It is well known that tenacity is a proper measure for studying vulnerability and reliability in graphs. Here, a modified edge-tenacity of a graph is introduced based on the classical definition of tenacity. Properties and bounds for this measure are presented, and the edge-tenacity is calculated for cycle graphs and complete graphs.
A Note on "Geraghty contraction type mappings" (IOSRJM)
This document summarizes a paper that proves a fixed point result for φα-Geraghty contraction type mappings. The paper generalizes previous results by replacing the continuity condition of φ with a weaker condition. Specifically:
1) The paper defines φα-Geraghty contraction type mappings and proves a fixed point theorem for such mappings under certain conditions.
2) This generalizes previous results that assumed continuity of φ by replacing it with a weaker limit condition.
3) The main theorem proves that a φα-Geraghty contraction type mapping has a fixed point if it is triangular α-orbital admissible, has an α-admissible starting point, and satisfies the weaker limit condition.
Characterization of trees with equal total edge domination and double edge domination numbers (Alexander Decker)
This document provides a constructive characterization of trees with equal total edge domination and double edge domination numbers. It begins with definitions of total edge domination number and double edge domination number. It then proves several observations and results about these numbers in trees. The main result is Theorem 3.1, which proves that for any tree T, the double edge domination number of T is greater than or equal to the total edge domination number of T. It then introduces six types of operations to construct trees with equal numbers. Lemmas 3.2 through 3.4 prove that applying these operations to trees that already have equal numbers results in another tree with equal numbers.
On algorithmic problems concerning graphs of higher degree of symmetry (graphhoc)
Since the ancient determination of the five Platonic solids, the study of symmetry and regularity has always been one of the most fascinating aspects of mathematics. One intriguing phenomenon in graph theory is that arithmetic regularity properties of a graph quite often imply the existence of many symmetries, i.e. a large automorphism group G. In some important special situations, a higher degree of regularity means that G is an automorphism group of a finite geometry. For example, a glance through the list of distance-regular graphs of diameter d < 3 reveals that most of them are connected with classical Lie geometry. The theory of distance-regular graphs is an important part of algebraic combinatorics and its applications, such as coding theory, communication networks, and block design. An important tool for investigating such graphs is their spectrum, the set of eigenvalues of the adjacency matrix of a graph. Let G be a finite simple group of Lie type and X be the set of homogeneous elements of the associated geometry. The complexity of computing the adjacency matrix of a graph Gr on the vertex set X such that Aut Gr = G depends very much on the description of the geometry with which one starts. For example, one can represent the geometry as (1) the totality of cosets of parabolic subgroups, (2) chains of embedded subspaces (in the case of linear groups) or totally isotropic subspaces (in the case of the remaining classical groups), or (3) special subspaces of a minimal module for G defined in terms of a G-invariant multilinear form. The aim of this research is to develop an effective method for generating graphs connected with classical geometry and evaluating their spectra. The main approach is to avoid manual drawing and to calculate the graph layout automatically according to its formal structure. This is a simple task for a tree-like graph with a strict hierarchy of entities, but it becomes more complicated for graphs of a geometrical nature. There are two main reasons for investigating spectra: (1) very often the spectrum carries much more useful information about the graph than a corresponding list of entities and relationships, and (2) graphs with special spectra satisfying the so-called Ramanujan property, or simply Ramanujan graphs (named after the Indian mathematician), are important for real-life applications (see [13]). There is a motivated suspicion that among geometrical graphs one could find some new Ramanujan graphs.
This document discusses various graph parameters related to distance between vertices in a graph, including diameter, radius, and average distance. It examines how these parameters are defined, how they relate to each other and other graph properties, and how they behave in different graph classes. It also discusses characterizations of graph classes based on distance properties and generalizations to concepts like Steiner trees.
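For an unweighted connected graph, all three parameters fall out of one round of breadth-first searches (a sketch; the adjacency-dict format is an assumption):

```python
from collections import deque

def bfs_dist(adj, s):
    """Distances from s to every reachable vertex via BFS."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def distance_params(adj):
    """Diameter, radius and average distance of a connected graph."""
    ecc = {}
    total, pairs = 0, 0
    for u in adj:
        d = bfs_dist(adj, u)
        ecc[u] = max(d.values())          # eccentricity of u
        total += sum(d.values())
        pairs += len(d) - 1
    # diameter = max eccentricity, radius = min eccentricity
    return max(ecc.values()), min(ecc.values()), total / pairs
```

The average here runs over ordered pairs, which coincides with the unordered average since distance is symmetric.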
Subproblem-Tree Calibration: A Unified Approach to Max-Product Message Passing (Varad Meru)
Max-product message passing algorithms are commonly used for MAP inference in MRFs. Recent work showed these algorithms can be viewed as performing block coordinate descent in a dual objective. However, existing algorithms are limited by the restricted ways they select blocks to update. The paper proposes a "Subproblem-Tree Calibration" framework that subsumes MPLP, MSD, and TRW-S as special cases and allows more flexible block selection. The algorithm represents the problem as a subproblem multi-graph and calibrates potentials on randomly selected subproblem trees via message passing, achieving dual optimality with respect to the tree's block of variables. Experimental results show the approach converges to different dual objectives than existing methods.
A Quest for Subexponential Time Parameterized Algorithms for Planar-k-Path: F... (cseiitgn)
The document summarizes a talk on obtaining subexponential time algorithms for NP-hard problems on planar graphs. It discusses using treewidth and tree decompositions to solve problems like 3-coloring in 2^O(√n) time on n-vertex planar graphs. It also discusses the exponential time hypothesis and how it implies lower bounds, showing these algorithms are optimal up to constant factors in the exponent. The document outlines several chapters, including using grid minors and bidimensionality to obtain 2^O(√k) algorithms for problems like k-path, even for some W[1]-hard problems parameterized by k.
This document discusses using tensor decomposition models for context-aware recommendation systems. It introduces tensor ring decomposition as a new model and describes its advantages over other tensor decomposition models like CP and Tucker in handling high-order data without dimensionality issues. The document proposes using a max-norm regularization with tensor ring decomposition to build recommendation models from tensorized contextual data like tags and timestamps, experimenting on MovieLens datasets.
This document discusses matchings and factors in graphs. It begins by defining terms like matching, maximal matching, maximum matching, and alternating and augmenting paths. It presents theorems like Berge's theorem characterizing maximum matchings and Hall's marriage theorem giving conditions for a matching that saturates one side of a bipartite graph. It also discusses vertex covers, independence numbers, and their relationships to matchings. Algorithms for finding maximum matchings in bipartite graphs, like the augmenting path algorithm, are also covered.
We first consider a ternary matrix group related to the von Neumann regular semigroups and to the Artin braid group (in an algebraic way). The product of a special kind of ternary matrices (idempotent and of finite order) reproduces the regular semigroups and braid groups with their binary multiplication of components. We then generalize the construction to the higher arity case, which allows us to obtain some higher degree versions (in our sense) of the regular semigroups and braid groups. The latter are connected with the generalized polyadic braid equation and R-matrix introduced by the author, which differ from any version of the well-known tetrahedron equation and higher-dimensional analogs of the Yang-Baxter equation, n-simplex equations. The higher degree (in our sense) Coxeter group and symmetry groups are then defined, and it is shown that these are connected only in the non-higher case.
1. Introduction Triangular tree-width Permanents of bounded ttw matrices Conclusions
An extended tree-width notion for directed graphs
related to the computation of permanents
Klaus Meer
Brandenburgische Technische Universität, Cottbus, Germany
CSR St.Petersburg, June 16, 2011
Klaus Meer An extended tree-width notion
Outline
1 Introduction
2 Basics; triangular tree-width
3 Permanents of bounded ttw matrices
4 Conclusion
1. Introduction
Many hard graph-theoretic problems become efficiently solvable
when restricted to classes of graphs of bounded tree-width.
Well-known examples important here, see Courcelle, Makowsky,
Rotics:
permanent of a square matrix M = (mij) of bounded tree-width;
here, the tree-width of M is the tree-width of the underlying
graph GM, which has an edge (i, j) iff mij ≠ 0
Valiant: computing the permanent of general 0-1 matrices is
#P-hard; if matrices have entries from a field K of characteristic
≠ 2, the permanent polynomials form a VNP-complete family
Hamiltonian cycle decision problem
Important: though GM above is directed, its tree-width is taken as
that of the undirected graph, i.e., an edge (i, j) is present if at
least one of the entries mij or mji is non-zero;
thus, the tree-width of GM does not reflect a lack of symmetry
where mij ≠ 0 but mji = 0.
However, that might have an impact on the computation of the
permanent.
Example: for an upper triangular (n, n)-matrix M the tree-width of
GM is n − 1; its permanent nevertheless is easy to compute.
Goal: introduce a new tree-width notion, called triangular tree-width,
for directed graphs such that
for square matrices M, bounding the triangular tree-width of
GM allows computing perm(M) efficiently
the (undirected) tree-width of GM is greater than or equal to
its triangular tree-width; examples where the former is
unbounded whereas the latter is not do exist.
2. Triangular tree-width
Tree-width measures how close a graph is to a tree; on trees many
otherwise hard problems are easy
Definition (Tree-width)
G = (V, E) a graph; a k-tree-decomposition of G is a tree
T = (VT, ET) together with:
(i) for each t ∈ VT a subset Xt ⊆ V of size at most k + 1;
(ii) for each edge (u, v) ∈ E there is a t ∈ VT s.t. {u, v} ⊆ Xt;
(iii) for each vertex v ∈ V the set {t ∈ VT | v ∈ Xt} forms a
(connected) subtree of T.
twd(G): smallest k such that there exists a k-tree-decomposition
Trees have tree-width 1, cycles have twd 2
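Conditions (i)-(iii) are easy to verify mechanically. As an illustrative Python sketch (not part of the talk; is_tree_decomposition is a hypothetical helper), the 4-cycle with two bags of size 3 witnesses tree-width 2:

```python
def is_tree_decomposition(n, edges, bags, tree_edges, k):
    """Check conditions (i)-(iii) of a k-tree-decomposition.
    Vertices are 1..n; bags maps a tree node t to the set Xt."""
    # (i) every bag contains at most k + 1 vertices
    if any(len(b) > k + 1 for b in bags.values()):
        return False
    # (ii) every edge lies inside some bag
    for u, v in edges:
        if not any({u, v} <= b for b in bags.values()):
            return False
    # (iii) for each vertex the bags containing it induce a subtree
    for v in range(1, n + 1):
        nodes = {t for t, b in bags.items() if v in b}
        if not nodes:
            return False          # every vertex must occur in some bag
        seen, stack = set(), [next(iter(nodes))]
        while stack:              # traverse the tree restricted to `nodes`
            t = stack.pop()
            if t in seen:
                continue
            seen.add(t)
            for a, b in tree_edges:
                if a == t and b in nodes:
                    stack.append(b)
                if b == t and a in nodes:
                    stack.append(a)
        if seen != nodes:
            return False
    return True

# The 4-cycle: two bags of size 3 give a 2-tree-decomposition,
# matching the remark that cycles have tree-width 2.
c4 = [(1, 2), (2, 3), (3, 4), (4, 1)]
bags = {0: {1, 2, 4}, 1: {2, 3, 4}}
print(is_tree_decomposition(4, c4, bags, [(0, 1)], 2))   # True
print(is_tree_decomposition(4, c4, bags, [(0, 1)], 1))   # False (bags too big)
```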
Tree-width of a matrix M = (mij) over K: the (undirected) tree-width
of the (directed) incidence graph GM
(i, j) is an edge in GM ⇔ mij ≠ 0
weight of edge (i, j) = mij;
cycle cover of GM: a subset of edges s.t. each vertex is incident with
exactly one outgoing and one ingoing edge;
partial cycle cover: a subset of a cycle cover
weight of a (partial) cycle cover: the product of the weights of the
participating edges
Permanent: perm(M) := Σ_C Π_{(i,j)∈C} mij, the sum ranging over all
cycle covers C of GM
Valiant's conjecture: computing the permanent is hard
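Since cycle covers of GM correspond exactly to permutations π with all mi,π(i) ≠ 0, the definition can be evaluated directly for small matrices. A brute-force Python sketch (exponential; for illustration only, not the efficient algorithm of the talk):

```python
from itertools import permutations

def permanent(M):
    """perm(M) as the weighted count of cycle covers of G_M: cycle covers
    correspond exactly to permutations pi with all m[i][pi(i)] nonzero."""
    n = len(M)
    total = 0
    for pi in permutations(range(n)):
        w = 1
        for i in range(n):
            w *= M[i][pi[i]]
            if w == 0:         # pi uses a missing edge: not a cycle cover
                break
        total += w
    return total

print(permanent([[1, 2], [3, 4]]))   # 1*4 + 2*3 = 10
print(permanent([[2, 5], [0, 3]]))   # upper triangular: only the diagonal, 6
```

The second call illustrates the earlier remark: for an upper triangular matrix the only surviving cycle cover is the diagonal one, so the permanent is easy despite large tree-width.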
Basic idea for defining triangular tree-width
Let GM = (V, E) with V = {1, . . . , n} and consider an order on
the vertices, e.g., 1 < 2 < . . . < n;
if a cycle contains an increasing edge w.r.t. the order, it must
also contain a decreasing edge.
For the computation of the permanent, the tree-width of the graph
of increasing edges and that of the graph of decreasing edges are
more important than twd(GM);
find an optimal order with respect to bounding both tree-width
parameters.
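Splitting the edge set w.r.t. a chosen order is straightforward; the sketch below (split_edges is a hypothetical helper) illustrates on a directed triangle that a cycle containing an increasing edge necessarily also contains a decreasing one:

```python
def split_edges(edges, pos):
    """Split directed edges into increasing and decreasing ones w.r.t. a
    vertex order given by pos[v]; loops and backward edges count as
    decreasing."""
    inc = [(u, v) for u, v in edges if pos[u] < pos[v]]
    dec = [(u, v) for u, v in edges if pos[u] >= pos[v]]
    return inc, dec

# A directed 3-cycle under the order 1 < 2 < 3: two increasing edges
# force at least one decreasing edge to close the cycle.
cycle = [(1, 2), (2, 3), (3, 1)]
inc, dec = split_edges(cycle, {1: 1, 2: 2, 3: 3})
print(inc, dec)   # [(1, 2), (2, 3)] [(3, 1)]
```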
Definition (Triangular tree-width ttw)
M a square matrix, GM = (V, E) with V = {1, . . . , n}, σ : V → V a
permutation;
a) Gσ^inc = (V, Eσ^inc) is the graph of increasing edges w.r.t. the
order defined by σ, Gσ^dec accordingly; loops are located in Eσ^dec
b) GM has a triangular tree-decomposition of width k ∈ N iff
there exists a σ s.t. both Gσ^inc, Gσ^dec have tree-width at most k
c) ttw(GM) := min_{σ∈Sn} max{twd(Gσ^inc), twd(Gσ^dec)}.
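For tiny graphs, part c) can be evaluated literally by brute force. The Python sketch below (treewidth and ttw are illustrative helpers, not from the talk) is exponential and only meant to make the definition concrete; computing ttw is NP-hard in general, as noted below:

```python
from itertools import permutations

def treewidth(vertices, edges):
    """Exact tree-width via elimination orderings (exponential -- fine
    for tiny examples). Edges are taken as undirected; loops ignored."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    best = max(len(vertices) - 1, 0)      # trivial upper bound n - 1
    for order in permutations(vertices):
        a = {v: set(nb) for v, nb in adj.items()}
        width = 0
        for v in order:
            nbs = a.pop(v)
            width = max(width, len(nbs))
            if width >= best:
                break
            for x in nbs:                 # eliminating v makes its
                a[x].discard(v)           # neighbourhood a clique
                a[x].update(nbs - {x, v})
        else:
            best = width
    return best

def ttw(vertices, dir_edges):
    """ttw(G_M) = min over permutations sigma of
       max{twd(G_sigma^inc), twd(G_sigma^dec)} -- part c) above."""
    best = max(len(vertices) - 1, 0)
    for order in permutations(vertices):
        pos = {v: i for i, v in enumerate(order)}
        inc = [(u, v) for u, v in dir_edges if pos[u] < pos[v]]
        dec = [(u, v) for u, v in dir_edges if pos[u] >= pos[v]]
        best = min(best, max(treewidth(vertices, inc),
                             treewidth(vertices, dec)))
    return best

cycle3 = [(1, 2), (2, 3), (3, 1)]       # directed triangle
print(treewidth([1, 2, 3], cycle3))     # 2 (undirected view)
print(ttw([1, 2, 3], cycle3))           # 1
```

The triangle already shows the gap: its undirected tree-width is 2, but splitting the edges by any order leaves two forests, so its triangular tree-width is 1.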
Basic observations:
if M has a symmetric structure of non-zero entries, i.e.,
mij ≠ 0 ⇔ mji ≠ 0, then ttw(GM) = twd(GM)
computing the triangular tree-width of a directed graph is
NP-hard
triangular tree-width extends the tree-width notion; in
particular, there are families of graphs for which the former is
bounded whereas the latter is not
[Figure: the directed grid graph Gn for n = 6 with vertices vij,
1 ≤ i, j ≤ 6; edges are directed from left to right and from top to
bottom]
twd(Gn) tends to ∞ for n → ∞ (Thomassen)
Order the vertices from left to right and from bottom to top; then
every horizontal edge is increasing, every vertical edge is
decreasing, and ttw(Gn,σ) = 1.
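This claim can be checked mechanically: under such an order each edge class is a disjoint union of paths, hence a forest, and forests have tree-width at most 1. A small Python sketch (reading "left to right and bottom to top" as column-major order is an assumption; is_forest is a hypothetical helper):

```python
def is_forest(n_vertices, edges):
    """Union-find acyclicity test: in a forest no edge joins two
    vertices that are already connected."""
    parent = list(range(n_vertices))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

n = 6
V = [(i, j) for i in range(n) for j in range(n)]   # (row, column), row 0 on top
E = ([((i, j), (i, j + 1)) for i in range(n) for j in range(n - 1)]    # rightward
   + [((i, j), (i + 1, j)) for i in range(n - 1) for j in range(n)])   # downward

# Order columns left to right and, within each column, rows bottom to top.
pos = {(i, j): j * n + (n - 1 - i) for (i, j) in V}
inc = [(u, v) for u, v in E if pos[u] < pos[v]]    # all horizontal edges
dec = [(u, v) for u, v in E if pos[u] >= pos[v]]   # all vertical edges

idx = {v: k for k, v in enumerate(V)}
inc_forest = is_forest(len(V), [(idx[u], idx[v]) for u, v in inc])
dec_forest = is_forest(len(V), [(idx[u], idx[v]) for u, v in dec])
print(inc_forest, dec_forest)   # True True -> both tree-widths are at most 1
```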
3. Permanents of bounded ttw matrices
Theorem (Main theorem)
Let {Mi}i∈I be a family of matrices of triangular tree-width at
most k ∈ N. For every member M of the family,
given corresponding tree-decompositions of the graphs of
increasing and of decreasing edges, perm(M) can be computed in
time polynomial in the size of M. The computation is fixed
parameter tractable w.r.t. k.
Typically, proving such theorems involves climbing a
tree-decomposition bottom up, see, e.g., Flarup & Koiran &
Lyaudet;
once a vertex has been removed from a bag during this process,
it need not be considered any longer.
Main problem: we have to deal with two tree-decompositions. In
general, a vertex that has been removed in one can occur further
up in the other. Thus, it cannot be removed in the usual way and
backtracking cannot be avoided.
Solution to this problem: guarantee that one of the two
decompositions bounds the number of occurrences of each vertex in
bags to, say, 10 · k many.
Definition (Perfect decompositions)
A directed graph G = (V, E) has a perfect triangular tree-decomposition
of width k if there is a permutation σ and two corresponding
tree-decompositions Tσ^inc, Tσ^dec of width k for Gσ^inc, Gσ^dec,
respectively, and none of the vertices of G occurs in more than 10k
many bags of Tσ^dec.
STEP 1: Show the main theorem for perfect decompositions
Climb the decomposition Tσ^inc of Gσ^inc bottom up and construct
partial cycle covers together with their weights;
to each node of Tσ^inc there correspond f(k) many types of partial
cycle covers; here a type represents the information about how
vertices occur in the cover; f only depends on k;
each time a vertex i disappears when climbing up in Tσ^inc, all
information about i given in Tσ^dec is incorporated; since i occurs in
at most 10 · k bags of Tσ^dec, this again results in at most f̃(k) many
types;
thus, i can be removed in both decompositions
STEP 2: Removing the perfectness assumption
Theorem
For M, GM, and a permutation σ with triangular tree-decomposition
(Tσ^inc, Tσ^dec) one can construct a new matrix M̃, a corresponding
graph G_M̃, and a permutation σ̃ such that
perm(M̃) = perm(M), G_M̃ is of bounded triangular tree-width
witnessed by σ̃, and (Tσ̃^inc, Tσ̃^dec) is a perfect triangular
decomposition.
Proof: The new graph is obtained by adding at most linearly many
vertices and edges in such a way that original vertices occurring
too often in bags of Tσ^dec are partially replaced by new ones;
the replacement can be done so that the cycle covers of the old
graph and those of the new graph are in a 1-1 correspondence
maintaining the weights.
Theorem
The Hamiltonian cycle decision problem is efficiently solvable for
families of matrices of bounded triangular tree-width k,
provided a corresponding tree-decomposition is given.
Proof: The main difference lies in removing the perfectness assumption;
here, the transformation leads to a slight modification of the HC
problem.
4. Conclusions
The triangular tree-width notion is tailored to the permanent problem;
are there other graph-related problems for which the result holds?
More generally: does it hold for all monadic second-order definable
problems?
We conjecture not.
Is triangular tree-width related to other parameters for directed
graphs?
Such parameters include, for example, directed tree-width (Johnson &
Robertson & Seymour & Thomas) and entanglement (Berwanger &
Grädel).