This document summarizes an investigation of using a dual-tree algorithm and space-partitioning trees to approximate matrix multiplication more efficiently than the naive O(MDN) approach under certain conditions. It presents an algorithm that organizes the row vectors of the left matrix and the column vectors of the right matrix into ball trees, then performs a dual-tree comparison to estimate the entries of the product matrix. For this to improve on the complexity of naive multiplication, the vectors must fall into clusters proportional to D^τ for some τ > 0. However, uniformly distributed vectors would result in exponentially small expected cluster sizes, limiting the practical applicability of this approach. Future work is needed to address this issue.
This document presents and compares three approximation methods for thin plate spline mappings that reduce the computational complexity from O(p^3) to O(m^3), where m is the size of a small subset of the p points. Method 1 uses only the subset of points to estimate the mapping. Method 2 uses the subset of basis functions with all target values. Method 3 approximates the full matrix using the Nyström method. Experiments on synthetic grids show Method 3 has the lowest error, followed by Method 2, with Method 1 having the highest error. The three methods trade off accuracy, computation time, and the ability to perform principal warp analysis.
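As a hedged illustration of the idea behind Method 3, the sketch below applies the Nyström approximation to a kernel matrix built from random 2-D points. The Gaussian kernel, point counts, and bandwidth here are stand-ins for illustration, not the thin plate spline setup of the paper (whose kernel is U(r) = r^2 log r).

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(size=(200, 2))        # p = 200 points
m = 40                                  # size of the landmark subset

def kernel(a, b):
    # Gaussian kernel as a stand-in for the thin plate spline kernel
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / 0.5)

K = kernel(pts, pts)                    # full p x p kernel matrix
idx = rng.choice(len(pts), m, replace=False)
K_pm = kernel(pts, pts[idx])            # p x m block
K_mm = K_pm[idx]                        # m x m block on the subset
# Nystrom approximation: K ~= K_pm K_mm^{-1} K_pm^T
K_nys = K_pm @ np.linalg.pinv(K_mm) @ K_pm.T
rel_err = np.linalg.norm(K - K_nys) / np.linalg.norm(K)
```

Only the p-by-m and m-by-m blocks are ever formed and inverted, which is where the O(m^3) cost comes from.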
Stability criterion of periodic oscillations in a (9) (Alexander Decker)
This document studies the fixed points and properties of the Duffing map, which is a 2-D discrete dynamical system. It finds that the Duffing map has three fixed points when a > b + 1. It divides the parameter space into six regions to determine whether the fixed points are attracting, repelling, or saddle points. It also determines the bifurcation points of the parameter space. Key properties of the Duffing map explored include that it is a diffeomorphism when b ≠ 0, its Jacobian is equal to b, and the eigenvalues of its derivative depend on a and b.
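The properties above can be checked numerically, assuming the common form of the Duffing map x' = y, y' = -b·x + a·y - y^3; the parameter values below are illustrative, not taken from the document.

```python
import math

# Duffing map, assuming the common form x' = y, y' = -b*x + a*y - y**3
a, b = 2.75, 0.2                 # illustrative parameters with a > b + 1

def duffing(x, y):
    return y, -b * x + a * y - y ** 3

# Jacobian J = [[0, 1], [-b, a - 3*y**2]], so det J = b everywhere:
def jacobian_det(y):
    return 0 * (a - 3 * y ** 2) - 1 * (-b)

# For a > b + 1 the nonzero fixed points sit at x = y = ±sqrt(a - b - 1):
p = math.sqrt(a - b - 1)
x_next, y_next = duffing(p, p)   # should map (p, p) back to itself
```

The constant Jacobian determinant b is what makes the map a diffeomorphism whenever b ≠ 0.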
This document summarizes a method called transplantation that can be used to show two planar domains have the same spectrum and are therefore isospectral. Transplantation takes a Dirichlet eigenfunction on one domain and constructs a corresponding eigenfunction on the other domain with the same eigenvalue. This is done by dividing the domains into congruent triangles and piecing together the restrictions of the eigenfunction in a way that satisfies continuity and boundary conditions. Numerical computation of the discretized Laplacian spectrum on sample isospectral domains verifies the transplanted eigenfunctions have identical eigenvalues, demonstrating the domains are isospectral.
Maximizing a Nonnegative, Monotone, Submodular Function Constrained to Matchings (sagark4)
This document summarizes a research paper about maximizing a nonnegative, monotone, submodular function subject to matching constraints. It shows that this constrained submodular maximization matching (CSM-Matching) problem is NP-hard by reducing the maximum k-cover problem to it. It then presents a 3-approximation greedy algorithm for CSM-Matching. It further reduces CSM-Matching to the problem of maximizing a submodular function over two matroids, for which there is a (2+ε)-approximation algorithm called LSV2. Using LSV2 as a black box, the document shows how to find a (4+ε)-approximate solution to CSM-Matching.
This document discusses the divide-and-conquer algorithm design strategy. It begins by defining divide-and-conquer as dividing a problem into smaller subproblems, solving those subproblems recursively, and combining the solutions. Examples covered include sorting algorithms like mergesort and quicksort, tree traversals, binary search, integer multiplication, matrix multiplication using Strassen's algorithm, and closest pair problems. Analysis techniques like recursion trees and the master theorem are introduced.
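Mergesort, listed above, is the textbook instance of the divide, recurse, combine pattern; a minimal sketch:

```python
# Mergesort: divide the list, sort the halves recursively, then
# combine by merging two sorted halves in linear time.
def mergesort(a):
    if len(a) <= 1:                     # base case: trivially sorted
        return a
    mid = len(a) // 2
    left, right = mergesort(a[:mid]), mergesort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```

The recurrence T(n) = 2T(n/2) + O(n) resolves to O(n log n) by the master theorem mentioned above.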
Graphical methods for 2-D heat transfer (Arun Sarasan)
This document discusses numerical methods for solving two-dimensional heat transfer problems. It begins by explaining that analytical solutions are often not available for modern engineering problems due to complex geometries and boundary conditions. Numerical methods using computers can provide useful approximate solutions. The finite difference method and finite element method are introduced as two common numerical techniques. The finite difference method involves discretizing the domain into a nodal network and deriving finite difference approximations of the governing heat equation at each node to develop a system of algebraic equations that can be solved numerically. Iterative methods like Jacobi and Gauss-Seidel are often used to solve large systems of equations. The document provides examples of applying these concepts to model heat conduction problems.
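A minimal sketch of the Jacobi iteration described above, for steady-state conduction on a square plate: each interior node is repeatedly replaced by the average of its four neighbours (the five-point finite difference stencil for the Laplace equation). The grid size, iteration count, and boundary temperatures are illustrative.

```python
import numpy as np

n = 20
T = np.zeros((n, n))
T[0, :] = 100.0                  # fixed temperature on the top edge

for _ in range(2000):
    T_new = T.copy()
    # interior node = average of its four neighbours
    T_new[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1]
                                + T[1:-1, :-2] + T[1:-1, 2:])
    T = T_new
```

Gauss-Seidel differs only in using updated values as soon as they are available, which typically halves the iteration count.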
New data structures and algorithms for post-processing large data sets and ... (Alexander Litvinenko)
In this work, we describe advanced numerical tools for working with multivariate functions and for the analysis of large data sets. These tools drastically reduce the required computing time and storage cost and therefore allow us to consider much larger data sets or finer meshes. Covariance matrices are crucial in spatio-temporal statistical tasks, but are often very expensive to compute and store, especially in 3D. Therefore, we approximate covariance functions by cheap surrogates in a low-rank tensor format. We apply the Tucker and canonical tensor decompositions to a family of Matérn- and Slater-type functions with varying parameters and demonstrate numerically that their approximations exhibit exponentially fast convergence. We prove the exponential convergence of the Tucker and canonical approximations in the tensor rank parameters. Several statistical operations are performed in this low-rank tensor format, including evaluating the conditional covariance matrix, the spatially averaged estimation variance, and computing a quadratic form, determinant, trace, log-likelihood, inverse, and Cholesky decomposition of a large covariance matrix. Low-rank tensor approximations reduce the computing and storage costs substantially. For example, the storage cost is reduced from an exponential O(n^d) to a linear scaling O(drn), where d is the spatial dimension, n is the number of mesh points in one direction, and r is the tensor rank. Prerequisites for applicability of the proposed techniques are the assumptions that the data locations and measurements lie on a tensor (axes-parallel) grid and that the covariance function depends on a distance, ...
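The storage reduction from n^d entries to d·r·n can be made concrete with a quick count; the example values of n, d, and r below are illustrative.

```python
# Storage comparison for a covariance object on a d-dimensional tensor
# grid with n points per direction: the dense representation needs n**d
# entries, a rank-r canonical tensor format only d * r * n.
def dense_storage(n, d):
    return n ** d

def tensor_storage(n, d, r):
    return d * r * n

n, d, r = 100, 3, 10
ratio = dense_storage(n, d) / tensor_storage(n, d, r)
```

For n = 100, d = 3, r = 10, the dense object needs a million entries versus three thousand in tensor format.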
This document summarizes a paper that presents new algorithms for solving the cyclic order-preserving assignment problem (COPAP) and a related sub-problem, the linear order-preserving assignment problem (LOPAP). It introduces a new point-assignment cost function called the Procrustean local shape distance (PLSD) and explores heuristics for using the A* search algorithm to more efficiently solve COPAP and LOPAP. Experimental results on the MPEG-7 shape dataset are presented and recommendations are made for solving COPAP/LOPAP in practice.
This document introduces expander graphs and summarizes Kolmogorov and Barzdin's proof on realizing networks in three-dimensional space, which was one of the first examples of expander graphs. It defines expander graphs as sparsely populated but well-connected graphs. Kolmogorov and Barzdin constructed random graphs with properties equivalent to expander graphs and used these properties to prove that most networks can be realized in a sphere with volume proportional to the number of vertices. Their proof established lower and upper bounds on the volume needed for realization and handled both bounded and unbounded branching networks.
The document discusses using the renormalization group to build a "tower" of connected field theories across dimensions, starting from known 2-dimensional conformal field theories. It focuses on analyzing the O(N) × O(m) Landau-Ginzburg-Wilson model in 6 dimensions, which is connected to the 4D theory through a Wilson-Fisher fixed point. Perturbative calculations and the large N expansion are used to calculate critical exponents and check that the theories are in the same universality class. Studying higher dimensional theories could provide insight into physics beyond the Standard Model.
This document summarizes the results of an analysis of the time, parameters, and efficiency of the dual-sieving lattice reduction algorithm. The analysis found that the running time grows exponentially with dimension and radius factor. For dimensions over 40, a radius factor of 1.07 performed better than 1.06. The average list length decreased quickly with each merge step due to vectors not satisfying the lattice and radius conditions. Decreasing the alpha parameter could help address this. For high dimensions, a symmetric merge tree with alpha values of 1.11-1.12 and radius factors of 1.01-1.09 were used. The number of duplicates decreased with each merge step for low dimensions.
IRJET: Common Fixed Point Results in Menger Spaces (IRJET Journal)
This document presents a common fixed point theorem for five self-maps on a complete Menger space. The theorem proves that if the maps satisfy certain conditions, including being continuous, having a compatibility property, and satisfying a contraction condition, then the maps have a common fixed point. The conditions and proof involve the use of probabilistic distances, triangular norms, Cauchy sequences, and limits in Menger spaces. The theorem generalizes prior results on common fixed points and provides a way to establish the existence of solutions to equations involving multiple operators.
Numerical Solution of Diffusion Equation by Finite Difference Method (iosrjce)
IOSR Journal of Mathematics (IOSR-JM) is a double-blind peer-reviewed International Journal that provides rapid publication (within a month) of articles in all areas of mathematics and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in mathematics. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
ENHANCEMENT OF TRANSMISSION RANGE ASSIGNMENT FOR CLUSTERED WIRELESS SENSOR NE... (IJCNCJournal)
Transmitter range assignment in clustered wireless networks governs the trade-off between energy conservation and the connectivity needed to deliver data to the sink or gateway node. The aim of this research is to reduce energy consumption by shrinking the transmission ranges of the nodes while maintaining a high probability of end-to-end connectivity to the network's data sink. We modified the approach given in [1] to achieve more than 25% power saving by reducing the cluster head (CH) transmission range of the backbone nodes in a multihop wireless sensor network, while ensuring at least 95% end-to-end connectivity probability.
This document summarizes a paper that analyzes compressive sampling (CS) for compressing and reconstructing electrocardiogram (ECG) signals using l1 minimization algorithms. It proposes remodeling the linear program problem into a second order cone program to improve performance metrics like percent root-mean-squared difference, compression ratio, and signal-to-noise ratio when reconstructing ECG signals from the PhysioNet database. The paper provides an overview of CS theory and l1 minimization algorithms, describes the proposed approach of using quadratic constraints, and defines performance metrics for analyzing reconstructed ECG signals.
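The l1-minimization step at the core of CS reconstruction can be sketched as a linear program. This is not the paper's second order cone program; it is only the baseline basis pursuit formulation (min ||x||_1 subject to Ax = b) recast via the split x = u - v with u, v ≥ 0, on a synthetic sparse signal rather than ECG data.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 30, 60                             # 30 measurements of a length-60 signal
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[3, 17, 41]] = [1.5, -2.0, 0.7]    # sparse "signal" to recover
b = A @ x_true

# min sum(u) + sum(v)  s.t.  A(u - v) = b,  u, v >= 0
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
```

The SOCP variant in the paper replaces the equality constraint with a quadratic (noise-aware) constraint ||Ax - b||_2 ≤ ε.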
The document discusses the divide and conquer algorithm design paradigm. It begins by defining divide and conquer as recursively breaking down a problem into smaller sub-problems, solving the sub-problems, and then combining the solutions to solve the original problem. Some examples of problems that can be solved using divide and conquer include binary search, quicksort, merge sort, and the fast Fourier transform algorithm. The document then discusses control abstraction, efficiency analysis, and uses divide and conquer to provide algorithms for large integer multiplication and merge sort. It concludes by defining the convex hull problem and providing an example input and output.
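The large integer multiplication mentioned above is usually presented as Karatsuba's algorithm, which replaces four recursive half-size products with three; a minimal sketch:

```python
# Karatsuba multiplication: split each number into high and low halves,
# compute three recursive products, and recombine the cross terms.
def karatsuba(x, y):
    if x < 10 or y < 10:                          # base case: single digit
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    xh, xl = divmod(x, 10 ** m)
    yh, yl = divmod(y, 10 ** m)
    z0 = karatsuba(xl, yl)
    z2 = karatsuba(xh, yh)
    z1 = karatsuba(xl + xh, yl + yh) - z0 - z2    # cross terms from one product
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0
```

Three subproblems of half size give the recurrence T(n) = 3T(n/2) + O(n), i.e. O(n^log2(3)) instead of O(n^2).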
The document discusses minimum spanning trees (MSTs). It defines MSTs and provides examples of applications like wiring electronic circuits. It then describes two common algorithms for finding MSTs: Kruskal's algorithm and Prim's algorithm. Kruskal's algorithm finds MSTs by sorting edges by weight and adding edges that connect different components without creating cycles. Prim's algorithm grows an MST from a single vertex by always adding the lowest-weight edge connecting a vertex to the growing tree.
I am Mathew K. I am a Planetary Science Assignment Expert at eduassignmenthelp.com. I hold a Master's Degree in Planetary Science from The University of Chicago, USA. I have been helping students with their homework for the past 8 years. I solve assignments related to Planetary Sciences.
Visit eduassignmenthelp.com or email info@eduassignmenthelp.com.
You can also call on +1 678 648 4277 for any assistance with Planetary Science Assignments.
Chapter summary and solutions to end-of-chapter exercises for "Data Visualization: Principles and Practice" book by Alexandru C. Telea
The chapter provides an overview of a number of methods for visualizing tensor data. It explains principal component analysis as a technique used to process a tensor matrix and extract from it information that can be used directly in its visualization; this analysis forms a fundamental part of many tensor data processing and visualization algorithms. Section 7.4 shows how the results of the principal component analysis can be visualized using simple color-mapping techniques. The remaining parts of the chapter explain how the same data can be visualized using tensor glyphs and streamline-like visualization techniques.
In contrast to Slicer, which is a more general framework for analyzing and visualizing 3D slice-based data volumes, the Diffusion Toolkit focuses on DT-MRI datasets, and thus offers more extensive and easier to use options for fiber tracking.
The document discusses the Fundamental Theorem of Calculus, which states that differentiation and integration are inverse operations. It provides examples of using the theorem to evaluate definite integrals and find the average value of functions. The Second Fundamental Theorem of Calculus describes integrals as accumulation functions and relates the integral and derivative. The Net Change Theorem applies the Fundamental Theorem to relate the net change in a function to its integral over an interval. Examples demonstrate using these theorems to solve problems in physics, chemistry, and particle motion.
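The theorem's statement can be checked numerically on a simple case: the definite integral of f(x) = x^2 on [0, 2] equals F(2) - F(0) for the antiderivative F(x) = x^3/3, and a Riemann sum approaches the same value.

```python
# Fundamental Theorem of Calculus on f(x) = x**2 over [0, 2]
def f(x):
    return x ** 2

def F(x):                        # antiderivative of f
    return x ** 3 / 3

exact = F(2) - F(0)              # = 8/3 by the FTC
N = 100_000
dx = 2 / N
riemann = sum(f(i * dx) * dx for i in range(N))   # left Riemann sum
avg_value = exact / (2 - 0)      # average value of f on [0, 2]
```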
1) The document presents a new co-clustering framework called Block Value Decomposition (BVD) for dyadic data. BVD factorizes a data matrix into three components: a row coefficient matrix, a block value matrix, and a column coefficient matrix.
2) An algorithm for non-negative BVD (NBVD) is derived based on minimizing the reconstruction error between the original and reconstructed matrices. The algorithm iteratively updates the three matrices using equations derived from Kuhn-Tucker conditions.
3) Empirical evaluations on text clustering datasets show NBVD achieves high clustering accuracy that is competitive with or better than other co-clustering algorithms.
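The iterative updates in point 2 can be sketched as multiplicative updates of the usual NMF ratio-of-gradients pattern for the tri-factorization X ≈ RBC with nonnegative factors. The matrix sizes, cluster counts, and random data below are illustrative, not from the paper's text experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 15))               # data matrix (rows x columns)
k, l = 4, 3                            # row and column cluster counts
R = rng.random((20, k)) + 0.1          # row coefficient matrix
B = rng.random((k, l)) + 0.1           # block value matrix
C = rng.random((l, 15)) + 0.1          # column coefficient matrix
eps = 1e-9                             # guards against division by zero

err0 = np.linalg.norm(X - R @ B @ C)
for _ in range(200):
    # multiplicative updates keep the factors nonnegative
    R *= (X @ C.T @ B.T) / (R @ B @ C @ C.T @ B.T + eps)
    B *= (R.T @ X @ C.T) / (R.T @ R @ B @ C @ C.T + eps)
    C *= (B.T @ R.T @ X) / (B.T @ R.T @ R @ B @ C + eps)
err1 = np.linalg.norm(X - R @ B @ C)
```

Each update scales an entry by the ratio of the positive and negative parts of the gradient, so the reconstruction error is non-increasing.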
This document contains a summary of a lecture on graph analytics and complexity by Dr. Animesh Chaturvedi. It includes questions and answers on graph algorithms like minimum spanning tree (MST), single-source shortest path (SSSP) problems, and the Agrawal–Kayal–Saxena primality test. Sample algorithms are provided to calculate the average MST and average SSSP of multiple graphs by combining the graphs and running standard algorithms. The document closes with thank-you messages in English and other languages.
The document discusses several papers related to quadric error metrics and mesh simplification. It summarizes techniques for using quadric error metrics to collapse vertices and preserve attributes, boundaries, and volumes during simplification. Methods include the memoryless approach of accumulating quadrics on local neighborhoods rather than the full mesh, and solving for optimal positions using constraints or optimization functions.
Matrices can be used to represent 2D geometric transformations such as translation, scaling, and rotation. Translations are represented by adding a translation vector to coordinates. Scaling is represented by multiplying coordinates by scaling factors. Rotations are represented by premultiplying coordinates with a rotation matrix. Multiple transformations can be combined by multiplying their respective matrices. Using homogeneous coordinates allows all transformations to be uniformly represented as matrix multiplications.
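The uniformity that homogeneous coordinates provide can be shown directly: with 3x3 matrices, even translation becomes a matrix product, so a rotation followed by a translation composes into a single matrix.

```python
import numpy as np

# 2-D transformations in homogeneous coordinates (3x3 matrices)
def translation(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], float)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# compose: rotate by 90 degrees, then translate by (2, 1)
M = translation(2, 1) @ rotation(np.pi / 2)
p = np.array([1, 0, 1])          # point (1, 0) in homogeneous form
q = M @ p                        # rotate to (0, 1), translate to (2, 2)
```

Note the order: the matrix nearest the point is applied first.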
The document discusses relations and their representations. It defines a binary relation as a subset of A×B where A and B are nonempty sets. Relations can be represented using arrow diagrams, directed graphs, and zero-one matrices. A directed graph represents the elements of A as vertices and draws an edge from vertex a to b if aRb. The zero-one matrix representation assigns 1 to the entry in row a and column b if (a,b) is in the relation, and 0 otherwise. The document also discusses indegrees, outdegrees, composite relations, and properties of relations like reflexivity.
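The zero-one matrix representation above also makes composites mechanical: the matrix of a composite relation is the Boolean product of the component matrices. The sets and relations below are made up for illustration.

```python
# Zero-one matrices for relations on A = {1, 2, 3}
A = [1, 2, 3]
R = {(1, 2), (2, 3), (3, 1)}
S = {(2, 2), (3, 1)}

def to_matrix(rel):
    # entry [a][b] is 1 iff (a, b) is in the relation
    return [[1 if (a, b) in rel else 0 for b in A] for a in A]

def boolean_product(M, N):
    n = len(A)
    return [[1 if any(M[i][k] and N[k][j] for k in range(n)) else 0
             for j in range(n)] for i in range(n)]

M_R, M_S = to_matrix(R), to_matrix(S)
M_SR = boolean_product(M_R, M_S)   # matrix of the composite S o R
```

Here S∘R contains (a, c) exactly when some b satisfies aRb and bSc, so S∘R = {(1, 2), (2, 1)}.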
This document discusses minimum spanning trees. It defines a minimum spanning tree as a spanning tree of a connected, undirected graph that has a minimum total cost among all spanning trees of that graph. The document provides properties of minimum spanning trees, including that they are acyclic, connect all vertices, and have n-1 edges for a graph with n vertices. Applications of minimum spanning trees mentioned include communication networks, power grids, and laying telephone wires to minimize total length.
Prim's and Kruskal's algorithms are two common approaches for finding a minimum spanning tree (MST) in a weighted, undirected graph. Prim's algorithm grows the MST from a single starting vertex by iteratively adding the lowest-cost edge that connects the MST to another vertex. Kruskal's algorithm considers all edges in order of weight and adds an edge to the MST if it does not form a cycle. Both run in O(E log E) time, where E is the number of edges, and both are greedy algorithms that produce an optimal MST.
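Kruskal's edge-by-edge scan can be sketched with a union-find structure for the cycle check; the small example graph below is made up for illustration.

```python
# Kruskal's algorithm: scan edges by increasing weight, accept an edge
# only when its endpoints lie in different union-find components.
def kruskal(n, edges):
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):           # edges as (weight, u, v)
        ru, rv = find(u), find(v)
        if ru != rv:                        # accepting it creates no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
mst, total = kruskal(4, edges)
```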
This document provides an overview of mathematical morphology and its applications to image processing. It discusses basic concepts like dilation, erosion, opening, closing and their properties. It also covers algorithms for tasks like boundary extraction, region filling, thinning and skeletonization. Grayscale morphology is introduced, including dilation, erosion and other operations on grayscale images. Some common applications are described, such as morphological smoothing, gradient calculation, top-hat transforms and textural segmentation.
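The basic operations above can be sketched directly with array shifts: binary dilation takes the OR over a 3x3 neighbourhood and erosion the AND, which corresponds to a 3x3 square structuring element (the element choice here is an illustrative assumption).

```python
import numpy as np

def shifts(img):
    # all nine translates of the image within a 3x3 window
    padded = np.pad(img, 1)
    return [padded[1 + di:1 + di + img.shape[0],
                   1 + dj:1 + dj + img.shape[1]]
            for di in (-1, 0, 1) for dj in (-1, 0, 1)]

def dilate(img):
    # pixel is set if ANY neighbour in the 3x3 window is set
    return np.logical_or.reduce(shifts(img))

def erode(img):
    # pixel is set only if ALL neighbours in the 3x3 window are set
    return np.logical_and.reduce(shifts(img))

img = np.zeros((7, 7), bool)
img[2:5, 2:5] = True             # a 3x3 square of foreground
```

Opening and closing follow as erode-then-dilate and dilate-then-erode, and the morphological gradient is the difference between dilation and erosion.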
The document covers the basic concepts of production and the firm. It explains that the firm is the basic unit of production and that its main objective is to earn profits. It also describes the different economic sectors: primary, secondary, and tertiary. Finally, it introduces the concept of the production possibilities frontier and how it represents the maximum output an economy can achieve with its available resources and technology.
This document introduces expander graphs and summarizes Kolmogorov and Barzdin's proof on realizing networks in three-dimensional space, which was one of the first examples of expander graphs. It defines expander graphs as sparsely populated but well-connected graphs. Kolmogorov and Barzdin constructed random graphs with properties equivalent to expander graphs and used these properties to prove that most networks can be realized in a sphere with volume proportional to the number of vertices. Their proof established lower and upper bounds on the volume needed for realization and handled both bounded and unbounded branching networks.
The document discusses using the renormalization group to build a "tower" of connected field theories across dimensions, starting from known 2-dimensional conformal field theories. It focuses on analyzing the O(N) x O(m) Landau-Ginzburg-Wilson model in 6 dimensions, which is connected to the 4D theory through a Wilson-Fisher fixed point. Perturbative calculations and the large N expansion are used to calculate critical exponents and check that the theories are in the same universality class. Studying higher dimensional theories could provide insight into physics beyond the Standard Model.
This document summarizes the results of an analysis of the time, parameters, and efficiency of the dual-sieving lattice reduction algorithm. The analysis found that the running time grows exponentially with dimension and radius factor. For dimensions over 40, a radius factor of 1.07 performed better than 1.06. The average list length decreased quickly with each merge step due to vectors not satisfying the lattice and radius conditions. Decreasing the alpha parameter could help address this. For high dimensions, a symmetric merge tree with alpha values of 1.11-1.12 and radius factors of 1.01-1.09 were used. The number of duplicates decreased with each merge step for low dimensions.
IRJET- Common Fixed Point Results in Menger SpacesIRJET Journal
This document presents a common fixed point theorem for five self-maps on a complete Menger space. The theorem proves that if the maps satisfy certain conditions, including being continuous, having a compatibility property, and satisfying a contraction condition, then the maps have a common fixed point. The conditions and proof involve the use of probabilistic distances, triangular norms, Cauchy sequences, and limits in Menger spaces. The theorem generalizes prior results on common fixed points and provides a way to establish the existence of solutions to equations involving multiple operators.
Numerical Solution of Diffusion Equation by Finite Difference Methodiosrjce
IOSR Journal of Mathematics(IOSR-JM) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of mathemetics and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in mathematics. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
ENHANCEMENT OF TRANSMISSION RANGE ASSIGNMENT FOR CLUSTERED WIRELESS SENSOR NE...IJCNCJournal
Transmitter range assignment in clustered wireless networks is the bottleneck of the balance between
energy conservation and the connectivity to deliver data to the sink or gateway node. The aim of this
research is to optimize the energy consumption through reducing the transmission ranges of the nodes,
while maintaining high probability to have end-to-end connectivity to the network’s data sink. We modified
the approach given in [1] to achieve more than 25% power saving through reducing cluster head (CH)
transmission range of the backbone nodes in a multihop wireless sensor network with ensuring at least
95% end-to-end connectivity probability.
This document summarizes a paper that analyzes compressive sampling (CS) for compressing and reconstructing electrocardiogram (ECG) signals using l1 minimization algorithms. It proposes remodeling the linear program problem into a second order cone program to improve performance metrics like percent root-mean-squared difference, compression ratio, and signal-to-noise ratio when reconstructing ECG signals from the PhysioNet database. The paper provides an overview of CS theory and l1 minimization algorithms, describes the proposed approach of using quadratic constraints, and defines performance metrics for analyzing reconstructed ECG signals.
The document discusses the divide and conquer algorithm design paradigm. It begins by defining divide and conquer as recursively breaking down a problem into smaller sub-problems, solving the sub-problems, and then combining the solutions to solve the original problem. Some examples of problems that can be solved using divide and conquer include binary search, quicksort, merge sort, and the fast Fourier transform algorithm. The document then discusses control abstraction, efficiency analysis, and uses divide and conquer to provide algorithms for large integer multiplication and merge sort. It concludes by defining the convex hull problem and providing an example input and output.
The document discusses minimum spanning trees (MSTs). It defines MSTs and provides examples of applications like wiring electronic circuits. It then describes two common algorithms for finding MSTs: Kruskal's algorithm and Prim's algorithm. Kruskal's algorithm finds MSTs by sorting edges by weight and adding edges that connect different components without creating cycles. Prim's algorithm grows an MST from a single vertex by always adding the lowest-weight edge connecting a vertex to the growing tree.
Chapter summary and solutions to end-of-chapter exercises for "Data Visualization: Principles and Practice" book by Alexandru C. Telea
The chapter provides an overview of a number of methods for visualizing tensor data. It explains principal component analysis as a technique used to process a tensor matrix and extract from it information that can be used directly in its visualization; this forms a fundamental part of many tensor data processing and visualization algorithms. Section 7.4 shows how the results of the principal component analysis can be visualized using simple color-mapping techniques. The next parts of the chapter explain how the same data can be visualized using tensor glyphs and streamline-like visualization techniques.
In contrast to Slicer, which is a more general framework for analyzing and visualizing 3D slice-based data volumes, the Diffusion Toolkit focuses on DT-MRI datasets, and thus offers more extensive and easier to use options for fiber tracking.
The document discusses the Fundamental Theorem of Calculus, which states that differentiation and integration are inverse operations. It provides examples of using the theorem to evaluate definite integrals and find the average value of functions. The Second Fundamental Theorem of Calculus describes integrals as accumulation functions and relates the integral and derivative. The Net Change Theorem applies the Fundamental Theorem to relate the net change in a function to its integral over an interval. Examples demonstrate using these theorems to solve problems in physics, chemistry, and particle motion.
1) The document presents a new co-clustering framework called Block Value Decomposition (BVD) for dyadic data. BVD factorizes a data matrix into three components: a row coefficient matrix, a block value matrix, and a column coefficient matrix.
2) An algorithm for non-negative BVD (NBVD) is derived based on minimizing the reconstruction error between the original and reconstructed matrices. The algorithm iteratively updates the three matrices using equations derived from Kuhn-Tucker conditions.
3) Empirical evaluations on text clustering datasets show NBVD achieves high clustering accuracy that is competitive with or better than other co-clustering algorithms.
This document contains a summary of a lecture on graph analytics and complexity by Dr. Animesh Chaturvedi. It includes questions and answers on graph algorithms such as the minimum spanning tree (MST) and single-source shortest path (SSSP) problems, and the Agrawal–Kayal–Saxena primality test. Sample algorithms are provided to calculate the average MST and average SSSP of multiple graphs by combining the graphs and running standard algorithms. The document closes with thank-you messages in English and other languages.
The document discusses several papers related to quadric error metrics and mesh simplification. It summarizes techniques for using quadric error metrics to collapse vertices and preserve attributes, boundaries, and volumes during simplification. Methods include the memoryless approach of accumulating quadrics on local neighborhoods rather than the full mesh, and solving for optimal positions using constraints or optimization functions.
Matrices can be used to represent 2D geometric transformations such as translation, scaling, and rotation. Translations are represented by adding a translation vector to coordinates. Scaling is represented by multiplying coordinates by scaling factors. Rotations are represented by premultiplying coordinates with a rotation matrix. Multiple transformations can be combined by multiplying their respective matrices. Using homogeneous coordinates allows all transformations to be uniformly represented as matrix multiplications.
The document discusses relations and their representations. It defines a binary relation as a subset of A×B where A and B are nonempty sets. Relations can be represented using arrow diagrams, directed graphs, and zero-one matrices. A directed graph represents the elements of A as vertices and draws an edge from vertex a to b if aRb. The zero-one matrix representation assigns 1 to the entry in row a and column b if (a,b) is in the relation, and 0 otherwise. The document also discusses indegrees, outdegrees, composite relations, and properties of relations like reflexivity.
This document discusses minimum spanning trees. It defines a minimum spanning tree as a spanning tree of a connected, undirected graph that has a minimum total cost among all spanning trees of that graph. The document provides properties of minimum spanning trees, including that they are acyclic, connect all vertices, and have n-1 edges for a graph with n vertices. Applications of minimum spanning trees mentioned include communication networks, power grids, and laying telephone wires to minimize total length.
Prim's and Kruskal's algorithms are two common approaches for finding a minimum spanning tree (MST) in a weighted, undirected graph. Prim's algorithm grows the MST from a single starting vertex by iteratively adding the lowest-cost edge that connects the MST to another vertex. Kruskal's algorithm considers all edges in order of weight and adds edges to the MST if they do not form cycles. Both run in O(E log E) time, where E is the number of edges, and are greedy algorithms that work to find the optimal MST solution.
This document provides an overview of mathematical morphology and its applications to image processing. It discusses basic concepts like dilation, erosion, opening, closing and their properties. It also covers algorithms for tasks like boundary extraction, region filling, thinning and skeletonization. Grayscale morphology is introduced, including dilation, erosion and other operations on grayscale images. Some common applications are described, such as morphological smoothing, gradient calculation, top-hat transforms and textural segmentation.
The document discusses the basic concepts of production and the firm. It explains that the firm is the basic unit of production and that its main objective is to earn profits. It also describes the different economic sectors: primary, secondary, and tertiary. Finally, it introduces the concept of the production possibilities frontier and how it represents the maximum output an economy can achieve with its available resources and technology.
Mudasir Hussain is seeking a role as a deck cadet or trainee navigating officer. He has an Associate's degree in ship management from Pakistan Marine Academy with a CGPA of 2.5. He has completed several safety and first aid certification courses. He is fluent in English, Urdu and Pashto and has experience with Microsoft Office. His objective is to find a challenging career opportunity in a progressive organization that offers a congenial work environment.
This document is a copyright registration record from 1991 for a computer program called "HyperMail" created by Jonathan Mark L'Homme. It lists Jonathan Mark L'Homme as the copyright claimant for the program, which was registered as a computer file. The program was created in 1991 and published on August 5, 1991.
Amer Butt has over 15 years of experience managing grants and ensuring financial compliance for projects funded by international donors. He is currently the Grants & Compliance Coordinator at DAI Pakistan Pvt. Ltd., where he manages grants for a project funded by the UK Foreign and Commonwealth Office. Previously, he held grants management roles at MEDA Pakistan and IUCN – The World Conservation Union, implementing projects funded by USAID, UNDP, CIDA, and other donors. Amer Butt has a Master's degree in Business Administration and a Bachelor's degree in Commerce, and is proficient in grants management, budgeting, auditing, and Microsoft Excel.
1) The document proposes open workshops organized by collectives, with different activities such as debates, workshops, and talks over 2 weeks.
2) The collectives would be formed by collaborators according to their interests, and the workshops would address the needs of the core groups.
3) There would be on average 4 meetings per month, including workshops, preparation, and an integration meeting of all the collectives.
Christmas is a happy time to share with family and friends. People gather, exchange gifts, and enjoy each other's company. Christmas brings joy to many hearts.
Fast, easy usability tricks for big product improvements - Chris Nodder
Take one week to set a product vision and high level design that the whole team understands and uses to plan and build the product.
1. Find some users to watch
2. Interpret what they tell you without bias
3. Create actionable product ideas
4. Turn your ideas into designs
5. User test your designs
…all before you even start coding!
Find more at questionablemethods.com
Presentation given at GOTO Copenhagen 2012
The document provides guidelines for training an online sales team in customer relationships, quality content, and monitoring. It covers the importance of building bonds with customers, using different media, and responding quickly to criticism.
Neurons are nerve cells specialized in receiving stimuli and conducting nerve impulses. They can be classified as monopolar, bipolar, or multipolar depending on the number of processes, and as sensory neurons, motor neurons, or interneurons depending on their function. Axoplasmic transport is important for moving organelles and metabolites along the axon, and can occur in the anterograde or retrograde direction. Axonal and Wallerian degeneration occur when the axon is damaged.
On 21, 22 and 23 April 1978, the first congress of our learned society was held in Compiègne. A Friday, a Saturday and a Sunday… The issue of the Inforcom newsletter produced for that 1st congress was part of the goodie bag of the 20th congress's attendees. Among the first 150 attendees were Anne-Marie Laulan and Bernard Miège. They were present at our 20th Congress, available to everyone, of course, to recall our beginnings and our debates, just as they are available on the Board of Directors of our learned society to contribute ideas and to bear witness to what makes up our "SIC culture" and our "Sfsic culture within the SIC".
Evil By Design: Leading Customers Into Temptation (SXSW Version) - Chris Nodder
Draft copy of my talk for South By South West 2014: "Evil By Design: Leading Customers into Temptation"
I'm going to show you how to make your customers happy using techniques that are normally employed by the lowest of the low; con-men, pick-up artists, used car salesmen, and as-seen-on-tv marketers.
You'll learn that it's OK to deceive people, and that in fact we do it all the time already. Using examples from the web and the real world, I'll show that not all deception is bad. In fact, there's a continuum from evil, through commercial and motivational to charitable deception. Customers themselves are often complicit in the deception - they *want* to be deceived!
Rather than the futile task of trying not to deceive, instead ask what your motives are for deceiving. Learn the design patterns, notice when they are being used against you, and most of all, when you design products make sure you are deceiving for good, not for evil.
This is a lighthearted yet serious dissection of dark patterns with a focus on the ethics of persuasive design and practical tips for delighting customers.
This document contains solutions to several problems involving vector calculus and partial differential equations.
For problem 1, key points include: deriving an identity involving curl and dot products; showing that curl is self-adjoint under certain boundary conditions where the vector field is parallel to the boundary normal; and explaining how Maxwell's equations with these boundary conditions would produce oscillating electromagnetic wave solutions.
Problem 2 involves solving the eigenproblem for the Laplacian in an annular region using separation of variables. Continuity conditions at the inner and outer radii lead to a transcendental equation determining the eigenvalues.
Problem 3 examines eigenproblems for the Laplacian and curl operators, showing they are self-adjoint and obtaining matrix and finite difference
This document analyzes Bernstein's proposed circuit-based approach for the matrix step of the number field sieve integer factorization method. It finds that Bernstein overestimated the improvement in factoring larger integers, which would be a factor of 1.17 larger rather than 3.01 as claimed. The document also proposes an improved circuit design based on a new mesh routing algorithm. It estimates that for 1024-bit RSA, the matrix step could be completed in a day using a few thousand dollars of custom hardware, but that the relation collection step still determines the practical security of RSA.
This document describes a new algorithm for dual tree kernel conditional density estimation (KCDE) that provides fast and accurate density predictions. The algorithm extends previous work on univariate KCDE to allow for multivariate labels (Y) and conditioning variables (X). It applies Gray's dual tree approach separately to the numerator and denominator of the KCDE formula, and uses error bounds to ensure the quotient estimates have bounded relative error. This new algorithm provides the fastest known method for kernel conditional density estimation for prediction tasks.
This document discusses subspace clustering with missing data. It summarizes two algorithms for solving this problem: 1) an EM-type algorithm that formulates the problem probabilistically and iteratively estimates the subspace parameters using an EM approach. 2) A k-means form algorithm called k-GROUSE that alternates between assigning vectors to subspaces based on projection residuals and updating each subspace using incremental gradient descent on the Grassmannian manifold. It also discusses the sampling complexity results from a recent paper, showing subspace clustering is possible without an impractically large sample size.
This document presents results from a lattice QCD calculation of the proton isovector scalar charge (gs) at two light quark masses. The calculation uses domain-wall fermions and Iwasaki gauge actions on a 32³ × 64 lattice with a spacing of 0.144 fm. Ratios of three-point to two-point correlation functions are formed and fit to a plateau to extract gs. Values of gs are obtained for quark masses of 0.0042 and 0.001, and all-mode averaging is used for the lighter mass. Chiral perturbation theory will be used to extrapolate gs to the physical quark mass. Preliminary results for gs at the unphysical quark masses are reported in lattice units.
This document discusses algorithms for computing the Kolmogorov-Smirnov distribution, which is used to measure goodness of fit between empirical and theoretical distributions. It describes existing algorithms that are either fast but unreliable or precise but slow. The authors propose a new algorithm that uses different approximations depending on sample size and test statistic value, to provide fast and reliable computation of both the distribution and its complement. Their C program implements this approach using multiple methods, including an exact but slow recursion formula and faster but less precise approximations for large samples.
This document discusses three problems related to partial differential equations:
1) Finding eigenfunctions and eigenvalues for an operator and bounding the maximum value of a Rayleigh quotient.
2) Solving the Laplacian eigenproblem in a cylinder using finite differences and comparing to analytical solutions.
3) Solving the wave equation for a vibrating string driven by an oscillating force and deriving the Green's function.
The security of the RSA algorithm depends on the difficulty of factoring large numbers. The best known factoring algorithms are trial division, Dixon's algorithm, the quadratic sieve, and the number field sieve. The quadratic sieve and number field sieve are parallelizable algorithms that improve on Dixon's algorithm by using a "sieving" technique to more efficiently find relations between factors. While factoring performance improves incrementally over time, a large key size (over 300 bits) is still considered secure against the best known factoring methods.
This document contains information from a class presentation on computing elements of one-dimensional arrays and recursively defined sequences. It discusses how to count the elements of an array that is cut in the middle, the probability of an element having an even or odd subscript, computing terms of a recursively defined sequence, and properties of relations. The presentation was given by 5 students to their instructor for their BS in Computer Science class from 2013-2017.
This document discusses algorithms for solving the point in polygon problem for arbitrary polygons. It presents two main concepts: the even-odd rule and the winding number rule. It shows that both concepts are closely related and can be based on determining the winding number. The document derives an incremental angle algorithm for computing the winding number and modifies it to accelerate the computation and handle special cases. It compares the resulting winding number algorithm to those found in literature.
This document discusses different techniques for image segmentation, which is the process of partitioning an image into meaningful regions or objects. It covers several main methods of region segmentation, including region growing, clustering, and split-and-merge. It also discusses techniques for finding line and curve segments in an image, such as using the Hough transform or edge tracking procedures. Finally, it provides examples of applying these segmentation techniques to extract regions, straight lines, and circles from images.
The document describes a data structure called a Compact Dynamic Rewritable Array (CDRW) that compactly stores arrays where each entry can be dynamically rewritten. It supports creating an array of size N where each entry is initially 0 bits, setting an entry to a value of at most k bits, and getting an entry's value. The goal is to use close to the minimum possible space of the sum of each entry's length while supporting these operations in O(1) time. The document presents solutions using compact hashing that achieve O(1) time for get and set using (1+ε) times the minimum space plus O(N) bits, for any constant ε > 0. Experimental results show these perform well in terms
Cs6402 design and analysis of algorithms may june 2016 answer key - appasami
The document discusses algorithms and complexity analysis. It provides Euclid's algorithm for computing greatest common divisor, compares the orders of growth of n(n-1)/2 and n^2, and describes the general strategy of divide and conquer methods. It also defines problems like the closest pair problem, single source shortest path problem, and assignment problem. Finally, it discusses topics like state space trees, the extreme point theorem, and lower bounds.
Report on Efficient Estimation for High Similarities using Odd Sketches - AXEL FOTSO
This is a short report of the paper "Efficient Estimation for High Similarities using Odd Sketches" produced for the course Softskills seminar at Telecom Paristech
Perimetric Complexity of Binary Digital Images - RSARANYADEVI
Perimetric complexity is a measure of the complexity of binary pictures. It is defined as the sum of inside and outside perimeters of the foreground, squared, divided by the foreground area, divided by 4π. Difficulties arise when this definition is applied to digital images composed of binary pixels. In this article we identify these problems and propose solutions. Perimetric complexity is often used as a measure of visual complexity, in which case it should take into account the limited resolution of the visual system. We propose a measure of visual perimetric complexity that meets this requirement.
The document provides information about a test for candidates applying for an M.Tech in Computer Science. It describes:
1) The test will have two parts - a morning objective test (Test MIII) and an afternoon short answer test (Test CS).
2) The CS test booklet will have two groups - Group A covering analytical ability and mathematics at the B.Sc. pass level, and Group B covering advanced topics in mathematics, statistics, physics, computer science, and engineering at the B.Sc. Hons. and B.Tech. levels.
3) Sample questions are provided for both Group A (mathematical reasoning and basic concepts) and Group B (advanced topics in real analysis
The document describes a test for candidates applying for an M.Tech. in Computer Science. The test consists of two parts: an objective test in the morning and a short answer test in the afternoon. The short answer test has two groups: Group A covers analytical ability and mathematics at the B.Sc. level, while Group B covers additional topics in mathematics, statistics, physics, computer science, or engineering depending on the candidate's choice. The document provides sample questions testing concepts in mathematics including algebra, calculus, number theory, and logic.
Approximate Matrix Multiplication and Space Partitioning Trees: An Exploration
N.P. Slagle, Lance J. Fortnow
October 23, 2012
Abstract
Herein we explore a dual tree algorithm for matrix multiplication of A ∈ R^(M×D) and B ∈ R^(D×N), very narrowly effective if the normalized rows of A and columns of B, treated as vectors in R^D, fall into clusters of order Ω(D^τ) with radii less than arcsin(ε/√2) on the surface of the unit D-ball. The algorithm leverages a pruning rule necessary to guarantee precision proportionate to vector magnitude products in the resultant matrix. Unfortunately, if the rows and columns are uniformly distributed on the surface of the unit D-ball, then the expected number of points per required cluster approaches zero exponentially fast in D; thus, the approach requires a great deal of further work to pass muster.
1 Introduction and Related Work
Matrix multiplication, ubiquitous in computing, naively requires O(MDN) floating point operations to multiply matrices A ∈ R^(M×D) and B ∈ R^(D×N). We present an investigation of our novel approach to matrix multiplication after a brief discussion of related work and an explanation of space-partitioning trees.
1.1 State-of-the-Art for Square Matrices
For N = D = M, Strassen [12] gave an O(N^(log₂ 7)) algorithm that partitions the matrices into blocks, generalizing the notion that to multiply binary integers a and b, one need only compute [(a + b)² − (a − b)²]/4, an operation requiring three additions, two squares, and a right shift. Several improvements appear in the literature [1],[6],[8],[10], the most recent of which give O(N^2.3736...) [11] and O(N^2.3727...) [13], both augmentations of the Coppersmith–Winograd algorithm [2]. The latest algorithms feature constants sufficiently large to preclude application on modern hardware [9].
1.2 Motivating the Space
The product of A ∈ R^(M×D) and B ∈ R^(D×N) features all possible inner products between the row vectors of A and the column vectors of B, each an element of R^D. We investigate whether organizing these two sets of vectors into space-partitioning trees can reduce the complexity of the naïve matrix multiplication by exploiting the distribution of the data.
1.2.1 Space-Partitioning Trees
We can organize a finite collection of points S in Euclidean space R^D into a space-partitioning tree T such that the root node P₀ contains all points in S, and for any other node P in T, all points in P are in π(P), the parent node of P. Figure 1 depicts a space-partitioning tree in R².
Figure 1: A space-partitioning tree in R²; the ellipses denote covariances on the node points; the large dots denote node centroids.
A space-partitioning tree definition requires a recursive partitioning rule, such as that appearing
in algorithm 1. Organizing S into such a tree generally requires O(D|S| log(D|S|)) time complexity.
Algorithm 1 [L, R] = partition(P, m)
1: If |P| ≤ m, then RETURN [NULL, NULL].
2: Pick the dimension k that maximizes the range of x_k for x ∈ P.
3: Sort the points in P according to dimension k.
4: Split P into L and R using the median (or mean) of x_k.
5: RETURN [L, R].
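A minimal Python sketch of one split of the partitioning rule in algorithm 1 (the function name and point representation are illustrative choices, not from the paper):

```python
def partition(P, m):
    """One split of algorithm 1: P is a list of equal-length coordinate
    tuples; return (None, None) when P is small enough to be a leaf."""
    if len(P) <= m:
        return None, None
    D = len(P[0])
    # pick the dimension k with the widest coordinate range
    k = max(range(D), key=lambda d: max(p[d] for p in P) - min(p[d] for p in P))
    ordered = sorted(P, key=lambda p: p[k])   # sort along dimension k
    mid = len(ordered) // 2                   # split at the median
    return ordered[:mid], ordered[mid:]
```

Applying this rule recursively to each half builds the tree, at the O(D|S| log(D|S|)) construction cost cited above.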
1.2.2 Dual Tree Algorithm
Given a reference tree R and a query tree Q of data points, we can perform pairwise operations such as kernel summations and inner products across nodes rather than points, performing a depth-first search on both trees. The algorithm leverages a pruning criterion to guarantee ε-level approximation in the outputs. Algorithm 2 exhibits this approach.
Algorithm 2 dualTreeCompareNodes(R, Q, operation op, pruning rule R, ε, C)
1: If R and Q are leaf nodes, then perform the point-wise operation, filling in appropriate entries
of C. RETURN
2: If rule R(R, Q) is true, approximate op between points in the nodes using their centroids, filling
in appropriate entries of C; then RETURN.
3: Call
• dualTreeCompareNodes(R.left, Q.left)
• dualTreeCompareNodes(R.left, Q.right)
• dualTreeCompareNodes(R.right, Q.left)
• dualTreeCompareNodes(R.right, Q.right)
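The dual recursion above can be rendered compactly in Python. This is a sketch, not the paper's implementation: the `Node` class, its `centroid` field, and the caller-supplied `op` and `prune` callbacks are assumptions chosen to mirror the steps of algorithm 2.

```python
class Node:
    """A tree node holding its points; internal nodes also hold two children."""
    def __init__(self, points, left=None, right=None):
        self.points, self.left, self.right = points, left, right
        D = len(points[0])
        # centroid of the node's points, used when a prune fires
        self.centroid = tuple(sum(p[d] for p in points) / len(points) for d in range(D))

    def is_leaf(self):
        return self.left is None and self.right is None

def children(node):
    return [node] if node.is_leaf() else [node.left, node.right]

def dual_tree_compare(R, Q, op, prune, out):
    """Depth-first dual recursion of algorithm 2: exact work at leaf pairs,
    one centroid evaluation for every pair when the pruning rule accepts."""
    if R.is_leaf() and Q.is_leaf():
        for r in R.points:
            for q in Q.points:
                out[(r, q)] = op(r, q)          # step 1: exact point-wise operation
        return
    if prune(R, Q):
        approx = op(R.centroid, Q.centroid)     # step 2: centroid approximation
        for r in R.points:
            for q in Q.points:
                out[(r, q)] = approx
        return
    for Rc in children(R):                      # step 3: recurse on child pairs
        for Qc in children(Q):
            dual_tree_compare(Rc, Qc, op, prune, out)
```

With `prune` returning False everywhere this degenerates to the exact all-pairs computation; the savings come entirely from how often the pruning rule fires.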
1.2.3 Space-Partitioning Trees in the Literature
Applied statistical methods such as dual tree approximate kernel summations [3], [4], [5] and other pairwise statistical problems [7] partition the query and test samples into respective space-partitioning trees for efficient look-ups. Using cover trees, Ram [7] demonstrates linear time complexity for naïve O(N²) pairwise algorithms.
2 Dual Tree Investigation
2.1 Product Matrix Entries
Given the two matrices A ∈ R^(M×D) and B ∈ R^(D×N), we can think of the entries of C = AB as c_ij = |a_i||b_j| cos θ_ij, where a_i is the ith row of A, b_j is the jth column of B, and θ_ij is the angle between a_i and b_j. We can compute the magnitudes of these vectors in time O(D(M + N)) and all products of the magnitudes in time O(MN), for a total time complexity of O(MN + D(M + N)). Thus, computing the cosines of the angles for M, N ∈ O(D) is the O(MDN) bottleneck. We give narrow conditions under which we can reduce this complexity.
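The decomposition c_ij = |a_i||b_j| cos θ_ij is easy to verify numerically. A small pure-Python illustration (the matrices and helper names are illustrative, not from the paper):

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

A = [(3.0, 0.0, 4.0), (1.0, 1.0, 1.0)]    # rows a_i: M = 2, D = 3
Bc = [(0.0, 2.0, 0.0), (1.0, 0.0, 1.0)]   # columns b_j: N = 2

# magnitudes cost O(D(M+N)); magnitude products cost O(MN)
mag_a = [norm(a) for a in A]
mag_b = [norm(b) for b in Bc]

# only the cosines cost O(MDN); |a_i||b_j| cos(theta_ij) recovers c_ij exactly
for i, a in enumerate(A):
    for j, b in enumerate(Bc):
        cos_t = dot(a, b) / (mag_a[i] * mag_b[j])
        assert abs(mag_a[i] * mag_b[j] * cos_t - dot(a, b)) < 1e-12
```

This is why the paper normalizes the vectors first: the magnitude factors are cheap, and all of the O(MDN) cost sits in the cosine terms that the dual tree tries to approximate.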
2.2 Algorithm
In our investigation, we normalize the row vectors of A and the column vectors of B, then organize each set into a ball tree, a space-partitioning tree such that each node is a D-ball. To compute the cosines of the angles between all pairs, we apply the dual tree algorithm. The pruning rule must guarantee that the relative error of our estimate ĉ_ij with respect to the full magnitude |a_i||b_j| be no more than ε, or, more formally,
|ĉ_ij − c_ij| ≤ ε|a_i||b_j|. (1)
Thus, we require
|cos θ̂_ij − cos θ_ij| ≤ ε. (2)
The pruning rule guaranteeing the above error bound appears in algorithm 3.
Algorithm 3 dualTreeMatrixMultiplication(A, B, ε)
1: Allocate M × N matrix C.
2: Compute the magnitudes of a_i and b_j for i = 1, . . . , M, j = 1, . . . , N.
3: Fill in C so that c_ij = |a_i||b_j|.
4: Compute u_i = a_i/|a_i|, v_j = b_j/|b_j|.
5: Allocate trees U and V with root(U) = {u_i} and root(V) = {v_j}.
6: Call partition(root(U), size), partition(root(V), size), with size the minimum number of points (defaulted to one) per tree node.
7: Let op(s, t) = ⟨s, t⟩.
8: For node balls R ∈ U, Q ∈ V, define
• α := angle between the centers of R, Q,
• β := angle subtending half of the node ball R, and
• γ := angle subtending half of the node ball Q,
all angles in [0, π].
9: Define the pruning rule R as an evaluation of |β + γ| ≤ ε/(|sin α| + |cos α|).
10: Call dualTreeCompareNodes(root(U), root(V), op, R, ε, C).
11: RETURN C.
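The quantities in step 8 and the rule in step 9 translate directly into code. A sketch under the assumption that each ball node exposes a unit-vector center and an angular radius (the function name and argument layout are hypothetical):

```python
import math

def pruning_rule(center_R, angle_R, center_Q, angle_Q, eps):
    """True if the node pair may be pruned, i.e. if
    beta + gamma <= eps / (|sin alpha| + |cos alpha|)  (algorithm 3, step 9)."""
    # alpha: angle between the two unit-vector ball centers, clamped for safety
    cos_a = max(-1.0, min(1.0, sum(x * y for x, y in zip(center_R, center_Q))))
    alpha = math.acos(cos_a)
    beta, gamma = angle_R, angle_Q     # half-angles subtending each ball
    return beta + gamma <= eps / (abs(math.sin(alpha)) + abs(math.cos(alpha)))
```

The conservative variant mentioned next in the text simply replaces the right-hand side with eps / sqrt(2), which is never larger than eps / (|sin α| + |cos α|).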
We could define a more conservative pruning rule of |β + γ| ≤ ε/√2 ≤ ε/(|sin α| + |cos α|). For future analyses, we apply the more conservative bound.
Figure 2 exhibits the relationships between angles α, β, and γ.
Figure 2: Angles in algorithm 3.
2.2.1 Proof of the Pruning Rule
Simply put, the pruning rule in algorithm 3 bounds the largest possible error on the cosine function in terms of the center-to-center angle (our approximation) and the angles subtending the balls R and Q, formally stated in theorem 1.

Theorem 1. Given ball nodes R and Q and angles as defined in algorithm 3, if β + γ ≤ ε/(|sin α| + |cos α|), then |r · q − cos α| ≤ ε for all r ∈ R, q ∈ Q.
To prove theorem 1, we need the following lemma.
Lemma 2. Given both the ball nodes R and Q and angles listed in theorem 1, let error(r, q) = |r · q − cos α|. The maximum of error occurs when r and q are in the span of the two centers of R and Q. Furthermore, the maxima of error are |cos(α ± β ± γ) − cos α|.
Proof. Let r̄ and q̄ be the centers of R and Q, respectively. Since cos θ is monotone for θ ∈ [0, π], the extrema of the error function occur when r and q fall on the surface of R and Q, respectively. Furthermore, we only care about the extrema of r · q since the maxima and minima of this function bound the error about cos α. Thus, we optimize r · q subject to r̄ · q̄ = cos α, r̄ · r = cos β, q̄ · q = cos γ, and r · r = q · q = r̄ · r̄ = q̄ · q̄ = 1.
Leveraging Lagrange multipliers, we obtain the solutions
r = r̄[cos β ∓ cot α sin β] ± q̄ (sin β / sin α)
and
q = ±r̄ (sin γ / sin α) + q̄[cos γ ∓ cot α sin γ],
with
r · q = ± cos α sin β sin γ + cos α cos β cos γ ± sin α cos β sin γ ± sin α sin β cos γ = cos(α ± β ± γ),
the signs chosen consistently with the branches of r and q.
Notice, the possible values of r and q maximizing the error are simply the edges of the cones subtending balls R and Q in the hyperplane spanned by r̄ and q̄. Now, we prove theorem 1.
Proof. By hypothesis, |β + γ| [| sin α| + | cos α|] ≤ ε. Since |β + γ| ≥ |±β ± γ|, | sin h| ≤ |h|, and |1 − cos h| ≤ |h| for β, γ ∈ [0, π] and h ∈ [−π, π], we have

ε ≥ |±β ± γ| | sin α| + |±β ± γ| | cos α|
≥ | sin α| | sin(±β ± γ)| + | cos α| |1 − cos(±β ± γ)|
≥ | cos α cos(±β ± γ) − sin α sin(±β ± γ) − cos α|
= | cos(α ± β ± γ) − cos α|,

and by lemma 2 this bounds |r · q − cos α| ≤ ε for all r ∈ R and q ∈ Q.
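The bound of theorem 1 can be spot-checked numerically using lemma 2's characterization of the extrema (a sketch; `eps` and the sampling scheme are illustrative):

```python
import itertools
import math
import random

def max_error(alpha: float, beta: float, gamma: float) -> float:
    # By lemma 2, the extrema of |r . q - cos(alpha)| occur at
    # cos(alpha ± beta ± gamma); take the worst of the four sign choices.
    return max(abs(math.cos(alpha + sb * beta + sg * gamma) - math.cos(alpha))
               for sb, sg in itertools.product((-1, 1), repeat=2))

random.seed(0)
eps = 0.1
for _ in range(10_000):
    alpha = random.uniform(0.0, math.pi)
    bound = eps / (abs(math.sin(alpha)) + abs(math.cos(alpha)))
    beta = random.uniform(0.0, bound)
    gamma = random.uniform(0.0, bound - beta)  # enforce beta + gamma <= bound
    assert max_error(alpha, beta, gamma) <= eps
```

Every sampled triple satisfying the pruning rule keeps the worst-case cosine error within ε, as the proof guarantees.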
2.2.2 Analysis of Algorithm 3
Given matrices A ∈ R^(M×D) and B ∈ R^(D×N), computing the magnitudes, normalizing the rows of A and columns of B, and computing magnitude products for C requires O(D(M + N) + MN). Organizing the normalized points into space-partitioning trees requires O(MD log MD + ND log ND). Finally, an analysis of the dual tree algorithm requires conditions on the data points. We suppose that, given the approximation constant ε, the number of points falling in node balls of appropriate size, say radius roughly arcsin(ε/√2), is bounded below by f_D(ε). If the points are clustered into such balls, each prune saves the computation of at least D[f_D(ε)]² scalar multiplications: we fill [f_D(ε)]² entries of C with a constant number of inner products at cost O(D), for a total traversal complexity of O(MDN/[f_D(ε)]²). Thus, we have the following theorem.
Theorem 3. The total time complexity of algorithm 3 is O(MD log MD + ND log ND + MN(1 + D/[f_D(ε)]²)).
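The preprocessing step counted above might be sketched as follows (`normalize_factors` is a hypothetical helper; NumPy stands in for the O(·) accounting):

```python
import numpy as np

def normalize_factors(A: np.ndarray, B: np.ndarray):
    """O(D(M+N) + MN) preprocessing for the dual tree multiply:
    row-normalize A, column-normalize B, and precompute the magnitude
    products so that C[i, j] = mag[i, j] * (a_hat_i . b_hat_j).
    Assumes no zero rows of A or zero columns of B."""
    row_norms = np.linalg.norm(A, axis=1)   # O(MD)
    col_norms = np.linalg.norm(B, axis=0)   # O(DN)
    A_hat = A / row_norms[:, None]          # unit rows: points on the sphere
    B_hat = B / col_norms[None, :]          # unit columns
    mag = np.outer(row_norms, col_norms)    # O(MN) magnitude products
    return A_hat, B_hat, mag
```

The exact product is recovered as `mag * (A_hat @ B_hat)`; the dual tree traversal replaces the cosine factor `A_hat @ B_hat` with its ε-approximation.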
2.3 Gaping Caveat
An obvious caveat in the analysis is the behavior of f_D(ε) as D increases without bound. For a rough sketch of the expected behavior of f_D, recall that the volume and surface area of a D-ball of radius r are

V_D(r) = [2^D π^((D−1)/2) Γ((D+1)/2) / (D Γ(D))] r^D    (3)

and

SA_D(r) = V_D′(r) = [2^D π^((D−1)/2) Γ((D+1)/2) / Γ(D)] r^(D−1).    (4)
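Equations (3) and (4) can be sanity-checked against the familiar low-dimensional cases (a small script; the function names are ours):

```python
import math

def ball_volume(D: int, r: float) -> float:
    # Equation (3): V_D(r) = 2^D pi^((D-1)/2) Gamma((D+1)/2) / (D Gamma(D)) * r^D
    return (2 ** D * math.pi ** ((D - 1) / 2) * math.gamma((D + 1) / 2)
            / (D * math.gamma(D))) * r ** D

def ball_surface_area(D: int, r: float) -> float:
    # Equation (4): the derivative of V_D(r) with respect to r
    return (2 ** D * math.pi ** ((D - 1) / 2) * math.gamma((D + 1) / 2)
            / math.gamma(D)) * r ** (D - 1)
```

For D = 2 and D = 3 with r = 1 these recover π and 4π/3 for the volumes, and 2π and 4π for the surface areas.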
Since uniformly distributed data represents something of a worst-case scenario with respect to
clustering algorithms, we explore the expected cluster sizes by dividing the surface of the unit
D-ball by the node balls of appropriate size.
Theorem 4. Assuming that the normalized rows of A and columns of B are uniformly distributed about the unit D-ball, let W be the number of points in each ball of radius arcsin(ε/√2). Then

E[W] ≈ MN · V_(D−1)(arcsin(ε/√2)) / SA_D(1) = [MN Γ(D/2) / (2√π Γ((D+1)/2))] · arcsin^(D−1)(ε/√2).
Since Γ(x + 1/2)/Γ(x) ∈ Θ(√x), exponentially few points fall into each ball of radius arcsin(ε/√2). Thus, we require strong clustering conditions, stated formally below, if the dual tree approach described in algorithm 3 is to defeat naïve matrix multiplication.
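To see the decay in theorem 4 concretely, one can evaluate the per-pair cap fraction E[W]/(MN) in log space (log-gamma avoids overflow for large D; the function name is illustrative):

```python
import math

def log_cap_fraction(D: int, eps: float) -> float:
    """log of E[W]/(MN) from theorem 4:
    Gamma(D/2) / (2 sqrt(pi) Gamma((D+1)/2)) * arcsin(eps/sqrt(2))^(D-1)."""
    r = math.asin(eps / math.sqrt(2))
    return (math.lgamma(D / 2) - math.lgamma((D + 1) / 2)
            - math.log(2.0 * math.sqrt(math.pi)) + (D - 1) * math.log(r))

# The log-fraction falls roughly linearly in D, i.e. the fraction
# itself decays exponentially in the dimension.
for D in (10, 100, 1000):
    print(D, log_cap_fraction(D, eps=0.1))
```

With ε = 0.1 the dominant term is (D − 1) log arcsin(ε/√2) ≈ −2.6 (D − 1), so even modest dimensions leave essentially empty balls.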
Theorem 5. If M = D = N and the normalized rows of A and columns of B form clusters of size Ω(D^τ) for τ > 0, where the cluster radii are approximately arcsin(ε/√2) for √2 > ε > 0, then algorithm 3 runs in time O(D² log D + D^(3−2τ)).
3 Concluding Remarks and Future Work
Given the problem of multiplying together matrices A ∈ RM×D and B ∈ RD×N , we present a
dual tree algorithm that is effective when the row vectors of the left matrix and the column vectors of the right matrix fall into clusters of size proportional to D^τ for some τ > 0, where D is the dimension of said vectors. Unfortunately, worst-case uniformly distributed vectors give exponentially small expected cluster
sizes. Possible improvements include partitioning columns of A and rows of B so that the size of
clusters increases slightly while incurring a greater cost in tree construction and the number of
magnitudes to calculate, or appealing to the asymptotic orthogonality of vectors as D becomes
arbitrarily large. Clearly, the approach needs a great deal of work to be of practical interest.