We present a Logspace Approximation Scheme (LSAS), i.e., an approximation algorithm for maximum matching in planar graphs (not necessarily bipartite) that achieves an approximation ratio arbitrarily close to one, in Logspace. This deviates from the well-known Baker's approach for approximation in planar graphs by avoiding the use of distance computation, which is not known to be in Logspace. Our algorithm actually works for any "recursively sparse" graph class that contains a linear-size matching. The scheme is based on an LSAS for bounded-degree graphs, which are not known to be amenable to Baker's method. We solve the bounded-degree case by parallel augmentation of short augmenting paths. Finding a large number of such disjoint paths can, in turn, be reduced to finding a large independent set in a bounded-degree graph. This is joint work with Samir Datta and Raghav Kulkarni.
Analytical Review of Feature Extraction Techniques for Automatic Speech Recog... - IOSR Journals
This document provides an analytical review of feature extraction techniques for automatic speech recognition. It discusses several common feature extraction methods including mel spectral coefficients, cepstral transformation, and mel frequency cepstral coefficients (MFCC). MFCC are widely used in speech recognition as they reflect the human auditory perception and produce de-correlated coefficients. The document also covers vector space representation of features and different distance metrics like Euclidean, city block, and weighted Euclidean that can be used for classification of unknown vectors.
Comparative Analysis of Algorithm Strategies - Talha Shaikh
The document discusses various algorithm strategies including decrease and conquer, greedy approach, backtracking, and transform and conquer. It provides definitions, examples, advantages and disadvantages for each strategy. Decrease and conquer algorithms like insertion sort reduce the problem size at each step. Greedy algorithms make locally optimal choices at each step. Backtracking algorithms systematically explore all solutions using a depth first search approach.
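The insertion sort mentioned above can be sketched in a few lines to show the decrease-and-conquer idea of shrinking the unsorted region one element at a time (Python used here for illustration; the function name is ours):

```python
def insertion_sort(a):
    """Sort a list in place by growing a sorted prefix one element at a time."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements of the sorted prefix one slot to the right.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```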
Vectors are quantities that have both magnitude and direction. They can be represented by capital letters with an arrow or lowercase letters with a bar. A vector has components in different dimensions - for two dimensions it has x and y components, and for three dimensions it has x, y, and z components. Some key vector concepts are parallel vectors (same direction), equal vectors (same magnitude and direction), negative/opposite vectors, free vectors (originate from different points), and position vectors (originate from the same point). The dot product of two vectors is a scalar quantity that depends on the angle between the vectors, and can be used to determine properties like whether vectors are parallel or orthogonal.
The document discusses dynamic programming techniques for finding the optimal number of scalar multiplications in matrix multiplication. It provides an example of calculating the optimal parenthesization of matrix multiplications using a 6x6 matrix. The complexity of the algorithm is O(n^3). Dynamic programming is more efficient than brute force or divide and conquer approaches for this problem. Optimal substructure and overlapping subproblems are elements that allow dynamic programming to be applied. Proofs are given that the unweighted shortest path problem has optimal substructure but the unweighted longest path problem may not.
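A minimal sketch of the O(n^3) dynamic program for matrix chain multiplication, assuming the standard formulation where matrix i has dimensions p[i-1] x p[i] (Python for illustration):

```python
def matrix_chain_order(p):
    """Minimum number of scalar multiplications to compute the product
    of matrices 1..n, where matrix i has dimensions p[i-1] x p[i]."""
    n = len(p) - 1  # number of matrices
    # m[i][j] = minimum cost of multiplying the chain of matrices i..j
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            # Try every split point k and keep the cheapest parenthesization.
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]
```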
α Nearness ant colony system with adaptive strategies for the traveling sales...ijfcstjournal
Because the ant colony algorithm easily falls into local optima, this paper presents an improved ant colony optimization called α-AACS and reports its performance. First, we provide a concise description of the original ant colony system (ACS) and, to address ACS's disadvantage, introduce α-nearness based on the minimum 1-tree, which better reflects the chances of a given link being a member of an optimal tour. Then, we improve α-nearness by computing a lower bound and propose other adaptations for ACS. Finally, we conduct a fair comparison between our algorithm and others. The results clearly show that α-AACS has better global search ability in finding the best solutions, which indicates that α-AACS is an effective approach for solving the traveling salesman problem.
The document discusses algorithms for finding shortest paths in graphs. It describes Dijkstra's algorithm and Bellman-Ford algorithm for solving the single-source shortest path problem. Dijkstra's algorithm runs in O(ElogV) time and works for graphs with non-negative edge weights, while Bellman-Ford algorithm runs in O(EV) time and can handle graphs with negative edge weights as long as there are no negative cycles. The document also discusses Floyd-Warshall algorithm for solving the all-pairs shortest path problem.
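A compact sketch of Dijkstra's algorithm with a binary heap, which gives the O(E log V) bound mentioned above (Python's heapq; the adjacency-list format is our assumption):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths for non-negative edge weights.
    graph: dict mapping vertex -> list of (neighbor, weight) pairs."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```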
Spectral clustering is a technique for clustering data points into groups using the spectrum (eigenvalues and eigenvectors) of the similarity matrix of the points. It works by constructing a graph from the pairwise similarities of points, calculating the Laplacian of the graph, and using the k eigenvectors of the Laplacian corresponding to the smallest eigenvalues to embed the points into a k-dimensional space. K-means clustering is then applied to the embedded points to obtain the final clustering. The document discusses two basic spectral clustering algorithms that differ in whether they use the normalized or unnormalized Laplacian.
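The Laplacian construction step described above can be illustrated without a numerical library; the eigen-decomposition and k-means steps would be delegated to one. A minimal sketch of the unnormalized Laplacian L = D - W, using plain Python lists:

```python
def unnormalized_laplacian(W):
    """L = D - W, where D is the diagonal degree matrix of similarity matrix W.
    W is a symmetric list-of-lists with non-negative entries."""
    n = len(W)
    degrees = [sum(row) for row in W]  # degree d_i = sum of similarities of point i
    return [[(degrees[i] if i == j else 0) - W[i][j] for j in range(n)]
            for i in range(n)]
```

A standard sanity check is that every row of L sums to zero, so the all-ones vector is an eigenvector with eigenvalue 0.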
The document discusses disjoint set data structures and union-find algorithms. Disjoint set data structures track partitions of elements into separate, non-overlapping sets. Union-find algorithms perform two operations on these data structures: find, to determine which set an element belongs to; and union, to combine two sets into a single set. The document describes array-based representations of disjoint sets and algorithms for the union and find operations, including a weighted union algorithm that aims to keep trees relatively balanced by favoring attaching the smaller tree to the root of the larger tree.
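A minimal array-based disjoint-set sketch with the weighted union rule described above (Python; the class and method names are ours):

```python
class DisjointSet:
    """Union-find with weighted union: attach the smaller tree under the larger root."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        """Follow parent pointers up to the root of x's tree."""
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def union(self, a, b):
        """Merge the sets containing a and b; returns False if already merged."""
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra          # smaller tree hangs off the larger root
        self.size[ra] += self.size[rb]
        return True
```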
The document summarizes several linear sorting algorithms, including bucket sort, counting sort, general bucket sort, and radix sort. Counting sort runs in O(n+k) time and O(k) space, where k is the range of integer keys, and is stable. Radix sort uses a stable sorting algorithm like counting sort to sort based on each digit of d-digit numbers, resulting in O(d(n+k)) time for sorting n numbers with d digits in the range [1,k].
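A stable counting sort matching the O(n+k) description, sketched for integer keys in [0, k) (Python for illustration):

```python
def counting_sort(a, k):
    """Stable counting sort for integer keys in the range [0, k)."""
    count = [0] * (k + 1)
    for x in a:
        count[x + 1] += 1
    for v in range(k):
        count[v + 1] += count[v]   # count[v] = first output index for key v
    out = [None] * len(a)
    for x in a:                    # left-to-right pass preserves stability
        out[count[x]] = x
        count[x] += 1
    return out
```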
The document discusses the divide-and-conquer algorithm design paradigm. It defines divide-and-conquer as breaking a problem down into smaller subproblems, solving those subproblems recursively, and combining the solutions to solve the original problem. Examples of algorithms that use this approach include merge sort, quicksort, and matrix multiplication. Advantages include solving difficult problems efficiently in parallel and with good memory performance. The document also provides an example of applying divide-and-conquer to the closest pair of points problem.
The document discusses algorithms for finding minimum spanning trees in graphs. It describes Prim's algorithm and Kruskal's algorithm. Prim's algorithm works by gradually adding the closest vertex and edge to a growing spanning tree. Kruskal's algorithm sorts all the edges by weight and adds edges to the spanning tree if they do not form cycles. The running time of Prim's algorithm is O(V^2) while Kruskal's algorithm has a running time of O(E log E + V) where V is vertices and E is edges. Examples are provided to illustrate how each algorithm works on sample graphs.
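Kruskal's algorithm can be sketched with a small union-find for cycle detection (Python; the edge-list format is our assumption):

```python
def kruskal(n, edges):
    """Minimum spanning tree of a graph with vertices 0..n-1.
    edges: list of (weight, u, v) triples. Returns (total_weight, chosen_edges)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):          # examine edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # skip edges that would form a cycle
            parent[ru] = rv
            total += w
            chosen.append((u, v, w))
    return total, chosen
```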
This document discusses string matching algorithms. It begins with an introduction to the naive string matching algorithm and its quadratic runtime. Then it proposes three improved algorithms: FC-RJ, FLC-RJ, and FMLC-RJ, which attempt to match patterns by restricting comparisons based on the first, first and last, or first, middle, and last characters, respectively. Experimental results show that these three proposed algorithms outperform the naive algorithm by reducing execution time, with FMLC-RJ working best for three-character patterns.
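The naive string matcher that the proposed algorithms improve on can be sketched as follows (Python; slicing makes the quadratic worst case easy to see, since each of the O(n) shifts may compare up to m characters):

```python
def naive_match(text, pattern):
    """Return all shifts at which pattern occurs in text; O(n*m) worst case."""
    n, m = len(text), len(pattern)
    return [s for s in range(n - m + 1) if text[s:s + m] == pattern]
```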
This document discusses algorithms for sorting and searching data. It introduces basic data structures like arrays and linked lists. Different sorting algorithms are described like insertion sort, shell sort, and quicksort. Dictionaries that allow efficient insertion, search and deletion are also covered, including hash tables, binary search trees, red-black trees, and skip lists. The document provides pseudocode for the algorithms and estimates their time complexity using Big O notation. Source code implementations of the algorithms in C and Visual Basic are available for download.
Polynomial Tensor Sketch for Element-wise Matrix Function (ICML 2020) - ALINLAB
1) The document proposes a polynomial tensor sketch method to approximate element-wise matrix functions in linear time. It combines tensor sketching, which can approximate matrix monomials fast, with polynomial approximation of the target function.
2) Coreset-based regression is used to efficiently compute optimal polynomial coefficients by selecting a small subset of rows.
3) Experiments show the method outperforms alternatives like random Fourier features for applications like kernel approximation, kernel SVM, and Sinkhorn algorithm, providing speedups of up to 49x.
This document discusses shortest path algorithms. It begins with the Königsberg bridge problem, solved by Euler, which helped develop graph theory. It then discusses the shortest path problem in graph theory and two algorithms to solve it: Dijkstra's algorithm and the A* search algorithm. It explains how these algorithms work and their applications, such as map routing, network routing, game development, and more.
Probabilistic group theory, combinatorics, and computing - Springer
Sims introduced the concept of a base for a permutation group: a sequence of points such that only the identity element fixes every point in the sequence. He developed the Schreier-Sims algorithm for computing with permutation groups using a base. However, this algorithm is ineffective for large groups, such as the alternating and symmetric groups, which have large minimum bases. The document then describes Jordan's theorem, which can be used to recognize these large groups, and develops a Monte Carlo algorithm using Jordan elements to probabilistically recognize the alternating and symmetric groups in sublinear time.
Research on Chaotic Firefly Algorithm and the Application in Optimal Reactive... - TELKOMNIKA JOURNAL
The document proposes a chaotic firefly algorithm (CFA) to overcome the shortcomings of the original firefly algorithm getting stuck in local optima. CFA introduces chaos initialization, chaos population regeneration, and linear decreasing inertia weight to increase global search ability. CFA is tested on six benchmark functions and applied to optimize reactive power dispatch in an IEEE 30-bus system. Results show CFA performs better than the original firefly algorithm and particle swarm optimization in finding optimal solutions faster.
The document discusses depth-first search (DFS) and breadth-first search (BFS) algorithms for graph traversal. It explains that DFS uses a stack to systematically visit all vertices in a graph by exploring neighboring vertices before moving to the next level, while BFS uses a queue to explore neighboring vertices at the same level before moving to the next. Examples are provided to illustrate how DFS can be used to check for graph connectivity and cyclicity.
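The queue-based BFS described above, as a minimal sketch (Python; the adjacency-list format is our assumption):

```python
from collections import deque

def bfs(graph, start):
    """Visit vertices level by level; returns vertices in the order first reached.
    graph: dict mapping vertex -> list of neighbors."""
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph.get(u, []):
            if v not in visited:       # enqueue each vertex at most once
                visited.add(v)
                queue.append(v)
    return order
```

Replacing the deque with a stack (append/pop from the same end) turns this into an iterative DFS, which is exactly the stack/queue contrast the document draws.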
The document describes a seminar report on using a divide and conquer algorithm to find the closest pair of points from a set of points in two dimensions. It discusses implementing both a brute force algorithm that compares all pairs, taking O(n^2) time, and a divide and conquer algorithm that recursively divides the point set into halves and finds the closest pairs in each subset and near the dividing line, taking O(n log n) time. It provides pseudocode for both algorithms and discusses the history and improvements made to the closest pair problem over time, reducing the number of distance computations needed.
Firefly Algorithm is a nature-inspired metaheuristic algorithm based on the flashing patterns of fireflies. The paper reviews recent developments in Firefly Algorithm and its applications. Firefly Algorithm uses three rules: all fireflies are attracted to other fireflies regardless of sex; attractiveness depends on brightness which decreases with distance; and brightness depends on the landscape of the objective function. The algorithm balances exploration and exploitation through parameters that control randomness and attractiveness. It has been shown to efficiently solve multimodal optimization problems and outperform other algorithms in applications such as engineering design, antenna design, scheduling, and clustering.
Firefly Algorithm: Recent Advances and Applications - Xin-She Yang
This document summarizes a research paper on the firefly algorithm, a nature-inspired metaheuristic optimization algorithm. It briefly reviews the fundamentals and development of the firefly algorithm, discussing how it balances exploration and exploitation. The firefly algorithm is shown to be more efficient than intermittent search strategies through numerical experiments. Its automatic subdivision ability and ability to handle multimodality make it well-suited for complex optimization problems.
Algorithm Design and Complexity - Course 1&2 - Traian Rebedea
Courses 1 & 2 for the Algorithm Design and Complexity course at the Faculty of Engineering in Foreign Languages - Politehnica University of Bucharest, Romania
This document introduces vectors and their properties. It defines a vector as having both magnitude and direction, represented by bold letters with arrows. Scalar quantities only have magnitude. The key vector operations are addition, by placing vectors tip to tail, and scalar multiplication. Laws of vector algebra include commutativity, associativity and distributivity. Dot and cross products are also introduced, with the dot product yielding a scalar and cross product a vector. Several problems demonstrate applying concepts like finding angles between vectors and using vector identities.
S6 l04 analytical and numerical methods of structural analysis - Shaikh Mohsin
This document provides an overview of analytical and numerical methods for structural analysis. It begins by explaining the process of structural analysis from the real object to the design model. It then discusses analytical methods like mechanics of materials and numerical methods like the finite element method. The document provides examples comparing analytical and numerical solutions. In summary, it outlines the appropriate uses of both methods and emphasizes the importance of understanding the underlying mechanics rather than solely relying on software tools.
Dynamic Programming design technique is one of the fundamental algorithm design techniques, and possibly one of the ones that are hardest to master for those who did not study it formally. In these slides (which are continuation of part 1 slides), we cover two problems: maximum value contiguous subarray, and maximum increasing subsequence.
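The maximum value contiguous subarray problem mentioned above has a well-known O(n) dynamic-programming solution, Kadane's algorithm, sketched here (Python):

```python
def max_subarray(a):
    """Maximum sum of a contiguous subarray (Kadane's algorithm), O(n)."""
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)   # extend the current run, or start fresh at x
        best = max(best, cur)
    return best
```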
The document discusses greedy algorithms and their application to optimization problems. It provides examples of problems that can be solved using greedy approaches, such as fractional knapsack and making change. However, it notes that some problems like 0-1 knapsack and shortest paths on multi-stage graphs cannot be solved optimally with greedy algorithms. The document also describes various greedy algorithms for minimum spanning trees, single-source shortest paths, and fractional knapsack problems.
Algorithm Design and Complexity - Course 5 - Traian Rebedea
This document provides an overview of greedy algorithms and their use in solving optimization problems. It discusses key aspects of greedy algorithms including making locally optimal choices at each step, optimal substructures, and the greedy choice property. Two problems addressed in detail are the activity selection problem and building Huffman trees. The activity selection problem can be solved optimally using a greedy approach by always selecting the activity with the earliest finish time. Huffman trees provide data compression by assigning codes to characters based on frequency, with more common characters having shorter codes, and can be constructed greedily by repeatedly combining the two subtrees with lowest weight.
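The earliest-finish-time rule for activity selection described above can be sketched as follows (Python; the (start, finish) pair format is our assumption):

```python
def select_activities(activities):
    """Greedy activity selection: sort by finish time, then take each activity
    that starts no earlier than the last selected one finishes.
    activities: list of (start, finish) pairs."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```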
The presentation summarizes algorithms topics including dynamic programming, greedy algorithms, and sorting. It covers dynamic programming approaches to matrix chain multiplication and polygon triangulation. It also discusses the recursive and memoized solutions to matrix chain multiplication, and compares their time complexities. Kruskal's minimum spanning tree algorithm is explained along with observations on its runtime with increasing edges or vertices. Quicksort is analyzed using least squares fitting to determine constants in its average time complexity formula.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Algorithm homework help is a service that assists students who are studying algorithms and need help with their homework assignments. Algorithms are a crucial part of computer science and are used to solve complex problems efficiently. The algorithm homework help service provides a platform for students to get help with their algorithm assignments, which could be anything from simple problems to complex ones.
The document discusses various combinatorial optimization problems including the minimum spanning tree (MST), travelling salesman problem (TSP), and knapsack problem. It provides details on the MST and TSP, defining them, describing algorithms to solve them such as Kruskal's and Prim's for the MST and dynamic programming for the TSP, and discussing their applications and time complexities. The document also compares Prim and Kruskal algorithms and discusses how dynamic programming can provide an efficient solution for the TSP in some cases but not when the number of targets is too large.
The document discusses greedy algorithms and provides examples of minimum spanning tree (MST) algorithms. It begins by defining greedy algorithms as making locally optimal choices at each step to arrive at a global solution. Two common MST algorithms, Kruskal's and Prim's, are described. Kruskal's builds the tree by sorting edges by weight and adding the lowest weight edge at each step if it does not form a cycle. Prim's grows the tree from one vertex, always adding the lowest weight edge connecting a tree vertex to a non-tree vertex. The document provides examples of each algorithm applied to weighted graphs.
This document provides an overview of the Design and Analysis of Algorithms course. It discusses the closest pair of points problem and provides a divide and conquer algorithm to solve it in O(n log^2 n) time. The algorithm works by recursively dividing the problem into subproblems on left and right halves, computing the closest pairs for each, and then combining results while searching a sorted array to handle point pairs across divisions. Homework includes improving the closest pair algorithm to O(n log n) time and considering a data structure for orthogonal range searching.
International Journal of Engineering Research and DevelopmentIJERD Editor
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
This document compares three popular path planning algorithms: A*, greedy best first search, and jump point search. It implements the algorithms in MATLAB using grid-based maps with random start/goal points and static obstacles. The algorithms are evaluated based on computational complexity, time complexity, and space complexity. Jump point search generally has the best performance out of the three algorithms as it can make long jumps along straight lines in the grid, exploring fewer nodes than A*.
Symbolic Computation via Gröbner BasisIJERA Editor
The purpose of this paper is to find the orthogonal projection of a rational parametric curve onto a rational parametric surface in 3-space. We show that the orthogonal projection problem can be reduced to the problem of finding elimination ideals via Gröbnerbasis. We provide a computational algorithm to find the orthogonal projection, and include a few illustrative examples. The presented method is effective and potentially useful for many applications related to the design of surfaces and other industrial and research fields.
Improvement of shortest path algorithms using subgraphs heuristicsMahdi Atawneh
The document summarizes a research paper that proposes a new algorithm to improve shortest path algorithms using subgraphs' heuristics. It begins with an introduction to shortest path problems and algorithms. It then discusses different graph representations that are used, including matrix, linear array, and reverse matrix representations. The proposed algorithm takes advantage of these representations by constructing a main matrix and reverse matrix, marking candidate nodes, and visiting neighbors breadth-first to minimize the graph and exclude non-destination nodes. Experimental results show the proposed algorithm outperforms Dijkstra's algorithm on sparse and dense graphs with runtime not exceeding O((V+E)logV).
Sensor Fusion Study - Ch3. Least Square Estimation [강소라, Stella, Hayden]AI Robotics KR
This document discusses Wiener filtering and recursive least squares estimation. It begins with an introduction to Wiener filtering, providing an overview of its history and development. It then discusses how the power spectrum of a stochastic process changes when passed through a linear time-invariant system. Next, it formulates the problem of using a linear filter to extract a signal from additive noise. It derives expressions for the power spectrum of the error and its variance. Finally, it considers optimizing a parametric filter by assuming the optimal filter is a first-order low-pass filter and that the signal and noise spectra are known forms. It derives an expression for the optimal parameter T based on minimizing the error variance.
International Journal of Computational Engineering Research(IJCER)ijceronline
International Journal of Computational Engineering Research(IJCER) is an intentional online Journal in English monthly publishing journal. This Journal publish original research work that contributes significantly to further the scientific knowledge in engineering and Technology
Discrete structure ch 3 short question'shammad463061
An algorithm is a finite sequence of precise instructions for performing a computation or solving a problem. There are several key properties of algorithms including that they must have defined input and output, be definite with precisely defined steps, be correct in producing the right output, and be finite so they terminate in a finite number of steps. Different algorithms are analyzed based on their time and space complexity, with a focus on worst-case complexity. Common algorithms include searching, sorting, and algorithms for solving optimization problems. Determining the complexity of algorithms and whether problems can be solved in polynomial time is important for understanding what problems are tractable or intractable.
This document discusses applications of calculus derivatives in telecommunications. It presents two examples of using derivatives to minimize costs and maximize areas. The first example finds the optimal cable cost to minimize total wiring costs. The second finds the maximum area that can be fenced given a limited amount of fencing material. Both examples take the derivative of a function, set it equal to zero, and solve to find critical values that optimize the objective function. The conclusion emphasizes how derivatives are essential for optimizing complex systems and aiding efficient problem solving across many fields including telecommunications.
The document discusses divide and conquer algorithms. It explains that divide and conquer algorithms work by dividing problems into smaller subproblems, solving the subproblems independently, and then combining the solutions to solve the original problem. An example of finding the minimum and maximum elements in an array using divide and conquer is provided, with pseudocode. Advantages of divide and conquer algorithms include solving difficult problems and often finding efficient solutions.
This document discusses applications of calculus derivatives in telecommunications. It presents two examples of using derivatives to minimize costs and maximize areas. The first example finds the optimal cable cost to minimize total wiring costs. The second determines the maximum area that can be fenced given a length of fencing material and rectangular space. Both examples take derivatives, set them equal to zero, and solve to find critical values that optimize the objective function. The conclusion emphasizes how derivatives provide direct, scientific information needed in fields like telecommunications to optimize complex systems.
This document provides an overview of hierarchical representation with hyperbolic geometry. It introduces hyperbolic space as an alternative to Euclidean space for embedding symbolic and hierarchical data. Key points covered include: (1) the limitations of Euclidean embedding for graph structures, (2) definitions of hyperbolic space and the Poincare disk model, (3) optimization techniques for gradient descent in hyperbolic space including calculating gradients and using retractions, and (4) simple toy experiments demonstrating optimization in hyperbolic space.
Kernal based speaker specific feature extraction and its applications in iTau...TELKOMNIKA JOURNAL
This document summarizes kernel-based speaker recognition techniques for an automatic speaker recognition system (ASR) in iTaukei cross-language speech recognition. It discusses kernel principal component analysis (KPCA), kernel independent component analysis (KICA), and kernel linear discriminant analysis (KLDA) for nonlinear speaker-specific feature extraction to improve ASR classification rates. Evaluation of the ASR system using these techniques on a Japanese language corpus and self-recorded iTaukei corpus showed that KLDA achieved the best performance, with an equal error rate improvement of up to 8.51% compared to KPCA and KICA.
Space-efficient Approximation Scheme for Maximum Matching in Sparse Graphs (cseiitgn)
This document describes a space-efficient approximation scheme for maximum matching in sparse graphs. It begins with an introduction to matching problems and to Baker's algorithm for approximating problems on planar graphs, noting that the distance computation Baker's method relies on is not known to be in Logspace, even for planar graphs. The document then outlines previous work on matching algorithms and their complexity, and states the goal: an approximation scheme for maximum matching that runs in Logspace.
Space-efficient Approximation Scheme for Maximum Matching in Sparse Graphs
1. Introduction
Matching
Our Contribution
Space Efficient Approximation Scheme for
Maximum Matching in Sparse Graphs
Samir Datta Raghav Kulkarni Anish Mukherjee
Chennai Mathematical Institute
NMI Workshop on Complexity Theory, IIT Gandhinagar
November 04, 2016
Samir Datta Raghav Kulkarni Anish Mukherjee Space Efficient Approximation Scheme for Maximum Matching in Sparse Graphs
5. Baker’s Algorithm
Theorem (Baker ’83, Informal)
A class of problems (many of which are NP-Hard in general) can
be approximated arbitrarily close to the optimal value in linear time
for planar graphs.
Example
Includes problems like
maximum independent set
partition into triangles
minimum vertex-cover
minimum dominating set
... and any MSO-definable property
14. Introduction
Matching
Our Contribution
Baker’s Algorithm
Basic Idea
Partition the vertices into breadth-first search levels
Decompose the graph into successive width-k slices by
deleting levels congruent to i mod k
Resulting components have treewidth 3k − 1 [Boadlander]
Solve the problem optimally in each partition in linear time
Union of solutions in all components is within (1 − 1/k) OPT
Samir Datta Raghav Kulkarni Anish Mukherjee Space Efficient Approximation Scheme for Maximum Matching in
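The slicing step can be sketched directly. This is a minimal illustration in Python; the graph encoding and function names are our own, not from the talk:

```python
from collections import deque, defaultdict

def bfs_levels(adj, root):
    """Assign each vertex its breadth-first search level from root."""
    level = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in level:
                level[v] = level[u] + 1
                q.append(v)
    return level

def baker_slices(adj, root, k, i):
    """Delete every BFS level congruent to i mod k; each remaining
    maximal run of consecutive levels forms one bounded-width slice."""
    level = bfs_levels(adj, root)
    slices = defaultdict(set)
    for v in adj:
        if level[v] % k != i:
            # vertices between two deleted levels share one run index
            slices[(level[v] - i - 1) // k].add(v)
    return list(slices.values())

# Example: a path 0-1-2-...-8 rooted at 0; k = 3, delete levels ≡ 0 (mod 3)
adj = {v: [] for v in range(9)}
for v in range(8):
    adj[v].append(v + 1)
    adj[v + 1].append(v)
print(baker_slices(adj, 0, 3, 0))   # → [{1, 2}, {4, 5}, {7, 8}]
```

Each choice of i deletes at most a 1/k fraction of an optimal solution, which is where the (1 − 1/k) guarantee comes from.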
Baker's Algorithm II
But here we are interested in space efficient algorithms,
namely algorithms running in Logspace
EJT gives an algorithm for Courcelle's theorem in Logspace
But for the first part we need to compute distance
Distance is NL-complete in general undirected graphs and in
UL ∩ co-UL for planar graphs,
and these classes are not believed to be in Logspace.
Question
Can we get away without using distance? Not yet
Matching
A matching M ⊆ E is a set of pairwise independent edges
A matching M is called perfect if M covers all vertices of G
A matching M of maximum size is called a maximum matching
Augmenting Paths
In an alternating path the edges alternate between M and E \ M
An alternating path P is augmenting if P begins and ends at
unmatched vertices; then M ⊕ P = (M \ P) ∪ (P \ M) is a
matching with cardinality |M| + 1.
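The augmentation M ⊕ P can be demonstrated in a few lines (a small sketch; the edge and path encodings are our own):

```python
def augment(matching, path_edges):
    """Replace M by the symmetric difference M ⊕ P.  If P is an
    augmenting path, the result is a matching of size |M| + 1."""
    norm = lambda e: tuple(sorted(e))   # so that (u, v) == (v, u)
    M = {norm(e) for e in matching}
    P = {norm(e) for e in path_edges}
    return (M - P) | (P - M)

# Path 0-1-2-3 with M = {(1, 2)}: the path 0-1, 1-2, 2-3 is augmenting,
# since both endpoints 0 and 3 are unmatched
M = {(1, 2)}
P = [(0, 1), (1, 2), (2, 3)]
print(sorted(augment(M, P)))   # → [(0, 1), (2, 3)]
```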
Matching
Edmonds' blossom algorithm for maximum matching was one
of the first examples of a non-trivial polynomial time algorithm
Valiant's #P-hardness for counting perfect matchings gave
surprising insights into counting complexity classes
The RNC bound for maximum matching has yielded powerful
tools, such as the isolating lemma [MVV87]
Bipartite Perfect Matching is in quasi-NC [FGT16]
The best hardness known is NL-hardness [CSV84]
Matching in Planar Graphs
Counting perfect matchings in planar graphs is in NC [Vaz88]
Only the bipartite planar case is known to be in NC for finding
a perfect matching [MN95]
Open Problem
Is the construction version in general planar graphs in NC?
Computing a maximum matching for bipartite planar graphs is
shown to be in NC [Hoang]
Only L-hardness is known for planar graphs [DKLM10].
Time-Space Tradeoff
Removing non-determinism even for planar reachability leads
to either a quasi-polynomial time blow-up or the need for large
space (O(√n)) [INPVW13, AKNW14]
For general graphs it is even worse, with O(n/2^√(log n)) space
and polynomial time [BBRS]
Previous Results
Approximating maximum matching has been considered in both
the time and parallel complexity models
Linear-time [DP14] and NC [HV06] approximation schemes are
the best known complexity bounds here
But work on space efficient approximation seems limited.
Results
Theorem
Given a planar graph and any fixed ε > 0, we can find a (1 − ε)
factor approximation to the maximum matching in Logspace.
This result extends to many other sparse graph classes
Some of our ideas are similar to the classical algorithm of
Hopcroft-Karp for maximum matching in bipartite graphs
But we consider graphs which are not necessarily bipartite
Our algorithm trades off Logspace and non-bipartiteness for
approximation and sparsity
We solve the problem by suitably reducing it to bounded degree graphs.
Results
Theorem
Let G be a graph with degrees bounded by a constant d. Then for
any fixed ε > 0, we can find a (1 − ε) factor approximation to the
maximum matching in Logspace.
The main fact we use here is that a bounded degree graph
always contains a linear size matching
Many planar graph classes, such as 3-connected planar
graphs, are known to contain a large matching
In fact our algorithm works for any recursively sparse graph class
containing a large matching.
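The linear-size-matching fact follows from a greedy argument: in a maximal matching every edge of the graph shares an endpoint with a matched edge, and a matched edge is adjacent to at most 2(d − 1) others, so a maximal matching has at least m/(2d − 1) ≥ n/(2(2d − 1)) edges when no vertex is isolated. A sketch (our own naming, not the paper's construction):

```python
def greedy_maximal_matching(adj):
    """Greedily pick edges whose endpoints are both still unmatched.
    With max degree d and no isolated vertices this yields at least
    n / (2(2d - 1)) edges, i.e. a linear size matching."""
    matched = set()
    M = []
    for u in adj:
        if u in matched:
            continue
        for v in adj[u]:
            if v not in matched:
                M.append((u, v))
                matched.update((u, v))
                break
    return M

# 6-cycle (d = 2): the bound guarantees at least 6/(2*3) = 1 edge;
# greedy actually finds a perfect matching here
adj = {v: [(v - 1) % 6, (v + 1) % 6] for v in range(6)}
M = greedy_maximal_matching(adj)
```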
A Brief Idea
1 Consider short augmenting paths. In a bounded degree graph,
there exist linearly many short augmenting paths
2 Pick a large subset of non-intersecting augmenting paths, i.e.
find a large independent set, in Logspace
3 To convert a planar graph to a bounded degree graph we
delete high degree vertices
4 The number of such vertices is small, though possibly still
linear in the graph size
5 Remove a small number of vertices and edges to transform the
graph into one containing a linear sized matching.
Bounded degree graphs I
We deal with augmenting paths of length at most 2k + 1
Such paths can be found in Logspace by, say, exhaustively
listing all (2k + 1)-tuples of vertices using L-transducers
If |M| differs significantly from |Mopt| then we show that
there are many short augmenting paths
Lemma
If |M| < (1 − 3/k)|Mopt| for some k then there are at least
3|Mopt|/2k augmenting paths consisting of at most 2k + 1 edges.
Form an intersection graph of these short augmenting paths
by making two paths adjacent if they have a vertex in common
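In Logspace the enumeration is done by listing tuples with L-transducers; in an ordinary program the same search over short alternating paths can be written as a bounded depth-first search. A sketch with our own naming (the actual Logspace routine differs):

```python
from itertools import combinations

def short_augmenting_paths(adj, M, max_edges):
    """Enumerate augmenting paths with at most max_edges edges,
    as vertex tuples (each path appears once per free endpoint)."""
    mate = {}
    for u, v in M:
        mate[u], mate[v] = v, u
    paths = []

    def extend(path, next_in_M):
        u = path[-1]
        for v in adj[u]:
            if v in path:
                continue
            if next_in_M:
                if mate.get(u) == v:          # must traverse u's matched edge
                    extend(path + [v], False)
            elif mate.get(u) != v:            # a non-matching edge
                if v not in mate:             # reached a free vertex: done
                    paths.append(tuple(path + [v]))
                elif len(path) + 2 <= max_edges:
                    extend(path + [v], True)

    for s in adj:
        if s not in mate:                     # start at each free vertex
            extend([s], False)
    return paths

def intersection_graph(paths):
    """Two short paths are adjacent iff they share a vertex."""
    H = {p: set() for p in paths}
    for p, q in combinations(paths, 2):
        if set(p) & set(q):
            H[p].add(q)
            H[q].add(p)
    return H

# Path 0-1-2-3 with M = {(1, 2)}: one augmenting path of 3 edges,
# found once from each free endpoint
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
paths = short_augmenting_paths(adj, {(1, 2)}, 3)
```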
Maximum matching in bounded degree graphs II
Lemma
A β-factor approximation to the maximum independent set can be
computed in Logspace.
Colour the paths; the largest colour class works
As the degree is bounded by some D, find at most D disjoint
forests that partition the edge set
This can be done using Reingold's algorithm for connectivity
Colour each forest with 2 colours; this gives a D-bit colour to
every node
This yields a 2^D, i.e. constant, colouring of the graph.
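The colouring idea can be sketched sequentially. Here the forest partition is found greedily with union-find rather than with Reingold's Logspace connectivity algorithm, and all names are our own; the point is only that a proper 2^F-colouring falls out of F forest bipartitions, and the largest colour class is an independent set of size at least n/2^F:

```python
class DSU:
    """Union-find, used to keep each forest acyclic."""
    def __init__(self):
        self.p = {}
    def find(self, x):
        self.p.setdefault(x, x)
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]
            x = self.p[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.p[ra] = rb
        return True

def independent_set_by_forests(vertices, edges):
    """Partition the edges into forests (greedily), 2-colour each forest,
    and return the largest class of the combined bit-vector colouring."""
    forests, dsus = [], []
    for e in edges:
        for f, d in zip(forests, dsus):
            if d.union(*e):           # edge fits in this forest acyclically
                f.append(e)
                break
        else:
            d = DSU()
            d.union(*e)
            forests.append([e])
            dsus.append(d)
    # 2-colour each forest by depth parity; collect one bit per forest
    colour = {v: [] for v in vertices}
    for f in forests:
        fadj, bit = {}, {}
        for u, v in f:
            fadj.setdefault(u, []).append(v)
            fadj.setdefault(v, []).append(u)
        for r in fadj:
            if r in bit:
                continue
            bit[r] = 0
            stack = [r]
            while stack:
                u = stack.pop()
                for w in fadj[u]:
                    if w not in bit:
                        bit[w] = 1 - bit[u]
                        stack.append(w)
        for v in vertices:
            colour[v].append(bit.get(v, 0))
    # every edge lies in some forest, where its endpoints differ in that
    # bit, so equal colour tuples form an independent set
    classes = {}
    for v in vertices:
        classes.setdefault(tuple(colour[v]), []).append(v)
    return max(classes.values(), key=len)

# Path 0-1-2-3: one forest suffices, and a parity class of size 2 is returned
print(independent_set_by_forests([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)]))
```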
Theorem
In a bounded degree graph, for any fixed ε > 0, we can find a
(1 − ε) factor approximation to the maximum matching in L.
The previous lemma yields a large fraction of short paths,
augmentable in parallel
An L-transducer can do the augmentation, and we chain
(1 − 3/k)·2k/β such transducers
At each step we increase the matching size by an additive
term of |Mopt|/(2k/β)
After k rounds the ratio is at least (1 − 3/k) ≥ 1 − ε.
Algorithm 1
1 Fix integer k = 3/ε.
2 Construct the intersection graph of augmenting paths of
length at most 2k + 1 in G.
3 Let this graph be H, with maximum degree
≤ D = (2k + 1)² d^(2k+1)
4 Find at most D disjoint forests that partition the edge set.
5 Colour each forest with 2 colours, giving a D-bit colour to
every node
6 Augment the vertex disjoint augmenting paths in parallel
7 Add the new matching to M
8 Return M
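A sequential analogue of the whole scheme can be sketched as follows: repeatedly augment along paths of at most 2k + 1 edges until none remain, at which point the lemma guarantees a (1 − 3/k)-approximation. This is our own simplified driver, not the chained-L-transducer implementation, and it augments one path at a time rather than an independent set of paths in parallel:

```python
def find_short_aug_path(adj, mate, max_edges):
    """Return one augmenting path with at most max_edges edges, or None."""
    def extend(path, next_in_M):
        u = path[-1]
        for v in adj[u]:
            if v in path:
                continue
            if next_in_M:
                if mate.get(u) == v:
                    r = extend(path + [v], False)
                    if r:
                        return r
            elif mate.get(u) != v:
                if v not in mate:          # free vertex: augmenting path
                    return path + [v]
                if len(path) + 2 <= max_edges:
                    r = extend(path + [v], True)
                    if r:
                        return r
        return None
    for s in adj:
        if s not in mate:
            r = extend([s], False)
            if r:
                return r
    return None

def approx_matching(adj, k):
    """Augment along paths of at most 2k+1 edges until none remain;
    by the lemma the result is a (1 - 3/k)-approximate matching."""
    mate = {}
    while True:
        p = find_short_aug_path(adj, mate, 2 * k + 1)
        if p is None:
            break
        for i in range(0, len(p) - 1, 2):   # flip the non-matching edges
            u, v = p[i], p[i + 1]
            mate[u], mate[v] = v, u
    return {(u, v) for u, v in mate.items() if u < v}

# Path 0-1-2-3: two rounds of length-1 augmentations give a perfect matching
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(approx_matching(adj, 2))
```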
Planar maximum matching
Definition
A graph is tame if every pair of vertices (a, b) that are endpoints
of even length isolated paths supports at most two such paths.
This can be ensured by deleting a set of edges E′ from G
Lemma
The size of the maximum matching in G \ E′ is the same as in G.
Main Lemma
A tame planar graph has a linear sized maximum matching.
Planar maximum matching: tame graphs
One of the following is true:
The total length of long isolated paths in G is large enough
We can transform the graph, by case analysis, into a minimum
degree 3 planar graph
Lemma
A graph in which the total length of isolated paths is N has a
matching of size at least N/4.
Lemma
A min degree 3 planar graph has a matching of size at least n/140.
Planar maximum matching III
Theorem
There is an LSAS for maximum matching in planar graphs.
Proof
Tame the graph G to G′, preserving the maximum matching
size. Suppose there are at least αn matching edges in G′
Delete vertices of degree more than d from G′, which removes
at most 6n/d matching edges
So we have a matching of size (α − 6/d)n remaining
Taking d = 12/(2α − ε) reduces the problem to finding a (1 − ε/2)
factor approximation for bounded degree graphs.
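The degree-truncation step is simple to state in code. In a planar graph the degree sum is less than 6n, so fewer than 6n/d vertices have degree above d, and deleting a vertex destroys at most one edge of any fixed matching, which gives the 6n/d bound above. A sketch (encoding is ours):

```python
def truncate_degrees(adj, d):
    """Delete all vertices of degree greater than d.  In a planar graph
    fewer than 6n/d vertices are deleted (degree sum < 6n), and each
    deletion destroys at most one edge of any fixed matching."""
    high = {v for v in adj if len(adj[v]) > d}
    return {v: [w for w in adj[v] if w not in high]
            for v in adj if v not in high}

# Star K_{1,5} plus a pendant edge: the centre 0 has degree 5 and is
# deleted for d = 3; the edge 5-6 survives
adj = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0],
       4: [0], 5: [0, 6], 6: [5]}
print(truncate_degrees(adj, 3))
```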
Conclusion
We showed that maximum matching can be approximated to any arbitrary constant factor in bounded degree graphs.
For planar graphs we require only the following properties:
Sparsity: the average degree is bounded by 6.
Bipartite sparsity: even lower, namely 4.
Min-degree: the minimum degree is at least 3.
So the scheme extends to many other classes of sparse graphs: bounded genus graphs, k-page graphs, 1-planar graphs, k-apex graphs, etc.; in fact, to any recursively sparse graph class containing a linear size matching.
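The planar sparsity bounds above are standard consequences of Euler's formula; a short derivation (a textbook fact, not specific to this work):

```latex
% Simple connected planar graph, n >= 3 vertices, m edges, f faces.
% Euler's formula n - m + f = 2, and every face has >= 3 edges, so 3f <= 2m:
m \le 3n - 6
\quad\Longrightarrow\quad
\bar{d} = \frac{2m}{n} \le \frac{6n - 12}{n} < 6.
% Bipartite planar: every face has >= 4 edges, so 4f <= 2m, giving
m \le 2n - 4
\quad\Longrightarrow\quad
\bar{d} < 4.
```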
Open Problems
Baker’s Theorem in Logspace?
Devise an LSAS for maximum matching in general graphs, or at least in arbitrary sparse graphs.
Lower bounds in the context of approximation? Currently we do not know of any non-trivial hardness result, even TC0-hardness, for approximation to any factor.
Packing complex patterns?
H-Matching: pack disjoint copies of a fixed graph H.
Maximum planar H-matching is NP-complete for any H containing at least three nodes.
Approximation and hardness results are known for some restricted cases.
We give an LSAS for graphs with a small balanced separator, for packing any fixed graph H when degrees are bounded; otherwise, for packing some special classes of patterns.
As before, the idea is to delete high degree vertices and tame the graph by removing some forbidden patterns.
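To make the H-matching objective concrete, here is a minimal greedy sketch for the case H = K3 (triangle packing) in Python. This is only an illustration of the problem, not the logspace scheme from the talk; the graph representation and function name are our own.

```python
from itertools import combinations

def greedy_triangle_packing(vertices, edges):
    """Greedily pack vertex-disjoint triangles (copies of H = K3).

    A simple baseline for H-matching: scan vertex triples in order and
    take a triangle whenever all three of its vertices are still unused.
    """
    edge_set = {frozenset(e) for e in edges}
    used = set()       # vertices already covered by some chosen triangle
    packing = []
    for u, v, w in combinations(sorted(vertices), 3):
        if {u, v, w} & used:
            continue
        if (frozenset((u, v)) in edge_set
                and frozenset((v, w)) in edge_set
                and frozenset((u, w)) in edge_set):
            packing.append((u, v, w))
            used.update((u, v, w))
    return packing

# Two disjoint triangles joined by a bridge edge.
V = range(6)
E = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(greedy_triangle_packing(V, E))  # -> [(0, 1, 2), (3, 4, 5)]
```

Since the result is a maximal packing, and each chosen triangle can intersect at most three triangles of an optimum packing, this greedy baseline already achieves a 1/3-approximation; the point of the talk is that much better ratios are achievable space-efficiently in the bounded degree and separator settings.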