Iterative improvement is an algorithm design technique for solving optimization problems. It starts with a feasible solution and repeatedly makes small changes that improve the objective function value, stopping when no further improvement is possible. The simplex method, the Ford-Fulkerson algorithm, and local search heuristics all use this technique. The maximum flow problem, for example, can be solved with the iterative Ford-Fulkerson algorithm, which repeatedly finds augmenting paths in a flow network and uses them to increase the flow from source to sink until no augmenting path remains.
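The augmenting-path idea above can be sketched as follows. This is a minimal Edmonds-Karp variant of Ford-Fulkerson (BFS chooses each augmenting path); the function name `max_flow` and the adjacency-matrix representation are illustrative choices, not from the source documents.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly find a shortest augmenting path by BFS
    and push its bottleneck capacity until no augmenting path remains."""
    n = len(capacity)
    residual = [row[:] for row in capacity]   # residual[u][v]: remaining capacity
    flow = 0
    while True:
        # BFS for an augmenting path from source to sink
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:                # no augmenting path left: done
            return flow
        # bottleneck along the path, then update residual capacities
        bottleneck = float("inf")
        v = sink
        while v != source:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = sink
        while v != source:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck      # allow flow to be "undone" later
            v = u
        flow += bottleneck
```

Each iteration strictly increases the flow, and termination follows because no augmenting path exists exactly when the flow is maximum (max-flow min-cut theorem).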
This document discusses lower bounds and limitations of algorithms. It begins by defining lower bounds and providing examples of problems where tight lower bounds have been established, such as sorting requiring Ω(n log n) comparisons. It then discusses methods for establishing lower bounds, including trivial bounds, decision trees, adversary arguments, and problem reduction. The document explores different classes of problems based on complexity, such as P, NP, and NP-complete problems. It concludes by examining approaches for tackling difficult combinatorial problems that are NP-hard, such as using exact algorithms, approximation algorithms, and local search heuristics.
The document provides an introduction to the analysis of algorithms. It discusses key concepts like the definition of an algorithm, properties of algorithms, common computational problems, and basic issues related to algorithms. It also covers algorithm design strategies, fundamental data structures, and the fundamentals of analyzing algorithm efficiency. Examples of algorithms for computing the greatest common divisor and checking for prime numbers are provided to illustrate algorithm design and analysis.
This document discusses dynamic programming and greedy algorithms. It begins by defining dynamic programming as a technique for solving problems with overlapping subproblems. Examples provided include computing the Fibonacci numbers and binomial coefficients. Greedy algorithms are introduced as constructing solutions piece by piece through locally optimal choices. Applications discussed are the change-making problem, minimum spanning trees using Prim's and Kruskal's algorithms, and single-source shortest paths. Floyd's algorithm for all pairs shortest paths and optimal binary search trees are also summarized.
The document discusses the divide and conquer algorithm design paradigm. It begins by defining divide and conquer as recursively breaking down a problem into smaller sub-problems, solving the sub-problems, and then combining the solutions to solve the original problem. Some examples of problems that can be solved using divide and conquer include binary search, quicksort, merge sort, and the fast Fourier transform algorithm. The document then discusses control abstraction, efficiency analysis, and uses divide and conquer to provide algorithms for large integer multiplication and merge sort. It concludes by defining the convex hull problem and providing an example input and output.
This document discusses dynamic programming and algorithms for solving all-pairs shortest-path problems. It begins by explaining dynamic programming as an optimization technique that works bottom-up by solving subproblems once and storing their solutions, rather than recomputing them. It then presents Floyd's algorithm for finding shortest paths between all pairs of nodes in a graph. The algorithm iterates through nodes, updating the shortest path lengths between all pairs that include that node by exploring paths through it. Finally, it discusses solving multistage graph problems using forward and backward methods that work through the graph stages in different orders.
This document discusses hashing techniques for indexing and retrieving elements in a data structure. It begins by defining hashing and its components like hash functions, collisions, and collision handling. It then describes two common collision handling techniques - separate chaining and open addressing. Separate chaining uses linked lists to handle collisions while open addressing resolves collisions by probing to find alternate empty slots using techniques like linear probing and quadratic probing. The document provides examples and explanations of how these hashing techniques work.
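Separate chaining, as described above, can be sketched with a small table class. The class name `ChainedHashTable` and its `put`/`get` interface are illustrative assumptions, not an API from the source document.

```python
class ChainedHashTable:
    """Hash table with separate chaining: each slot holds a list
    ("chain") of (key, value) pairs whose keys hash to that slot."""

    def __init__(self, num_slots=8):
        self.slots = [[] for _ in range(num_slots)]

    def _chain(self, key):
        # hash function maps a key to one of the slots
        return self.slots[hash(key) % len(self.slots)]

    def put(self, key, value):
        chain = self._chain(key)
        for i, (k, _) in enumerate(chain):
            if k == key:                      # key already present: overwrite
                chain[i] = (key, value)
                return
        chain.append((key, value))            # a collision just extends the chain

    def get(self, key, default=None):
        for k, v in self._chain(key):
            if k == key:
                return v
        return default
```

With a deliberately tiny table, several keys share a slot and the chains absorb the collisions; open addressing would instead probe for a different empty slot.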
This document discusses dynamic programming and algorithms for solving all-pairs shortest-path problems. It begins by defining dynamic programming as avoiding recalculating solutions by storing results in a table. It then describes Floyd's algorithm for finding shortest paths between all pairs of nodes in a graph. The algorithm iterates through nodes, calculating shortest paths that pass through each intermediate node. It takes O(n³) time for a graph with n nodes. Finally, it discusses the multistage graph problem and provides forward and backward algorithms to find the minimum cost path from source to destination in a multistage graph in O(V+E) time, where V and E are the numbers of vertices and edges.
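Floyd's algorithm as summarized above is just a triple loop over intermediate, source, and destination nodes. A minimal sketch (the function name `floyd` and the adjacency-matrix input are illustrative assumptions):

```python
INF = float("inf")

def floyd(dist):
    """Floyd's all-pairs shortest paths. dist[i][j] is the direct edge
    weight (INF if no edge). After the pass for k, d[i][j] is the
    shortest i -> j path using only intermediates 0..k: O(n^3) total."""
    n = len(dist)
    d = [row[:] for row in dist]              # work on a copy
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # try routing the i -> j path through node k
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

The table is reused in place across the n passes, which is exactly the "store results in a table instead of recomputing" idea the summary attributes to dynamic programming.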
Divide and Conquer Algorithms - D&C is a distinct algorithm design technique in computer science, wherein a problem is solved by repeatedly invoking the algorithm on smaller instances of the same problem. Binary search, merge sort, and Euclid's algorithm can all be formulated as divide-and-conquer algorithms; Strassen's algorithm and the nearest-neighbor algorithm are two further examples.
1) The document describes the divide-and-conquer algorithm design paradigm. It splits problems into smaller subproblems, solves the subproblems recursively, and then combines the solutions to solve the original problem.
2) Binary search is provided as an example algorithm that uses divide-and-conquer. It divides the search space in half at each step to quickly determine if an element is present.
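The halving step described in point 2 looks like this in practice. A minimal iterative sketch (the function name `binary_search` is an illustrative choice):

```python
def binary_search(a, target):
    """Return an index of target in sorted list a, or -1 if absent.
    Each comparison halves the search space: O(log n) steps."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1          # target can only be in the right half
        else:
            hi = mid - 1          # target can only be in the left half
    return -1
```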
3) Finding the maximum and minimum elements in an array is another problem solved using divide-and-conquer. It recursively finds the max and min of halves of the array and combines the results.
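The max-min recursion in point 3 can be sketched as below; the function name `max_min` and the `(maximum, minimum)` tuple convention are illustrative assumptions:

```python
def max_min(a, lo, hi):
    """Return (max, min) of a[lo..hi] by divide and conquer: split the
    range in half, solve each half, combine with two comparisons."""
    if lo == hi:                              # one element: both max and min
        return a[lo], a[lo]
    if hi == lo + 1:                          # two elements: one comparison
        return (a[hi], a[lo]) if a[lo] < a[hi] else (a[lo], a[hi])
    mid = (lo + hi) // 2
    lmax, lmin = max_min(a, lo, mid)
    rmax, rmin = max_min(a, mid + 1, hi)
    return max(lmax, rmax), min(lmin, rmin)   # combine the halves
```

The combine step costs only two comparisons per call, which is why this scheme uses roughly 3n/2 comparisons instead of the 2n of two independent scans.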
This document provides an overview of the CS303 Computer Algorithms course taught by Dr. Yanxia Jia. It discusses the importance of algorithms, provides examples of classic algorithm problems like sorting and searching, and summarizes common algorithm design techniques and data structures used, including arrays, linked lists, stacks, queues, heaps, graphs and trees.
This document discusses the merge sort algorithm for sorting a sequence of numbers. It begins by introducing the divide and conquer approach, which merge sort uses. It then provides an example of how merge sort works, dividing the sequence into halves, sorting the halves recursively, and then merging the sorted halves together. The document proceeds to provide pseudocode for the merge sort and merge algorithms. It analyzes the running time of merge sort using recursion trees, determining that it runs in O(n log n) time. Finally, it covers techniques for solving recurrence relations that arise in algorithms like divide and conquer approaches.
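The divide-sort-merge scheme the summary describes can be sketched compactly; the function name `merge_sort` and the copy-returning style are illustrative choices, not the document's pseudocode:

```python
def merge_sort(a):
    """Sort a list: split in half, sort each half recursively,
    then merge the two sorted halves. Runs in O(n log n) time."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # merge: repeatedly take the smaller front element of the two halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append whichever half remains
```

The recursion tree has about log n levels and each level does O(n) merging work, which is the O(n log n) bound the document derives.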
The document discusses algorithm analysis and design. It begins with an introduction to analyzing algorithms, including average-case analysis and solving recurrences. It then gives both informal and formal definitions of algorithms. Key aspects of algorithm study and specification methods like pseudocode are outlined. Selection sort and the Tower of Hanoi problem are presented as examples and analyzed for time and space complexity. Average-case analysis is discussed under the assumption that all inputs are equally likely.
Algorithm Design and Complexity - Course 3 (Traian Rebedea)
The document provides an overview of recursive algorithms and complexity analysis. It discusses recursive algorithms, divide and conquer design technique, and several examples of recursive algorithms including Towers of Hanoi, Merge Sort, and Quick Sort. For recursive algorithms, it explains how to analyze their running time using recurrence relations. It then covers four methods for solving recurrence relations: iteration, recursion trees, substitution method, and master theorem. The substitution method and master theorem are described as the most rigorous mathematical approaches.
The document discusses various algorithms that can be solved using backtracking. It begins by defining backtracking as a general algorithm design technique for problems that involve searching for solutions satisfying constraints. It then provides examples of problems that can be solved using backtracking, including the 8 queens problem, sum of subsets, graph coloring, and finding Hamiltonian cycles in a graph. For each problem, it outlines the key steps and provides pseudocode for the backtracking algorithm.
Algorithm Design and Complexity - Course 5 (Traian Rebedea)
This document provides an overview of greedy algorithms and their use in solving optimization problems. It discusses key aspects of greedy algorithms including making locally optimal choices at each step, optimal substructures, and the greedy choice property. Two problems addressed in detail are the activity selection problem and building Huffman trees. The activity selection problem can be solved optimally using a greedy approach by always selecting the activity with the earliest finish time. Huffman trees provide data compression by assigning codes to characters based on frequency, with more common characters having shorter codes, and can be constructed greedily by repeatedly combining the two subtrees with lowest weight.
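The earliest-finish-time rule for activity selection described above is short enough to show directly. A minimal sketch (the function name `select_activities` and the `(start, finish)` tuple format are illustrative assumptions):

```python
def select_activities(activities):
    """Greedy activity selection: sort by finish time, then always take
    the next activity whose start is no earlier than the last finish."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:              # compatible with what we chose
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```

Choosing the earliest finish time leaves the most room for the remaining activities, which is the greedy choice property the document relies on for optimality.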
Dynamic Programming is an algorithmic paradigm that solves a given complex problem by breaking it into subproblems and stores the results of subproblems to avoid computing the same results again.
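The "store results of subproblems" idea is easiest to see on Fibonacci numbers. A minimal top-down sketch (the function name `fib` and the dictionary cache are illustrative choices):

```python
def fib(n, memo=None):
    """Top-down dynamic programming: each fib(k) is computed once and
    cached, turning the exponential naive recursion into O(n)."""
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:                         # solve each subproblem once
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```

Without the cache, the recursion recomputes the same subproblems exponentially often; with it, each value is computed exactly once.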
This document provides an overview of brute force and divide-and-conquer algorithms. It discusses various brute force algorithms, such as computing aⁿ, string matching, the closest-pair problem, convex hull problems, and exhaustive search algorithms like the traveling salesman and knapsack problems, and analyzes their time efficiency. The document then discusses the divide-and-conquer approach with examples like merge sort, quicksort, and matrix multiplication, and provides pseudocode and analysis for mergesort.
This document discusses greedy algorithms and provides examples of their use. It begins by defining characteristics of greedy algorithms, such as making locally optimal choices that reduce a problem into smaller subproblems. The document then covers designing greedy algorithms, proving their optimality, and analyzing examples like the fractional knapsack problem and minimum spanning tree algorithms. Specific greedy algorithms covered in more depth include Kruskal's and Prim's minimum spanning tree algorithms and Huffman coding.
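Kruskal's algorithm mentioned above is a compact greedy example: sort the edges, then keep each edge that connects two different components. A minimal sketch using union-find (the function name `kruskal` and the `(weight, u, v)` edge format are illustrative assumptions):

```python
def kruskal(n, edges):
    """Kruskal's MST on n nodes: scan edges in increasing weight order
    and keep each edge that joins two different components.
    edges is a list of (weight, u, v) tuples; returns (total, mst_edges)."""
    parent = list(range(n))

    def find(x):                              # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                          # edge joins two components: keep it
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return total, mst
```

The union-find structure makes the cycle check nearly constant time, so the running time is dominated by sorting the edges.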
Skiena algorithm 2007 lecture 16: introduction to dynamic programming (zukun)
This document summarizes a lecture on dynamic programming. It begins by introducing dynamic programming as a powerful tool for solving optimization problems on ordered items like strings. It then contrasts greedy algorithms, which make locally optimal choices, with dynamic programming, which systematically searches all possibilities while storing results. The document provides examples of computing Fibonacci numbers and binomial coefficients using dynamic programming by storing partial results rather than recomputing them. It outlines three key steps to applying dynamic programming: formulating a recurrence, bounding subproblems, and specifying an evaluation order.
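The binomial-coefficient example above follows the three steps the lecture names: the recurrence is Pascal's rule C(n, k) = C(n-1, k-1) + C(n-1, k), the subproblems are bounded by n and k, and the evaluation order is row by row. A minimal bottom-up sketch (the function name `binomial` and the single-row table are illustrative choices):

```python
def binomial(n, k):
    """Bottom-up DP for C(n, k) via Pascal's rule, storing only one
    table row and updating it n times."""
    row = [1] + [0] * k
    for _ in range(n):
        # sweep right-to-left so row[j-1] still holds the previous row
        for j in range(k, 0, -1):
            row[j] += row[j - 1]
    return row[k]
```

Storing partial results this way replaces the exponential recursion with O(nk) additions, the same trade the lecture makes for Fibonacci numbers.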
The document discusses backtracking and branch and bound algorithms. Backtracking incrementally builds candidates and abandons them (backtracks) when they cannot lead to a valid solution. Branch and bound systematically enumerates solutions and discards branches that cannot produce a better solution than the best found so far based on upper bounds. Examples provided are the N-Queens problem solved with backtracking and the knapsack problem solved with branch and bound. Pseudocode is given for both algorithms.
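The N-Queens backtracking described above can be sketched as follows; the function name `solve_n_queens` and the column-list solution encoding are illustrative assumptions, not the document's pseudocode:

```python
def solve_n_queens(n):
    """Backtracking: place one queen per row; abandon (backtrack) any
    partial placement that attacks an earlier queen."""
    solutions = []

    def safe(cols, col):
        # a new queen in the next row at column col must not share a
        # column or diagonal with any queen already placed
        row = len(cols)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(cols):
        if len(cols) == n:                    # all rows filled: a solution
            solutions.append(cols[:])
            return
        for col in range(n):
            if safe(cols, col):
                cols.append(col)              # tentatively place a queen
                place(cols)
                cols.pop()                    # backtrack and try the next column

    place([])
    return solutions
```

The `safe` check is the constraint test that lets the search prune whole subtrees instead of enumerating all nⁿ placements.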
Dynamic programming is used to solve optimization problems by combining solutions to overlapping subproblems. It works by breaking down problems into subproblems, solving each subproblem only once, and storing the solutions in a table to avoid recomputing them. There are two key properties for applying dynamic programming: overlapping subproblems and optimal substructure. Some applications of dynamic programming include finding shortest paths, matrix chain multiplication, the traveling salesperson problem, and knapsack problems.
The document discusses the framework for analyzing algorithm efficiency: measuring how running time and space requirements grow as the input size increases. The focus is on determining the order of growth of the number of basic operations, using asymptotic notations O(), Ω(), and Θ() to classify algorithms by their worst-case, best-case, and average-case time complexities.
The document discusses various optimization problems that can be solved using the greedy method. It begins by explaining that the greedy method involves making locally optimal choices at each step that combine to produce a globally optimal solution. Several examples are then provided to illustrate problems that can and cannot be solved with the greedy method. These include shortest path problems, minimum spanning trees, activity-on-edge networks, and Huffman coding. Specific greedy algorithms like Kruskal's algorithm, Prim's algorithm, and Dijkstra's algorithm are also covered. The document concludes by noting that the greedy method can only be applied to solve a small number of optimization problems.
The branch and bound method searches a tree model of the solution space for discrete optimization problems. It uses bounding functions to prune subtrees that cannot contain optimal solutions, avoiding evaluating all possible solutions. The method generates the tree in a depth-first manner, and selects the next node to expand (the E-node) based on cost estimates. Common strategies are FIFO, LIFO, and least cost search. The traveling salesman problem can be solved using branch and bound by formulating the solution space as a tree and applying row and column reduction techniques to the cost matrix at each node to identify prunable branches.
The document discusses the divide-and-conquer algorithm design paradigm. It explains that a problem is divided into smaller subproblems, the subproblems are solved independently, and then the solutions are combined. Recurrence equations can be used to analyze the running time of divide-and-conquer algorithms. The document provides examples of solving recurrences using methods like the recursion tree method and the master theorem.
The document discusses iterative improvement algorithms and provides examples such as the simplex method for solving linear programming problems. It explains the standard form of a linear programming problem and gives an outline of the simplex method, which generates a sequence of feasible solutions with improving objective values until an optimal solution is found. Some notes on limitations of the simplex method and improvements like the ellipsoid and interior-point methods are also mentioned.
Undecidable Problems - COPING WITH THE LIMITATIONS OF ALGORITHM POWER (muthukrishnavinayaga)
This document discusses algorithms and their analysis. It begins by defining key properties of algorithms like their lower, upper, and tight bounds. It then discusses different techniques for determining algorithm lower bounds such as trivial, information theoretical, adversary, and reduction arguments. Decision trees are presented as a model for representing algorithms that use comparisons. Lower bounds proofs are given for sorting and searching algorithms. The document also covers polynomial time versus non-polynomial time problems, as well as NP-complete problems. Specific algorithms are analyzed like knapsack, traveling salesman, and approximation algorithms.
The document provides an introduction to the simplex method for solving linear programming problems. It begins by explaining the limitations of graphical methods for problems with more than two decision variables or constraints. The simplex method, developed by George Dantzig, overcomes these limitations through an algebraic approach. The document then outlines the standard form and characteristics of a linear programming problem before explaining how to transform problems into standard form. Finally, it provides a high-level overview of the simplex algorithm, including setting up the initial tableau, pivoting to improve the objective function, and determining the entering and exiting variables at each step.
Undecidable Problems and Approximation Algorithms (Muthu Vinayagam)
The document discusses algorithm limitations and approximation algorithms. It begins by explaining that some problems have no algorithms or cannot be solved in polynomial time. It then discusses different algorithm bounds and how to derive lower bounds through techniques like decision trees. The document also covers NP-complete problems, approximation algorithms for problems like traveling salesman, and techniques like branch and bound. It provides examples of approximation algorithms that provide near-optimal solutions when an optimal solution is impossible or inefficient to find.
Global Optimization with Descending Region AlgorithmLoc Nguyen
Global optimization is necessary in some cases when we want to achieve the best solution or we require a new solution which is better the old one. However global optimization is a hazard problem. Gradient descent method is a well-known technique to find out local optimizer whereas approximation solution approach aims to simplify how to solve the global optimization problem. In order to find out the global optimizer in the most practical way, I propose a so-called descending region (DR) algorithm which is combination of gradient descent method and approximation solution approach. The ideology of DR algorithm is that given a known local minimizer, the better minimizer is searched only in a so-called descending region under such local minimizer. Descending region is begun by a so-called descending point which is the main subject of DR algorithm. Descending point, in turn, is solution of intersection equation (A). Finally, I prove and provide a simpler linear equation system (B) which is derived from (A). So (B) is the most important result of this research because (A) is solved by solving (B) many enough times. In other words, DR algorithm is refined many times so as to produce such (B) for searching for the global optimizer. I propose a so-called simulated Newton – Raphson (SNR) algorithm which is a simulation of Newton – Raphson method to solve (B). The starting point is very important for SNR algorithm to converge. Therefore, I also propose a so-called RTP algorithm, which is refined and probabilistic process, in order to partition solution space and generate random testing points, which aims to estimate the starting point of SNR algorithm. In general, I combine three algorithms such as DR, SNR, and RTP to solve the hazard problem of global optimization. 
Although the approach is division and conquest methodology in which global optimization is split into local optimization, solving equation, and partitioning, the solution is synthesis in which DR is backbone to connect itself with SNR and RTP.
1. Regression is a supervised learning technique used to predict continuous valued outputs. It can be used to model relationships between variables to fit a linear equation to the training data.
2. Gradient descent is an iterative algorithm that is used to find the optimal parameters for a regression model by minimizing the cost function. It works by taking steps in the negative gradient direction of the cost function to converge on the local minimum.
3. The learning rate determines the step size in gradient descent. A small learning rate leads to slow convergence while a large rate may oscillate and fail to converge. Gradient descent automatically takes smaller steps as it approaches the local minimum.
This document provides an overview of optimization techniques. It defines optimization as identifying variable values that minimize or maximize an objective function subject to constraints. It then discusses various applications of optimization in finance, engineering, and data modeling. The document outlines different types of optimization problems and algorithms. It provides examples of unconstrained optimization algorithms like gradient descent, conjugate gradient, Newton's method, and BFGS. It also discusses the Nelder-Mead simplex algorithm for constrained optimization and compares the performance of these algorithms on sample problems.
This document provides an overview of linear programming and the simplex method for solving linear programming problems. It begins with defining the basic linear programming problem as having an objective function and a set of constraints. It then describes how to formulate a sample linear programming problem as a set of equations in standard form. The document explains how to find the feasible region and optimal solution graphically. It introduces the concept of basic feasible solutions and shows how the simplex method works by iteratively moving from one basic feasible solution to an adjacent better solution until the optimal solution is found. Key steps like choosing entering and leaving variables are demonstrated.
The document discusses the simplex method, an algebraic method for solving linear programming problems with more than two decision variables or constraints. It was developed by George Dantzig in 1947. The simplex method uses slack variables to represent unused resources and identifies basic and non-basic variables to iteratively find an optimal solution. It begins with an initial feasible solution and calculates values at each step to determine which variable should leave the basis and improve the objective function.
The document summarizes the simplex algorithm for solving linear programs. It describes how the algorithm works by starting with an initial basic feasible solution and iteratively arriving at solutions with greater objective values through pivoting. Pivoting exchanges a non-basic variable for a basic variable to obtain an equivalent linear program with a higher objective value. The algorithm terminates when all coefficients in the objective function are negative, indicating an optimal solution has been found.
The document provides an overview of the structure and content covered on the AP Calculus AB exam, including:
- The exam is 3 hours 15 minutes long and divided into multiple choice and free response sections testing limits, derivatives, integrals, and applications of calculus.
- Content topics covered include limits of functions, continuity, derivatives and their applications (related rates, max/min problems), integrals, and differential equations.
- Formulas and strategies are provided for evaluating limits, finding derivatives using various rules, applying derivatives to sketch curves, solve optimization problems, and solve motion problems using related rates.
Support vector machine in data mining.pdfRubhithaA
1. Support vector machines (SVMs) are a type of machine learning algorithm that learn nonlinear decision boundaries using kernel functions to transform data into higher dimensions.
2. SVMs find the optimal separating hyperplane that maximizes the margin between positive and negative examples. This hyperplane is determined by the support vectors, which are the data points closest to the decision boundary.
3. The SVM optimization problem involves minimizing a loss function subject to constraints. This can be solved using Lagrangian duality, which transforms the problem into an equivalent maximization problem over dual variables instead of the original weights and biases.
The document discusses greedy algorithms and provides examples. It begins with an overview of greedy algorithms and their properties. It then provides a sample problem (traveling salesman) and shows how a greedy approach can provide an iterative solution. The document notes advantages and disadvantages of greedy algorithms and provides additional examples, including optimal binary tree merging and the knapsack problem. It concludes with describing algorithms for optimal solutions to these problems.
The document defines linear programming and its key components. It explains that linear programming is a mathematical optimization technique used to allocate limited resources to achieve the best outcome, such as maximizing profit or minimizing costs. The document outlines the basic steps of the simplex method for solving linear programming problems and provides an example to illustrate determining the maximum value of a linear function given a set of constraints. It also discusses other applications of linear programming in fields like engineering, manufacturing, energy, and transportation for optimization.
The Nelder-Mead search algorithm is an optimization technique used to find the minimum or maximum of an objective function by iteratively modifying a set of parameter values. It begins with an initial set of points forming a simplex in parameter space and evaluates the objective function at each point, replacing the highest point with a new point to form a new simplex. This process repeats until the optimal value is found or a stopping criteria is met. The algorithm is simple to implement and can find local optima for problems with multiple parameters, but depends on the initial starting point and may get stuck in local optima rather than finding the global solution.
The document describes transportation problems and how to formulate and solve them using linear programming. It provides an example of a transportation problem involving supplying electricity from three power plants to four cities. Key steps include: (1) defining decision variables for amounts shipped, (2) writing the objective function to minimize total shipping costs, (3) specifying supply and demand constraints, and (4) using methods like the Northwest Corner method to find an initial basic feasible solution. The transportation simplex method is then used to iteratively improve the solution by entering and exiting variables from the basis.
Linear programming is a technique for choosing the best alternative from a set of feasible alternatives by expressing the objective function and constraints as linear mathematical functions. It requires a clear objective, quantitative terms, identifiable constraints, and feasible alternative choices. The objective functions and constraints define a feasible region, and the simplex method is an iterative algorithm that generates corner point solutions to find the optimal solution at an extreme point of the feasible region. It works by maintaining a basic feasible solution and performing elementary row operations at each iteration to improve the objective function value until reaching the optimal solution.
Lecture 5 6_7 - divide and conquer and method of solving recurrencesjayavignesh86
The document discusses divide and conquer algorithms and solving recurrences. It covers asymptotic notations, examples of divide and conquer including finding the largest number in a list, recurrence relations, and methods for solving recurrences including iteration, substitution, and recursion trees. The iteration method involves unfolding the recurrence into a summation. The recursion tree method visually depicts recursive calls in a tree to help solve the recurrence. Divide and conquer algorithms break problems into smaller subproblems, solve the subproblems recursively, and combine the solutions.
Support Vector Machines is the the the the the the the the thesanjaibalajeessn
This document provides an overview of support vector machines (SVMs) and how they can be used for both linear and non-linear classification problems. It explains that SVMs find the optimal separating hyperplane that maximizes the margin between classes. For non-linearly separable data, the document introduces kernel functions, which map the data into a higher-dimensional feature space to allow for nonlinear decision boundaries through the "kernel trick" of computing inner products without explicitly performing the mapping.
This module discusses polynomial functions of degree greater than two. The key points are:
1. The graph of a third-degree polynomial has both a minimum and maximum point, while higher degree polynomials have one less turning point than their degree.
2. Methods like finding upper and lower bounds and Descartes' Rule of Signs can help determine properties of the graph like zeros.
3. Odd degree polynomials increase on the far left and right if the leading term is positive, and decrease if negative. Even degree polynomials increase on the far left and decrease on the far right, or vice versa.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
The growing significance of portable systems to limit power consumption in ultra-large-scale-integration chips of very high density, has recently led to rapid and inventive progresses in low-power design. The most effective technique is adiabatic logic circuit design in energy-efficient hardware. This paper presents two adiabatic approaches for the design of low power circuits, modified positive feedback adiabatic logic (modified PFAL) and the other is direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design. By improving the performance of basic gates, one can improvise the whole system performance. In this paper proposed circuit design of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches and their results are analyzed for powerdissipation, delay, power-delay-product and rise time and compared with the other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with DC-DB PFAL technique outperform with the percentage improvement of 65% for NOR gate and 7% for NAND gate and 34% for XNOR gate over the modified PFAL techniques at 10 MHz respectively.
6th International Conference on Machine Learning & Applications (CMLA 2024)ClaraZara1
6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of on Machine Learning & Applications.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
2. Iterative Improvement
Algorithm design technique for solving optimization
problems
• Start with a feasible solution
• Repeat the following step until no improvement can be
found:
– change the current feasible solution to a feasible solution
with a better value of the objective function
• Return the last feasible solution as optimal
Note: Typically, a change in a current solution is “small”
(local search)
Major difficulty: Local optimum vs. global optimum
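As an illustration of the scheme (my own sketch, not from the slides), the Python code below maximizes 3x + 5y over the integer points of a small feasible region. With a too-small neighborhood the search stops at a local optimum; with a larger neighborhood it reaches the global optimum, which is exactly the local-vs-global difficulty noted above.

```python
def iterative_improvement(start, neighbors, value):
    """Repeatedly move to the best improving feasible neighbor;
    stop when no neighbor has a better objective value."""
    current = start
    while True:
        better = [q for q in neighbors(current) if value(q) > value(current)]
        if not better:
            return current
        current = max(better, key=value)

# toy problem: maximize 3x + 5y subject to x + y <= 4, x + 3y <= 6, x, y >= 0,
# restricted to integer points so that a "small" change is a unit step
def feasible(p):
    x, y = p
    return x >= 0 and y >= 0 and x + y <= 4 and x + 3 * y <= 6

def value(p):
    return 3 * p[0] + 5 * p[1]

def axis_moves(p):        # small neighborhood: one unit along an axis
    x, y = p
    return [q for q in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
            if feasible(q)]

def all_moves(p):         # larger neighborhood: diagonal steps as well
    x, y = p
    cands = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if (dx, dy) != (0, 0)]
    return [q for q in cands if feasible(q)]

local = iterative_improvement((0, 0), axis_moves, value)  # stops at (0, 2), value 10
best = iterative_improvement((0, 0), all_moves, value)    # reaches (3, 1), value 14
```

The axis-step search gets stuck at a local optimum of value 10, while the richer neighborhood finds the true optimum 14: the outcome depends on how "neighboring solution" is defined.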
3. Important Examples
• simplex method
• Ford-Fulkerson algorithm for maximum flow
problem
• maximum matching of graph vertices
• Gale-Shapley algorithm for the stable marriage
problem
• local search heuristics
4. Linear Programming
A linear programming (LP) problem is to optimize a linear
function of several variables subject to linear constraints:
maximize (or minimize) c1 x1 + ...+ cn xn
subject to ai1 x1 + ...+ ain xn ≤ (or ≥ or =) bi , i = 1,...,m
x1 ≥ 0, ... , xn ≥ 0
The function z = c1 x1 + ...+ cn xn is called the objective
function; the constraints x1 ≥ 0, ... , xn ≥ 0 are called
nonnegativity constraints
5. Example
maximize 3x + 5y
subject to x + y ≤ 4
x + 3y ≤ 6
x ≥ 0, y ≥ 0
[Figure: the feasible region, bounded by the lines x + y = 4 and
x + 3y = 6 and the coordinate axes, is the quadrilateral with
vertices (0, 0), (4, 0), (3, 1), and (0, 2).]
Feasible region is the set of points defined by the constraints
6. Geometric solution
maximize 3x + 5y
subject to x + y ≤ 4
x + 3y ≤ 6
x ≥ 0, y ≥ 0
Optimal solution: x = 3, y = 1
[Figure: level lines 3x + 5y = 10, 3x + 5y = 14, and 3x + 5y = 20
of the objective function drawn over the feasible region;
3x + 5y = 14 is the highest level line that still touches the
region, doing so at the extreme point (3, 1).]
Extreme Point Theorem: Any LP problem with a nonempty bounded feasible
region has an optimal solution; moreover, an optimal solution can always be
found at an extreme point of the problem's feasible region.
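For a region this small, the theorem licenses a brute-force check: evaluate the objective at each extreme point and take the best (a minimal sketch, with the vertices read off the figure).

```python
# extreme points of the feasible region defined by
# x + y <= 4, x + 3y <= 6, x >= 0, y >= 0
vertices = [(0, 0), (4, 0), (3, 1), (0, 2)]

def objective(p):
    return 3 * p[0] + 5 * p[1]

# by the Extreme Point Theorem, the optimum is attained at one of these
best = max(vertices, key=objective)
# best == (3, 1) with objective value 14, matching the geometric solution
```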
7. Three possible outcomes in solving an LP problem
• has a finite optimal solution, which may not be unique
• unbounded: the objective function of a maximization
(minimization) LP problem is unbounded from above (below)
on its feasible region
• infeasible: there are no points satisfying all the
constraints, i.e. the constraints are contradictory
8. The Simplex Method
• The classic method for solving LP problems;
one of the most important algorithms ever
invented
• Invented by George Dantzig in 1947
• Based on the iterative improvement idea:
Generates a sequence of adjacent points of the problem’s
feasible region with improving values of the objective
function until no further improvement is possible
9. Standard form of LP problem
• must be a maximization problem
• all constraints (except the nonnegativity constraints)
must be in the form of linear equations
• all the variables must be required to be nonnegative
Thus, the general linear programming problem in standard
form with m constraints and n unknowns (n ≥ m) is
maximize c1 x1 + ...+ cn xn
subject to ai1 x1 + ...+ ain xn = bi , i = 1,...,m,
x1 ≥ 0, ... , xn ≥ 0
Every LP problem can be represented in such form
10. Example
maximize 3x + 5y maximize 3x + 5y + 0u + 0v
subject to x + y ≤ 4 subject to x + y + u = 4
x + 3y ≤ 6 x + 3y + v = 6
x≥0, y≥0 x≥0, y≥0, u≥0, v≥0
Variables u and v, transforming inequality constraints into
equality constraints, are called slack variables
11. Basic feasible solutions
A basic solution to a system of m linear equations in n unknowns (n ≥ m) is
obtained by setting n – m variables to 0 and solving the resulting system to get
the values of the other m variables. The variables set to 0 are called nonbasic;
the variables obtained by solving the system are called basic.
A basic solution is called feasible if all its (basic) variables are nonnegative.
Example x + y + u = 4
x + 3y + v = 6
(0, 0, 4, 6) is basic feasible solution
(x, y are nonbasic; u, v are basic)
There is a 1-1 correspondence between extreme points of LP’s feasible region
and its basic feasible solutions.
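The correspondence can be checked mechanically; a short Python sketch (my own illustration) enumerates all six choices of two basic variables for the system above, solving each 2-by-2 system by Cramer's rule and keeping the feasible results.

```python
from itertools import combinations

A = [[1, 1, 1, 0],    # x +  y + u     = 4
     [1, 3, 0, 1]]    # x + 3y     + v = 6
b = [4, 6]

feasible = []
for i, j in combinations(range(4), 2):       # choose the 2 basic variables
    det = A[0][i] * A[1][j] - A[0][j] * A[1][i]
    if det == 0:
        continue                             # no unique basic solution
    vi = (b[0] * A[1][j] - b[1] * A[0][j]) / det   # Cramer's rule
    vj = (A[0][i] * b[1] - A[1][i] * b[0]) / det
    sol = [0.0, 0.0, 0.0, 0.0]               # nonbasic variables stay at 0
    sol[i], sol[j] = vi, vj
    if vi >= 0 and vj >= 0:                  # basic AND feasible
        feasible.append(tuple(sol))

# four basic feasible solutions survive, and their (x, y) parts are exactly
# the extreme points (0,0), (4,0), (0,2), (3,1) of the feasible region
```

The two infeasible basic solutions (with a negative u or v) correspond to intersection points of the constraint lines that lie outside the feasible region.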
12. Simplex Tableau
maximize z = 3x + 5y + 0u + 0v
subject to x + y + u = 4
x + 3y + v = 6
x≥0, y≥0, u≥0, v≥0

       x    y    u    v
  u    1    1    1    0 |  4
  v    1    3    0    1 |  6
      -3   -5    0    0 |  0

The leftmost column lists the basic variables, the bottom row is the
objective row (with negated coefficients), the rightmost column gives
the basic feasible solution (0, 0, 4, 6), and the bottom-right entry
is the value of z at (0, 0, 4, 6).
13. Outline of the Simplex Method
Step 0 [Initialization] Present a given LP problem in standard form and set up the initial tableau.
Step 1 [Optimality test] If all entries in the objective row are nonnegative — stop: the tableau represents an optimal solution.
Step 2 [Find entering variable] Select (the most) negative entry in the objective row. Mark its column to indicate the entering variable and the pivot column.
Step 3 [Find departing variable] For each positive entry in the pivot column, calculate the θ-ratio by dividing that row's entry in the rightmost column by its entry in the pivot column. (If there are no positive entries in the pivot column — stop: the problem is unbounded.) Find the row with the smallest θ-ratio, mark this row to indicate the departing variable and the pivot row.
Step 4 [Form the next tableau] Divide all the entries in the pivot row by its entry in the pivot column. Subtract from each of the other rows, including the objective row, the new pivot row multiplied by the entry in the pivot column of the row in question. Replace the label of the pivot row by the variable's name of the pivot column and go back to Step 1.
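The four steps can be sketched in Python (an illustration, not the slides' code), using the tableau layout of the previous slide: one row per equality constraint, the objective row (with negated coefficients) last, and the right-hand sides in the last column.

```python
def simplex(tableau, basis):
    """Steps 1-4 above. `basis` holds the index of each row's basic variable."""
    m = len(tableau) - 1                      # number of constraint rows
    while True:
        obj = tableau[-1]
        pc = min(range(len(obj) - 1), key=lambda j: obj[j])  # Step 2: pivot column
        if obj[pc] >= 0:                      # Step 1: optimality test
            return tableau, basis
        rows = [r for r in range(m) if tableau[r][pc] > 0]
        if not rows:                          # Step 3: no positive entries
            raise ValueError("the problem is unbounded")
        pr = min(rows, key=lambda r: tableau[r][-1] / tableau[r][pc])  # min θ-ratio
        piv = tableau[pr][pc]                 # Step 4: pivot on (pr, pc)
        tableau[pr] = [e / piv for e in tableau[pr]]
        for r in range(m + 1):
            if r != pr:
                f = tableau[r][pc]
                tableau[r] = [a - f * p for a, p in zip(tableau[r], tableau[pr])]
        basis[pr] = pc                        # entering variable replaces departing one

# the running example: maximize z = 3x + 5y + 0u + 0v
t = [[1, 1, 1, 0, 4],     # x +  y + u     = 4
     [1, 3, 0, 1, 6],     # x + 3y     + v = 6
     [-3, -5, 0, 0, 0]]   # objective row
t, basis = simplex(t, [2, 3])          # u and v are basic initially
solution = [0.0] * 4
for r, var in enumerate(basis):
    solution[var] = t[r][-1]
# solution is (approximately) [3, 1, 0, 0] with value 14: x = 3, y = 1, z = 14
```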
14. Example of Simplex Method Application
maximize z = 3x + 5y + 0u + 0v
subject to x + y + u = 4
x + 3y + v = 6
x≥0, y≥0, u≥0, v≥0
The method passes through the following basic feasible solutions:
basic feasible solution (0, 0, 4, 6), z = 0
basic feasible solution (0, 2, 2, 0), z = 10
basic feasible solution (3, 1, 0, 0), z = 14
15. Notes on the Simplex Method
• Finding an initial basic feasible solution may pose a problem
• Theoretical possibility of cycling
• Typical number of iterations is between m and 3m, where m is
the number of equality constraints in the standard form
• Worst-case efficiency is exponential
• More recent interior-point algorithms such as Karmarkar’s
algorithm (1984) have polynomial worst-case efficiency and
have performed competitively with the simplex method in
empirical tests
16. Maximum Flow Problem
Problem of maximizing the flow of a material through
a transportation network (e.g., pipeline system,
communications or transportation networks)
Formally represented by a connected weighted
digraph with n vertices numbered from 1 to n with
the following properties:
– contains exactly one vertex with no entering edges, called the
source (numbered 1)
– contains exactly one vertex with no leaving edges, called the sink
(numbered n)
has positive integer weight uij on each directed edge (i,j), called the
edge capacity, indicating the upper bound on the amount of the
material that can be sent from i to j through this edge
17. Example of Flow Network
[Figure: a flow network with six vertices; vertex 1 is the source and
vertex 6 the sink. Edge capacities: 2 on (1,2), 3 on (1,4), 5 on (2,3),
3 on (2,5), 1 on (4,3), 2 on (3,6), 4 on (5,6).]
18. Definition of a Flow
A flow is an assignment of real numbers xij to edges (i,j) of
a given network that satisfy the following:
• flow-conservation requirements: the total amount of material
entering an intermediate vertex must be equal to the total
amount of the material leaving the vertex:
∑ j: (j,i) є E xji = ∑ j: (i,j) є E xij for i = 2,3,…, n-1
• capacity constraints:
0 ≤ xij ≤ uij for every edge (i,j) є E
19. Flow value and Maximum Flow Problem
Since no material can be lost or added by going through
intermediate vertices of the network, the total amount of the
material leaving the source must end up at the sink:
∑ j: (1,j) є E x1j = ∑ j: (j,n) є E xjn
The value of the flow is defined as the total outflow
from the source (= the total inflow into the sink).
The maximum flow problem is to find a flow of the
largest value (maximum flow) for a given network.
20. Maximum-Flow Problem as LP problem
Maximize v = ∑ j: (1,j) є E x1j
subject to
∑ j: (j,i) є E xji - ∑ j: (i,j) є E xij = 0 for i = 2, 3,…,n-1
0 ≤ xij ≤ uij for every edge (i,j) є E
21. Augmenting Path (Ford-Fulkerson) Method
• Start with the zero flow (xij = 0 for every edge)
• On each iteration, try to find a flow-augmenting path
from source to sink, which is a path along which some
additional flow can be sent
• If a flow-augmenting path is found, adjust the flow
along the edges of this path to get a flow of increased
value and try again
• If no flow-augmenting path is found, the current flow is
maximum
24. Example 1 (maximum flow)
[Figure: the example network with a maximum flow, shown as
flow/capacity on each edge: 2/2 on (1,2), 1/3 on (1,4), 1/5 on (2,3),
1/3 on (2,5), 1/1 on (4,3), 2/2 on (3,6), 1/4 on (5,6).]
max flow value = 3
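The flow claimed above can be verified mechanically against the definition of a flow; a minimal sketch, assuming the edge list reconstructed from the example slides.

```python
# the flow of Example 1 as edge -> (flow, capacity); the edge list is
# reconstructed from the example slides, so treat it as an assumption
x = {(1, 2): (2, 2), (1, 4): (1, 3), (2, 3): (1, 5), (2, 5): (1, 3),
     (4, 3): (1, 1), (3, 6): (2, 2), (5, 6): (1, 4)}

def is_feasible_flow(x, s, t):
    # capacity constraints: 0 <= x_ij <= u_ij on every edge
    if any(not 0 <= f <= u for f, u in x.values()):
        return False
    # flow conservation at every intermediate vertex
    for v in {v for e in x for v in e} - {s, t}:
        inflow = sum(f for (i, j), (f, _) in x.items() if j == v)
        outflow = sum(f for (i, j), (f, _) in x.items() if i == v)
        if inflow != outflow:
            return False
    return True

def flow_value(x, s):
    # total outflow from the source
    return sum(f for (i, j), (f, _) in x.items() if i == s)

# is_feasible_flow(x, 1, 6) holds and flow_value(x, 1) == 3
```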
25. Finding a flow-augmenting path
To find a flow-augmenting path for a flow x, consider
paths from source to sink in the underlying undirected
graph in which any two consecutive vertices i,j are
either:
– connected by a directed edge (i to j) with some positive unused
capacity rij = uij – xij
• known as forward edge ( → )
OR
– connected by a directed edge (j to i) with positive flow xji
• known as backward edge ( ← )
If a flow-augmenting path is found, the current flow
can be increased by r units by increasing xij by r on
each forward edge and decreasing xji by r on each
backward edge, where
r = min {rij on all forward edges, xji on all backward
edges}
26. Finding a flow-augmenting path (cont.)
• Assuming the edge capacities are integers, r is a
positive integer
• On each iteration, the flow value increases by at least 1
• Maximum value is bounded by the sum of the
capacities of the edges leaving the source; hence the
augmenting-path method has to stop after a finite
number of iterations
• The final flow is always maximum; its value doesn't
depend on the sequence of augmenting paths used
27. Performance degeneration of the method
• The augmenting-path method doesn't prescribe a specific
way for generating flow-augmenting paths
• Selecting a bad sequence of augmenting paths could
impact the method's efficiency
Example 2
[Figure: a network with source 1 and sink 4; edges (1,2), (1,3),
(2,4), (3,4) each have capacity U, a large positive integer, and
edge (2,3) has capacity 1; all flows start at 0. Alternating the
augmenting paths 1→2→3→4 and 1→3→2→4 increases the flow value by
only 1 per iteration, so 2U iterations are needed.]
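A quick simulation of the bad choice (my own sketch, assuming the standard form of this example: edges (1,2), (1,3), (2,4), (3,4) of capacity U and edge (2,3) of capacity 1). Alternating the two paths through the capacity-1 middle edge repeatedly saturates and cancels it, so reaching the maximum flow of 2U takes 2U unit augmentations, while shortest-path selection would need only two.

```python
U = 50                        # the "large positive integer" of Example 2
n = 4
cap = {(1, 2): U, (1, 3): U, (2, 3): 1, (2, 4): U, (3, 4): U}

# residual capacities; pushing b units along (i, j) frees b units on (j, i)
r = [[0] * (n + 1) for _ in range(n + 1)]
for (i, j), c in cap.items():
    r[i][j] = c

def augment(path):
    b = min(r[i][j] for i, j in zip(path, path[1:]))   # bottleneck
    for i, j in zip(path, path[1:]):
        r[i][j] -= b
        r[j][i] += b
    return b

iterations = total = 0
while True:
    # worst case: alternate the two paths through the capacity-1 edge;
    # the second path uses (3, 2) in the residual network (a backward edge)
    b = augment([1, 2, 3, 4] if iterations % 2 == 0 else [1, 3, 2, 4])
    if b == 0:
        break
    total += b
    iterations += 1
# total == 2 * U, reached only after 2 * U augmentations of value 1 each
```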
29. Shortest-Augmenting-Path Algorithm
Generate augmenting path with the least number
of edges by BFS as follows.
Starting at the source, perform BFS traversal by
marking new (unlabeled) vertices with two labels:
– first label – indicates the amount of additional flow that can
be brought from the source to the vertex being labeled
– second label – indicates the vertex from which the vertex
being labeled was reached, with “+” or “–” added to the
second label to indicate whether the vertex was reached via a
forward or backward edge
30. Vertex labeling
• The source is always labeled with ∞,-
• All other vertices are labeled as follows:
– If unlabeled vertex j is connected to the front vertex i of the
traversal queue by a directed edge from i to j with positive
unused capacity rij = uij –xij (forward edge), vertex j is labeled
with lj,i+, where lj = min{li, rij}
– If unlabeled vertex j is connected to the front vertex i of the
traversal queue by a directed edge from j to i with positive flow
xji (backward edge), vertex j is labeled lj,i-, where lj = min{li, xji}
31. Vertex labeling (cont.)
• If the sink ends up being labeled, the current flow
can be augmented by the amount indicated by the
sink’s first label
• The augmentation of the current flow is performed
along the augmenting path traced by following the
vertex second labels from sink to source; the
current flow quantities are increased on the
forward edges and decreased on the backward
edges of this path
• If the sink remains unlabeled after the traversal
queue becomes empty, the algorithm returns the
current flow as maximum and stops
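The labeling rules above are, in effect, a BFS of the residual network. Below is a minimal Python sketch of the shortest-augmenting-path (Edmonds-Karp) method; the capacity-dict representation and function name are illustrative choices, not part of the slides' pseudocode.

```python
from collections import deque

def shortest_augmenting_path_maxflow(cap, source, sink):
    """Shortest-augmenting-path (Edmonds-Karp) maximum flow.

    cap -- dict mapping edge (i, j) to the capacity of i->j
    Returns the value of a maximum flow from source to sink.
    """
    flow = {e: 0 for e in cap}
    total = 0
    while True:
        # BFS labeling: label[v] = (l_v, parent, '+'/'-')
        label = {source: (float('inf'), None, None)}
        q = deque([source])
        while q and sink not in label:
            i = q.popleft()
            li = label[i][0]
            for (a, b), u in cap.items():
                if a == i and b not in label and u - flow[(a, b)] > 0:
                    # forward edge with positive unused capacity r = u - x
                    label[b] = (min(li, u - flow[(a, b)]), i, '+')
                    q.append(b)
                elif b == i and a not in label and flow[(a, b)] > 0:
                    # backward edge with positive flow
                    label[a] = (min(li, flow[(a, b)]), i, '-')
                    q.append(a)
        if sink not in label:
            return total        # no augmenting path: current flow is maximum
        # augment by the sink's first label, tracing second labels backwards
        delta = label[sink][0]
        v = sink
        while v != source:
            _, parent, direction = label[v]
            if direction == '+':
                flow[(parent, v)] += delta   # increase on forward edges
            else:
                flow[(v, parent)] -= delta   # decrease on backward edges
            v = parent
        total += delta

# The example network used on the following slides (source 1, sink 6)
cap = {(1, 2): 2, (1, 4): 3, (2, 3): 5, (2, 5): 3,
       (4, 3): 1, (3, 6): 2, (5, 6): 4}
print(shortest_augmenting_path_maxflow(cap, 1, 6))  # 3
```

Rescanning the whole capacity dict on every BFS step keeps the sketch short; adjacency lists would give the O(nm²) bound quoted later.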
32. Example: Shortest-Augmenting-Path Algorithm
Queue: 1 2 4 3 5 6
[Figure: the network (source 1, sink 6) with edge capacities
1→2: 2, 1→4: 3, 2→3: 5, 2→5: 3, 4→3: 1, 3→6: 2, 5→6: 4 and
zero initial flow; BFS labels the vertices 1: ∞,−; 2: 2,1+;
4: 3,1+; 3: 2,2+; 5: 2,2+; 6: 2,3+]
Augment the flow by 2 (the sink’s first label) along the path 1→2→3→6
33. Example (cont.)
Queue: 1 4 3 2 5 6
[Figure: flows after the first augmentation: 1→2: 2/2, 2→3: 2/5,
3→6: 2/2, all other edges at 0; BFS labels: 1: ∞,−; 4: 3,1+;
3: 1,4+; 2: 1,3−; 5: 1,2+; 6: 1,5+]
Augment the flow by 1 (the sink’s first label) along the path 1→4→3←2→5→6
34. Example (cont.)
Queue: 1 4
[Figure: flows after the second augmentation: 1→2: 2/2, 1→4: 1/3,
2→3: 1/5, 2→5: 1/3, 4→3: 1/1, 3→6: 2/2, 5→6: 1/4; only
vertices 1 (∞,−) and 4 (2,1+) get labeled]
No augmenting path exists (the sink is unlabeled), so the current flow is maximum
35. Definition of a Cut
Let X be a set of vertices in a network that includes its source
but does not include its sink, and let X̄, the complement of X,
be the rest of the vertices, including the sink. The cut
induced by this partition of the vertices is the set of all the
edges with a tail in X and a head in X̄.
The capacity of a cut is defined as the sum of the capacities of
the edges that compose the cut.
• We’ll denote a cut and its capacity by C(X,X̄) and c(X,X̄)
• Note that if all the edges of a cut were deleted from the
network, there would be no directed path from source to
sink
• Minimum cut is a cut of the smallest capacity in a given
network
36. Examples of network cuts
If X = {1} and X̄ = {2,3,4,5,6}: C(X,X̄) = {(1,2), (1,4)}, c(X,X̄) = 5
If X = {1,2,3,4,5} and X̄ = {6}: C(X,X̄) = {(3,6), (5,6)}, c(X,X̄) = 6
If X = {1,2,4} and X̄ = {3,5,6}: C(X,X̄) = {(2,3), (2,5), (4,3)}, c(X,X̄) = 9
[Figure: the example network (source 1, sink 6) with edge capacities
1→2: 2, 1→4: 3, 2→3: 5, 2→5: 3, 4→3: 1, 3→6: 2, 5→6: 4]
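The cut capacities above can be checked mechanically. A small sketch, assuming the edge capacities inferred from the slide's cut values:

```python
def cut_capacity(cap, X):
    """Capacity of the cut C(X, X-bar): the sum of the capacities of
    all edges with a tail in X and a head outside X."""
    return sum(u for (i, j), u in cap.items() if i in X and j not in X)

# Edge capacities of the example network (source 1, sink 6)
cap = {(1, 2): 2, (1, 4): 3, (2, 3): 5, (2, 5): 3,
       (4, 3): 1, (3, 6): 2, (5, 6): 4}

print(cut_capacity(cap, {1}))              # 5
print(cut_capacity(cap, {1, 2, 3, 4, 5}))  # 6
print(cut_capacity(cap, {1, 2, 4}))        # 9
```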
37. Max-Flow Min-Cut Theorem
• The value of maximum flow in a network is
equal to the capacity of its minimum cut
• The shortest augmenting path algorithm yields
both a maximum flow and a minimum cut:
– maximum flow is the final flow produced by the algorithm
– minimum cut is formed by all the edges from the labeled
vertices to unlabeled vertices on the last iteration of the
algorithm
– all the edges from the labeled to unlabeled vertices are full,
i.e., their flow amounts are equal to the edge capacities,
while all the edges from the unlabeled to labeled vertices, if
any, have zero flow amounts on them
39. Time Efficiency
• The number of augmenting paths needed by the
shortest-augmenting-path algorithm never exceeds
nm/2, where n and m are the number of vertices and
edges, respectively
• Since the time required to find shortest augmenting
path by breadth-first search is in O(n+m)=O(m) for
networks represented by their adjacency lists, the time
efficiency of the shortest-augmenting-path algorithm is
in O(nm²) for this representation
• More efficient algorithms have been found that can
run in close to O(nm) time, but these algorithms don’t
fall into the iterative-improvement paradigm
40. Bipartite Graphs
Bipartite graph: a graph whose vertices can be partitioned
into two disjoint sets V and U, not necessarily of the
same size, so that every edge connects a vertex in V to a
vertex in U
A graph is bipartite if and only if it does not have a cycle of
an odd length
[Figure: a bipartite graph with V = {1, 2, 3, 4, 5} and U = {6, 7, 8, 9, 10}]
41. Bipartite Graphs (cont.)
A bipartite graph is 2-colorable: the vertices can be
colored in two colors so that every edge has its
vertices colored differently
[Figure: the same bipartite graph with the V-vertices in one color
and the U-vertices in the other]
42. Matching in a Graph
A matching in a graph is a subset of its edges with the
property that no two edges share a vertex
[Figure: the bipartite graph with V = {1, 2, 3, 4, 5} and U = {6, 7, 8, 9, 10}]
A matching in this graph: M = {(4,8), (5,9)}
A maximum (or maximum cardinality) matching is a matching with the
largest number of edges
• always exists
• not always unique
43. Free Vertices and Maximum
Matching
[Figure: the bipartite graph with matching M = {(4,8), (5,9)}:
vertices 4, 5, 8, and 9 are matched; vertices 1, 2, 3, 6, 7, and 10 are free]
For a given matching M, a vertex is called free (or unmatched) if it is not an endpoint of
any edge in M; otherwise, a vertex is said to be matched
• If every vertex is matched, then M is a maximum matching
• If there are free (unmatched) vertices, then M may still be improvable
• We can immediately increase a matching by adding an edge connecting two
free vertices (e.g., (1,6) above)
44. Augmenting Paths and Augmentation
An augmenting path for a matching M is a path from a
free vertex in V to a free vertex in U whose edges
alternate between edges not in M and edges in M
• The length of an augmenting path is always odd
• Adding to M the odd numbered path edges and deleting from it the even
numbered path edges increases the matching size by 1 (augmentation)
• A one-edge path between two free vertices is a special case of an augmenting path
Augmentation along the path 2, 6, 1, 7
[Figure: before and after the augmentation: edge (1,6) leaves the
matching; edges (2,6) and (1,7) enter it]
45. Augmenting Paths (another example)
Augmentation along the path 3, 8, 4, 9, 5, 10
[Figure: before and after the augmentation: edges (4,8) and (5,9)
leave the matching; edges (3,8), (4,9), and (5,10) enter it]
• The resulting matching is maximum (a perfect matching)
• Theorem A matching M is maximum if and only if there exists
no augmenting path with respect to M
46. Augmenting Path Method (template)
• Start with some initial matching
– e.g., the empty set
• Find an augmenting path and augment the
current matching along that path
– e.g., using a breadth-first-search-like method
• When no augmenting path can be found,
terminate and return the last matching, which
is maximum
47. BFS-based Augmenting Path Algorithm
• Initialize queue Q with all free vertices in one of the
sets (say V)
• While Q is not empty, delete front vertex w and label
every unlabeled vertex u adjacent to w as follows:
Case 1 (w is in V)
If u is free, augment the matching along the path ending at u by
moving backwards until a free vertex in V is reached. After that,
erase all labels and reinitialize Q with all the vertices in V that are still
free
If u is matched (not with w), label u with w and enqueue u
Case 2 (w is in U) Label its matching mate v with w and enqueue v
• After Q becomes empty, return the last matching,
which is maximum
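The two cases above can be transcribed into code. A sketch following the slide's BFS template; the vertex sets, adjacency dict, and function name are illustrative:

```python
from collections import deque

def max_bipartite_matching(V, adj):
    """BFS-based augmenting-path algorithm for maximum bipartite matching.
    adj maps each vertex of V to its list of neighbors in U.
    Returns a dict giving each matched vertex's mate (both directions)."""
    V = set(V)
    mate = {}                                  # matched partner on both sides
    label = {}                                 # label[v] = vertex v was reached from
    q = deque(v for v in V if v not in mate)   # start with the free vertices of V
    while q:
        w = q.popleft()
        if w in V:                             # Case 1: w is in V
            for u in adj[w]:
                if u in label:
                    continue
                if u not in mate:              # u is free: augment along the path
                    x, v = u, w
                    while True:                # flip edges back to a free V-vertex
                        old = mate.get(v)      # v's former mate in U (None if free)
                        mate[v], mate[x] = x, v
                        if old is None:
                            break
                        x, v = old, label[old]
                    label = {}                 # erase labels, reinitialize the queue
                    q = deque(z for z in V if z not in mate)
                    break
                elif mate[u] != w:             # u is matched, but not with w
                    label[u] = w
                    q.append(u)
        else:                                  # Case 2: w is in U
            v = mate[w]                        # label w's matching mate
            if v not in label:
                label[v] = w
                q.append(v)
    return mate

# The slides' bipartite graph: finds a perfect matching of size 5
m = max_bipartite_matching({1, 2, 3, 4, 5},
                           {1: [6, 7], 2: [6], 3: [6, 8], 4: [8, 9], 5: [9, 10]})
```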
48. Example (revisited)
Queue: 1 2 3
[Figure: the bipartite graph with initial matching M = {(4,8), (5,9)};
deleting vertex 1 from the queue labels the free vertex 6 with 1,
and the matching is augmented from 6 by adding edge (1,6)]
Each vertex is labeled with the vertex it was reached from; queue
deletions are indicated by arrows in the original figure. The new
matching obtained by the augmentation is shown on the next slide.
49. Example (cont.)
Queue: 2 3 6 8 1 4
[Figure: with M = {(1,6), (4,8), (5,9)}, the search starts from the
free vertices 2 and 3; vertex 6 is labeled with 2, 8 with 3, 1 with 6,
and 4 with 8. The free vertex 7 is reached from 1, and the matching
is augmented from 7 along the path 2, 6, 1, 7]
50. Example (cont.)
Queue: 3 6 8 2 4 9
[Figure: with M = {(2,6), (1,7), (4,8), (5,9)}, the search starts from
the free vertex 3; vertices 6 and 8 are labeled with 3, 2 with 6,
4 with 8, and 9 with 4. The free vertex 10 is reached, and the matching
is augmented from 10 along the path 3, 8, 4, 9, 5, 10]
51. Example: maximum matching found
• This matching is maximum since there are no
remaining free vertices in V (the queue is empty)
• Note that this matching differs from the maximum
matching found earlier
maximum matching
[Figure: the maximum matching M = {(1,7), (2,6), (3,8), (4,9), (5,10)}]
53. Notes on Maximum Matching Algorithm
• Each iteration (except the last) matches two free vertices
(one each from V and U). Therefore, the number of
iterations cannot exceed n/2 + 1, where n is the number
of vertices in the graph. The time spent on each iteration
is in O(n+m), where m is the number of edges in the
graph. Hence, the time efficiency is in O(n(n+m))
• This can be improved to O(√n (n+m)) by combining
multiple iterations to maximize the number of edges added
to matching M in each search (the Hopcroft–Karp algorithm)
• Finding a maximum matching in an arbitrary graph is much
more difficult, but the problem was solved in 1965 by Jack
Edmonds
54. Conversion to Max-Flow Problem
• Add a source and a sink, direct edges (with unit
capacity) from the source to the vertices of V and
from the vertices of U to the sink
• Direct all edges from V to U with unit capacity
[Figure: the resulting flow network: a source s with unit-capacity
edges to every vertex of V = {1,…,5}; each original edge directed
from V to U with capacity 1; and unit-capacity edges from every
vertex of U = {6,…,10} to a sink t]
55. Stable Marriage Problem
There is a set Y = {m1,…,mn} of n men and a set X =
{w1,…,wn} of n women. Each man has a ranking list
of the women, and each woman has a ranking list of
the men (with no ties in these lists).
A marriage matching M is a set of n pairs (mi, wj).
A pair (m, w) is said to be a blocking pair for matching
M if man m and woman w are not matched in M but
prefer each other to their mates in M.
A marriage matching M is called stable if there is no
blocking pair for it; otherwise, it’s called unstable.
The stable marriage problem is to find a stable
marriage matching for men’s and women’s given
preferences.
56. Instance of the Stable Marriage
Problem
An instance of the stable marriage problem can be specified either by two sets of
preference lists or by a ranking matrix, as in the example below.
men’s preferences women’s preferences
1st 2nd 3rd 1st 2nd 3rd
Bob: Lea Ann Sue Ann: Jim Tom Bob
Jim: Lea Sue Ann Lea: Tom Bob Jim
Tom: Sue Lea Ann Sue: Jim Tom Bob
ranking matrix
Ann Lea Sue
Bob 2,3 1,2 3,3
Jim 3,1 1,3 2,1
Tom 3,2 2,1 1,2
{(Bob, Ann) (Jim, Lea) (Tom, Sue)} is unstable
{(Bob, Ann) (Jim, Sue) (Tom, Lea)} is stable
57. Stable Marriage Algorithm (Gale-Shapley)
Step 0 Start with all the men and women being free
Step 1 While there are free men, arbitrarily select one
of them and do the following:
Proposal The selected free man m proposes to
w, the next woman on his preference list
Response If w is free, she accepts the proposal
to be matched with m. If she is not free, she
compares m with her current mate. If she
prefers m to him, she accepts m’s proposal,
making her former mate free; otherwise, she
simply rejects m’s proposal, leaving m free
Step 2 Return the set of n matched pairs
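The three steps translate directly into code. A sketch of the Gale-Shapley procedure; the queue of free men and the rank tables are implementation choices, not part of the slide's pseudocode:

```python
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """Gale-Shapley stable marriage: men propose in preference order.
    men_prefs[m]   -- list of women, most preferred first
    women_prefs[w] -- list of men, most preferred first
    Returns a dict mapping each man to his matched woman."""
    # rank[w][m] = position of man m on woman w's list (lower = preferred)
    rank = {w: {m: r for r, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}  # next woman on each man's list
    wife, husband = {}, {}
    free = deque(men_prefs)                  # Step 0: all men start free
    while free:                              # Step 1
        m = free.popleft()
        w = men_prefs[m][next_choice[m]]     # Proposal to the next woman
        next_choice[m] += 1
        if w not in husband:                 # Response: w is free, she accepts
            wife[m], husband[w] = w, m
        elif rank[w][m] < rank[w][husband[w]]:
            free.append(husband[w])          # w prefers m; former mate is free
            del wife[husband[w]]
            wife[m], husband[w] = w, m
        else:
            free.append(m)                   # w rejects m; m stays free
    return wife                              # Step 2: the n matched pairs

# The slides' instance: yields the stable matching
# {'Bob': 'Ann', 'Jim': 'Sue', 'Tom': 'Lea'}
men = {'Bob': ['Lea', 'Ann', 'Sue'],
       'Jim': ['Lea', 'Sue', 'Ann'],
       'Tom': ['Sue', 'Lea', 'Ann']}
women = {'Ann': ['Jim', 'Tom', 'Bob'],
         'Lea': ['Tom', 'Bob', 'Jim'],
         'Sue': ['Jim', 'Tom', 'Bob']}
print(gale_shapley(men, women))
```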
58. Example
Ann Lea Sue
Bob 2,3 1,2 3,3
Jim 3,1 1,3 2,1
Tom 3,2 2,1 1,2
Free men:
Bob, Jim, Tom
Bob proposed to Lea
Lea accepted
Ann Lea Sue
Bob 2,3 1,2 3,3
Jim 3,1 1,3 2,1
Tom 3,2 2,1 1,2
Jim proposed to Lea
Lea rejected
Free men:
Jim, Tom
59. Example (cont.)
Ann Lea Sue
Bob 2,3 1,2 3,3
Jim 3,1 1,3 2,1
Tom 3,2 2,1 1,2
Free men:
Jim, Tom
Jim proposed to Sue
Sue accepted
Ann Lea Sue
Bob 2,3 1,2 3,3
Jim 3,1 1,3 2,1
Tom 3,2 2,1 1,2
Tom proposed to Sue
Sue rejected
Free men:
Tom
60. Example (cont.)
Ann Lea Sue
Bob 2,3 1,2 3,3
Jim 3,1 1,3 2,1
Tom 3,2 2,1 1,2
Free men:
Tom
Tom proposed to Lea
Lea replaced Bob with
Tom
Ann Lea Sue
Bob 2,3 1,2 3,3
Jim 3,1 1,3 2,1
Tom 3,2 2,1 1,2
Bob proposed to Ann
Ann accepted
Free men:
Bob
61. Analysis of the Gale-Shapley
Algorithm
• The algorithm terminates after no more than n² iterations
with a stable marriage as output
• The stable matching produced by the algorithm is always
man-optimal: each man gets the highest-ranked woman
possible for him under any stable marriage. One can obtain
the woman-optimal matching by making the women propose
to the men
• A man (woman) optimal matching is unique for a given set
of participant preferences
• The stable marriage problem has practical applications
such as matching medical-school graduates with hospitals
for residency training