This document discusses brute force and divide-and-conquer algorithms. It covers various brute force algorithms, such as computing a^n, string matching, the closest-pair problem, and convex-hull problems, along with exhaustive-search algorithms for the traveling salesman problem and knapsack problem. It then discusses divide-and-conquer techniques such as merge sort, quicksort, and multiplication of large integers. Examples and analyses of selection sort, bubble sort, and mergesort are provided.
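As a quick illustration of the divide-and-conquer scheme the summary mentions, here is a minimal mergesort sketch (the function name `merge_sort` is my own, not from the source):

```python
def merge_sort(a):
    """Divide-and-conquer sort: split, recurse, then merge sorted halves."""
    if len(a) <= 1:
        return a                      # base case: already sorted
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

The recurrence T(n) = 2T(n/2) + O(n) gives the familiar O(n log n) running time.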
This document discusses dynamic programming and greedy algorithms. It begins by defining dynamic programming as a technique for solving problems with overlapping subproblems. Examples provided include computing the Fibonacci numbers and binomial coefficients. Greedy algorithms are introduced as constructing solutions piece by piece through locally optimal choices. Applications discussed are the change-making problem, minimum spanning trees using Prim's and Kruskal's algorithms, and single-source shortest paths. Floyd's algorithm for all pairs shortest paths and optimal binary search trees are also summarized.
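The overlapping-subproblems idea can be sketched with a memoized Fibonacci computation (a minimal illustration, not taken from the source document):

```python
def fib(n, memo=None):
    """Fibonacci with memoization: each subproblem is solved only once."""
    if memo is None:
        memo = {}
    if n < 2:
        return n
    if n not in memo:
        # Store the result so overlapping recursive calls reuse it.
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```

Without the memo table, the naive recursion recomputes the same subproblems exponentially often; with it, the running time drops to O(n).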
This document discusses hashing techniques for indexing and retrieving elements in a data structure. It begins by defining hashing and its components like hash functions, collisions, and collision handling. It then describes two common collision handling techniques - separate chaining and open addressing. Separate chaining uses linked lists to handle collisions while open addressing resolves collisions by probing to find alternate empty slots using techniques like linear probing and quadratic probing. The document provides examples and explanations of how these hashing techniques work.
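A minimal separate-chaining table can illustrate the collision handling described above (the class name `ChainedHashTable` is my own invention for the sketch):

```python
class ChainedHashTable:
    """Hash table using separate chaining: each bucket is a list of pairs."""

    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def put(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                  # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))       # collision resolved by chaining

    def get(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for k, v in bucket:
            if k == key:
                return v
        raise KeyError(key)
```

Open addressing would instead probe for an alternate empty slot in the table itself, e.g. `(hash(key) + i) % size` for linear probing.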
CS6402 Design and Analysis of Algorithms May/June 2016 answer key, by appasami
The document discusses algorithms and complexity analysis. It provides Euclid's algorithm for computing greatest common divisor, compares the orders of growth of n(n-1)/2 and n^2, and describes the general strategy of divide and conquer methods. It also defines problems like the closest pair problem, single source shortest path problem, and assignment problem. Finally, it discusses topics like state space trees, the extreme point theorem, and lower bounds.
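Euclid's algorithm, as referenced in the summary, can be sketched in a few lines:

```python
def euclid_gcd(m, n):
    """Euclid's algorithm: gcd(m, n) = gcd(n, m mod n), gcd(m, 0) = m."""
    while n != 0:
        m, n = n, m % n
    return m
```

Note also that n(n-1)/2 and n^2 have the same order of growth, since n(n-1)/2 = n^2/2 - n/2 is within a constant factor of n^2.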
Divide and Conquer - Part II - Quickselect and Closest Pair of Points, by Amrinder Arora
This document discusses divide and conquer algorithms. It covers the closest pair of points problem, which can be solved in O(n log n) time using a divide and conquer approach. It also discusses selection algorithms like quickselect that can find the median or kth element of an unsorted array in linear time O(n) on average. The document provides pseudocode for these algorithms and analyzes their time complexity using recurrence relations. It also provides an overview of topics like mergesort, quicksort, and solving recurrence relations that were covered in previous lectures.
This document summarizes a lecture on algorithms and graph traversal techniques. It discusses:
1) Breadth-first search (BFS) and depth-first search (DFS) algorithms for traversing graphs. BFS uses a queue while DFS uses a stack.
2) Applications of BFS and DFS, including finding connected components, minimum spanning trees, and biconnected components.
3) Identifying articulation points to determine biconnected components in a graph.
4) The 0/1 knapsack problem and approaches for solving it using greedy algorithms, backtracking, and branch and bound search.
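The queue-driven BFS described in point 1 can be sketched as follows (the adjacency-dict representation and function name are my own choices for the example):

```python
from collections import deque

def bfs(adj, start):
    """Return the vertices reachable from start, in breadth-first order."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        v = queue.popleft()           # FIFO queue drives breadth-first order
        order.append(v)
        for w in adj.get(v, []):
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return order
```

Replacing the queue with a stack (and `popleft` with `pop`) turns this into an iterative DFS, which is exactly the contrast the summary draws.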
This document discusses algorithms and their analysis. It defines an algorithm as a step-by-step procedure to solve a problem or calculate a quantity. Algorithm analysis involves evaluating memory usage and time complexity. Asymptotic notations, such as Big-O, are used to formalize the growth rates of algorithms. Common sorting algorithms like insertion sort and quicksort are analyzed using recurrence relations to determine their time complexities as O(n^2) and O(n log n), respectively.
This is the second lecture in the CS 6212 class. Covers asymptotic notation and data structures. Also outlines the coming lectures wherein we will study the various algorithm design techniques.
QUESTION BANK FOR ANNA UNIVERSITY SYLLABUS, by JAMBIKA
First of all, I am very happy that this is the only university that keeps its blog updated. It builds the habit of using algorithm analysis to justify design decisions when you implement new algorithms and to compare their experimental performance.
This document contains exercises, hints, and solutions for Chapter 1 of the book "Introduction to the Design and Analysis of Algorithms." It includes 11 exercises related to algorithms for computing greatest common divisors, square roots, binary representations, and other topics. The document also provides hints for each exercise to help students solve them and includes the solutions.
This document discusses lower bounds and limitations of algorithms. It begins by defining lower bounds and providing examples of problems where tight lower bounds have been established, such as sorting requiring Ω(n log n) comparisons. It then discusses methods for establishing lower bounds, including trivial bounds, decision trees, adversary arguments, and problem reduction. The document explores different classes of problems based on complexity, such as P, NP, and NP-complete problems. It concludes by examining approaches for tackling difficult combinatorial problems that are NP-hard, such as using exact algorithms, approximation algorithms, and local search heuristics.
The document discusses using the branch and bound technique to solve optimization problems like the traveling salesman problem and water jug problem. It explains that branch and bound systematically breaks problems into smaller subsets, calculates bounds on objective functions, and discards subsets that cannot produce better solutions than what is currently known. For the water jug problem, it provides an example path and explains how branch and bound finds the optimal 6-state solution. For traveling salesman, it notes the problem is to minimize travel distance by visiting all cities once and returning home.
The branch-and-bound method is used to solve optimization problems by traversing a state space tree. It computes a bound at each node to determine whether the node is promising. Better approaches replace plain breadth-first traversal with best-first search, expanding the most promising node as judged by a bounding heuristic. The traveling salesperson problem is solved with branch-and-bound by finding an initial tour, defining a bounding heuristic as the actual cost so far plus the minimum remaining cost, and expanding promising nodes in best-first order until the minimal tour is found.
Dynamic programming is a mathematical optimization method and computer programming technique used to solve complex problems by breaking them down into simpler subproblems. It was developed by Richard Bellman in the 1950s and has been applied in many fields. Dynamic programming problems can be solved optimally by decomposing them into subproblems with optimal substructure that can be solved recursively. It uses top-down or bottom-up approaches and stores the results of subproblems so that larger problems can be solved efficiently without recomputing common subproblems. Multistage graph shortest-path problems are well suited to dynamic programming, though they can also be approached with greedy methods or Dijkstra's algorithm. Traversal and search algorithms like breadth-first search are also mentioned.
The document provides an introduction to algorithms presented by Sachin Sharma Bhandari. It defines algorithms, discusses analyzing time and space efficiency, and provides examples of sorting and other problems solved by algorithms. It also reviews data structures like stacks, queues, linked lists, and binary trees that are important for algorithm design.
Dynamic Programming design technique is one of the fundamental algorithm design techniques, and possibly one of the ones that are hardest to master for those who did not study it formally. In these slides (which are continuation of part 1 slides), we cover two problems: maximum value contiguous subarray, and maximum increasing subsequence.
The document discusses backtracking and branch and bound algorithms. Backtracking incrementally builds candidates and abandons them (backtracks) when they cannot lead to a valid solution. Branch and bound systematically enumerates solutions and discards branches that cannot produce a better solution than the best found so far based on upper bounds. Examples provided are the N-Queens problem solved with backtracking and the knapsack problem solved with branch and bound. Pseudocode is given for both algorithms.
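The backtracking pattern described above can be sketched with the N-Queens example the summary names (this is a generic sketch, not the source's pseudocode):

```python
def n_queens(n):
    """Enumerate all placements of n non-attacking queens by backtracking."""
    solutions = []

    def place(row, cols, diag1, diag2, board):
        if row == n:                          # all rows filled: record solution
            solutions.append(board[:])
            return
        for col in range(n):
            if col in cols or row - col in diag1 or row + col in diag2:
                continue                      # square attacked: prune candidate
            board.append(col)
            place(row + 1, cols | {col},
                  diag1 | {row - col}, diag2 | {row + col}, board)
            board.pop()                       # backtrack and try the next column

    place(0, set(), set(), set(), [])
    return solutions
```

Each partial board is extended one row at a time, and a candidate column is abandoned the moment it conflicts with an earlier queen, which is exactly the incremental-build-and-abandon behavior the summary describes.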
This document contains a past exam paper for the subject "Design and Analysis of Algorithms". It has 2 parts with a total of 15 questions. Part A covers basic algorithm concepts like recurrence relations, efficiency classes, minimum spanning trees, and more. Part B involves solving algorithm problems using techniques like dynamic programming, Huffman coding, shortest paths, and more. It also tests concepts like P vs NP, approximation algorithms, and analysis of algorithm efficiency.
This document discusses dynamic programming and algorithms for solving all-pair shortest path problems. It begins by defining dynamic programming as avoiding recalculating solutions by storing results in a table. It then describes Floyd's algorithm for finding shortest paths between all pairs of nodes in a graph. The algorithm iterates through nodes, calculating shortest paths that pass through each intermediate node. It takes O(n^3) time for a graph with n nodes. Finally, it discusses the multistage graph problem and provides forward and backward algorithms to find the minimum cost path from source to destination in a multistage graph in O(V+E) time, where V and E are the numbers of vertices and edges.
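Floyd's algorithm as described can be sketched directly from an adjacency matrix; the three nested loops make the O(n^3) bound visible (a minimal sketch, with `inf` standing for "no edge"):

```python
def floyd(dist):
    """All-pairs shortest paths on an n x n distance matrix (Floyd's algorithm)."""
    n = len(dist)
    d = [row[:] for row in dist]              # work on a copy
    for k in range(n):                        # allowed intermediate vertices 0..k
        for i in range(n):
            for j in range(n):
                # Relax i -> j through intermediate vertex k.
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```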
This document contains information about Kamalesh Karmakar, an assistant professor in the computer science department at Meghnad Saha Institute of Technology. It lists the algorithm topics he teaches, including algorithm analysis, design techniques, complexity theory, and more. It also provides references for algorithm textbooks and notes on time and space complexity analysis, asymptotic notation, and different algorithm design techniques like divide-and-conquer, dynamic programming, backtracking, and greedy methods.
The document discusses approximation algorithms and genetic algorithms for solving optimization problems like the traveling salesman problem (TSP) and vertex cover problem. It provides examples of approximation algorithms for these NP-hard problems, including algorithms that find near-optimal solutions within polynomial time. Genetic algorithms are also presented as an approach to solve TSP and other problems by encoding potential solutions and applying genetic operators like crossover and mutation.
This document discusses greedy algorithms and provides examples of their use. It begins by defining characteristics of greedy algorithms, such as making locally optimal choices that reduce a problem into smaller subproblems. The document then covers designing greedy algorithms, proving their optimality, and analyzing examples like the fractional knapsack problem and minimum spanning tree algorithms. Specific greedy algorithms covered in more depth include Kruskal's and Prim's minimum spanning tree algorithms and Huffman coding.
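The fractional knapsack problem mentioned above is one of the few knapsack variants where the greedy choice is provably optimal; a minimal sketch (items as `(value, weight)` pairs, my own representation):

```python
def fractional_knapsack(items, capacity):
    """Greedy fractional knapsack: take items by descending value/weight ratio."""
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)      # whole item, or the fraction that fits
        total += value * take / weight
        capacity -= take
    return total
```

The locally optimal choice (best ratio first) leads to a globally optimal solution here; for the 0/1 variant, the same greedy rule can fail, which is why that problem needs dynamic programming or branch and bound.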
This document discusses dynamic programming and algorithms for solving all-pair shortest path problems. It begins by explaining dynamic programming as an optimization technique that works bottom-up by solving subproblems once and storing their solutions, rather than recomputing them. It then presents Floyd's algorithm for finding shortest paths between all pairs of nodes in a graph. The algorithm iterates through nodes, updating the shortest path lengths between all pairs that include that node by exploring paths through it. Finally, it discusses solving multistage graph problems using forward and backward methods that work through the graph stages in different orders.
This document discusses using the Branch and Bound technique to solve the traveling salesman problem and water jug problem. Branch and Bound is a method for solving discrete and combinatorial optimization problems by breaking the problem into smaller subsets, calculating bounds on the objective function, and discarding subsets that cannot produce better solutions than the best found so far. The document provides examples of applying Branch and Bound to find the optimal path between states for the water jug problem and the shortest route between cities for the traveling salesman problem.
Divide and Conquer Algorithms - D&C forms a distinct algorithm design technique in computer science, wherein a problem is solved by repeatedly invoking the algorithm on smaller occurrences of the same problem. Binary search, merge sort, Euclid's algorithm can all be formulated as examples of divide and conquer algorithms. Strassen's algorithm and Nearest Neighbor algorithm are two other examples.
Iterative improvement is an algorithm design technique for solving optimization problems. It starts with a feasible solution and repeatedly makes small changes to the current solution to find a solution with a better objective function value until no further improvements can be found. The simplex method, Ford-Fulkerson algorithm, and local search heuristics are examples that use this technique. The maximum flow problem can be solved using the iterative Ford-Fulkerson algorithm, which finds augmenting paths in a flow network to incrementally increase the flow from the source to the sink until no more augmenting paths exist.
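The augmenting-path loop of Ford-Fulkerson can be sketched on a capacity matrix; this is a simple DFS-based version for illustration, not the source's presentation:

```python
def max_flow(capacity, s, t):
    """Ford-Fulkerson max flow on an n x n capacity matrix, using DFS paths."""
    n = len(capacity)
    residual = [row[:] for row in capacity]

    def find_path(u, visited, bottleneck):
        if u == t:
            return bottleneck
        visited.add(u)
        for v in range(n):
            if v not in visited and residual[u][v] > 0:
                pushed = find_path(v, visited, min(bottleneck, residual[u][v]))
                if pushed > 0:
                    residual[u][v] -= pushed   # consume forward capacity
                    residual[v][u] += pushed   # add reverse (undo) capacity
                    return pushed
        return 0

    flow = 0
    while True:
        pushed = find_path(s, set(), float("inf"))
        if pushed == 0:        # no augmenting path left: current flow is maximal
            return flow
        flow += pushed
```

Each iteration is one step of iterative improvement: the current feasible flow is augmented along a path until no augmenting path exists.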
Graph Traversal Algorithms - Breadth First Search, by Amrinder Arora
The document discusses branch and bound algorithms. It begins with an overview of breadth first search (BFS) and how it can be used to solve problems on infinite mazes or graphs. It then provides pseudocode for implementing BFS using a queue data structure. Finally, it discusses branch and bound as a general technique for solving optimization problems that applies when greedy methods and dynamic programming fail. Branch and bound performs a BFS-like search, but prunes parts of the search tree using lower and upper bounds to avoid exploring all possible solutions.
This document provides an overview of brute force and divide-and-conquer algorithms. It discusses various brute force algorithms, such as computing a^n, string matching, the closest-pair problem, and convex-hull problems, along with exhaustive-search algorithms for the traveling salesman problem and knapsack problem, and analyzes their time efficiency. The document then discusses the divide-and-conquer approach with examples like merge sort, quicksort, and matrix multiplication, and provides pseudocode and analysis for mergesort. In summary, the document covers brute force and divide-and-conquer techniques for solving algorithmic problems.
The document discusses brute force algorithms and exhaustive search techniques. It provides examples of problems that can be solved using these approaches, such as computing powers and factorials, sorting, string matching, polynomial evaluation, the traveling salesman problem, the knapsack problem, and the assignment problem. For each problem, it describes generating all possible solutions and evaluating them to find the best one. The exhaustive-search algorithms among these have exponential or factorial time complexity, since they evaluate all possible combinations or permutations of the input.
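Brute-force string matching, one of the problems listed, tries the pattern at every alignment in the text (a minimal sketch with my own function name):

```python
def brute_force_match(text, pattern):
    """Return the index of the first occurrence of pattern in text, or -1.

    Tries every alignment: O(n*m) character comparisons in the worst case.
    """
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1
        if j == m:              # all m characters matched at alignment i
            return i
    return -1
```

This is a polynomial-time brute-force algorithm, unlike the exponential exhaustive searches for TSP or the knapsack problem.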
A Comparative Analysis Of Assignment Problem, by Jim Webb
This document provides a comparative analysis of different methods for solving assignment problems, including the Hungarian method and a new proposed Matrix Ones Assignment (MOA) method. It first introduces assignment problems and describes their applications. It then explains the Hungarian method in detail through examples. Finally, it outlines the steps of the new MOA method, which aims to create ones in the assignment matrix to find optimal assignments. The document compares the two approaches and provides an example solved using the MOA method.
This document discusses and compares different methods for solving assignment problems. It begins with an abstract that defines assignment problems as optimally assigning n objects to m other objects in an injective (one-to-one) fashion. It then provides an introduction to the Hungarian method and a new proposed Matrix Ones Assignment (MOA) method. The body of the document provides details on modeling assignment problems with cost matrices, formulations as linear programs, and step-by-step explanations of the Hungarian and MOA methods. It includes an example solved using the Hungarian method.
This document provides an overview of asymptotic analysis and Landau notation. It discusses justifying algorithm analysis mathematically rather than experimentally. Examples are given to show that two functions may appear different but have the same asymptotic growth rate. Landau symbols like O, Ω, o and Θ are introduced to describe asymptotic upper and lower bounds between functions. Big-Q represents asymptotic equivalence between functions, meaning one can be improved over the other with a faster computer.
APLICACIONES DE LA DERIVADA EN LA CARRERA DE (Mecánica, Electrónica, Telecomu...WILIAMMAURICIOCAHUAT1
El cálculo diferencial proporciona información sobre el comportamiento de las funciones
matemáticas. Todos estos problemas están incluidos en el alcance de la optimización de funciones y pueden resolverse aplicando cálculo
The document provides an overview of recursive and iterative algorithms. It discusses key differences between recursive and iterative algorithms such as definition, application, termination, usage, code size, and time complexity. Examples of recursive algorithms like recursive sum, factorial, binary search, tower of Hanoi, and permutation generator are presented along with pseudocode. Analysis of recursive algorithms like recursive sum, factorial, binary search, Fibonacci number, and tower of Hanoi is demonstrated to determine their time complexities. The document also discusses iterative algorithms, proving an algorithm's correctness, the brute force approach, and store and reuse methods.
Dynamic programming (DP) is a powerful technique for solving optimization problems by breaking them down into overlapping subproblems and storing the results of already solved subproblems. The document provides examples of how DP can be applied to problems like rod cutting, matrix chain multiplication, and longest common subsequence. It explains the key elements of DP, including optimal substructure (subproblems can be solved independently and combined to solve the overall problem) and overlapping subproblems (subproblems are solved repeatedly).
The document discusses assignment problems and provides examples to illustrate how to solve them. Assignment problems involve allocating jobs to people or machines in a way that minimizes costs or maximizes profits. The key steps to solve assignment problems are: (1) construct a cost matrix, (2) perform row and column reductions to obtain zeros, (3) draw lines to cover zeros and determine optimal assignments. Traveling salesman problems, which involve finding the lowest cost route to visit all cities once, can also be formulated as assignment problems.
The document discusses the divide and conquer algorithm design technique. It begins by defining divide and conquer as breaking a problem down into smaller subproblems, solving the subproblems, and then combining the solutions to solve the original problem. It then provides examples of applying divide and conquer to problems like matrix multiplication and finding the maximum subarray. The document also discusses analyzing divide and conquer recurrences using methods like recursion trees and the master theorem.
MATLAB is an interactive development environment and programming language used by engineers and scientists for technical computing, data analysis, and algorithm development. It allows users to access data from files, web services, applications, hardware, and databases, and perform data analysis and visualization. MATLAB can be used for applications in areas like control systems, signal processing, communications, and more.
The document discusses several algorithm design strategies including brute force, divide and conquer, and decrease and conquer. It provides examples of each strategy, including string matching and linear search as brute force algorithms. Merge sort and Strassen's matrix multiplication are presented as divide and conquer algorithms, with analysis of their time complexities. Binary search is analyzed as a decrease and conquer algorithm with logarithmic time complexity.
dynamic programming complete by Mumtaz Ali (03154103173)Mumtaz Ali
The document discusses dynamic programming, including its meaning, definition, uses, techniques, and examples. Dynamic programming refers to breaking large problems down into smaller subproblems, solving each subproblem only once, and storing the results for future use. This avoids recomputing the same subproblems repeatedly. Examples covered include matrix chain multiplication, the Fibonacci sequence, and optimal substructure. The document provides details on formulating and solving dynamic programming problems through recursive definitions and storing results in tables.
Divide-and-conquer is an algorithm design technique that involves dividing a problem into smaller subproblems, solving the subproblems recursively, and combining the solutions. The document discusses several divide-and-conquer algorithms including mergesort, quicksort, and binary search. Mergesort divides an array in half, sorts each half, and then merges the halves. Quicksort picks a pivot element and partitions the array into elements less than and greater than the pivot. Both quicksort and mergesort have average-case time complexity of Θ(n log n).
The document describes a seminar report on using a divide and conquer algorithm to find the closest pair of points from a set of points in two dimensions. It discusses implementing both a brute force algorithm that compares all pairs, taking O(n^2) time, and a divide and conquer algorithm that recursively divides the point set into halves and finds the closest pairs in each subset and near the dividing line, taking O(nlogn) time. It provides pseudocode for both algorithms and discusses the history and improvements made to the closest pair problem over time, reducing the number of distance computations needed.
This document discusses algorithms for finding minimum and maximum elements in an array, including simultaneous minimum and maximum algorithms. It introduces dynamic programming as a technique for improving inefficient divide-and-conquer algorithms by storing results of subproblems to avoid recomputing them. Examples of dynamic programming include calculating the Fibonacci sequence and solving an assembly line scheduling problem to minimize total time.
1) The document describes the divide-and-conquer algorithm design paradigm. It can be applied to problems where the input can be divided into smaller subproblems, the subproblems can be solved independently, and the solutions combined to solve the original problem.
2) Binary search is provided as an example divide-and-conquer algorithm. It works by recursively dividing the search space in half and only searching the subspace containing the target value.
3) Finding the maximum and minimum elements in an array is also solved using divide-and-conquer. The array is divided into two halves, the max/min found for each subarray, and the overall max/min determined by comparing the subsolutions.
1) The document describes the divide-and-conquer algorithm design paradigm. It splits problems into smaller subproblems, solves the subproblems recursively, and then combines the solutions to solve the original problem.
2) Binary search is provided as an example algorithm that uses divide-and-conquer. It divides the search space in half at each step to quickly determine if an element is present.
3) Finding the maximum and minimum elements in an array is another problem solved using divide-and-conquer. It recursively finds the max and min of halves of the array and combines the results.
The document discusses various object-oriented methodologies including Rumbaugh, Booch, and Jacobson methodologies. It provides details on Rumbaugh's Object Modeling Technique (OMT) which separates modeling into object, dynamic, and functional models. It describes Booch's methodology which uses class, object, state transition, and other diagrams. It also discusses Jacobson's methodologies including Object-Oriented Software Engineering (OOSE) which is use case driven, and Object-Oriented Business Engineering (OOBE) which uses use cases. The document then covers topics on software quality assurance including types of errors, testing strategies like black box and white box testing, and testing approaches like top-down
The document discusses applying object-oriented design principles to the design of a point of sale (POS) system. It covers applying several GRASP patterns, including Creator, Information Expert, and Low Coupling.
For the Creator pattern, it suggests the Sale class should create SalesLineItem objects since a Sale contains line items. For Information Expert, it assigns responsibilities to classes that have the necessary information - Sale computes the total since it contains line items, SalesLineItem computes the subtotal since it knows quantity and price, and ProductDescription provides the price. Low Coupling aims to reduce dependencies between classes.
The document provides information about Unit III of the syllabus, which covers dynamic and implementation UML diagrams. It discusses different types of dynamic diagrams like interaction diagrams (sequence diagram, collaboration diagram), state machine diagrams, and activity diagrams. It also discusses implementation diagrams like package diagrams, component diagrams, and deployment diagrams. Chapters from the third edition textbook related to these diagrams are listed. The document then provides more details on types of UML diagrams including structural diagrams and behavioral diagrams. It focuses on interaction diagrams, describing sequence diagrams and communication diagrams in detail. Examples and notation for drawing interaction diagrams are also explained.
The document discusses Unit II of a syllabus which covers class diagrams, including elaboration, domain modeling, finding conceptual classes and relationships. It discusses when to use class diagrams and provides examples of a class diagram for a hotel management system. It also discusses inception and elaboration phases in software development processes and provides artifacts used in elaboration. Finally, it discusses domain modeling including how to identify conceptual classes, draw associations, and avoid adding too many associations.
The document discusses object-oriented analysis and design (OOAD) and the Unified Process (UP). It covers key concepts in OOAD like requirements analysis, use cases, domain modeling, interaction diagrams, and design class diagrams. It also discusses iterative development approaches like the Unified Process, which emphasizes iterative development through short iterations with analysis, design, implementation, and testing in each iteration and feedback between iterations.
This document provides an introduction to the analysis of algorithms. It defines an algorithm and lists key properties including being finite, definite, and able to produce the correct output for any valid input. Common computational problems and basic algorithm design strategies are outlined. Asymptotic notations for analyzing time and space efficiency are introduced. Examples of algorithms for calculating the greatest common divisor and determining if a number is prime are provided and analyzed. Fundamental data structures and techniques for analyzing recursive algorithms are also discussed.
This document discusses lower bounds and limitations of algorithms. It begins by defining lower bounds and providing examples of problems where tight lower bounds have been established, such as sorting requiring Ω(nlogn) comparisons. It then discusses methods for establishing lower bounds, including trivial bounds, decision trees, adversary arguments, and problem reduction. The document covers several examples to illustrate these techniques. It also discusses the complexity classes P, NP, and NP-complete problems. Finally, it discusses approaches for tackling difficult combinatorial problems that are NP-hard, including exact and approximation algorithms.
Iterative improvement is an algorithm design technique for solving optimization problems. It starts with a feasible solution and repeatedly makes small changes to the current solution to find a solution with a better objective function value until no further improvements can be found. The simplex method, Ford-Fulkerson algorithm, and local search heuristics are examples that use this technique. The maximum flow problem can be solved using the iterative Ford-Fulkerson algorithm, which finds augmenting paths in a flow network to incrementally increase the flow from the source to the sink until no more augmenting paths exist.
This document discusses dynamic programming and greedy algorithms. It begins by defining dynamic programming as a technique for solving problems with overlapping subproblems. It provides examples of dynamic programming approaches to computing Fibonacci numbers, binomial coefficients, the knapsack problem, and other problems. It also discusses greedy algorithms and provides examples of their application to problems like the change-making problem, minimum spanning trees, and single-source shortest paths.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
artificial intelligence and data science contents.pptxGauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
› ...
Artificial intelligence (AI) | Definitio
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Embedded machine learning-based road conditions and driving behavior monitoring
Unit 2
1. Unit II
BRUTE FORCE AND DIVIDE-AND-CONQUER
PART 1
Dr.S.Gunasundari
Associate Professor
CSE Dept
2. Syllabus
Brute Force
Computing aⁿ
String matching
Closest-Pair
Convex Hull Problems
Exhaustive Search
Traveling Salesman Problem
Knapsack Problem
Assignment problem.
Divide and conquer
• Merge sort
• Quick sort
• Binary search
• Closest Pair
• Multiplication of Large Integers
09-03-2022
3. Brute Force
A straightforward approach, usually based directly on the problem’s statement and definitions
of the concepts involved
Examples:
1. Computing aⁿ (a > 0, n a nonnegative integer)
2. Computing n!
3. Multiplying two matrices
4. Searching for a key of a given value in a list
4. Brute-Force Sorting Algorithm
Selection Sort
Scan the array to find its smallest element and swap it with the first element.
Then, starting with the second element, scan the elements to the right of it
to find the smallest among them and swap it with the second element.
Generally, on pass i (0 ≤ i ≤ n−2), find the smallest element in A[i..n-1] and
swap it with A[i]:
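The pass structure described above can be sketched in Python (a minimal illustration; the function name is mine, not from the slides):

```python
def selection_sort(a):
    """Sort list a in place by repeatedly selecting the smallest remaining element."""
    n = len(a)
    for i in range(n - 1):              # pass i: 0 <= i <= n-2
        smallest = i
        for j in range(i + 1, n):       # scan A[i..n-1] for the smallest element
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]  # swap it with A[i]
    return a
```

The two nested loops give the Θ(n²) comparison count the brute-force analysis predicts.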
9. Bubble sort
Consecutive adjacent pairs of elements in the array are compared with each other.
If the element at the lower index is greater than the element at the higher index,
the two elements are interchanged so that the smaller element is placed before the bigger one.
This process continues until the list of unsorted elements is exhausted.
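The comparisons of adjacent pairs can be sketched in Python (a minimal version without the early-exit optimization; names are mine):

```python
def bubble_sort(a):
    """Sort list a in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(a)
    for i in range(n - 1):                  # after pass i, the i+1 largest are in place
        for j in range(n - 1 - i):          # compare consecutive adjacent pairs
            if a[j] > a[j + 1]:             # lower-index element is greater
                a[j], a[j + 1] = a[j + 1], a[j]  # interchange them
    return a
```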
14. Computing aⁿ
aⁿ = a × a × … × a (n times)
a: a nonzero number
n: a nonnegative integer
Compute aⁿ by multiplying a by itself n times
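The brute-force power computation is a one-loop sketch in Python (function name is mine):

```python
def brute_force_power(a, n):
    """Compute a**n by multiplying by a, n times (the brute-force definition)."""
    result = 1
    for _ in range(n):
        result *= a
    return result
```

The loop performs n multiplications, so the algorithm is Θ(n).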
15. Brute-Force String Matching
pattern: a string of m characters to search for
text: a (longer) string of n characters to search in
problem: find a substring in the text that matches the pattern
Brute-force algorithm
Step 1 Align pattern at beginning of text
Step 2 Moving from left to right, compare each character of pattern to the corresponding
character in text until
all characters are found to match (successful search); or
a mismatch is detected
Step 3 While pattern is not found and the text is not yet exhausted, realign pattern one
position to the right and repeat Step 2
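The three steps above translate directly into Python (a minimal sketch; the function name is mine):

```python
def brute_force_string_match(text, pattern):
    """Return the index of the first occurrence of pattern in text, or -1."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):          # Step 1/3: align pattern at position i
        j = 0
        while j < m and pattern[j] == text[i + j]:
            j += 1                      # Step 2: compare characters left to right
        if j == m:
            return i                    # all m characters matched: successful search
    return -1                           # text exhausted without a match
```

The worst case makes m comparisons at each of the n − m + 1 alignments, i.e. O(nm).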
16. Examples of Brute-Force String Matching
1. Pattern: 001011
Text: 10010101101001100101111010
2. Pattern: happy
Text: It is never too late to have a happy childhood.
19. Closest-Pair Problem
Find the two closest points in a set of n points (in the two-dimensional Cartesian plane).
Brute-force algorithm
Compute the distance between every pair of distinct points (there are n(n−1)/2 pairs) and return the
indexes of the points for which the distance is the smallest.
21. Analysis
Basic operation: computing the Euclidean distance between two points
Square roots can be avoided by comparing squared distances instead
The basic operation then becomes squaring a number
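A minimal Python sketch of the brute-force closest pair, comparing squared distances so no square roots are needed (names are mine):

```python
from math import inf

def brute_force_closest_pair(points):
    """Return (smallest squared distance, (i, j)) over all n(n-1)/2 pairs."""
    best, best_pair = inf, None
    n = len(points)
    for i in range(n - 1):
        for j in range(i + 1, n):       # every pair of distinct points
            (x1, y1), (x2, y2) = points[i], points[j]
            d2 = (x1 - x2) ** 2 + (y1 - y2) ** 2  # squared Euclidean distance
            if d2 < best:
                best, best_pair = d2, (i, j)
    return best, best_pair
```

The double loop executes the basic operation n(n−1)/2 times, giving Θ(n²).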
22. Convex Hull
The convex hull problem is the problem of
constructing the convex hull for a given set S
of n points
Definition A set of points (finite or infinite) in
the plane is called convex if for any two points
P and Q in the set, the entire line segment with
the endpoints at P and Q belongs to the set
24. Convex Hull Brute Force Algorithm
A line segment connecting two points Pi and Pj of a set of n points is a part of its
convex hull’s boundary
if and only if all the other points of the set lie on the same side of the straight line through
these two points
Repeating this test for every pair of points yields a list of line segments that make up
the convex hull’s boundary
26. Analysis of Convex Hull Brute Force Algorithm
The time efficiency of this algorithm is in O(n³):
for each of the n(n−1)/2 pairs of distinct points, we may need to find the sign of
ax + by − c for each of the other n − 2 points
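The same-side test can be sketched in Python (a simplified version that returns hull boundary segments as index pairs and assumes no three points are collinear; names are mine):

```python
def brute_force_convex_hull_edges(points):
    """Return segments (i, j) on the hull boundary: all other points lie on
    one side of the line through points i and j. O(n^3); assumes no three
    collinear points."""
    n = len(points)
    edges = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = points[i], points[j]
            # line through the two points written as ax + by = c
            a, b = y2 - y1, x1 - x2
            c = x1 * y2 - y1 * x2
            signs = set()
            for k in range(n):              # test the other n - 2 points
                if k in (i, j):
                    continue
                signs.add(a * points[k][0] + b * points[k][1] - c > 0)
            if len(signs) <= 1:             # all on the same side of the line
                edges.append((i, j))
    return edges
```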
27. Brute-Force Strength and Weakness
Strengths
wide applicability
simplicity
yields reasonable algorithms for some important problems
(e.g., matrix multiplication, sorting, searching, string matching)
Weaknesses
rarely yields efficient algorithms
some brute-force algorithms are unacceptably slow
not as constructive as some other design techniques
28. Exhaustive Search
A brute force solution to a problem involving search for an element with a special
property, usually among combinatorial objects such as permutations, combinations,
or subsets of a set.
Method:
generate a list of all potential solutions to the problem in a systematic manner
evaluate potential solutions one by one, disqualifying infeasible ones and, for an
optimization problem, keeping track of the best one found so far
when search ends, announce the solution(s) found
29. Example 1: Traveling Salesman Problem
Given n cities with known distances between each pair, find the shortest tour that passes
through all the cities exactly once before returning to the starting city
Alternatively: Find shortest Hamiltonian circuit in a weighted connected graph
Hamiltonian circuit is defined as a cycle that passes through all the vertices of the graph exactly once.
Graph Vertices represent cities
Edge weight represent distance
Example:
Complete graph on vertices a, b, c, d with edge weights:
ab = 2, ac = 8, ad = 5, bc = 3, bd = 4, cd = 7
30. TSP by Exhaustive Search
Tour Cost
a→b→c→d→a 2+3+7+5 = 17
a→b→d→c→a 2+4+7+8 = 21
a→c→b→d→a 8+3+4+5 = 20
a→c→d→b→a 8+7+4+2 = 21
a→d→b→c→a 5+4+3+8 = 20
a→d→c→b→a 5+7+3+2 = 17
We can get all the tours by generating all the permutations of the n − 1 intermediate
cities, computing the tour lengths, and finding the shortest among them.
Efficiency: (n−1)!
• Some pairs of tours differ only by direction,
• e.g. a→b→c→d→a and a→d→c→b→a
• so we could cut the number of vertex permutations in half: ((n−1)!)/2
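The permutation-based search can be sketched in Python with `itertools.permutations` (names are mine; `dist` is an n×n symmetric distance matrix):

```python
from itertools import permutations

def tsp_exhaustive(dist, start=0):
    """Try all (n-1)! permutations of the intermediate cities;
    return (best cost, best tour as a tuple of city indices)."""
    n = len(dist)
    cities = [c for c in range(n) if c != start]
    best_cost, best_tour = float("inf"), None
    for perm in permutations(cities):
        tour = (start,) + perm + (start,)           # close the cycle
        cost = sum(dist[tour[k]][tour[k + 1]] for k in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour
```

Running it on the slide's graph (a=0, b=1, c=2, d=3) reproduces the optimal tour a→b→c→d→a of length 17.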
32. Example 2: Knapsack Problem
Given n items:
weights: w₁, w₂, …, wₙ
values: v₁, v₂, …, vₙ
a knapsack of capacity W
Find most valuable subset of the items that fit into the knapsack
Example: Knapsack capacity W=16
item weight value
1 2 $20
2 5 $30
3 10 $50
4 5 $10
33. Knapsack Problem by Exhaustive Search
Subset Total weight Total value
{1} 2 $20
{2} 5 $30
{3} 10 $50
{4} 5 $10
{1,2} 7 $50
{1,3} 12 $70
{1,4} 7 $30
{2,3} 15 $80
{2,4} 10 $40
{3,4} 15 $60
{1,2,3} 17 not feasible
{1,2,4} 12 $60
{1,3,4} 17 not feasible
{2,3,4} 20 not feasible
{1,2,3,4} 22 not feasible
Efficiency: since the number of subsets of an n-element set is 2ⁿ, exhaustive
search leads to an Ω(2ⁿ) algorithm, no matter how efficiently individual
subsets are generated.
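Generating all 2ⁿ subsets and keeping the best feasible one can be sketched in Python (names are mine; items are referred to by index):

```python
from itertools import combinations

def knapsack_exhaustive(weights, values, capacity):
    """Enumerate all 2^n subsets; return (best value, best subset of indices)."""
    n = len(weights)
    best_value, best_subset = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            w = sum(weights[i] for i in subset)
            if w <= capacity:                     # disqualify infeasible subsets
                v = sum(values[i] for i in subset)
                if v > best_value:                # keep track of the best so far
                    best_value, best_subset = v, subset
    return best_value, best_subset
```

On the slide's instance (W = 16) it selects items 2 and 3 (indices 1 and 2) for a value of $80, matching the table.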
35. Example 3: The Assignment Problem
There are n people who need to be assigned to n jobs, one person per job. The
cost of assigning person i to job j is C[i,j]. Find an assignment that minimizes the
total cost.
Job 0 Job 1 Job 2 Job 3
Person 0 9 2 7 8
Person 1 6 4 3 7
Person 2 5 8 1 8
Person 3 7 6 9 4
Algorithmic Plan: Generate all legitimate assignments, compute
their costs, and select the cheapest one.
How many assignments are there?
36. Assignment Problem by Exhaustive Search
Cost matrix C:
9 2 7 8
6 4 3 7
5 8 1 8
7 6 9 4
Assignment (col.#s) Total Cost
1, 2, 3, 4 9+4+1+4=18
1, 2, 4, 3 9+4+8+9=30
1, 3, 2, 4 9+3+8+4=24
1, 3, 4, 2 9+3+8+6=26
1, 4, 2, 3 9+7+8+9=33
1, 4, 3, 2 9+7+1+6=23
etc.
Since the number of permutations to be considered for the general case of the
assignment problem is n!, exhaustive search is impractical for all but very
small instances; n! grows even faster than 2ⁿ.
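The plan of generating all legitimate assignments and selecting the cheapest can be sketched in Python (names are mine; an assignment is a tuple whose p-th entry is the job given to person p):

```python
from itertools import permutations

def assignment_exhaustive(cost):
    """Try all n! one-to-one assignments of people (rows) to jobs (columns);
    return (cheapest total cost, the assignment achieving it)."""
    n = len(cost)
    best_cost, best_assignment = float("inf"), None
    for jobs in permutations(range(n)):
        total = sum(cost[person][jobs[person]] for person in range(n))
        if total < best_cost:
            best_cost, best_assignment = total, jobs
    return best_cost, best_assignment
```

On the slide's cost matrix the cheapest assignment is person 0 → job 1, person 1 → job 0, person 2 → job 2, person 3 → job 3, with total cost 2 + 6 + 1 + 4 = 13.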
41. Final Comments on Exhaustive Search
Exhaustive-search algorithms run in a realistic amount of time only on very small
instances
In some cases, there are much better alternatives!
Euler circuits
shortest paths
minimum spanning tree
assignment problem
In many cases, exhaustive search or a variation of it is the only known way to get
an exact solution
42. Divide-and-Conquer
The most-well known algorithm design strategy:
1. Divide instance of problem into two or more smaller instances
2. Solve smaller instances recursively
3. Obtain solution to original (larger) instance by combining these solutions
43. Divide-and-Conquer Technique (cont.)
Diagram: a problem of size n is divided into subproblem 1 and subproblem 2,
each of size n/2; a solution to subproblem 1 and a solution to subproblem 2
are combined into a solution to the original problem.
44. Divide-and-Conquer Examples
Sorting: mergesort and quicksort
Binary tree traversals
Binary search (?)
Multiplication of large integers
Matrix multiplication: Strassen’s algorithm
Closest-pair and convex-hull algorithms
45. General Divide-and-Conquer Recurrence
T(n) = aT(n/b) + f(n), where f(n) ∈ Θ(nᵈ), d ≥ 0
Master Theorem: If a < bᵈ, T(n) ∈ Θ(nᵈ)
If a = bᵈ, T(n) ∈ Θ(nᵈ log n)
If a > bᵈ, T(n) ∈ Θ(n^(log_b a))
Note: The same results hold with O instead of Θ.
Examples: T(n) = 4T(n/2) + n → T(n) ∈ Θ(n²) (a = 4 > 2¹)
T(n) = 4T(n/2) + n² → T(n) ∈ Θ(n² log n) (a = 4 = 2²)
T(n) = 4T(n/2) + n³ → T(n) ∈ Θ(n³) (a = 4 < 2³)
46. Mergesort
Split array A[0..n-1] in two about equal halves and make copies of each half in arrays B and C
Sort arrays B and C recursively
Merge sorted arrays B and C into array A as follows:
Repeat the following until no elements remain in one of the arrays:
compare the first elements in the remaining unprocessed portions of
the arrays
copy the smaller of the two into A, while incrementing the index
indicating the unprocessed portion of that array
Once all elements in one of the arrays are processed, copy the remaining
unprocessed elements from the other array into A.
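The split-sort-merge steps above can be sketched in Python (a non-in-place version that returns a new list; names are mine):

```python
def mergesort(a):
    """Split in two about equal halves, sort each recursively, then merge."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    b = mergesort(a[:mid])              # copy and sort the first half
    c = mergesort(a[mid:])              # copy and sort the second half
    merged, i, j = [], 0, 0
    while i < len(b) and j < len(c):    # compare first unprocessed elements
        if b[i] <= c[j]:
            merged.append(b[i]); i += 1 # copy the smaller into the result
        else:
            merged.append(c[j]); j += 1
    merged.extend(b[i:])                # copy whatever remains in either array
    merged.extend(c[j:])
    return merged
```

The recurrence T(n) = 2T(n/2) + Θ(n) gives the Θ(n log n) bound stated in the analysis slide.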
50. Analysis of Mergesort
All cases have same efficiency: Θ(n log n)
Number of comparisons in the worst case is close to the theoretical minimum for
comparison-based sorting:
⌈log2 n!⌉ ≈ n log2 n - 1.44n
Space requirement: Θ(n) (not in-place)
Can be implemented without recursion (bottom-up)
51. Quicksort
Select a pivot (partitioning element) – here, the first element
Rearrange the list so that all the elements in the first s positions are smaller than or equal to the pivot
and all the elements in the remaining n-s positions are larger than or equal to the pivot (see next slide
for an algorithm)
Exchange the pivot with the last element in the first (i.e., ≤) subarray — the pivot is now in its final
position
Sort the two subarrays recursively
[Partition diagram: elements with A[i] ≤ p, then the pivot p, then elements with A[i] ≥ p.]
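One possible rendering of these steps in Python, using the first element as pivot and a Hoare-style partition (a sketch, not the exact pseudocode the slide refers to):

```python
def quicksort(a, lo=0, hi=None):
    """Sort list a in place; the first element of each subarray is the pivot."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    p = a[lo]
    i, j = lo, hi + 1
    while True:
        i += 1
        while i <= hi and a[i] < p:   # scan right for an element >= pivot
            i += 1
        j -= 1
        while a[j] > p:               # scan left for an element <= pivot
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]
    a[lo], a[j] = a[j], a[lo]         # pivot lands in its final position j
    quicksort(a, lo, j - 1)
    quicksort(a, j + 1, hi)
    return a
```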
54. Analysis of Quicksort
Best case: split in the middle — Θ(n log n)
Worst case: sorted array! — Θ(n^2)
Average case: random arrays — Θ(n log n)
Improvements:
better pivot selection: median of three partitioning
switch to insertion sort on small subfiles
elimination of recursion
These combine to 20-25% improvement
Considered the method of choice for internal sorting of large files (n ≥ 10000)
55. Binary Search
Very efficient algorithm for searching in sorted array:
Compare the search key K with the middle element A[m] of A[0..n-1]:
If K = A[m], stop (successful search); otherwise, continue
searching by the same method in A[0..m-1] if K < A[m]
and in A[m+1..n-1] if K > A[m]
l ← 0; r ← n-1
while l ≤ r do
    m ← ⌊(l+r)/2⌋
    if K = A[m] return m
    else if K < A[m] r ← m-1
    else l ← m+1
return -1
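The pseudocode translates almost line for line into Python:

```python
def binary_search(a, key):
    """Search sorted list a for key; return its index, or -1 if absent."""
    l, r = 0, len(a) - 1
    while l <= r:
        m = (l + r) // 2          # floor of the midpoint
        if key == a[m]:
            return m
        elif key < a[m]:
            r = m - 1             # continue in A[l..m-1]
        else:
            l = m + 1             # continue in A[m+1..r]
    return -1
```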
56. Analysis of Binary Search
Time efficiency
worst-case recurrence: Cw(n) = 1 + Cw(⌊n/2⌋), Cw(1) = 1
solution: Cw(n) = ⌈log2(n+1)⌉
This is VERY fast: e.g., Cw(10^6) = 20
Optimal for searching a sorted array
Limitations: must be a sorted array (not linked list)
Bad (degenerate) example of divide-and-conquer
Has a continuous counterpart called bisection method for solving equations in one unknown f(x) = 0 (see
Sec. 12.4)
57. Binary Tree Algorithms
Binary tree is a divide-and-conquer ready structure!
Ex. 1: Classic traversals (preorder, inorder, postorder)
Algorithm Inorder(T)
    if T ≠ ∅
        Inorder(Tleft)
        print(root of T)
        Inorder(Tright)
[Example tree with nodes a, b, c, d, e omitted; it appeared twice in the original slide.]
Efficiency: Θ(n)
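The traversal can be written directly in Python; the `Node` class below is a hypothetical minimal representation, not taken from the slides:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder(t, visit):
    """Inorder traversal: left subtree, then root, then right subtree."""
    if t is not None:
        inorder(t.left, visit)
        visit(t.value)
        inorder(t.right, visit)
```

For example, on a tree with root `a`, children `b` and `c`, where `b` has children `d` and `e`, the visit order is d, b, e, a, c.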
58. Binary Tree Algorithms (cont.)
Ex. 2: Computing the height of a binary tree
h(T) = max{h(TL), h(TR)} + 1 if T ≠ ∅, and h(∅) = -1
Efficiency: Θ(n)
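The recurrence maps onto a one-to-one recursion; here a tree is encoded, for the sake of a self-contained sketch, as either `None` (empty) or a pair `(left_subtree, right_subtree)`:

```python
def height(t):
    """Height via the recurrence: h(empty) = -1, else 1 + max of subtree heights."""
    if t is None:
        return -1
    left, right = t
    return max(height(left), height(right)) + 1
```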
59. Multiplication of Large Integers
Consider the problem of multiplying two (large) n-digit integers represented by arrays of their digits such as:
A = 12345678901357986429 B = 87654321284820912836
The grade-school algorithm computes n rows of one-digit partial products:
   a1 a2 … an
×  b1 b2 … bn
(d10) d11 d12 … d1n
(d20) d21 d22 … d2n
 …  …  …
(dn0) dn1 dn2 … dnn
Efficiency: n^2 one-digit multiplications
60. First Divide-and-Conquer Algorithm
A small example: A × B where A = 2135 and B = 4014
A = (21·10^2 + 35), B = (40·10^2 + 14)
So, A × B = (21·10^2 + 35) × (40·10^2 + 14)
= 21×40·10^4 + (21×14 + 35×40)·10^2 + 35×14
In general, if A = A1A2 and B = B1B2 (where A and B are n-digit,
A1, A2, B1, B2 are n/2-digit numbers),
A × B = A1×B1·10^n + (A1×B2 + A2×B1)·10^(n/2) + A2×B2
Recurrence for the number of one-digit multiplications M(n):
M(n) = 4M(n/2), M(1) = 1
Solution: M(n) = n^2
61. Second Divide-and-Conquer Algorithm
A × B = A1×B1·10^n + (A1×B2 + A2×B1)·10^(n/2) + A2×B2
The idea is to decrease the number of multiplications from 4 to 3:
(A1 + A2) × (B1 + B2) = A1×B1 + (A1×B2 + A2×B1) + A2×B2,
i.e., (A1×B2 + A2×B1) = (A1 + A2) × (B1 + B2) - A1×B1 - A2×B2,
which requires only 3 multiplications at the expense of (4 - 1) extra add/sub.
Recurrence for the number of multiplications M(n):
M(n) = 3M(n/2), M(1) = 1
Solution: M(n) = 3^(log2 n) = n^(log2 3) ≈ n^1.585
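This three-multiplication scheme is Karatsuba's algorithm; a compact Python sketch for nonnegative integers (using Python's built-in big integers for the digit arithmetic):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers with 3 recursive multiplications, not 4."""
    if x < 10 or y < 10:
        return x * y                      # single-digit base case
    n = max(len(str(x)), len(str(y)))
    half = n // 2
    p = 10 ** half
    x1, x2 = divmod(x, p)                 # x = x1 * 10^half + x2
    y1, y2 = divmod(y, p)
    a = karatsuba(x1, y1)                 # high parts
    c = karatsuba(x2, y2)                 # low parts
    b = karatsuba(x1 + x2, y1 + y2) - a - c   # = x1*y2 + x2*y1
    return a * p * p + b * p + c
```

On the slide's small example, karatsuba(2135, 4014) reproduces 2135 × 4014.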
65. Analysis of Strassen’s Algorithm
If n is not a power of 2, matrices can be padded with zeros.
Number of multiplications:
M(n) = 7M(n/2), M(1) = 1
Solution: M(n) = 7^(log2 n) = n^(log2 7) ≈ n^2.807 vs. n^3 of brute-force alg.
Algorithms with better asymptotic efficiency are known but they
are even more complex.
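A sketch of Strassen's seven-product scheme for n × n matrices (n a power of 2) as nested Python lists; variable names for the quadrants are chosen here, not taken from the slides:

```python
def strassen(A, B):
    """Multiply n x n matrices (n a power of 2) with 7 recursive products."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M, r, c):                       # extract an h x h quadrant
        return [row[c:c + h] for row in M[r:r + h]]
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    a, b, c, d = quad(A, 0, 0), quad(A, 0, h), quad(A, h, 0), quad(A, h, h)
    e, f, g, i = quad(B, 0, 0), quad(B, 0, h), quad(B, h, 0), quad(B, h, h)
    # Strassen's 7 products
    m1 = strassen(add(a, d), add(e, i))
    m2 = strassen(add(c, d), e)
    m3 = strassen(a, sub(f, i))
    m4 = strassen(d, sub(g, e))
    m5 = strassen(add(a, b), i)
    m6 = strassen(sub(c, a), add(e, f))
    m7 = strassen(sub(b, d), add(g, i))
    # Combine into the four quadrants of the result
    top_left = add(sub(add(m1, m4), m5), m7)
    top_right = add(m3, m5)
    bot_left = add(m2, m4)
    bot_right = add(sub(add(m1, m3), m2), m6)
    return [tl + tr for tl, tr in zip(top_left, top_right)] + \
           [bl + br for bl, br in zip(bot_left, bot_right)]
```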
66. Closest-Pair Problem by Divide-and-Conquer
Step 1 Divide the points given into two subsets S1 and S2 by a vertical line x = c so that half the points lie to
the left or on the line and half the points lie to the right or on the line.
67. Closest Pair by Divide-and-Conquer (cont.)
Step 2 Find recursively the closest pairs for the left and right
subsets.
Step 3 Set d = min{d1, d2}, where d1 and d2 are the distances found in Step 2.
We can limit our attention to the points in the symmetric
vertical strip of width 2d as candidates for a closer pair. Let C1
and C2 be the subsets of points in the left subset S1 and of
the right subset S2, respectively, that lie in this vertical
strip. The points in C1 and C2 are stored in increasing
order of their y coordinates, which is maintained by
merging during the execution of the next step.
Step 4 For every point P(x,y) in C1, we inspect points in
C2 that may be closer to P than d. There can be no more
than 6 such points (because d ≤ d2)!
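Steps 1–4 can be sketched in Python. This is a simplified hypothetical version that re-sorts the strip by y-coordinate on every call (giving O(n log² n)) instead of maintaining the y-order by merging as the slide describes:

```python
import math

def closest_pair(points):
    """Return the smallest distance between any two of the given 2-D points."""
    def solve(p):                           # p is sorted by x-coordinate
        n = len(p)
        if n <= 3:                          # brute force tiny instances
            return min((math.dist(p[i], p[j])
                        for i in range(n) for j in range(i + 1, n)),
                       default=math.inf)
        mid = n // 2
        c = p[mid][0]                       # dividing vertical line x = c
        d = min(solve(p[:mid]), solve(p[mid:]))
        # Step 4: only points in the vertical strip of width 2d matter
        strip = sorted((q for q in p if abs(q[0] - c) < d),
                       key=lambda q: q[1])
        for i, a in enumerate(strip):
            for b in strip[i + 1:]:
                if b[1] - a[1] >= d:        # at most ~6 candidates per point
                    break
                d = min(d, math.dist(a, b))
        return d
    return solve(sorted(points))            # Step 1 needs points sorted by x
```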
68. Closest Pair by Divide-and-Conquer: Worst Case
[Figure omitted: worst-case placement of the points of C2 examined within the 2d-wide strip.]
69. Efficiency of the Closest-Pair Algorithm
Running time of the algorithm is described by
T(n) = 2T(n/2) + M(n), where M(n) ∈ O(n)
By the Master Theorem (with a = 2, b = 2, d = 1)
T(n) ∈ O(n log n)
70. Quickhull Algorithm
Convex hull: smallest convex set that includes given points
Assume points are sorted by x-coordinate values
Identify extreme points P1 and P2 (leftmost and rightmost)
Compute upper hull recursively:
find point Pmax that is farthest away from line P1P2
compute the upper hull of the points to the left of line P1Pmax
compute the upper hull of the points to the left of line PmaxP2
Compute lower hull in a similar manner
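A compact hypothetical sketch of quickhull in Python; the signed cross product serves both to test which side of the line a point lies on and as a stand-in for its distance from the line:

```python
def quickhull(points):
    """Return the convex hull vertices of a set of 2-D points (as tuples)."""
    def cross(o, a, b):
        # signed area of triangle o-a-b; > 0 means b is left of line o->a
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def hull_side(p1, p2, pts):
        # points strictly to the left of the directed line p1 -> p2
        left = [p for p in pts if cross(p1, p2, p) > 0]
        if not left:
            return []
        # Pmax: the point farthest from line P1P2 (largest signed area)
        pmax = max(left, key=lambda p: cross(p1, p2, p))
        return hull_side(p1, pmax, left) + [pmax] + hull_side(pmax, p2, left)
    pts = sorted(set(points))               # sort by x-coordinate
    if len(pts) < 3:
        return pts
    p1, p2 = pts[0], pts[-1]                # leftmost and rightmost points
    upper = hull_side(p1, p2, pts)          # upper hull
    lower = hull_side(p2, p1, pts)          # lower hull
    return [p1] + upper + [p2] + lower
```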
71. Efficiency of Quickhull Algorithm
Finding point farthest away from line P1P2 can be done in linear time
Time efficiency:
worst case: Θ(n^2) (as with quicksort)
average case: Θ(n) (under reasonable assumptions about
distribution of points given)
If points are not initially sorted by x-coordinate value, this can be accomplished in O(n log n) time
Several O(n log n) algorithms for convex hull are known