This document discusses dynamic programming and provides examples to illustrate the technique. It begins by defining dynamic programming as a bottom-up approach to problem solving where solutions to smaller subproblems are stored and built upon to solve larger problems. It then provides examples of dynamic programming algorithms for calculating Fibonacci numbers, binomial coefficients, and finding shortest paths using Floyd's algorithm. The key aspects of dynamic programming like avoiding recomputing solutions and storing intermediate results in tables are emphasized.
Dynamic programming complete — Mumtaz Ali (03154103173)
The document discusses dynamic programming, including its meaning, definition, uses, techniques, and examples. Dynamic programming refers to breaking large problems down into smaller subproblems, solving each subproblem only once, and storing the results for future use. This avoids recomputing the same subproblems repeatedly. Examples covered include matrix chain multiplication, the Fibonacci sequence, and optimal substructure. The document provides details on formulating and solving dynamic programming problems through recursive definitions and storing results in tables.
This document contains detailed information about dynamic programming, the knapsack problem, forward/backward knapsack, optimal binary search trees (OBST), and the traveling salesperson problem (TSP) using dynamic programming.
The document discusses the dynamic programming approach to solving the matrix chain multiplication problem. It explains that dynamic programming breaks problems down into overlapping subproblems, solves each subproblem once, and stores the solutions in a table to avoid recomputing them. It then presents the algorithm MATRIX-CHAIN-ORDER that uses dynamic programming to solve the matrix chain multiplication problem in O(n^3) time, as opposed to a brute force approach that would take exponential time.
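The O(n^3) table computation behind MATRIX-CHAIN-ORDER can be sketched in Python as follows. This is a minimal illustration of the same recurrence the summary describes; the function name and the example dimensions are assumptions for illustration, not taken from the slides:

```python
def matrix_chain_order(p):
    """Minimum scalar multiplications needed to multiply matrices
    with dimensions p[0] x p[1], p[1] x p[2], ..., p[n-1] x p[n]."""
    n = len(p) - 1  # number of matrices in the chain
    # m[i][j] = minimum cost of multiplying matrices i..j (1-indexed)
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length, bottom-up
        for i in range(1, n - length + 2):
            j = i + length - 1
            # try every split point k between i and j
            m[i][j] = min(
                m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                for k in range(i, j)
            )
    return m[1][n]
```

For example, for three matrices of dimensions 10x100, 100x5, and 5x50, the optimal parenthesization ((A1 A2) A3) costs 7500 scalar multiplications, versus 75000 for the alternative.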
This document discusses the concept of dynamic programming. It provides examples of dynamic programming problems including assembly line scheduling and matrix chain multiplication. The key steps of a dynamic programming problem are: (1) characterize the optimal structure of a solution, (2) define the problem recursively, (3) compute the optimal solution in a bottom-up manner by solving subproblems only once and storing results, and (4) construct an optimal solution from the computed information.
Dynamic Programming is an algorithmic paradigm that solves a given complex problem by breaking it into subproblems and stores the results of subproblems to avoid computing the same results again.
Dynamic programming is an algorithmic technique that solves problems by breaking them down into smaller subproblems and storing the results of subproblems to avoid recomputing them. It is useful for optimization problems with overlapping subproblems. The key steps are to characterize the structure of an optimal solution, recursively define the value of an optimal solution, compute that value, and construct the optimal solution. Examples discussed include rod cutting, longest increasing subsequence, longest palindrome subsequence, and palindrome partitioning. Other problems that can be solved with dynamic programming include edit distance, shortest paths, optimal binary search trees, the traveling salesman problem, and reliability design.
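The rod-cutting problem mentioned above illustrates the bottom-up computation step well. A minimal Python sketch, assuming `prices[i]` is the price of a piece of length `i + 1` (the price list in the test is the common textbook example, not from this document):

```python
def cut_rod(prices, n):
    """Maximum revenue obtainable from a rod of length n,
    computed bottom-up so each subproblem is solved once."""
    r = [0] * (n + 1)  # r[j] = best revenue for a rod of length j
    for j in range(1, n + 1):
        # best first cut of length i, plus the optimal rest
        r[j] = max(prices[i - 1] + r[j - i] for i in range(1, j + 1))
    return r[n]
```

The table `r` embodies the optimal-substructure step: the value for length `j` is defined only in terms of already-computed smaller lengths.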
Dynamic programming in Algorithm Analysis — Rajendran
The document discusses dynamic programming and amortized analysis. It covers:
1) An example of amortized analysis of dynamic tables, where the worst case cost of an insert is O(n) but the amortized cost is O(1).
2) Dynamic programming can be used when a problem breaks into recurring subproblems. The longest common subsequence is given as an example that dynamic programming solves in O(mn) time, rather than the brute-force O(2^m · n) approach of checking every subsequence of one string against the other.
3) The dynamic programming algorithm for longest common subsequence works by defining a 2D array c where c[i,j] represents the length of the LCS of the first i elements of one sequence and the first j elements of the other.
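The LCS recurrence described above can be sketched directly in Python (a minimal illustration; variable names follow the `c[i,j]` convention in the summary):

```python
def lcs_length(x, y):
    """Length of the longest common subsequence of x and y in O(mn)."""
    m, n = len(x), len(y)
    # c[i][j] = LCS length of the first i elements of x
    # and the first j elements of y
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1   # extend a common subsequence
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])  # drop one element
    return c[m][n]
```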
This document discusses algorithms for finding minimum and maximum elements in an array, including simultaneous minimum and maximum algorithms. It introduces dynamic programming as a technique for improving inefficient divide-and-conquer algorithms by storing results of subproblems to avoid recomputing them. Examples of dynamic programming include calculating the Fibonacci sequence and solving an assembly line scheduling problem to minimize total time.
Dynamic programming is an algorithm design paradigm that can be applied to problems exhibiting optimal substructure and overlapping subproblems. It works by breaking down a problem into subproblems and storing the results of already solved subproblems, rather than recomputing them multiple times. This allows for an efficient bottom-up approach. Examples where dynamic programming can be applied include the matrix chain multiplication problem, the 0-1 knapsack problem, and finding the longest common subsequence between two strings.
This document discusses dynamic programming techniques. It covers matrix chain multiplication and all pairs shortest paths problems. Dynamic programming involves breaking down problems into overlapping subproblems and storing the results of already solved subproblems to avoid recomputing them. It has four main steps - defining a mathematical notation for subproblems, proving optimal substructure, deriving a recurrence relation, and developing an algorithm using the relation.
(1) Dynamic programming is an algorithm design technique that solves problems by breaking them down into smaller subproblems and storing the results of already solved subproblems. (2) It is applicable to problems where subproblems overlap and solving them recursively would result in redundant computations. (3) The key steps of a dynamic programming algorithm are to characterize the optimal structure, define the problem recursively in terms of optimal substructures, and compute the optimal solution bottom-up by solving subproblems only once.
The document discusses the dynamic programming approach to solving the Fibonacci numbers problem and the rod cutting problem. It explains that dynamic programming formulations first express the problem recursively but then optimize it by storing results of subproblems to avoid recomputing them. This is done either through a top-down recursive approach with memoization or a bottom-up approach by filling a table with solutions to subproblems of increasing size. The document also introduces the matrix chain multiplication problem and how it can be optimized through dynamic programming by considering overlapping subproblems.
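The two formulations described here — top-down with memoization and bottom-up table filling — can be contrasted for Fibonacci numbers in a few lines of Python (a minimal sketch; function names are mine):

```python
def fib_memo(n, memo=None):
    """Top-down: recurse, but store each result so it is computed once."""
    if memo is None:
        memo = {0: 0, 1: 1}
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

def fib_bottom_up(n):
    """Bottom-up: fill a table from the smallest subproblems upward."""
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Both run in O(n) time, in contrast to the exponential cost of naive recursion.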
Dynamic programming is used to solve optimization problems by combining solutions to overlapping subproblems. It works by breaking down problems into subproblems, solving each subproblem only once, and storing the solutions in a table to avoid recomputing them. There are two key properties for applying dynamic programming: overlapping subproblems and optimal substructure. Some applications of dynamic programming include finding shortest paths, matrix chain multiplication, the traveling salesperson problem, and knapsack problems.
Dynamic programming is used to solve optimization problems by breaking them down into subproblems. It solves each subproblem only once, storing the results in a table to lookup when the subproblem recurs. This avoids recomputing solutions and reduces computation. The key is determining the optimal substructure of problems. It involves characterizing optimal solutions recursively, computing values in a bottom-up table, and tracing back the optimal solution. An example is the 0/1 knapsack problem to maximize profit fitting items in a knapsack of limited capacity.
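The 0/1 knapsack example mentioned above follows the same table-filling pattern. A minimal Python sketch (the item data in the test is illustrative, not from the document):

```python
def knapsack(weights, profits, capacity):
    """0/1 knapsack: V[i][w] = best profit using the first i items
    within weight limit w, computed bottom-up."""
    n = len(weights)
    V = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            V[i][w] = V[i - 1][w]                 # option 1: skip item i
            if weights[i - 1] <= w:               # option 2: take item i
                V[i][w] = max(V[i][w],
                              V[i - 1][w - weights[i - 1]] + profits[i - 1])
    return V[n][capacity]
```

Tracing back through `V` (comparing each `V[i][w]` with `V[i-1][w]`) recovers which items were chosen, which is the "tracing back the optimal solution" step the summary refers to.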
Dynamic programming is an algorithm design technique for optimization problems that reduces time by increasing space usage. It works by breaking problems down into overlapping subproblems and storing the solutions to subproblems, rather than recomputing them, to build up the optimal solution. The key aspects are identifying the optimal substructure of problems and handling overlapping subproblems in a bottom-up manner using tables. Examples that can be solved with dynamic programming include the knapsack problem, shortest paths, and matrix chain multiplication.
Dynamic Programming is a method for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions using a memory-based data structure (array, map, etc.).
The document discusses three fundamental algorithms paradigms: recursion, divide-and-conquer, and dynamic programming. Recursion uses method calls to break down problems into simpler subproblems. Divide-and-conquer divides problems into independent subproblems, solves each, and combines solutions. Dynamic programming breaks problems into overlapping subproblems and builds up solutions, storing results of subproblems to avoid recomputing them. Examples like mergesort and calculating Fibonacci numbers are provided to illustrate the approaches.
Dynamic programming is a technique for solving problems with overlapping subproblems and optimal substructure. It works by breaking problems down into smaller subproblems and storing the results in a table to avoid recomputing them. Examples where it can be applied include the knapsack problem, longest common subsequence, and computing Fibonacci numbers efficiently through bottom-up iteration rather than top-down recursion. The technique involves setting up recurrences relating larger instances to smaller ones, solving the smallest instances, and building up the full solution using the stored results.
Introduction to Dynamic Programming, Principle of Optimality — Bhavin Darji
Introduction
Dynamic Programming
How Dynamic Programming reduces computation
Steps in Dynamic Programming
Dynamic Programming Properties
Principle of Optimality
Problem solving using Dynamic Programming
The document discusses the matrix chain multiplication problem, which involves finding the most efficient way to multiply a sequence of matrices by determining the optimal parenthesization. It describes that there are multiple ways to multiply the matrices and lists an example of different possibilities. It then introduces a dynamic programming approach to solve this problem in polynomial time by treating it as the combination of optimal solutions to subproblems. The algorithm works by computing a minimum cost table and split table to track the optimal way to multiply the matrices.
The dynamic programming design technique is one of the fundamental algorithm design techniques, and possibly one of the hardest to master for those who did not study it formally. These slides (a continuation of part 1) cover two problems: maximum-value contiguous subarray and maximum increasing subsequence.
Skiena algorithm 2007 lecture16: introduction to dynamic programming — zukun
This document summarizes a lecture on dynamic programming. It begins by introducing dynamic programming as a powerful tool for solving optimization problems on ordered items like strings. It then contrasts greedy algorithms, which make locally optimal choices, with dynamic programming, which systematically searches all possibilities while storing results. The document provides examples of computing Fibonacci numbers and binomial coefficients using dynamic programming by storing partial results rather than recomputing them. It outlines three key steps to applying dynamic programming: formulating a recurrence, bounding subproblems, and specifying an evaluation order.
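The binomial-coefficient example follows the three steps named above: the recurrence is Pascal's rule C(n,k) = C(n-1,k-1) + C(n-1,k), the subproblems are bounded by 0 ≤ k ≤ n, and the evaluation order is row by row. A minimal Python sketch of that scheme:

```python
def binomial(n, k):
    """C(n, k) via Pascal's rule, storing partial results in a table
    and filling it row by row (the evaluation order)."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        C[i][0] = 1                          # base case: C(i, 0) = 1
        for j in range(1, min(i, k) + 1):
            C[i][j] = C[i - 1][j - 1] + C[i - 1][j]  # Pascal's rule
    return C[n][k]
```

Each table entry is computed once from the row above, so the cost is O(nk) rather than the exponential cost of the bare recursion.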
Dynamic programming is a mathematical optimization method and computer programming technique used to solve complex problems by breaking them down into simpler subproblems. It was developed by Richard Bellman in the 1950s and has been applied in many fields. Dynamic programming problems can be solved optimally by breaking them into subproblems with optimal substructure that can be solved recursively, using top-down or bottom-up approaches and storing subproblem results to avoid recomputing common subproblems. Multistage graphs are presented as a class of problems well suited to dynamic programming, alongside alternatives such as greedy algorithms and Dijkstra's algorithm for finding shortest paths. Traversal and search algorithms such as breadth-first search are also touched on.
The document discusses the 0/1 knapsack problem and different approaches to find the optimal solution. It describes the problem as filling a knapsack from objects with given weights and benefits to maximize the total benefit within a weight limit. It then summarizes dynamic programming and greedy approaches to solve the problem, and shows the optimal solution is to choose items with weights 3, 4, and 5 to get a total benefit of 9 within the weight limit of 7.
The document contains pseudocode for four algorithms:
1) Binomial coefficient algorithm to compute binomial coefficients using dynamic programming.
2) Warshall's algorithm to find the transitive closure of a graph by computing the path matrix between all pairs of vertices.
3) Floyd's algorithm to find all pairs shortest paths in a weighted graph using dynamic programming.
4) Knapsack algorithm to find the optimal solution to the knapsack problem using dynamic programming.
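Floyd's algorithm from the list above has a particularly compact dynamic programming formulation: the table is the distance matrix itself, updated as each vertex k is allowed as an intermediate point. A minimal Python sketch (the adjacency matrix in the test is illustrative):

```python
INF = float("inf")

def floyd(dist):
    """All-pairs shortest paths. dist is an n x n matrix where
    dist[i][j] is the edge weight (INF if no edge, 0 on the diagonal)."""
    n = len(dist)
    d = [row[:] for row in dist]  # copy so the input is not mutated
    for k in range(n):            # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

Warshall's transitive-closure algorithm is the same triple loop with boolean OR/AND in place of addition and minimum.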
This document contains information about Kamalesh Karmakar, an assistant professor in the computer science department at Meghnad Saha Institute of Technology. It lists the algorithm topics he teaches, including algorithm analysis, design techniques, complexity theory, and more. It also provides references for algorithm textbooks and notes on time and space complexity analysis, asymptotic notation, and different algorithm design techniques like divide-and-conquer, dynamic programming, backtracking, and greedy methods.
Algorithm Analysis and Design Class Notes — Kumar Avinash
The document discusses the analysis of algorithms. It begins by defining an algorithm and describing different types. It then covers analyzing algorithms in terms of correctness, time efficiency, space efficiency, and optimality through theoretical and empirical analysis. The document discusses analyzing time efficiency by determining the number of repetitions of basic operations as a function of input size. It provides examples of input size, basic operations, and formulas for counting operations. It also covers analyzing best, worst, and average cases and establishes asymptotic efficiency classes. The document then analyzes several examples of non-recursive and recursive algorithms.
This document summarizes a lecture on algorithms and graph traversal techniques. It discusses:
1) Breadth-first search (BFS) and depth-first search (DFS) algorithms for traversing graphs. BFS uses a queue while DFS uses a stack.
2) Applications of BFS and DFS, including finding connected components, minimum spanning trees, and bi-connected components.
3) Identifying articulation points to determine biconnected components in a graph.
4) The 0/1 knapsack problem and approaches for solving it using greedy algorithms, backtracking, and branch and bound search.
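The queue-versus-stack distinction in point 1 is the entire difference between the two traversals. A minimal BFS sketch in Python (the adjacency-list dictionary in the test is a hypothetical example graph):

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first search using a queue; returns vertices in the
    order they are first visited."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        u = queue.popleft()       # FIFO: this line is what makes it BFS
        order.append(u)
        for v in graph.get(u, []):
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return order
```

Replacing the deque with a stack (using `pop()` instead of `popleft()`) turns this into an iterative depth-first search.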
This document provides an overview of dynamic programming. It begins by explaining that dynamic programming is a technique for solving optimization problems by breaking them down into overlapping subproblems and storing the results of solved subproblems in a table to avoid recomputing them. It then provides examples of problems that can be solved using dynamic programming, including Fibonacci numbers, binomial coefficients, shortest paths, and optimal binary search trees. The key aspects of dynamic programming algorithms, including defining subproblems and combining their solutions, are also outlined.
Dynamic programming is an algorithm design technique that solves problems by breaking them down into smaller overlapping subproblems and storing the results of already solved subproblems, rather than recomputing them. It is applicable to problems exhibiting optimal substructure and overlapping subproblems. The key steps are to define the optimal substructure, recursively define the optimal solution value, compute values bottom-up, and optionally reconstruct the optimal solution. Common examples that can be solved with dynamic programming include knapsack, shortest paths, matrix chain multiplication, and longest common subsequence.
Module 2ppt.pptx divide and conquer method — JyoReddy9
This document discusses dynamic programming and provides examples of problems that can be solved using dynamic programming. It covers the following key points:
- Dynamic programming can be used to solve problems that exhibit optimal substructure and overlapping subproblems. It works by breaking problems down into subproblems and storing the results of subproblems to avoid recomputing them.
- Examples of problems discussed include matrix chain multiplication, all pairs shortest path, optimal binary search trees, 0/1 knapsack problem, traveling salesperson problem, and flow shop scheduling.
- The document provides pseudocode for algorithms to solve matrix chain multiplication and optimal binary search trees using dynamic programming. It also explains the basic steps and principles of dynamic programming algorithm design.
815.07 machine learning using python.pdf — SairaAtta5
Recursive functions can be used to solve problems by breaking them down into smaller subproblems. Dynamic programming is a technique for solving recursive problems more efficiently by avoiding recomputing results. It works by either building up the solution from smallest to largest subproblems (bottom-up) or saving computed results to lookup later (top-down). Examples where dynamic programming improves performance include calculating factorials, Fibonacci numbers, binomial coefficients, and the Poisson-binomial distribution.
The document discusses dynamic programming and its application to the matrix chain multiplication problem. It begins by explaining dynamic programming as a bottom-up approach to solving problems by storing solutions to subproblems. It then details the matrix chain multiplication problem of finding the optimal way to parenthesize the multiplication of a chain of matrices to minimize operations. Finally, it provides an example applying dynamic programming to the matrix chain multiplication problem, showing the construction of cost and split tables to recursively build the optimal solution.
The document discusses recursive functions and provides examples of recursive algorithms for calculating factorial, greatest common divisor (GCD), Fibonacci numbers, power functions, and solving the Towers of Hanoi problem. Recursive functions are functions that call themselves during their execution. They break down problems into subproblems of the same type until reaching a base case. This recursive breakdown allows problems to be solved in a top-down, step-by-step manner.
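Towers of Hanoi is the classic example of the recursive breakdown described above: moving n disks reduces to two moves of n − 1 disks around a single move of the largest disk. A minimal Python sketch:

```python
def hanoi(n, source, target, spare, moves=None):
    """Towers of Hanoi: move n disks from source to target via spare.
    Returns the list of (from, to) moves; 2**n - 1 moves in total."""
    if moves is None:
        moves = []
    if n > 0:                                       # base case: n == 0 does nothing
        hanoi(n - 1, source, spare, target, moves)  # clear the top n-1 disks
        moves.append((source, target))              # move the largest disk
        hanoi(n - 1, spare, target, source, moves)  # restack the n-1 on top
    return moves
```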
Dynamic programming is a technique for solving complex problems by breaking them down into simpler sub-problems. It involves storing solutions to sub-problems for later use, avoiding recomputing them. Examples where it can be applied include matrix chain multiplication and calculating Fibonacci numbers. For matrix chains, dynamic programming finds the optimal order for multiplying matrices with minimum computations. For Fibonacci numbers, it calculates values in linear time by storing previous solutions rather than exponentially recomputing them through recursion.
Dynamic Programming and Reinforcement Learning applied to Tetris GameSuelen Carvalho
Slides presented as a work to Artificial Intelligence's class at IME-USP. This presentation is about how reinforcement learning is applied to a Tetris game.
Dynamic programming is a method for solving optimization problems by breaking them down into smaller subproblems. It has four key steps: 1) characterize the structure of an optimal solution, 2) recursively define the value of an optimal solution, 3) compute the value of an optimal solution bottom-up, and 4) construct an optimal solution from the information computed. For a problem to be suitable for dynamic programming, it must have two properties: optimal substructure and overlapping subproblems. Dynamic programming avoids recomputing the same subproblems by storing and looking up previous results.
Dynamic programming is a technique that breaks problems into subproblems and saves results to optimize solutions without recomputing subproblems. It is commonly used in computer science, mathematics, economics, and operations research for problems like the Fibonacci series, knapsack problem, and traveling salesman problem. Dynamic programming improves efficiency by storing subproblem solutions and avoiding redundant calculations. It can find optimal and approximate solutions to large problems. For the Fibonacci series, a dynamic programming approach builds up the sequence by calculating each term from the previous two terms rather than recursively calculating all subproblems.
This file contains the contents about dynamic programming, greedy approach, graph algorithm, spanning tree concepts, backtracking and branch and bound approach.
The document discusses linear programming (LP) and the simplex method for solving LP problems. It provides the following key points:
- LP is simpler than nonlinear programming and many problems can be formulated as LP problems.
- The simplex method provides an efficient systematic approach to solve LP problems by moving between extreme points in finite steps.
- The simplex method works by starting at an initial basic feasible solution and iteratively moving to adjacent extreme points to optimize the objective function, until an optimal solution is found.
- George Dantzig developed the simplex method in 1947 to solve military planning problems, establishing it as the most commonly used algorithm for solving LP problems.
This document provides an overview of key algorithm analysis concepts including:
- Common algorithmic techniques like divide-and-conquer, dynamic programming, and greedy algorithms.
- Data structures like heaps, graphs, and trees.
- Analyzing the time efficiency of recursive and non-recursive algorithms using orders of growth, recurrence relations, and the master's theorem.
- Examples of specific algorithms that use techniques like divide-and-conquer, decrease-and-conquer, dynamic programming, and greedy strategies.
- Complexity classes like P, NP, and NP-complete problems.
Here are the steps to solve this ODE problem:
1. Define the ODE function:
function dydt = odefun(t,y)
dydt = -t.*y/10;
end
2. Solve the ODE:
[t,y] = ode45(@odefun,[0 10],10);
3. Plot the result:
plot(t,y)
xlabel('t')
ylabel('y(t)')
This uses ode45 to solve the ODE dy/dt = -t*y/10 on the interval [0,10] with initial condition y(0)=10.
2. 2
Topics
What is Dynamic Programming
Binomial Coefficient
Floyd’s Algorithm
Chained Matrix Multiplication
Optimal Binary Search Tree
Traveling Salesperson
3. 3
Why Dynamic Programming?
Divide-and-Conquer: a top-down approach. Many smaller instances are computed more than once.
Dynamic programming: a bottom-up approach. Solutions for smaller instances are stored in a table for later use.
4. 4
Dynamic Programming
An Algorithm Design Technique
A framework to solve Optimization problems
Elements of Dynamic Programming
Dynamic programming version of a recursive algorithm
Developing a Dynamic Programming Algorithm
– Example: Multiplying a Sequence of Matrices
5. 5
Why Dynamic Programming?
• It sometimes happens that the natural way of dividing an instance suggested by the structure of the problem leads us to consider several overlapping subinstances.
• If we solve each of these independently, they will in turn create a large number of identical subinstances.
• If we pay no attention to this duplication, it is likely that we will end up with an inefficient algorithm.
• If, on the other hand, we take advantage of the duplication and solve each subinstance only once, saving the solution for later use, then a more efficient algorithm will result.
6. 6
Why Dynamic Programming? …
The underlying idea of dynamic programming is thus quite simple: avoid calculating the same thing twice, usually by keeping a table of known results, which we fill up as subinstances are solved.
• Dynamic programming is a bottom-up technique.
• Examples:
1) Fibonacci numbers
2) Computing a binomial coefficient
7. 7
Dynamic Programming
• Dynamic Programming is a general algorithm design technique.
• Invented by American mathematician Richard Bellman in the 1950s to solve optimization problems.
• “Programming” here means “planning”.
• Main idea:
• solve several smaller (overlapping) subproblems.
• record solutions in a table so that each subproblem is only solved once.
• final state of the table will be (or contain) the solution.
8. 8
Dynamic Programming
Define a container to store intermediate results
Access container versus recomputing results
Fibonacci numbers example (top down)
– Use vector to store results as calculated so they are not re-calculated
10. 10
Example: Fibonacci numbers
• Recall the definition of Fibonacci numbers:
f(0) = 0
f(1) = 1
f(n) = f(n-1) + f(n-2) for n ≥ 2
• Computing the nth Fibonacci number recursively (top-down):
                      f(n)
              f(n-1)    +    f(n-2)
      f(n-2) + f(n-3)    f(n-3) + f(n-4)
11. 11
Fib vs. fibDyn
int fib(int n) {
if (n <= 1) return n; // stopping conditions
else return fib(n-1) + fib(n-2); // recursive step
}
int fibDyn(int n, vector<int>& fibList) {
int fibValue;
if (fibList[n] >= 0) // check for a previously computed result and return
return fibList[n];
// otherwise execute the recursive algorithm to obtain the result
if (n <= 1) // stopping conditions
fibValue = n;
else // recursive step
fibValue = fibDyn(n-1, fibList) + fibDyn(n-2, fibList);
// store the result and return its value
fibList[n] = fibValue;
return fibValue;
}
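Note that fibDyn relies on fibList being pre-filled with a sentinel value, so that "not yet computed" entries are detectable. A self-contained usage sketch (the -1 initialization and the fibMemo wrapper name are our assumptions; the slide does not show how the vector is prepared):

```cpp
#include <vector>

// Memoized (top-down) Fibonacci, as in fibDyn above.
int fibDyn(int n, std::vector<int>& fibList) {
    if (fibList[n] >= 0)            // previously computed result?
        return fibList[n];
    int fibValue = (n <= 1) ? n     // stopping conditions
                 : fibDyn(n - 1, fibList) + fibDyn(n - 2, fibList);
    fibList[n] = fibValue;          // store the result before returning
    return fibValue;
}

int fibMemo(int n) {
    std::vector<int> fibList(n + 1, -1);  // -1 marks "not yet computed"
    return fibDyn(n, fibList);
}
```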
16. 16
Top down vs. Bottom up
Top down dynamic programming moves through the recursive process and stores results as the algorithm computes.
Bottom up dynamic programming evaluates by computing all function values in order, starting at the lowest and using previously computed values.
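The bottom-up alternative to fibDyn fills the table in increasing order instead of recursing; a minimal sketch (fibBottomUp is our own name for it, not from the slides):

```cpp
#include <vector>

// Bottom-up Fibonacci: compute f(0)..f(n) in increasing order,
// so each value only reads the two previously stored values.
long long fibBottomUp(int n) {
    if (n <= 1) return n;
    std::vector<long long> f(n + 1);
    f[0] = 0;
    f[1] = 1;
    for (int i = 2; i <= n; ++i)
        f[i] = f[i - 1] + f[i - 2];
    return f[n];
}
```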
17. 17
Examples of Dynamic Programming Algorithms
• Computing binomial coefficients
• Optimal chain matrix multiplication
• Floyd’s algorithm for all-pairs shortest paths
• Constructing an optimal binary search tree
• Some instances of difficult discrete optimization problems:
• travelling salesman
• knapsack
18. 18
A framework to solve Optimization problems
For each current choice:
– Determine what subproblem(s) would remain if this choice were made.
– Recursively find the optimal costs of those subproblems.
– Combine those costs with the cost of the current choice itself to obtain an overall cost for this choice.
Select a current choice that produced the minimum overall cost.
19. 19
Elements of Dynamic Programming
Constructing the solution to a problem by building it up dynamically from solutions to smaller (or simpler) sub-problems
– sub-instances are combined to obtain sub-instances of increasing size, until finally arriving at the solution of the original instance.
– make a choice at each step, but the choice may depend on the solutions to sub-problems.
20. 20
Elements of Dynamic Programming …
Principle of optimality
– the optimal solution to any nontrivial instance of a problem is a combination of optimal solutions to some of its sub-instances.
Memoization (for overlapping sub-problems)
– avoid calculating the same thing twice,
– usually by keeping a table of known results that fills up as sub-instances are solved.
21. 21
Development of a dynamic programming algorithm
Characterize the structure of an optimal solution
– break the problem into sub-problems
– check whether the principle of optimality applies
Recursively define the value of an optimal solution
– define the value of an optimal solution based on the values of solutions to sub-problems
Compute the value of an optimal solution in a bottom-up fashion
– compute in a bottom-up fashion and save the values along the way
– later steps use the saved values of previous steps
Construct an optimal solution from computed information
22. 22
Binomial Coefficient
Binomial coefficient:
C(n, k) = n! / (k! (n−k)!)   for 0 ≤ k ≤ n
Cannot compute using this formula because of n!
Instead, use the following formula:
23. 23
Binomial Using Divide & Conquer
Binomial formula:
C(n, k) = C(n−1, k−1) + C(n−1, k)   for 0 < k < n
C(n, k) = 1   for k = 0 or k = n
24. 24
Binomial using Dynamic Programming
Just like Fibonacci, that formula is very inefficient
Instead, we can use the following:
(a + b)^n = C(n, 0) a^n + … + C(n, i) a^(n−i) b^i + … + C(n, n) b^n
27. 27
Binomial Coefficient
Record the values in a table of n+1 rows and k+1 columns

        0   1   2   3   …   k−1          k
  0     1
  1     1   1
  2     1   2   1
  3     1   3   3   1
  …
  k     1                                1
  …
  n−1   1               …   C(n−1,k−1)   C(n−1,k)
  n     1               …                C(n,k)

Each entry C(n, k) is obtained from the two entries above it: C(n−1, k−1) + C(n−1, k).
28. 28
Binomial Coefficient
ALGORITHM Binomial(n, k)
//Computes C(n, k) by the dynamic programming algorithm
//Input: A pair of nonnegative integers n ≥ k ≥ 0
//Output: The value of C(n, k)
for i ← 0 to n do
    for j ← 0 to min(i, k) do
        if j = 0 or j = i
            C[i, j] ← 1
        else C[i, j] ← C[i−1, j−1] + C[i−1, j]
return C[n, k]

Time efficiency: each cell strictly inside the table costs one addition, so
A(n, k) = Σ_{i=1..k} Σ_{j=1..i−1} 1 + Σ_{i=k+1..n} Σ_{j=1..k} 1
        = Σ_{i=1..k} (i−1) + Σ_{i=k+1..n} k
        = (k−1)k/2 + k(n−k) ∈ Θ(nk)
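The pseudocode above translates almost line for line into C++; a self-contained sketch (the function name binomial and the choice of long long are ours):

```cpp
#include <algorithm>
#include <vector>

// C(n, k) by dynamic programming: fill the table row by row, computing
// each entry from the two entries above it (Pascal's rule).
long long binomial(int n, int k) {
    std::vector<std::vector<long long>> C(n + 1, std::vector<long long>(k + 1, 0));
    for (int i = 0; i <= n; ++i)
        for (int j = 0; j <= std::min(i, k); ++j)
            if (j == 0 || j == i)
                C[i][j] = 1;                              // edges of the table
            else
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j];  // Pascal's rule
    return C[n][k];
}
```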
29. 29
Floyd’s Algorithm: All pairs shortest paths
• Find shortest path when direct path doesn’t exist
• In a weighted graph, find shortest paths between every pair of vertices
• Same idea: construct solution through series of matrices D(0), D(1), … using an initial subset of the vertices as intermediaries.
• Example: [figure: a small weighted digraph on vertices 1–4 with edge weights 1–6]
30. 30
Shortest Path
Optimization problem – more than one candidate for the solution
Solution is the candidate with optimal value
Solution 1 – brute force
– Find all possible paths, compute minimum
– Efficiency?
Solution 2 – dynamic programming
– Algorithm that determines only lengths of shortest paths
– Modify to produce shortest paths as well
Worse than O(n²)
35. 35
Floyd’s Algorithm: All pairs shortest paths
ALGORITHM Floyd(W[1…n, 1…n])
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            W[i, j] ← min{W[i, j], W[i, k] + W[k, j]}
return W
Efficiency = ?
Θ(n³)
36. 36
Example: All-pairs shortest-path problem
Example: Apply Floyd’s algorithm to solve the all-pairs shortest-path problem of the digraph defined by the following weight matrix
0  2  ∞  1  8
6  0  3  2  ∞
∞  ∞  0  4  ∞
∞  ∞  2  0  3
3  ∞  ∞  ∞  0
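Running the triple loop from the previous slide over this weight matrix can be sketched as follows (INF is a large sentinel standing in for ∞, and indices are 0-based, so vertex 1 of the slide is index 0 here):

```cpp
#include <algorithm>
#include <vector>

const int INF = 1000000;  // stands in for "no edge" (our convention)

// Floyd's algorithm: W starts as the weight matrix and ends holding
// the shortest-path distance between every pair of vertices.
void floyd(std::vector<std::vector<int>>& W) {
    int n = (int)W.size();
    for (int k = 0; k < n; ++k)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                if (W[i][k] < INF && W[k][j] < INF)
                    W[i][j] = std::min(W[i][j], W[i][k] + W[k][j]);
}
```

On the matrix above this yields, for example, d(1,3) = 3 via 1→4→3 and d(5,3) = 6 via 5→1→4→3 (slide numbering).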
38. 38
Chained Matrix Multiplication
Problem: Matrix-chain multiplication
– a chain A1, A2, …, An of n matrices
– find a way that minimizes the number of scalar multiplications to compute the product A1A2…An
Strategy:
Breaking a problem into sub-problems
– A1A2...Ak, Ak+1Ak+2…An
Recursively define the value of an optimal solution
– m[i, j] = 0 if i = j
– m[i, j] = min over i ≤ k < j of (m[i, k] + m[k+1, j] + p(i−1) p(k) p(j))
– for 1 ≤ i ≤ j ≤ n
39. 39
Example
Suppose we want to multiply a 2x3 matrix with a 3x4 matrix
Result is a 2x4 matrix
In general, an i x j matrix times a j x k matrix requires i x j x k elementary multiplications
40. 40
Example
Consider multiplication of four matrices:
A x B x C x D
(20 x 2) (2 x 30) (30 x 12) (12 x 8)
Matrix multiplication is associative
A(B (CD)) = (AB) (CD)
Five different orders for multiplying 4 matrices
1. A(B(CD)) = 30*12*8 + 2*30*8 + 20*2*8 = 3,680
2. (AB)(CD) = 20*2*30 + 30*12*8 + 20*30*8 = 8,880
3. A((BC)D) = 2*30*12 + 2*12*8 + 20*2*8 = 1,232
4. ((AB)C)D = 20*2*30 + 20*30*12 + 20*12*8 = 10,320
5. (A(BC))D = 2*30*12 + 20*2*12 + 20*12*8 = 3,120
41. 41
Algorithm
int minmult (int n, const int d[], index P[][])
{
  index i, j, k, diagonal;
  int M[1..n][1..n];                 // pseudocode: 1-based cost table
  for (i = 1; i <= n; i++)
    M[i][i] = 0;                     // a chain of one matrix costs nothing
  for (diagonal = 1; diagonal <= n-1; diagonal++)
    for (i = 1; i <= n-diagonal; i++)
    { j = i + diagonal;
      M[i][j] = minimum(M[i][k] + M[k+1][j] + d[i-1]*d[k]*d[j]);
                // minimum over i <= k <= j-1
      P[i][j] = a value of k that gave the minimum;
    }
  return M[1][n];
}
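The slide keeps C-like pseudocode (M[1..n][1..n] and "minimum(...)" are not legal C++). A self-contained runnable sketch, exercised with the dimension array {20, 2, 30, 12, 8} for the A x B x C x D example above (minMult is our restatement; it returns only the cost, omitting the split table P):

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// Matrix-chain DP: matrix Ai has dimensions d[i-1] x d[i].
// Returns the minimum number of scalar multiplications for A1 A2 ... An.
int minMult(const std::vector<int>& d) {
    int n = (int)d.size() - 1;                      // number of matrices
    std::vector<std::vector<int>> M(n + 1, std::vector<int>(n + 1, 0));
    for (int diagonal = 1; diagonal <= n - 1; ++diagonal)
        for (int i = 1; i <= n - diagonal; ++i) {
            int j = i + diagonal;
            M[i][j] = INT_MAX;
            for (int k = i; k <= j - 1; ++k)        // try every split point
                M[i][j] = std::min(M[i][j],
                                   M[i][k] + M[k + 1][j] + d[i - 1] * d[k] * d[j]);
        }
    return M[1][n];
}
```

minMult({20, 2, 30, 12, 8}) returns 1,232, matching the best order A((BC)D) from the example.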
42. 42
Optimal Binary Trees
Optimal way of constructing a binary search tree
Minimum depth, balanced (if all keys have same probability of being the search key)
What if the probabilities are not all the same?
Multiply the probability of accessing each key by the number of links followed to get to that key
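The cost measure described here, probability times the number of nodes visited per key, summed over all keys, can be sketched as follows (the example probabilities and tree shapes below are made-up illustration values, not from the slides):

```cpp
#include <vector>

// Expected search cost of a given BST shape: each key contributes
// probability * (depth + 1), the number of nodes visited to reach it.
double expectedSearchCost(const std::vector<double>& prob,
                          const std::vector<int>& depth) {
    double cost = 0.0;
    for (std::size_t i = 0; i < prob.size(); ++i)
        cost += prob[i] * (depth[i] + 1);   // root is depth 0: one comparison
    return cost;
}
```

For three ordered keys with probabilities {0.7, 0.2, 0.1}, the balanced tree (depths {1, 0, 1}) costs 0.7*2 + 0.2*1 + 0.1*2 = 1.8, while the right-leaning chain rooted at the most popular key (depths {0, 1, 2}) costs 0.7*1 + 0.2*2 + 0.1*3 = 1.4, illustrating that balance is not always optimal when probabilities differ.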
44. 44
Traveling Salesperson
The Traveling Salesman Problem (TSP) is a deceptively simple combinatorial problem. It can be stated very simply:
A salesman spends his time visiting n cities (or nodes) cyclically. In one tour he visits each city just once, and finishes up where he started. In what order should he visit them to minimize the distance traveled?
45. 45
Why study?
The problem has some direct importance, since quite a lot of practical applications can be put in this form.
It also has a theoretical importance in complexity theory, since the TSP is one of the class of "NP Complete" combinatorial problems.
NP Complete problems are intractable in the sense that no one has found any really efficient way of solving them for large n.
– They are also known to be more or less equivalent to each other; if you knew how to solve one kind of NP Complete problem you could solve the lot.
46. 46
Efficiency
The holy grail is to find a solution algorithm that gives an optimal solution in a time that has a polynomial variation with the size n of the problem.
The best that people have been able to do, however, is to solve it in a time that varies exponentially with n.
47. 47
Later…
We’ll get back to the traveling salesperson problem in the next chapter….
49. 49
Chapter Summary
• Dynamic programming is similar to divide-and-conquer.
• Dynamic programming is a bottom-up approach.
• Dynamic programming stores the results (small instances) in a table and reuses them instead of recomputing them.
• Two steps in development of a dynamic programming algorithm:
• Establish a recursive property
• Solve an instance of the problem in a bottom-up fashion
51. 51
Rules of Sudoku
• Place a number (1-9) in each blank cell.
• Each row (nine lines from left to right), each column (nine lines from top to bottom), and each 3x3 block bounded by bold lines (nine blocks) contains the numbers 1 through 9.
52. 52
A Little Help Please…
Try this:
– http://www.ccs.neu.edu/jpt/sudoku/