The document discusses the technique of dynamic programming. It begins with an example of using dynamic programming to compute the Fibonacci numbers more efficiently than a naive recursive solution. This involves storing previously computed values in a table to avoid recomputing them. The document then presents the problem of finding the longest increasing subsequence in an array. It defines the problem and subproblems, derives a recurrence relation, and provides both recursive and iterative memoized algorithms to solve it in quadratic time using dynamic programming.
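The two techniques summarized above can be sketched in a few lines (a minimal illustration, not the document's own code; the function names are mine). The first function memoizes Fibonacci values so each is computed once; the second is the quadratic longest-increasing-subsequence DP.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Fibonacci with memoization: each value is computed once, O(n) total."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def lis_length(a):
    """Longest strictly increasing subsequence via the quadratic DP."""
    best = [1] * len(a)              # best[i]: LIS length ending at index i
    for i in range(len(a)):
        for j in range(i):
            if a[j] < a[i]:
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)
```

The double loop examines every earlier index j for each i, giving the O(n²) bound the summary mentions.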
I am Elijah L. I am an Algorithm Assignment Expert at programminghomeworkhelp.com. I hold a Bachelor’s Degree in Programming, Leeds University, UK. I have been helping students with their homework for the past 6 years. I solve assignments related to Algorithms.
Visit programminghomeworkhelp.com or email support@programminghomeworkhelp.com. You can also call +1 678 648 4277 for any assistance with Algorithm assignments.

I am Charles G. I am a Signal Processing Assignment Expert at matlabassignmentexperts.com. I hold a Ph.D. in Matlab, The Pennsylvania State University. I have been helping students with their homework for the past 6 years. I solve assignments related to Signal Processing.
Visit matlabassignmentexperts.com or email info@matlabassignmentexperts.com.
You can also call on +1 678 648 4277 for any assistance with Signal Processing Assignments.
The document provides information about getting help with algorithm assignments. It lists a website, email address, and phone number that can be used for support regarding algorithm homework help.
This is a detailed review of ACM International Collegiate Programming Contest (ICPC) Northeastern European Regional Contest (NEERC) 2016 Problems. It includes a summary of problem and names of problem authors and detailed runs statistics for each problem.
Problem statements are available here:
http://neerc.ifmo.ru/information/problems.pdf
The beginning of the following video has the actual review (in Russian): https://www.youtube.com/watch?v=fN25KkNYsjA
I am Duncan V. I am a Digital Signal Processing Assignment Expert at matlabassignmentexperts.com. I hold a Ph.D. in Matlab, Ball State University, Indiana. I have been helping students with their homework for the past 8 years. I solve assignments related to Digital Signal Processing.
Visit matlabassignmentexperts.com or email info@matlabassignmentexperts.com.
You can also call on +1 678 648 4277 for any assistance with Digital Signal Processing Assignments.
A primality test is an algorithm for determining whether an input number is prime. Among other applications, it is used in cryptography. Factorization is thought to be a computationally difficult problem, whereas primality testing is comparatively easy (its running time is polynomial in the size of the input).
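For contrast with the polynomial-time tests mentioned above, the simplest deterministic test is trial division (a sketch of my own; note its running time is polynomial in the *value* of n but exponential in the number of digits, which is exactly why better tests matter):

```python
def is_prime_trial(n):
    """Deterministic primality by trial division up to sqrt(n).
    Simple and correct, but exponential in the bit-length of n --
    fine for small inputs, unlike the polynomial-time tests (e.g. AKS)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True
```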
These questions are often asked during job interviews and were prepared by classical programming experts, as were the accompanying solutions. Knowing the solutions to these problems will help you clear your concepts.
The document summarizes the problems from the Northeastern European Regional Contest (NEERC). It provides statistics on each of the 12 problems, including the number of teams that solved it and the overall success rate. It then discusses the key ideas for solving three of the problems - Adjustment Office, Binary vs Decimal, and Cactus Jubilee - in 1-3 sentences each.
This document summarizes 9 problems from the ACM ICPC 2013-2014 Northeastern European Regional Contest. It provides an overview of the key algorithms and approaches for each problem, describing them concisely in 1-3 sentences per problem. The problems cover a range of algorithmic topics including exhaustive search, dynamic programming, graph algorithms, geometry, and more.
The document describes solutions to several algorithmic problems:
1) It proposes using a sweeping line algorithm to check polygons for self-intersections and intersections between polygons.
2) It suggests a solution that uses left-hand and right-hand rule traversals from an entrance to a lair, precomputing blocked cells to efficiently check if an obstacle blocks both paths.
3) It outlines an approach using dynamic programming to build a protein sequence from prefix and suffix to maximize the number of explained peaks in a matrix, in O(n³) time.
This document discusses the complexity of primality testing. It begins by explaining what prime and composite numbers are, and why primality testing is important for applications like public-key cryptography that rely on the assumption that factoring large composite numbers is computationally difficult. It then covers algorithms for primality testing like the Monte Carlo algorithm and discusses their runtime complexities. It shows that while testing if a number is composite can be done in polynomial time, general number factoring is believed to require exponential time, making primality testing an important problem.
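The Monte Carlo primality algorithm the document refers to is most likely Miller–Rabin (an assumption on my part; the sketch below is the standard version). A composite n is declared composite with probability at least 1 − 4^(−rounds); a prime always passes.

```python
import random

def miller_rabin(n, rounds=20):
    """Monte Carlo primality test (Miller-Rabin).
    Returns True for primes always; for composites the chance of a wrong
    True answer is at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):          # handle tiny factors directly
        if n % p == 0:
            return n == p
    # write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a is a witness that n is composite
    return True
```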
This document discusses approximation algorithms and introduces several combinatorial optimization problems. It begins by explaining that approximation algorithms are needed to find near-optimal solutions for problems that cannot be solved in polynomial time, such as set cover and bin packing. It then provides examples of problems that are in P, NP, and NP-complete. Several techniques for designing approximation algorithms are outlined, including greedy algorithms, linear programming, and semidefinite programming. Specific NP-complete problems like vertex cover, set cover, and independent set are introduced, and approximation algorithms with performance guarantees are provided for set cover and vertex cover.
I am Travis W. I am a Computer Science Assignment Expert at programminghomeworkhelp.com. I hold a Master's in Computer Science, Leeds University. I have been helping students with their homework for the past 9 years. I solve assignments related to Computer Science.
Visit programminghomeworkhelp.com or email support@programminghomeworkhelp.com. You can also call +1 678 648 4277 for any assistance with Computer Science assignments.
This document discusses the implementation of digital filters in fixed-point arithmetic on embedded systems. It presents the need for methodology and tools to design fixed-point embedded filter systems. The key steps are: 1) choosing a filter algorithm, 2) rounding coefficients to fixed-point, and 3) implementing the algorithm. Optimal implementations minimize degradation from quantization errors while meeting resource constraints. The document outlines a global flow from filter design to code generation and optimization.
Error control codes are necessary for the transmission and storage of large volumes of error-sensitive data. BCH codes and Reed-Solomon codes are the most important classes of multiple error correcting codes for binary and non-binary channels respectively. Peterson, and later Berlekamp and Massey, discovered powerful algorithms which became viable with the help of new digital technology. The use of Galois fields gave a structured approach to the design of these codes. This presentation deals with the above in a structured and systematic manner.
Gives a basic idea of finite field theory and its uses in elliptic curve cryptography: the ECDLP, Diffie-Hellman key exchange, and ElGamal encryption with ECC.
This presentation on Elliptic Curve Cryptography gives a brief explanation of the topic and will enrich your knowledge of it. Use this ppt for reference purposes, and ask questions if you have any queries.
The document summarizes the Agrawal-Kayal-Saxena (AKS) primality test, which can determine if a number n is prime in polynomial time. It discusses different categories of primality tests, including tests where n being prime implies the test holds, tests where the test holding implies n is prime, and the AKS test, where the test holds if and only if n is prime. The AKS algorithm is described along with a proof of correctness and a runtime analysis showing it runs in polynomial time. An example application of the AKS test to the number 1993 is provided to demonstrate it.
The document provides homework problems and solutions related to signal processing. It includes problems on determining the frequency and period of signals, properties of even and odd signals, sampling of continuous signals, and periodicity of sums of signals. It also provides detailed solutions and explanations to each problem.
I am Christopher Hemmingway. I am a Computer Science Assignment Expert at programminghomeworkhelp.com. I hold a Master's in Computer Science, Princeton University, Princeton. I have been helping students with their homework for the past 10 years. I solve assignments related to Computer Science.
Visit programminghomeworkhelp.com or email support@programminghomeworkhelp.com. You can also call +1 678 648 4277 for any assistance with Computer Science assignments.
The document discusses functions in mathematics and programming. In mathematics, a function defines a relationship between inputs and outputs. The domain is the set of valid inputs, and the range is the set of valid outputs. In programming, functions perform actions and return values. The argument type specifies valid input types, analogous to the mathematical domain, while the return type specifies the output type, analogous to the range. The C standard library contains common mathematical functions like abs, sqrt, and cos. Functions can be used in expressions and assignments like variables.
I am Blake H. I am a Software Construction Assignment Expert at programminghomeworkhelp.com. I hold a PhD. in Programming, Curtin University, Australia. I have been helping students with their homework for the past 10 years. I solve assignments related to Software Construction.
Visit programminghomeworkhelp.com or email support@programminghomeworkhelp.com. You can also call on +1 678 648 4277 for any assistance with Software Construction Assignments.
The dynamic programming design technique is one of the fundamental algorithm design techniques, and possibly one of the hardest to master for those who did not study it formally. These slides (a continuation of the part 1 slides) cover two problems: the maximum value contiguous subarray and the maximum increasing subsequence.
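The first of the two problems above has a classic linear-time DP, Kadane's algorithm (a sketch of my own, not taken from the slides):

```python
def max_subarray(a):
    """Maximum value contiguous subarray (Kadane's algorithm), O(n).
    best_here tracks the best sum of a subarray ending at the current
    element; extending it or restarting is the only choice to make."""
    best_here = best = a[0]
    for x in a[1:]:
        best_here = max(x, best_here + x)
        best = max(best, best_here)
    return best
```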
The document contains exercises, hints, and solutions for analyzing algorithms from a textbook. It includes problems related to brute force algorithms, sorting algorithms like selection sort and bubble sort, and evaluating polynomials. The solutions analyze the time complexity of different algorithms, such as proving that a brute force polynomial evaluation algorithm is O(n^2) while a modified version is linear time. It also discusses whether sorting algorithms like selection sort and bubble sort preserve the original order of equal elements (i.e. whether they are stable).
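The polynomial-evaluation contrast described above (quadratic brute force vs. a linear-time variant) is usually Horner's rule; a sketch under that assumption, with names of my own:

```python
def poly_brute(coeffs, x):
    """Evaluate sum(c_i * x^i), recomputing each power from scratch:
    O(n^2) multiplications, as in the brute-force algorithm."""
    total = 0
    for i, c in enumerate(coeffs):
        p = 1
        for _ in range(i):
            p *= x
        total += c * p
    return total

def poly_horner(coeffs, x):
    """Horner's rule: n multiplications and n additions -- linear time."""
    total = 0
    for c in reversed(coeffs):
        total = total * x + c
    return total
```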
This document discusses solving problems related to quantum mechanics and waves. It provides solutions to several problems involving waves on drum membranes, classical wave equations, particles in infinite and finite boxes, and the time evolution of waves. The document solves these problems through separation of variables, normal mode expansions, computing expectation values, and discussing qualitative features like dephasing and rephasing of waves. It also briefly discusses parameters for a two-slit light experiment.
This document contains a GATE exam question paper from 2000 with multiple choice questions in sections A and B testing knowledge of computer science topics. Section A contains 23 one-mark questions and section B contains 26 two-mark questions covering areas like algorithms, data structures, computer architecture, databases and more. The questions test understanding of concepts like binary trees, graphs, complexity analysis, regular expressions and more through matching, reasoning and problem solving questions.
The document discusses several topics in cryptography including prime numbers, primality testing algorithms, factorization algorithms, the Chinese Remainder Theorem, and modular exponentiation. It defines prime numbers and describes algorithms for determining if a number is prime like the trial division method and Miller-Rabin primality test. Factorization algorithms are used to break encryption. The Chinese Remainder Theorem can be used to solve simultaneous congruences and speed up computations performed modulo composite numbers. Euler's theorem and its generalization are also covered.
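The modular exponentiation mentioned above is typically done by repeated squaring; a sketch (Python's built-in three-argument `pow` already provides this, so the function below is purely illustrative):

```python
def mod_pow(base, exp, mod):
    """Modular exponentiation by repeated squaring: O(log exp) multiplies,
    processing the exponent one bit at a time. Assumes mod > 1."""
    result = 1
    base %= mod
    while exp:
        if exp & 1:                   # current low bit set: multiply it in
            result = result * base % mod
        base = base * base % mod      # square for the next bit
        exp >>= 1
    return result
```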
Skiena Algorithm 2007, Lecture 16: Introduction to Dynamic Programming
This document summarizes a lecture on dynamic programming. It begins by introducing dynamic programming as a powerful tool for solving optimization problems on ordered items like strings. It then contrasts greedy algorithms, which make locally optimal choices, with dynamic programming, which systematically searches all possibilities while storing results. The document provides examples of computing Fibonacci numbers and binomial coefficients using dynamic programming by storing partial results rather than recomputing them. It outlines three key steps to applying dynamic programming: formulating a recurrence, bounding subproblems, and specifying an evaluation order.
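The binomial-coefficient example above can be sketched by filling Pascal's rule into a single stored row instead of recomputing subproblems (my own minimal version):

```python
def binomial(n, k):
    """C(n, k) by Pascal's rule C(n, k) = C(n-1, k-1) + C(n-1, k),
    storing one row of partial results -- the DP idea from the lecture."""
    if not 0 <= k <= n:
        return 0
    row = [1] + [0] * k
    for i in range(1, n + 1):
        # update right-to-left so each entry still holds the previous row
        for j in range(min(i, k), 0, -1):
            row[j] += row[j - 1]
    return row[k]
```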
The document discusses finding the longest common subsequence (LCS) between two sequences and provides a dynamic programming algorithm. It explains the use of a matrix to store intermediate alignment results, where each cell A_ij is computed from the adjacent cells using the alignment scores. There are two steps: find the length of the LCS using the matrix, then trace back to recover the actual alignment. It also discusses the knapsack problem and how dynamic programming can be applied to such combinatorial optimization problems.
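The two-step process described above (fill the length matrix, then trace back) can be sketched as follows (a minimal unweighted version of my own, without the scoring scheme the document may use):

```python
def lcs(a, b):
    """Length table + traceback for the longest common subsequence."""
    m, n = len(a), len(b)
    # L[i][j]: LCS length of prefixes a[:i] and b[:j]
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    # step 2: trace back from the bottom-right corner to recover one LCS
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i -= 1
            j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```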
This document discusses Fibonacci numbers and sequences. It introduces the Fibonacci sequence as modeling a rabbit-breeding problem posed by Leonardo Pisano (Fibonacci). It then provides Matlab code to generate Fibonacci numbers in a vector or recursively as a function. It discusses properties of the Fibonacci sequence, including its relation to the golden ratio, and derives a closed-form analytic expression for Fibonacci numbers.
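The document's code is in Matlab; here is an equivalent Python sketch of the closed form (Binet's formula) checked against a plain iterative loop. Rounding φⁿ/√5 is exact only up to moderate n, where floating-point precision suffices:

```python
from math import sqrt

def fib_binet(n):
    """Closed-form Fibonacci: phi^n dominates, so rounding phi^n / sqrt(5)
    gives F(n) exactly while n is small enough for double precision."""
    phi = (1 + sqrt(5)) / 2
    return round(phi ** n / sqrt(5))

def fib_iter(n):
    """Reference iterative computation with exact integer arithmetic."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```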
1) Recursion is a technique where a function calls itself to solve a problem. It breaks the problem down into smaller subproblems until it reaches a base case that can be solved directly.
2) A recursive function must have a base case where it does not call itself further, and a general case where it calls itself on a reduced problem. It works by repeatedly reducing the problem size until the base case is reached.
3) Examples of problems that can be solved recursively include calculating factorials, Fibonacci numbers, binary search of a sorted array, checking if a string is a palindrome, and solving the Towers of Hanoi puzzle.
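Two of the listed examples illustrate the base-case / general-case structure described above (a sketch of my own, with the cases marked in comments):

```python
def factorial(n):
    """Base case stops the recursion; the general case shrinks n by 1."""
    if n == 0:                       # base case: solved directly
        return 1
    return n * factorial(n - 1)      # general case: smaller subproblem

def is_palindrome(s):
    """Recursive palindrome check: strip matching ends until <= 1 char."""
    if len(s) <= 1:                  # base case
        return True
    return s[0] == s[-1] and is_palindrome(s[1:-1])
```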
Effective Algorithm for nth Fibonacci Number. By: Professor Lili Saghafi
Understand the definition of the Fibonacci numbers.
Understand the definition of recursion and recursive functions.
Show that the naive algorithm for computing them is slow.
Efficiently create algorithms to compute large Fibonacci numbers.
The right algorithm makes all the difference.
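As an illustration of that point, the fast-doubling method computes F(n) in O(log n) arithmetic steps, making very large indices practical (this is one standard efficient algorithm; the slides may present a different one):

```python
def fib_fast(n):
    """Fast-doubling Fibonacci, using the identities
    F(2k) = F(k) * (2*F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2.
    O(log n) arithmetic steps; Python ints are arbitrary precision."""
    def pair(k):                     # returns (F(k), F(k+1))
        if k == 0:
            return 0, 1
        a, b = pair(k // 2)
        c = a * (2 * b - a)          # F(2m)
        d = a * a + b * b            # F(2m + 1)
        return (c, d) if k % 2 == 0 else (d, c + d)
    return pair(n)[0]
```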
What is an Algorithm
Time Complexity
Space Complexity
Asymptotic Notations
Recursive Analysis
Selection Sort
Insertion Sort
Recurrences
Substitution Method
Master Tree Method
Recursion Tree Method
The document discusses the dynamic programming approach to solving the Fibonacci numbers problem and the rod cutting problem. It explains that dynamic programming formulations first express the problem recursively but then optimize it by storing results of subproblems to avoid recomputing them. This is done either through a top-down recursive approach with memoization or a bottom-up approach by filling a table with solutions to subproblems of increasing size. The document also introduces the matrix chain multiplication problem and how it can be optimized through dynamic programming by considering overlapping subproblems.
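The bottom-up approach described above can be sketched for rod cutting (my own minimal version; `prices[i]` is assumed to be the price of a piece of length i, with `prices[0]` unused):

```python
def rod_cut(prices, n):
    """Bottom-up rod cutting: r[k] is the best revenue for a rod of
    length k. Each subproblem is solved once and stored in the table,
    avoiding the exponential recomputation of the plain recursion."""
    r = [0] * (n + 1)
    for k in range(1, n + 1):
        # best first cut of length i, plus the stored answer for the rest
        r[k] = max(prices[i] + r[k - i] for i in range(1, k + 1))
    return r[n]
```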
This document discusses algorithms and their analysis. It begins by defining an algorithm and analyzing its time and space complexity. It then discusses different asymptotic notations used to describe an algorithm's runtime such as Big-O, Omega, and Theta notations. Examples are provided to illustrate how to determine the tight asymptotic bound of functions. The document also covers algorithm design techniques like divide-and-conquer and analyzes merge sort as an example. It concludes by defining recurrences used to describe algorithms and provides an example recurrence for merge sort.
The document provides an introduction to recursion and mathematical concepts relevant to analyzing algorithms. It begins by motivating selection and word puzzle problems to illustrate the importance of how algorithms scale to large inputs. It then reviews necessary mathematical topics like exponents, logarithms, and series. Finally, it defines recursion, provides examples of both correct and incorrect recursive functions, and discusses techniques for proving properties of recursive algorithms like proof by induction.
This document discusses dynamic programming and provides examples to illustrate the technique. It begins by defining dynamic programming as a bottom-up approach to problem solving where solutions to smaller subproblems are stored and built upon to solve larger problems. It then provides examples of dynamic programming algorithms for calculating Fibonacci numbers, binomial coefficients, and finding shortest paths using Floyd's algorithm. The key aspects of dynamic programming like avoiding recomputing solutions and storing intermediate results in tables are emphasized.
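Floyd's algorithm, mentioned above, is itself a compact DP: shortest paths allowing intermediate vertices up to k reuse the stored results for k − 1. A sketch of my own, with `INF` marking absent edges:

```python
INF = float("inf")

def floyd_warshall(dist):
    """All-pairs shortest paths; dist is an n x n matrix (INF = no edge).
    For each k, dist[i][j] is improved by allowing vertex k as an
    intermediate stop, building on the stored smaller-k solutions."""
    n = len(dist)
    d = [row[:] for row in dist]     # do not mutate the caller's matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```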
The document discusses how computer scientists measure the cost of algorithms using asymptotic analysis. It introduces the big-O, Omega, and Theta notations to classify algorithms based on how their running time grows relative to the size of the input. Big-O represents an upper bound, Omega a lower bound, and Theta a tight bound of an algorithm's growth rate. The analysis allows abstracting away machine-specific details to understand an algorithm's fundamental performance properties.
This file covers dynamic programming, the greedy approach, graph algorithms, spanning tree concepts, backtracking, and the branch and bound approach.
Recursion is introduced, including definitions and examples of recursion in mathematics and programming. Key aspects of recursion covered include the recursive case, base case, and use of the call stack. The document discusses recursion compared to iteration, advantages and disadvantages of recursion, and techniques like memoization to improve recursive solutions. Examples of recursive algorithms like calculating factorials and Fibonacci numbers are provided to illustrate recursive concepts.
Lectures from a Python workshop I taught in 2013 at the University of Pittsburgh. These are introductory slides to teach important aspects of the Python language.
Update: python limits the number of recursion calls, so computing the factorial as shown in the slides may not be generally usable.
The document discusses recursive functions and provides examples of recursive algorithms for calculating factorial, greatest common divisor (GCD), Fibonacci numbers, power functions, and solving the Towers of Hanoi problem. Recursive functions are functions that call themselves during their execution. They break down problems into subproblems of the same type until reaching a base case. This recursive breakdown allows problems to be solved in a top-down, step-by-step manner.
There are two main ways to show that an algorithm works correctly: testing and proofs of correctness. Testing an algorithm on sample inputs can find bugs but does not guarantee correctness. Proving correctness mathematically is better but more difficult. To prove the correctness of a recursive algorithm, it must be proved by mathematical induction that each recursive call solves a smaller subproblem and that the base cases are solved correctly. Examples are given of using induction to prove the correctness of recursive algorithms for calculating Fibonacci numbers, finding the maximum value, and multiplying numbers.
The document discusses approximation algorithms for NP-hard problems. It begins with an introduction that defines approximation algorithms as algorithms that find feasible but not necessarily optimal solutions to optimization problems in polynomial time.
It then discusses different types of approximation schemes - absolute approximation where the approximate solution is within a constant of optimal, epsilon (ε)-approximation where the approximate solution is within a factor of ε times optimal, and polynomial time approximation schemes that run in polynomial time for any fixed ε.
The document provides examples of problems that admit absolute approximation algorithms, such as planar graph coloring and maximum programs stored on disks. It also discusses Graham's theorem, which proves that the largest processing time scheduling algorithm generates schedules within 1/3
This document provides a quick tour of the Python programming language. It introduces basic Python concepts like data types, variables, operators, conditional statements, loops, and functions. It explains how to get user input, perform type conversions, and work with common data types like integers, floats, strings, and booleans. It also demonstrates how to define functions, use default arguments and keyword arguments, and handle global variables. The document uses examples to illustrate concepts like arithmetic operations, string slicing, indexing, concatenation, and repetition.
This document is a presentation made for a YouTube video for my channel JESPROTECH about the evolution of tailrec through the years accross some of the most well known, and some not so known languages. The goal isn't to go through all languages ever invented, but to gain a prespective about tailrec that exists already in programming languages since the 1950's. The idea is to situate us in time and understand where we come from when we talk about tailrec and tail call optimization (TCO).
This document provides an overview of dynamic programming. It begins by explaining that dynamic programming is a technique for solving optimization problems by breaking them down into overlapping subproblems and storing the results of solved subproblems in a table to avoid recomputing them. It then provides examples of problems that can be solved using dynamic programming, including Fibonacci numbers, binomial coefficients, shortest paths, and optimal binary search trees. The key aspects of dynamic programming algorithms, including defining subproblems and combining their solutions, are also outlined.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
The growing significance of portable systems to limit power consumption in ultra-large-scale-integration chips of very high density, has recently led to rapid and inventive progresses in low-power design. The most effective technique is adiabatic logic circuit design in energy-efficient hardware. This paper presents two adiabatic approaches for the design of low power circuits, modified positive feedback adiabatic logic (modified PFAL) and the other is direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design. By improving the performance of basic gates, one can improvise the whole system performance. In this paper proposed circuit design of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches and their results are analyzed for powerdissipation, delay, power-delay-product and rise time and compared with the other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with DC-DB PFAL technique outperform with the percentage improvement of 65% for NOR gate and 7% for NAND gate and 34% for XNOR gate over the modified PFAL techniques at 10 MHz respectively.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
6. Dynamic Programming
Extremely powerful algorithmic technique with applications in optimization, scheduling, planning, economics, bioinformatics, etc.
At contests, probably the most popular type of problems.
A solution is usually not so easy to find, but when found, is easily implementable.
Need a lot of practice!
9. Computing Fibonacci Numbers
Computing Fn
Input: An integer n ≥ 0.
Output: The n-th Fibonacci number Fn.

def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
16. Running Time
Essentially, the algorithm computes Fn as the sum of Fn 1’s.
Hence its running time is O(Fn).
But Fibonacci numbers grow exponentially fast: Fn ≈ φ^n, where φ = 1.618... is the golden ratio.
E.g., F150 is already 31 decimal digits long.
The Sun may die before your computer returns F150.
19. Reason
Many computations are repeated.
“Those who cannot remember the past are condemned to repeat it.” (George Santayana)
A simple, but crucial idea: instead of recomputing the intermediate results, let’s store them once they are computed.
21. Memoization

Without memoization:

def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

With memoization:

T = dict()

def fib(n):
    if n not in T:
        if n <= 1:
            T[n] = n
        else:
            T[n] = fib(n - 1) + fib(n - 2)
    return T[n]
28. Hm...
But do we really need all this fancy stuff (recursion, memoization, dictionaries) to solve this simple problem?
After all, this is how you would compute F5 by hand:
F0 = 0, F1 = 1
F2 = 0 + 1 = 1
F3 = 1 + 1 = 2
F4 = 1 + 2 = 3
F5 = 2 + 3 = 5
29. Iterative Algorithm

def fib(n):
    if n <= 1:
        return n  # guard: the table below needs n >= 1 to assign T[1]
    T = [None] * (n + 1)
    T[0], T[1] = 0, 1

    for i in range(2, n + 1):
        T[i] = T[i - 1] + T[i - 2]

    return T[n]
31. Hm Again...
But do we really need to waste so much space?

def fib(n):
    if n <= 1:
        return n

    previous, current = 0, 1
    for _ in range(n - 1):
        new_current = previous + current
        previous, current = current, new_current

    return current
35. Running Time
O(n) additions.
On the other hand, recall that Fibonacci numbers grow exponentially fast: the binary length of Fn is O(n).
In theory: we should not treat such additions as basic operations.
In practice: just F100 does not fit into a 64-bit integer type anymore, hence we need bignum arithmetic.
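As a concrete check of both claims (a sketch, not from the slides), Python's built-in arbitrary-precision integers serve as the bignum arithmetic, and F100 indeed overflows a signed 64-bit type:

```python
def fib(n):
    # Linear-time iterative computation as on the previous slides.
    # Python ints are arbitrary precision, so the additions never overflow.
    if n <= 1:
        return n
    previous, current = 0, 1
    for _ in range(n - 1):
        previous, current = current, previous + current
    return current

print(fib(100))              # 354224848179261915075
print(fib(100) > 2**63 - 1)  # True: F100 exceeds a signed 64-bit integer
```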
37. Summary
The key idea of dynamic programming: avoid recomputing the same thing again!
At the same time, the case of Fibonacci numbers is a slightly artificial example of dynamic programming, since it is clear from the very beginning what intermediate results we need to compute the final result.
38. Outline
1. Longest Increasing Subsequence
  1.1. Warm-up
  1.2. Subproblems and Recurrence Relation
  1.3. Reconstructing a Solution
  1.4. Subproblems Revisited
2. Edit Distance
  2.1. Algorithm
  2.2. Reconstructing a Solution
  2.3. Final Remarks
3. Knapsack
  3.1. Knapsack with Repetitions
  3.2. Knapsack without Repetitions
  3.3. Final Remarks
4. Chain Matrix Multiplication
  4.1. Chain Matrix Multiplication
  4.2. Summary
39. Longest Increasing Subsequence
Longest increasing subsequence
Input: An array A = [a0, a1, . . . , an−1].
Output: A longest increasing subsequence (LIS), i.e., ai1, ai2, . . . , aik such that i1 < i2 < . . . < ik, ai1 < ai2 < · · · < aik, and k is maximal.
46. Analyzing an Optimal Solution
Consider the last element x of an optimal increasing subsequence and its previous element z.
First of all, z < x.
Moreover, the prefix of the IS ending at z must be an optimal IS ending at z, as otherwise the initial IS would not be optimal.
Optimal substructure by the “cut-and-paste” trick.
50. Subproblems and Recurrence Relation
Let LIS(i) be the optimal length of a LIS ending at A[i].
Then LIS(i) = 1 + max{LIS(j) : j < i and A[j] < A[i]}.
Convention: the maximum of an empty set is equal to zero.
Base case: LIS(0) = 1.
53. Algorithm
When we have a recurrence relation at hand, converting it to a recursive algorithm with memoization is just a technicality.
We will use a table T to store the results: T[i] = LIS(i).
Initially, T is empty. When LIS(i) is computed, we store its value at T[i] (so that we will never recompute LIS(i) again).
The exact data structure behind T is not that important at this point: it could be an array or a hash table.
54. Memoization

T = dict()

def lis(A, i):
    if i not in T:
        T[i] = 1
        for j in range(i):
            if A[j] < A[i]:
                T[i] = max(T[i], lis(A, j) + 1)
    return T[i]

A = [7, 2, 1, 3, 8, 4, 9, 1, 2, 6, 5, 9, 3]
print(max(lis(A, i) for i in range(len(A))))
55. Running Time
The running time is quadratic (O(n²)): there are n “serious” recursive calls (that are not just table look-ups), each of them needs time O(n) (not counting the inner recursive calls).
58. Table and Recursion
We need to store in the table T the value of LIS(i) for all i from 0 to n − 1.
A reasonable choice of data structure for T: an array of size n.
Moreover, one can fill in this array iteratively instead of recursively.
61. Iterative Algorithm

def lis(A):
    T = [None] * len(A)

    for i in range(len(A)):
        T[i] = 1
        for j in range(i):
            if A[j] < A[i] and T[i] < T[j] + 1:
                T[i] = T[j] + 1

    return max(T[i] for i in range(len(A)))

Crucial property: when computing T[i], the values T[j] for all j < i have already been computed.
Running time: O(n²).
64. Reconstructing a Solution
How to reconstruct an optimal IS?
In order to reconstruct it, for each subproblem we will keep its optimal value and a choice leading to this value.
65. Adjusting the Algorithm

def lis(A):
    T = [None] * len(A)
    prev = [None] * len(A)

    for i in range(len(A)):
        T[i] = 1
        prev[i] = -1
        for j in range(i):
            if A[j] < A[i] and T[i] < T[j] + 1:
                T[i] = T[j] + 1
                prev[i] = j
77. Unwinding Solution

# Continues the function from the previous slide: T and prev are computed.
last = 0
for i in range(1, len(A)):
    if T[i] > T[last]:
        last = i

lis = []
current = last
while current >= 0:
    lis.append(current)
    current = prev[current]
lis.reverse()
return [A[i] for i in lis]
87. Summary
Optimal substructure property: any prefix of an optimal increasing subsequence must be a longest increasing subsequence ending at this particular element.
Subproblem: the length of an optimal increasing subsequence ending at the i-th element.
A recurrence relation for subproblems can be immediately converted into a recursive algorithm with memoization.
A recursive algorithm, in turn, can be converted into an iterative one.
An optimal solution can be recovered either by using additional bookkeeping info or by using the computed solutions to all subproblems.
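As a sketch of the second option (this helper is not in the slides): an optimal subsequence can be recovered from the table T alone, without the prev array, by repeatedly searching backwards for a predecessor whose subproblem value is exactly one smaller.

```python
def lis_values(A):
    # Fill T as in the iterative algorithm: T[i] = length of a LIS ending at i.
    T = [1] * len(A)
    for i in range(len(A)):
        for j in range(i):
            if A[j] < A[i] and T[i] < T[j] + 1:
                T[i] = T[j] + 1

    # Reconstruct without bookkeeping: any j < current with A[j] < A[current]
    # and T[j] == T[current] - 1 is a valid previous element.
    current = max(range(len(A)), key=lambda i: T[i])
    result = [A[current]]
    for j in range(current - 1, -1, -1):
        if A[j] < A[current] and T[j] == T[current] - 1:
            result.append(A[j])
            current = j
    result.reverse()
    return result

print(lis_values([7, 2, 1, 3, 8, 4, 9]))  # [1, 3, 4, 9]
```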
92. The Most Creative Part
In most DP algorithms, the most creative part is coming up with the right notion of a subproblem and a recurrence relation.
When a recurrence relation is written down, it can be wrapped with memoization to get a recursive algorithm.
In the previous video, we arrived at a reasonable subproblem by analyzing the structure of an optimal solution.
In this video, we’ll provide an alternative way of arriving at subproblems: implement a naive brute force solution, then optimize it.
97. Brute Force: Plan
Need the longest increasing subsequence? No problem! Just iterate over all subsequences and select the longest one:
Start with an empty sequence.
Extend it element by element recursively.
Keep track of the length of the sequence.
This is going to be slow, but not to worry: we will optimize it later.
98. Brute Force: Code

def lis(A, seq):
    result = len(seq)

    if len(seq) == 0:
        last_index = -1
        last_element = float("-inf")
    else:
        last_index = seq[-1]
        last_element = A[last_index]

    for i in range(last_index + 1, len(A)):
        if A[i] > last_element:
            result = max(result, lis(A, seq + [i]))

    return result

print(lis(A=[7, 2, 1, 3, 8, 4, 9], seq=[]))
102. Optimizing
At each step, we are trying to extend the current sequence.
For this, we pass the current sequence to each recursive call.
At the same time, code inspection reveals that we are not using all of the sequence: we are only interested in its last element and its length.
Let’s optimize!
103. Optimized Code

def lis(A, seq_len, last_index):
    if last_index == -1:
        last_element = float("-inf")
    else:
        last_element = A[last_index]

    result = seq_len

    for i in range(last_index + 1, len(A)):
        if A[i] > last_element:
            result = max(result, lis(A, seq_len + 1, i))

    return result

print(lis([3, 2, 7, 8, 9, 5, 8], 0, -1))
107. Optimizing Further
Inspecting the code further, we realize that seq_len is not used for extending the current sequence (we don’t even need to know the length of the initial part of the sequence to extend it optimally).
More formally, for any x, extend(A, seq_len, i) is equal to extend(A, seq_len - x, i) + x.
Hence, we can optimize the code as follows: max(result, 1 + seq_len + extend(A, 0, i)).
This excludes seq_len from the list of parameters!
109. Resulting Code

def lis(A, last_index):
    if last_index == -1:
        last_element = float("-inf")
    else:
        last_element = A[last_index]

    result = 0

    for i in range(last_index + 1, len(A)):
        if A[i] > last_element:
            result = max(result, 1 + lis(A, i))

    return result

print(lis([8, 2, 3, 4, 5, 6, 7], -1))

It remains to add memoization!
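One way to add it (a sketch: the slides use an explicit dictionary T elsewhere; here functools.lru_cache plays the same role of caching each last_index once):

```python
import functools

def lis(A):
    @functools.lru_cache(maxsize=None)
    def extend(last_index):
        # Length of the longest increasing subsequence lying strictly
        # after last_index, with all elements greater than A[last_index].
        last_element = A[last_index] if last_index >= 0 else float("-inf")
        result = 0
        for i in range(last_index + 1, len(A)):
            if A[i] > last_element:
                result = max(result, 1 + extend(i))
        return result

    return extend(-1)

print(lis([8, 2, 3, 4, 5, 6, 7]))  # 6
```

With the cache there are at most n + 1 distinct calls, each doing O(n) work, so the running time drops to O(n²), matching the earlier table-based algorithms.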
113. Summary
Subproblems (and the recurrence relation on them) are the most important ingredient of a dynamic programming algorithm.
Two common ways of arriving at the right subproblems:
Analyze the structure of an optimal solution.
Implement a brute force solution and optimize it.
115. Statement
Edit distance
Input: Two strings A[0 . . . n − 1] and
B[0 . . . m − 1].
Output: The minimal number of insertions,
deletions, and substitutions needed to
transform A to B. This number is known
as edit distance or Levenshtein distance.
130. Subproblems
Let ED(i, j) be the edit distance of A[0 . . . i − 1] and B[0 . . . j − 1].
We know for sure that the last column of an optimal alignment is either an insertion, a deletion, or a match/mismatch.
What is left is an optimal alignment of the corresponding two prefixes (by cut-and-paste).
133. Recursive Algorithm
T = dict()

def edit_distance(a, b, i, j):
    if not (i, j) in T:
        if i == 0: T[i, j] = j
        elif j == 0: T[i, j] = i
        else:
            diff = 0 if a[i - 1] == b[j - 1] else 1
            T[i, j] = min(
                edit_distance(a, b, i - 1, j) + 1,
                edit_distance(a, b, i, j - 1) + 1,
                edit_distance(a, b, i - 1, j - 1) + diff)

    return T[i, j]

print(edit_distance(a="editing", b="distance", i=7, j=8))
135. Converting to an Iterative Algorithm
Use a 2D table to store the intermediate results.
ED(i, j) depends on ED(i − 1, j − 1), ED(i − 1, j), and ED(i, j − 1):
[Figure: cell (i, j) of the (n + 1) × (m + 1) table with incoming arrows from (i − 1, j − 1) ((mis)match), (i, j − 1) (insertion), and (i − 1, j) (deletion).]
136. Filling the Table
Fill in the table row by row or column by column:
[Figure: two (n + 1) × (m + 1) tables illustrating the row-by-row and column-by-column fill orders.]
137. Iterative Algorithm
def edit_distance(a, b):
    T = [[float("inf")] * (len(b) + 1)
         for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        T[i][0] = i
    for j in range(len(b) + 1):
        T[0][j] = j

    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diff = 0 if a[i - 1] == b[j - 1] else 1
            T[i][j] = min(T[i - 1][j] + 1,
                          T[i][j - 1] + 1,
                          T[i - 1][j - 1] + diff)

    return T[len(a)][len(b)]

print(edit_distance(a="distance", b="editing"))
138. Example
The table for A = DISTANCE and B = EDITING. The first row and the first column are initialized with 0, 1, 2, . . . :
E D I T I N G
0 0 1 2 3 4 5 6 7
D 1
I 2
S 3
T 4
A 5
N 6
C 7
E 8
145. Example
Filling the table row by row gives:
E D I T I N G
0 0 1 2 3 4 5 6 7
D 1 1 1 2 3 4 5 6
I 2 2 2 1 2 3 4 5
S 3 3 3 2 2 3 4 5
T 4 4 4 3 2 3 4 5
A 5 5 5 4 3 3 4 5
N 6 6 6 5 4 4 3 4
C 7 7 7 6 5 5 4 4
E 8 8 7 8 7 6 6 5 5
147. Brute Force
Recursively construct an alignment column by column.
Then note that, to extend a partially constructed alignment optimally, one only needs to know the lengths of the already used prefixes of A and B.
152. Reconstructing a Solution
To reconstruct a solution, we go back from the cell (n, m) to the cell (0, 0).
If ED(i, j) = ED(i − 1, j) + 1, then there exists an optimal alignment whose last column is a deletion.
If ED(i, j) = ED(i, j − 1) + 1, then there exists an optimal alignment whose last column is an insertion.
If ED(i, j) = ED(i − 1, j − 1) + diff(A[i], B[j]), then the last column is a match (if A[i] = B[j]) or a mismatch (if A[i] ≠ B[j]).
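As a sketch of this traceback (the helper names here are illustrative, not from the slides; the table is rebuilt with the same DP as the iterative algorithm):

```python
def edit_distance_table(a, b):
    # The same DP as in the iterative algorithm; returns the whole table.
    T = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        T[i][0] = i
    for j in range(len(b) + 1):
        T[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diff = 0 if a[i - 1] == b[j - 1] else 1
            T[i][j] = min(T[i - 1][j] + 1, T[i][j - 1] + 1,
                          T[i - 1][j - 1] + diff)
    return T

def reconstruct(a, b):
    # Walk back from (len(a), len(b)) to (0, 0), emitting one column per step.
    T = edit_distance_table(a, b)
    i, j = len(a), len(b)
    top, bottom = [], []
    while i > 0 or j > 0:
        if i > 0 and T[i][j] == T[i - 1][j] + 1:    # deletion
            top.append(a[i - 1]); bottom.append("-"); i -= 1
        elif j > 0 and T[i][j] == T[i][j - 1] + 1:  # insertion
            top.append("-"); bottom.append(b[j - 1]); j -= 1
        else:                                        # match or mismatch
            top.append(a[i - 1]); bottom.append(b[j - 1]); i, j = i - 1, j - 1
    return "".join(reversed(top)), "".join(reversed(bottom))

print(reconstruct("distance", "editing"))
```

Since several cases may hold simultaneously, different tie-breaking orders may produce different (equally optimal) alignments.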
153. Example
E D I T I N G
0 1 2 3 4 5 6 7
D 1 1 1 2 3 4 5 6
I 2 2 2 1 2 3 4 5
S 3 3 3 2 2 3 4 5
T 4 4 4 3 2 3 4 5
A 5 5 5 4 3 3 4 5
N 6 6 6 5 4 4 3 4
C 7 7 7 6 5 5 4 4
E 8 7 8 7 6 6 5 5
162. Example
Tracing back from the bottom right cell to the top left cell (a diagonal step is a (mis)match, a step left is an insertion, a step up is a deletion):
E D I T I N G
0 1 2 3 4 5 6 7
D 1 1 1 2 3 4 5 6
I 2 2 2 1 2 3 4 5
S 3 3 3 2 2 3 4 5
T 4 4 4 3 2 3 4 5
A 5 5 5 4 3 3 4 5
N 6 6 6 5 4 4 3 4
C 7 7 7 6 5 5 4 4
E 8 7 8 7 6 6 5 5
The resulting alignment is:
- D I S T A N C E
E D I - T I N - G
165. Saving Space
When filling in the table, it is enough to keep only the current column and the previous column.
Thus, one can compute the edit distance of two given strings A[1 . . . n] and B[1 . . . m] in time O(nm) and space O(min{n, m}).
167. Reconstructing a Solution
However, we need the whole table to find an actual alignment (we trace an alignment from the bottom right corner to the top left corner).
There exists an algorithm constructing an optimal alignment in time O(nm) and space O(n + m) (Hirschberg’s algorithm).
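A sketch of the two-column space saving described above (function name illustrative); this computes only the distance, not an alignment:

```python
def edit_distance_linear_space(a, b):
    # Keep only the previous and the current column of the table.
    if len(b) < len(a):
        a, b = b, a  # so that a column has min(n, m) + 1 entries
    prev = list(range(len(a) + 1))  # column j = 0: distances to the empty prefix
    for j in range(1, len(b) + 1):
        cur = [j] + [0] * len(a)
        for i in range(1, len(a) + 1):
            diff = 0 if a[i - 1] == b[j - 1] else 1
            cur[i] = min(prev[i] + 1,          # T[i][j - 1] + 1
                         cur[i - 1] + 1,       # T[i - 1][j] + 1
                         prev[i - 1] + diff)   # T[i - 1][j - 1] + diff
        prev = cur
    return prev[len(a)]

print(edit_distance_linear_space("distance", "editing"))  # prints 5
```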
168. Weighted Edit Distance
The cost of insertions, deletions, and
substitutions is not necessarily identical
Spell checking: some substitutions are more
likely than others
Biology: some mutations are more likely than
others
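To make the generalization concrete, here is a hedged sketch; the specific costs (INS_COST, DEL_COST, sub_cost) are made-up illustrations, not values from the slides:

```python
INS_COST = 1.0
DEL_COST = 1.0

def sub_cost(x, y):
    # Hypothetical: substituting between "a" and "s" (adjacent keys) is cheap.
    if x == y:
        return 0.0
    return 0.5 if {x, y} == {"a", "s"} else 1.0

def weighted_edit_distance(a, b):
    # The same DP as before, with the unit costs replaced by arbitrary costs.
    T = [[0.0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        T[i][0] = T[i - 1][0] + DEL_COST
    for j in range(1, len(b) + 1):
        T[0][j] = T[0][j - 1] + INS_COST
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            T[i][j] = min(T[i - 1][j] + DEL_COST,
                          T[i][j - 1] + INS_COST,
                          T[i - 1][j - 1] + sub_cost(a[i - 1], b[j - 1]))
    return T[len(a)][len(b)]

print(weighted_edit_distance("sam", "aam"))  # prints 0.5
```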
175. Applications
Classical problem in combinatorial optimization with applications in resource allocation, cryptography, and planning.
Weights and values may represent various resources (to be maximized or limited):
Select a set of TV commercials (each commercial has a duration and a cost) so that the total revenue is maximal while the total length does not exceed the length of the available time slot.
Purchase computers for a data center to achieve the maximal performance under a limited budget.
187. Knapsack with repetitions problem
Input: Weights w0, . . . , wn−1 and values
v0, . . . , vn−1 of n items; total weight W
(vi’s, wi’s, and W are non-negative
integers).
Output: The maximum value of items whose
weight does not exceed W . Each item
can be used any number of times.
189. Analyzing an Optimal Solution
Consider an optimal solution and an item of weight wi in it.
If we take this item out, then we get an optimal solution for a knapsack of total weight W − wi.
193. Subproblems
Let value(u) be the maximum value of a knapsack of weight u.
value(u) = max_{i : wi ≤ u} {value(u − wi) + vi}
Base case: value(0) = 0
This recurrence relation is transformed into a recursive algorithm in a straightforward way.
194. Recursive Algorithm
T = dict()

def knapsack(w, v, u):
    if u not in T:
        T[u] = 0

        for i in range(len(w)):
            if w[i] <= u:
                T[u] = max(T[u],
                           knapsack(w, v, u - w[i]) + v[i])

    return T[u]

print(knapsack(w=[6, 3, 4, 2],
               v=[30, 14, 16, 9], u=10))
196. Recursive into Iterative
As usual, one can transform a recursive
algorithm into an iterative one
For this, we gradually fill in an array T:
T[u] = value(u)
197. Iterative Algorithm
def knapsack(W, w, v):
    T = [0] * (W + 1)

    for u in range(1, W + 1):
        for i in range(len(w)):
            if w[i] <= u:
                T[u] = max(T[u], T[u - w[i]] + v[i])

    return T[W]

print(knapsack(W=10, w=[6, 3, 4, 2],
               v=[30, 14, 16, 9]))
202. Subproblems Revisited
Another way of arriving at subproblems: optimizing a brute force solution.
Populate a list of used items one by one.
203. Brute Force: Knapsack with Repetitions
def knapsack(W, w, v, items):
    weight = sum(w[i] for i in items)
    value = sum(v[i] for i in items)

    for i in range(len(w)):
        if weight + w[i] <= W:
            value = max(value,
                        knapsack(W, w, v, items + [i]))

    return value

print(knapsack(W=10, w=[6, 3, 4, 2],
               v=[30, 14, 16, 9], items=[]))
205. Subproblems
It remains to notice that the only important thing for extending the current set of items is the weight of this set.
One then replaces items by their weight in the list of parameters.
208. Knapsack without repetitions problem
Input: Weights w0, . . . , wn−1 and values
v0, . . . , vn−1 of n items; total weight W
(vi’s, wi’s, and W are non-negative
integers).
Output: The maximum value of items whose
weight does not exceed W . Each item
can be used at most once.
214. Subproblems
If the last item (of weight wn−1) is taken into an optimal solution, then what is left is an optimal solution for a knapsack of total weight W − wn−1 using items 0, 1, . . . , n − 2.
If the last item is not used, then the whole knapsack must be filled in optimally with items 0, 1, . . . , n − 2.
217. Subproblems
For 0 ≤ u ≤ W and 0 ≤ i ≤ n, value(u, i) is the maximum value achievable using a knapsack of weight u and the first i items.
Base case: value(u, 0) = 0, value(0, i) = 0
For i > 0, the item i − 1 is either used or not:
value(u, i) = max{value(u − wi−1, i − 1) + vi−1, value(u, i − 1)}
218. Recursive Algorithm
T = dict()

def knapsack(w, v, u, i):
    if (u, i) not in T:
        if i == 0:
            T[u, i] = 0
        else:
            T[u, i] = knapsack(w, v, u, i - 1)
            if u >= w[i - 1]:
                T[u, i] = max(T[u, i],
                              knapsack(w, v, u - w[i - 1], i - 1) + v[i - 1])

    return T[u, i]

print(knapsack(w=[6, 3, 4, 2],
               v=[30, 14, 16, 9], u=10, i=4))
219. Iterative Algorithm
def knapsack(W, w, v):
    T = [[None] * (len(w) + 1) for _ in range(W + 1)]

    for u in range(W + 1):
        T[u][0] = 0

    for i in range(1, len(w) + 1):
        for u in range(W + 1):
            T[u][i] = T[u][i - 1]
            if u >= w[i - 1]:
                T[u][i] = max(T[u][i],
                              T[u - w[i - 1]][i - 1] + v[i - 1])

    return T[W][len(w)]

print(knapsack(W=10, w=[6, 3, 4, 2],
               v=[30, 14, 16, 9]))
222. Analysis
Running time: O(nW)
Space: O(nW)
Space can be improved to O(W) in the iterative version: instead of storing the whole table, store the current column and the previous one.
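A sketch of that O(W)-space version (function name illustrative): instead of the whole table, keep only the column for the first i − 1 items and build the column for the first i items from it:

```python
def knapsack_small_space(W, w, v):
    prev = [0] * (W + 1)  # prev[u] = value(u, i - 1)
    for i in range(1, len(w) + 1):
        cur = prev[:]  # default: item i - 1 is not taken
        for u in range(w[i - 1], W + 1):
            cur[u] = max(cur[u], prev[u - w[i - 1]] + v[i - 1])
        prev = cur
    return prev[W]

print(knapsack_small_space(W=10, w=[6, 3, 4, 2], v=[30, 14, 16, 9]))  # prints 46
```

With a single array one could even update in place (the classic trick of iterating u downwards), but two arrays mirror the two-column picture directly.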
226. Reconstructing a Solution
As usual, an optimal solution can be unwound by analyzing the computed solutions to subproblems.
Start with u = W, i = n.
If value(u, i) = value(u, i − 1), then item i − 1 is not taken. Update i to i − 1.
Otherwise value(u, i) = value(u − wi−1, i − 1) + vi−1 and item i − 1 is taken. Update i to i − 1 and u to u − wi−1.
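A sketch of this unwinding in code (helper name illustrative; T[u][i] is the same table as in the iterative algorithm):

```python
def knapsack_with_items(W, w, v):
    # Build the table T[u][i] = value(u, i), as in the iterative algorithm.
    n = len(w)
    T = [[0] * (n + 1) for _ in range(W + 1)]
    for i in range(1, n + 1):
        for u in range(W + 1):
            T[u][i] = T[u][i - 1]
            if u >= w[i - 1]:
                T[u][i] = max(T[u][i], T[u - w[i - 1]][i - 1] + v[i - 1])

    # Unwind: start at (W, n); at each step decide whether item i - 1 was taken.
    items, u = [], W
    for i in range(n, 0, -1):
        if T[u][i] != T[u][i - 1]:  # item i - 1 must be in the solution
            items.append(i - 1)
            u -= w[i - 1]
    return T[W][n], sorted(items)

print(knapsack_with_items(W=10, w=[6, 3, 4, 2], v=[30, 14, 16, 9]))
# prints (46, [0, 2]): take the items of weight 6 and 4
```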
228. Subproblems Revisited
How to implement a brute force solution for the knapsack without repetitions problem?
Process items one by one. For each item, either take it into the bag or not.
229. Brute Force: Knapsack without Repetitions
def knapsack(W, w, v, items, last):
    weight = sum(w[i] for i in items)

    if last == len(w) - 1:
        return sum(v[i] for i in items)

    value = knapsack(W, w, v, items, last + 1)
    if weight + w[last + 1] <= W:
        items.append(last + 1)
        value = max(value,
                    knapsack(W, w, v, items, last + 1))
        items.pop()

    return value

print(knapsack(W=10, w=[6, 3, 4, 2],
               v=[30, 14, 16, 9],
               items=[], last=-1))
232. Recursive vs Iterative
If all subproblems must be solved, then an iterative algorithm is usually faster since it has no recursion overhead.
There are cases, however, when one does not need to solve all subproblems, and the knapsack problem is a good example: assume that W and all wi’s are multiples of 100; then value(u) is not needed if u is not divisible by 100.
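This can be seen by counting the memoized subproblems. In the sketch below (the concrete numbers are an illustration of the multiples-of-100 observation above), the recursion touches only 10 weights, while an iterative table would have 1001 entries:

```python
T = dict()

def knapsack(w, v, u):
    # Memoized recursion: only subproblems that are actually needed get solved.
    if u not in T:
        best = 0
        for i in range(len(w)):
            if w[i] <= u:
                best = max(best, knapsack(w, v, u - w[i]) + v[i])
        T[u] = best
    return T[u]

knapsack(w=[600, 300, 400, 200], v=[30, 14, 16, 9], u=1000)
print(len(T))  # prints 10: only reachable multiples of 100 were solved
```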
236. Polynomial Time?
The running time O(nW) is not polynomial since the input size is proportional to log W, but not W.
In other words, the running time is O(n · 2^(log W)).
E.g., for W = 10 345 970 345 617 824 751 (only twenty digits!) the algorithm needs roughly 10^20 basic operations.
Solving the knapsack problem in truly polynomial time is the essence of the P vs NP problem, the most important open problem in Computer Science (with a bounty of $1M).
238. Chain matrix multiplication
Input: Chain of n matrices A0, . . . , An−1 to be
multiplied.
Output: An order of multiplication minimizing the
total cost of multiplication.
242. Clarifications
Denote the sizes of matrices A0, . . . , An−1 by m0 × m1, m1 × m2, . . . , mn−1 × mn respectively. I.e., the size of Ai is mi × mi+1.
Matrix multiplication is not commutative (in general, A × B ≠ B × A), but it is associative: A × (B × C) = (A × B) × C.
Thus A × B × C × D can be computed, e.g., as (A × B) × (C × D) or (A × (B × C)) × D.
The cost of multiplying two matrices of size p × q and q × r is pqr.
246. Example: A × ((B × C) × D)
Sizes: A is 50 × 20, B is 20 × 1, C is 1 × 10, D is 10 × 100.
B × C has size 20 × 10, cost 20 · 1 · 10
(B × C) × D has size 20 × 100, cost 20 · 10 · 100
A × ((B × C) × D) has size 50 × 100, cost 50 · 20 · 100
Total cost: 20 · 1 · 10 + 20 · 10 · 100 + 50 · 20 · 100 = 120 200
250. Example: (A × B) × (C × D)
A × B has size 50 × 1, cost 50 · 20 · 1
C × D has size 1 × 100, cost 1 · 10 · 100
(A × B) × (C × D) has size 50 × 100, cost 50 · 1 · 100
Total cost: 50 · 20 · 1 + 1 · 10 · 100 + 50 · 1 · 100 = 7 000
251. Order as a Full Binary Tree
Every order of multiplication corresponds to a full binary tree whose leaves are the matrices:
[Figure: three trees, for ((A × B) × C) × D, A × ((B × C) × D), and (A × (B × C)) × D.]
252. Analyzing an Optimal Tree
[Figure: a full binary tree splitting the chain into A0, . . . , Ai−1; Ai, . . . , Aj−1; Aj, . . . , Ak−1; Ak, . . . , An−1.]
Each subtree computes the product of Ap, . . . , Aq for some p ≤ q.
255. Subproblems
Let M(i, j) be the minimum cost of computing Ai × · · · × Aj−1.
Then M(i, j) = min_{i < k < j} {M(i, k) + M(k, j) + mi · mk · mj}
Base case: M(i, i + 1) = 0
256. Recursive Algorithm
T = dict()

def matrix_mult(m, i, j):
    if (i, j) not in T:
        if j == i + 1:
            T[i, j] = 0
        else:
            T[i, j] = float("inf")
            for k in range(i + 1, j):
                T[i, j] = min(T[i, j],
                              matrix_mult(m, i, k) +
                              matrix_mult(m, k, j) +
                              m[i] * m[k] * m[j])

    return T[i, j]

print(matrix_mult(m=[50, 20, 1, 10, 100], i=0, j=4))
257. Converting to an Iterative Algorithm
We want to solve subproblems going from smaller size subproblems to larger size ones.
The size is the number of matrices needed to be multiplied: j − i.
A possible order: process subproblems by increasing size s = j − i.
258. Iterative Algorithm
def matrix_mult(m):
    n = len(m) - 1
    T = [[float("inf")] * (n + 1) for _ in range(n + 1)]

    for i in range(n):
        T[i][i + 1] = 0

    for s in range(2, n + 1):
        for i in range(n - s + 1):
            j = i + s
            for k in range(i + 1, j):
                T[i][j] = min(T[i][j],
                              T[i][k] + T[k][j] +
                              m[i] * m[k] * m[j])

    return T[0][n]

print(matrix_mult(m=[50, 20, 1, 10, 100]))
261. Final Remarks
Running time: O(n³)
To unwind a solution, go from the cell (0, n) to a cell (i, i + 1)
Brute force search: recursively enumerate all possible trees
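To unwind as just described, one can additionally store the best split point k for each subproblem; a sketch (names illustrative):

```python
def matrix_mult_order(m):
    # The same DP as above, additionally remembering the best k per cell.
    n = len(m) - 1
    T = [[float("inf")] * (n + 1) for _ in range(n + 1)]
    best = [[None] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        T[i][i + 1] = 0
    for s in range(2, n + 1):
        for i in range(n - s + 1):
            j = i + s
            for k in range(i + 1, j):
                cost = T[i][k] + T[k][j] + m[i] * m[k] * m[j]
                if cost < T[i][j]:
                    T[i][j], best[i][j] = cost, k

    def parens(i, j):
        # Turn the stored split points into a parenthesized expression.
        if j == i + 1:
            return chr(ord("A") + i)
        k = best[i][j]
        return "(" + parens(i, k) + " x " + parens(k, j) + ")"

    return T[0][n], parens(0, n)

print(matrix_mult_order(m=[50, 20, 1, 10, 100]))
# prints (7000, '((A x B) x (C x D))')
```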
263. Step 1 (the most important step)
Define subproblems and write down a
recurrence relation (with a base case)
either by analyzing the structure
of an optimal solution, or
by optimizing a brute force
solution
264. Subproblems: Review
1 Longest increasing subsequence: LIS(i) is the length of the longest increasing subsequence ending at element A[i]
2 Edit distance: ED(i, j) is the edit distance between prefixes of length i and j
3 Knapsack: K(w) is the optimal value of a knapsack of total weight w
4 Chain matrix multiplication: M(i, j) is the optimal cost of multiplying matrices i through j − 1
265. Step 2
Convert a recurrence relation into a
recursive algorithm:
store a solution to each
subproblem in a table
before solving a subproblem check
whether its solution is already
stored in the table
266. Step 3
Convert a recursive algorithm into an
iterative algorithm:
initialize the table
go from smaller subproblems to
larger ones
specify an order of subproblems
267. Step 4
Prove an upper bound on the running
time. Usually the product of the
number of subproblems and the time
needed to solve a subproblem is a
reasonable estimate.
269. Step 6
Exploit the regular structure of the
table to check whether space can be
saved
270. Recursive vs Iterative
Advantages of iterative approach:
No recursion overhead
May allow saving space by exploiting a regular
structure of the table
Advantages of recursive approach:
May be faster if not all the subproblems need to be
solved
An order on subproblems is implicit