What is dynamic programming?
In mathematics, computer science, economics, and bioinformatics, dynamic programming is a
method for solving complex problems by breaking them down into simpler subproblems. It is
applicable to problems exhibiting the properties of overlapping subproblems and optimal
substructure (described below). When applicable, the method takes far less time than naive
methods that don't take advantage of the subproblem overlap (such as depth-first search).
The idea behind dynamic programming is quite simple. In general, to solve a given problem, we
need to solve different parts of the problem (subproblems), then combine the solutions of the
subproblems to reach an overall solution. With a more naive method, many of the subproblems
are generated and solved many times. The dynamic programming approach seeks to solve each
subproblem only once, thus reducing the number of computations: once the solution to a given
subproblem has been computed, it is stored, or "memoized"; the next time the same solution is
needed, it is simply looked up. This approach is especially useful when the number of repeated
subproblems grows exponentially as a function of the size of the input.
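To make this concrete, here is a minimal sketch (function names are our own, purely illustrative) contrasting a naive recursive Fibonacci, which re-solves the same subproblems exponentially many times, with a memoized version that solves each subproblem exactly once:

```python
def fib_naive(n):
    # Re-solves fib(n - 2) once for fib(n) and again inside fib(n - 1),
    # so the number of repeated subproblems grows exponentially with n.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_memo(n, memo=None):
    # Each subproblem is computed once, stored ("memoized"), then looked up.
    if memo is None:
        memo = {}
    if n < 2:
        return n
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]
```

Both functions return the same values, but `fib_memo(50)` finishes instantly while `fib_naive(50)` would make an astronomical number of recursive calls.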
Dynamic programming algorithms are used for optimization (for example, finding the shortest
path between two points, or the fastest way to multiply many matrices). A dynamic programming
algorithm examines all possible ways to solve the problem and picks the best solution.
We can therefore roughly think of dynamic programming as an intelligent, brute-force method
that goes through all possible solutions to pick the best one. If the scope of the
problem is such that going through all possible solutions is feasible and fast enough, dynamic
programming guarantees finding the optimal solution. There are many alternatives, such as a
greedy algorithm, which makes the locally best choice at each "branch in the road".
While a greedy algorithm does not guarantee the optimal solution, it is faster. Fortunately, some
greedy algorithms (such as Kruskal's and Prim's algorithms for minimum spanning trees) are
proven to find the optimal solution.
For example, let's say that you have to get from point A to point B as fast as possible, in a given
city, during rush hour. A dynamic programming algorithm will examine the entire traffic report,
consider all possible combinations of roads you might take, and only then tell you which
way is the fastest. Of course, you might have to wait for a while until the algorithm finishes, and
only then can you start driving. The path you take will be the fastest one (assuming that
nothing changed in the external environment). On the other hand, a greedy algorithm will start
you driving immediately and will pick the road that looks fastest at every intersection. As
you can imagine, this strategy might not lead to the fastest arrival time, since you might take
some "easy" streets and then find yourself hopelessly stuck in a traffic jam.
Figure 1. Finding the shortest path in a graph using optimal substructure; a straight line indicates
a single edge; a wavy line indicates a shortest path between the two vertices it connects (other
nodes on these paths are not shown); the bold line is the overall shortest path from start to goal.
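The optimal substructure shown in Figure 1 can be sketched as a small program: in a directed acyclic graph, the shortest path to each vertex is assembled from already-computed shortest paths to its predecessors. The graph, vertex names, and function below are illustrative assumptions, not part of the figure:

```python
def shortest_path_dag(graph, order, start):
    """graph: {u: [(v, weight), ...]}; order: a topological order of the vertices.

    Returns the shortest distance from start to every vertex. Works because any
    shortest path to v ends with an edge (u, v) preceded by a shortest path to u
    (optimal substructure), and each vertex is processed exactly once.
    """
    INF = float("inf")
    dist = {v: INF for v in order}
    dist[start] = 0
    for u in order:
        if dist[u] == INF:  # u is unreachable from start
            continue
        for v, w in graph.get(u, []):
            # Relax the edge: extend the best-known path to u by one edge.
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```

For instance, with edges start→a (2), start→b (5), a→b (1), a→goal (7), b→goal (2), the shortest start-to-goal distance is 5 via a and b.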
• An algorithm design technique (like divide and conquer)
• Divide and conquer
o Partition the problem into independent subproblems
o Solve the subproblems recursively
o Combine the solutions to solve the original problem
o A divide-and-conquer approach would repeatedly solve the common subproblems
• Dynamic programming
o Solves every subproblem just once and stores the answer in a table
o Used for optimization problems
o A set of choices must be made to get an optimal solution
o Find a solution with the optimal value (minimum or maximum)
o There may be many solutions that achieve the optimal value; any one of them is an
optimal solution
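The table-filling behaviour described above can be sketched bottom-up, with Fibonacci again as a minimal illustration (the function name is our own):

```python
def fib_table(n):
    # Bottom-up dynamic programming: fill a table from the smallest
    # subproblems upward, so each entry is computed exactly once.
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]  # reuse stored answers
    return table[n]
```

Unlike top-down memoization, this version has no recursion at all: the loop visits each subproblem once, in an order that guarantees its dependencies are already in the table.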
Dynamic Programming Algorithm
It has four steps:
1. Characterize the structure of an optimal solution
2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution in a bottom-up fashion
4. Construct an optimal solution from computed information.
Steps 1-3 form the basis of a dynamic programming solution to a problem.
o Steps 1-3 give the optimal value only; step 4 recovers an actual optimal solution.
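The four steps above can be illustrated with the matrix-chain multiplication problem mentioned earlier. This is a sketch of the standard textbook formulation (function names are our own): the recurrence over split points (steps 1-2) is filled bottom-up into a cost table `m` and a split table `s` (step 3), and `s` is then used to reconstruct an optimal parenthesization (step 4):

```python
def matrix_chain(p):
    """p: dimension list; matrix i has shape p[i] x p[i+1], for i in 0..n-1."""
    n = len(p) - 1
    INF = float("inf")
    # Step 3: m[i][j] = min scalar multiplications for the product of matrices i..j,
    # computed bottom-up by increasing chain length; s[i][j] records the best split.
    m = [[0] * n for _ in range(n)]
    s = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):
        for i in range(0, n - length + 1):
            j = i + length - 1
            m[i][j] = INF
            for k in range(i, j):  # steps 1-2: try every split (i..k)(k+1..j)
                cost = m[i][k] + m[k + 1][j] + p[i] * p[k + 1] * p[j + 1]
                if cost < m[i][j]:
                    m[i][j] = cost
                    s[i][j] = k
    return m, s

def build_parens(s, i, j):
    # Step 4: reconstruct an optimal solution from the stored split points.
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return f"({build_parens(s, i, k)}{build_parens(s, k + 1, j)})"
```

For example, for two matrices with dimensions 10x20 and 20x30, the only order costs 10*20*30 = 6000 scalar multiplications, and `build_parens` returns `(A0A1)`.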