2. Optimization and Optimal Value
• Any problem that can be solved in more than one way is called an optimization problem. For instance, we travel from one point to another and have more than one route to choose from.
• When we reach our destination at the lowest possible cost, that cost is the optimal value.
5. • Since she has no pockets in her trousers, she needs as few coins as possible.
• The problem to be solved here is to give her 85 dollars with the fewest possible coins.
6. Available coins with Alex
Options are:
• Bags full of 50-, 20-, 10- and 5-dollar coins only
8. Optimal value at each step
• The greedy algorithm selects the minimum or maximum (the maximum, in this case) at each step.
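The step-by-step selection above can be sketched in a few lines of Python (the function name `greedy_change` is ours, for illustration): at each step, pick the largest coin that still fits into the remaining amount.

```python
def greedy_change(amount, coins):
    """Repeatedly pick the largest coin that still fits into the amount."""
    result = []
    for coin in sorted(coins, reverse=True):  # try the biggest coins first
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result

# Alex's problem: 85 dollars with 50-, 20-, 10- and 5-dollar coins
print(greedy_change(85, [50, 20, 10, 5]))  # [50, 20, 10, 5] -> 4 coins
```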
9. What is a Greedy Algorithm?
• It is an optimization problem-solving method that selects the locally optimal choice (i.e., the maximum or minimum among the available options) at each step and hopes to achieve the global optimum.
• Greedy algorithms work step by step and always choose the step that provides an immediate profit/benefit.
• They choose the “locally optimal solution” without thinking about future consequences.
10. What is a Greedy Algorithm?
• Greedy algorithms may not always lead to the globally optimal solution, because they do not consider the entire data.
• The choices made by the greedy approach do not take future data and choices into account.
• In some cases, making the decision that looks right at the moment gives the best solution (greedy works), but in other cases it doesn’t.
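A small counterexample makes this concrete. With a hypothetical coin set {1, 3, 4} (not Alex's coins, chosen only for illustration) and an amount of 6, always taking the largest coin first is not optimal:

```python
def greedy_change(amount, coins):
    """Repeatedly pick the largest coin that still fits into the amount."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result

# Greedy takes 4 first, then is forced to use two 1s: 3 coins in total.
print(greedy_change(6, [1, 3, 4]))  # [4, 1, 1]
# The optimal answer uses only 2 coins: 3 + 3.
```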
13. Dynamic Programming
• Most optimization algorithms (e.g., greedy) work on a simple principle: solve the components of the main problem (i.e., subproblems), then combine these sub-solutions into an overall solution.
• Many subproblems are created and solved repeatedly. To reduce the number of computations, the dynamic programming approach aims to solve each subproblem just once.
• Once the solution to a given subproblem has been computed, it is saved or "memoized," making it easy to look up the next time the same solution is required.
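A minimal sketch of memoization in Python, using Fibonacci numbers as the subproblems (the dictionary name `memo` is ours, for illustration):

```python
memo = {}

def fib(n):
    if n in memo:               # already solved: reuse the saved answer
        return memo[n]
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    memo[n] = result            # save ("memoize") the answer for next time
    return result

print(fib(30))  # 832040, with each subproblem solved only once
```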
14. Dynamic Programming
• In daily life, we use the approach of dynamic programming without noticing it.
• We ask a kid to count the sticks; he counts and correctly answers “4”.
15. We actually applied dynamic programming
• We add another stick and ask again; he doesn’t count them all again and still gives the answer as 5.
• The kid memorized the previous answer, so there was no need to count again.
16. 1. Optimal substructure
• If the overall optimal solution of a problem can be determined from the optimal solutions of its subproblems, the problem is said to have the optimal substructure property.
• We know that the nth Fibonacci number, Fib(n), is nothing more than the sum of the two preceding Fibonacci numbers: Fib(n) = Fib(n-1) + Fib(n-2).
• A problem of size 'n' has been divided into subproblems of sizes 'n-1' and 'n-2', as shown by the equation above.
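The equation translates directly into a naive recursive function; each call builds its answer from the optimal answers of the two smaller subproblems:

```python
def fib(n):
    # base cases: Fib(0) = 0, Fib(1) = 1
    if n < 2:
        return n
    # optimal substructure: the answer for size n is built
    # from the answers for sizes n-1 and n-2
    return fib(n - 1) + fib(n - 2)

print(fib(5))  # 5
```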
17. 2. Overlapping subproblems
• If solving the problem requires addressing the same subproblem more than once, the problem is said to have overlapping subproblems.
18. An example with Fibonacci numbers
• Let's think about analyzing Fib(5). We can see that Fib(5) is calculated by taking the sum of Fib(4) and Fib(3), Fib(4) is calculated by taking the sum of Fib(3) and Fib(2), and so on.
• Fib(3), Fib(2), Fib(1), and Fib(0) are all evaluated repeatedly. These are nothing more than overlapping subproblems.
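This repetition can be made visible by counting how often each subproblem is evaluated during the naive recursion (the `calls` counter is added purely for illustration):

```python
from collections import Counter

calls = Counter()  # how many times each fib(n) gets evaluated

def fib(n):
    calls[n] += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(5)
# Fib(3) is evaluated 2 times, Fib(2) 3 times, Fib(1) 5 times, Fib(0) 3 times
print(sorted(calls.items()))  # [(0, 3), (1, 5), (2, 3), (3, 2), (4, 1), (5, 1)]
```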
19. Greedy vs Dynamic Programming
Greedy
• No guarantee of reaching the optimal solution.
• Efficient in terms of memory.
• Fast.
Dynamic Programming
• The optimal solution is guaranteed, as it generally considers all possible cases and then chooses the best.
• It stores previous results, so it consumes memory.
• Comparatively slow.
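The contrast shows up on the coin problem itself. A dynamic-programming sketch (the function name `dp_min_coins` is ours) tabulates the best answer for every smaller amount, so for coins {1, 3, 4} and amount 6 it finds the 2-coin answer (3 + 3) that a largest-coin-first greedy strategy misses, at the cost of memory for the table:

```python
def dp_min_coins(amount, coins):
    # best[a] = fewest coins needed to make amount a;
    # every coin is tried for every sub-amount, so the result is optimal
    best = [0] + [float("inf")] * amount
    for a in range(1, amount + 1):
        for coin in coins:
            if coin <= a:
                best[a] = min(best[a], best[a - coin] + 1)
    return best[amount]

print(dp_min_coins(6, [1, 3, 4]))     # 2  (3 + 3; greedy would use 3 coins)
print(dp_min_coins(85, [50, 20, 10, 5]))  # 4  (matches greedy on Alex's coins)
```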