Greedy and Divide and Conquer Approach



           Authors:
     1. Devendra Agarwal
     2. Irfan Hudda
     3. Karan Singh
     4. Ankush Sachdeva
Greedy Algorithms
 o Greedy algorithms are generally used in
   optimization problems
 o The problem must have an optimal substructure
 o Greedy algorithms try to maximize the objective
   function by making the greedy choice at every
   point. In other words, at each step we do whatever
   seems best at that moment, ignoring future
   possibilities (no planning ahead)
Contd..
 • Usually O(n) or O(n lg n) solutions.
 • But we must be able to prove the
   correctness of a greedy algorithm if we are to
   be sure that it works.
Problem 1.
 • A family goes for a picnic along a straight road. There
   are k hotels on their way, situated at distances
   d1, d2, …, dk from the start. The last hotel is their
   destination. They can travel at most a length L in a day,
   but they need to stay at a hotel at night. Find a
   strategy for the family so that they reach hotel k
   in the minimum number of days.
Solution
 • The first solution that comes to mind is to
   travel as far as possible on each day,
   i.e. stop at a hotel only if the next hotel
   cannot be reached on the same day.
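This "travel as far as possible" rule can be sketched as follows (a minimal sketch; the function name and the None return for unreachable inputs are my own choices, not part of the original slides):

```python
def min_days(d, L):
    """Greedy: each day, travel as far as possible without exceeding L
    since the last overnight stop. d is the sorted list of hotel
    distances from the start; the last entry is the destination."""
    days, last_stop = 1, 0  # last_stop = distance of the morning's hotel
    for i, dist in enumerate(d):
        if dist - last_stop > L:
            if i == 0 or dist - d[i - 1] > L:
                return None  # some gap exceeds L: destination unreachable
            days += 1            # sleep at the previous hotel instead
            last_stop = d[i - 1]
    return days
```

For example, with hotels at distances 3, 6, 9, 12 and L = 4, the family stops at every hotel and needs 4 days.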
Proof
 Proof strategy for greedy algorithms:
 • Suppose there is a better solution that stops at
   hotels i1 < i2 < … < ir while the greedy solution stops at
   j1 < j2 < … < js, with r < s.
 • Consider the first place where the two solutions differ,
   and let that index be p. Clearly ip < jp.
 • On any day when the non-greedy solution starts from
   hotel iq with iq < jq, the non-greedy solution cannot end up
   ahead of the greedy solution.
 • Therefore the greedy solution stays ahead, so it cannot
   need more days than the other solution; contradiction.
Problem 2
 • There are n processes that need to be executed.
   Process i takes a preprocessing time pi and time ti to
   execute afterwards. All preprocessing has to be done
   on a supercomputer and execution can be done on
   normal computers. We have only one supercomputer
   and n normal computers. Find a sequence in which to
   execute the processes such that the time taken to
   finish all the processes is minimum.
Solution
 • Sort the processes in descending order of ti and execute
   them in that order. (At least, this is the solution for
   which no counterexample is known.)
Proof
 • Let there be any other ordering
   σ(1), σ(2), …, σ(n) such that t_σ(i) <
   t_σ(i+1) for some i.
 • Show that by executing σ(i+1) before σ(i),
   we do no worse.
 • Keep applying this transformation until there are no
   inversions w.r.t. t. This is our greedy solution.
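The schedule's finish time is easy to compute: the supercomputer preprocesses the jobs one after another, and each job then runs on its own normal computer. A sketch of the greedy order (the function name and the (p, t) tuple encoding are my own conventions):

```python
def min_finish_time(jobs):
    """jobs: list of (p_i, t_i) pairs. Preprocess on the single
    supercomputer in descending order of t_i; job i then finishes at
    (sum of preprocessing times up to and including i) + t_i."""
    jobs = sorted(jobs, key=lambda job: job[1], reverse=True)
    elapsed, finish = 0, 0
    for p, t in jobs:
        elapsed += p                      # supercomputer time used so far
        finish = max(finish, elapsed + t)  # when this job's execution ends
    return finish
```

For example, with jobs (p, t) = (1, 5), (2, 3), (3, 1), the greedy order finishes everything at time 7, whereas the ascending-t order would finish at time 11.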
Problem 3
 • A thief breaks into a shop only to find that the
   stitches of his bag have come loose. There are N
   items in the shop, item i having weight
   W[i] and value V[i]. The total weight the bag
   can carry is W. Given that he can take fractional
   amounts of items, how should he choose the items
   to maximize the total value of what he steals?
Solution:
 • Sort the items by vi/wi in decreasing order and
   always pick the item with the maximum vi/wi
   ratio. If wi ≤ Remaining_W, take that item fully;
   otherwise, take a fractional part of it.
Proof
 • Left as an exercise to the reader.
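The sort-by-density rule above can be sketched as follows (a minimal sketch; the function name is my own):

```python
def fractional_knapsack(weights, values, W):
    """Greedy for the fractional knapsack: take items in decreasing
    order of value density v_i / w_i, splitting the last item taken."""
    items = sorted(zip(weights, values), key=lambda wv: wv[1] / wv[0],
                   reverse=True)
    total, remaining = 0.0, W
    for w, v in items:
        if remaining <= 0:
            break
        take = min(w, remaining)   # whole item if it fits, else a fraction
        total += v * (take / w)
        remaining -= take
    return total
```

For example, with weights (10, 20, 30), values (60, 100, 120) and W = 50, the thief takes the first two items whole and two thirds of the third, for a total value of 240.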
Knapsack Problem
Problem
 • Suppose we have n items U = {u1, …, un}
   that we would like to insert into a
   knapsack of size C. Each item ui has a
   size si and a value vi.
 • Choose a subset of items S ⊆ U such
   that

     Σ_{ui ∈ S} vi   is maximum   and   Σ_{ui ∈ S} si ≤ C
Dynamic Programming Solution
 • For 1 j C & 1 i n, let V[i,j] be the
   optimal value obtained by filling a
   knapsack of size j with items taken from
  {u1, ..ui}.
 • Also V[i,0]=0 and V[0,j]=0.

 • Our Goal is to find V[n,C]
Recursive Formula


                               0                        if i  0 or j  0
            
            
            
V [i, j ]               V [i  1, j ]                        if j  si
            
            
            max  [i  1, j ],V [i  1, j  si ]  vi 
                 V                                             otherwise
Algorithm: Knapsack
Input: Knapsack of size C, items U = {u1, …, un}, with sizes s1, …, sn, and
   values v1, …, vn
Output:
          max Σ_{ui ∈ S} vi   s.t.   Σ_{ui ∈ S} si ≤ C
for i ← 0 to n    V[i,0] ← 0    end for
for j ← 0 to C    V[0,j] ← 0    end for
for i ← 1 to n
   for j ← 1 to C
        V[i,j] ← V[i-1, j]
        if si ≤ j then V[i,j] ← max{V[i-1, j], V[i-1, j - si] + vi}
   end for
end for
return V[n,C]
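A direct transcription of this pseudocode into Python (the function name and the 0-indexed `sizes`/`values` lists are my own conventions):

```python
def knapsack(sizes, values, C):
    """0/1 knapsack: fill the V[i][j] table row by row.
    Items are 1-indexed in the recurrence; V[0][j] = V[i][0] = 0."""
    n = len(sizes)
    V = [[0] * (C + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        si, vi = sizes[i - 1], values[i - 1]
        for j in range(1, C + 1):
            V[i][j] = V[i - 1][j]          # item i not taken
            if si <= j:                    # item i fits: maybe take it
                V[i][j] = max(V[i - 1][j], V[i - 1][j - si] + vi)
    return V[n][C]
```

For example, with sizes (1, 3, 4, 5), values (1, 4, 5, 7) and C = 7, the optimum is 9 (take the items of size 3 and 4).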
Performance Analysis
 • Time = (nC)
 • Space = (nC)
 • But can be modified to use only (C)
   space by storing two rows only.
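The two-row idea can even be collapsed to a single row: if j is scanned downward, V[j - si] still holds the previous row's value when it is read. A sketch (my own variant, not from the slides):

```python
def knapsack_1d(sizes, values, C):
    """0/1 knapsack with Θ(C) space: one row, j scanned downward so
    that V[j - si] is still the previous item's row when read."""
    V = [0] * (C + 1)
    for si, vi in zip(sizes, values):
        for j in range(C, si - 1, -1):   # downward: don't reuse item i
            V[j] = max(V[j], V[j - si] + vi)
    return V[C]
```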
How does it work?

[Table illustration: the DP table has rows 0..n and columns 0..C.
 Row 0 is all zeros. Row 1 is 0 up to column s1 - 1 and v1 from
 column s1 onward. Row 2 copies row 1 up to column s2 - 1; from
 column s2 on, each entry is max{ V[1, j - s2] + v2, V[1, j] }.
 Later rows are filled the same way.]
Divide and Conquer
 • Three steps:
 1. Divide step: divide the problem into two or more
    subproblems.
 2. Recur step: solve the smaller
    subproblems recursively.
 3. Merge step: merge the solutions to get the final
    solution.
Problem 1
 • Merge sort is divide and conquer.
 • Analysis of merge sort:
 T(n) = 2·T(n/2) + O(n)
 T(n/2): time taken to sort the left / right half.
 O(n): c·n time taken to merge the two halves.
 Problem: find the number of inversions in an array by
 modifying the merge sort code slightly.
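One possible modification, sketched here under the standard observation that when a right-half element is merged before remaining left-half elements, it forms an inversion with each of them (the function name is my own; try the exercise before reading):

```python
def count_inversions(a):
    """Merge sort that also counts pairs i < j with a[i] > a[j].
    Returns (sorted list, inversion count)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, inv_l = count_inversions(a[:mid])
    right, inv_r = count_inversions(a[mid:])
    merged, inv = [], inv_l + inv_r
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
            inv += len(left) - i  # right[j] jumps over all remaining left items
    merged += left[i:] + right[j:]
    return merged, inv
```

For example, [2, 4, 1, 3, 5] has 3 inversions: (2,1), (4,1), (4,3). The recurrence is unchanged, so counting inversions still costs O(n lg n).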
