This presentation discusses the knapsack problem and its two main versions: 0/1 and fractional. The 0/1 knapsack problem involves indivisible items that are either fully included or not included, and is solved using dynamic programming. The fractional knapsack problem allows items to be partially included, and is solved using a greedy algorithm. Examples are provided of solving each version using their respective algorithms. The time complexity of these algorithms is also presented. Real-world applications of the knapsack problem include cutting raw materials and selecting investments.
P, NP, NP-Complete, and NP-Hard
Reductions in Algorithms
NP-Completeness and Cook's Theorem
NP-Complete and NP-Hard Problems
Travelling Salesman Problem (TSP)
Travelling Salesman Problem (TSP) - Approximation Algorithms
PRIMES is in P - (A hope for NP problems in P)
Millennium Problems
Conclusions
The document discusses two types of knapsack problems - the 0-1 knapsack problem and the fractional knapsack problem. The 0-1 knapsack problem uses dynamic programming to determine how to fill a knapsack to maximize the total value of items without exceeding the knapsack's weight limit, where each item is either fully included or not included. The fractional knapsack problem allows partial inclusion of items and can be solved greedily by always including a fraction of the highest value per unit weight item until the knapsack is full.
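The greedy rule just described can be sketched in a few lines (a minimal illustration, not code from the document; the function name and sample data are hypothetical):

```python
def fractional_knapsack(values, weights, capacity):
    """Greedy fractional knapsack: take items in decreasing value/weight order,
    splitting the last item if only a fraction of it fits."""
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)      # whole item, or the fraction that fits
        total += value * take / weight
        capacity -= take
    return total
```

With values (60, 100, 120), weights (10, 20, 30), and capacity 50, the greedy choice takes the first two items whole and two thirds of the third, for a total value of 240.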
Given two integer arrays val[0...n-1] and wt[0...n-1] that represent the values and weights of n items respectively, find the maximum-value subset of val[] whose total weight is at most the knapsack capacity W. Here, the branch and bound algorithm for this problem is discussed.
The document discusses backtracking and branch and bound algorithms for solving subset and permutation problems. It explains that backtracking performs a depth-first search of the solution space tree, exploring nodes recursively without storing the entire tree. Branch and bound also searches the tree systematically but uses priority queues and bounding functions to prioritize parts of the tree most likely to contain solutions. Both algorithms can solve large problem instances by exploring only portions of the exponential-sized solution space trees as needed.
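As a concrete instance of a backtracking depth-first search over a subset solution-space tree, here is a hedged sketch for the subset-sum problem (function names are my own); the suffix-sum test plays the role of a simple bounding function that prunes subtrees which cannot contain a solution:

```python
def subset_sum(weights, target):
    """Backtracking DFS over the binary solution-space tree: at depth i we
    decide whether item i is in or out, pruning branches whose partial sum
    already exceeds the target or whose remaining items cannot reach it."""
    n = len(weights)
    suffix = [0] * (n + 1)                   # suffix[i] = sum of weights[i:]
    for i in range(n - 1, -1, -1):
        suffix[i] = suffix[i + 1] + weights[i]
    chosen = []

    def dfs(i, total):
        if total == target:
            return list(chosen)
        if i == n or total > target or total + suffix[i] < target:
            return None                      # prune this subtree
        chosen.append(weights[i])            # branch: include item i
        found = dfs(i + 1, total + weights[i])
        chosen.pop()                         # backtrack
        return found or dfs(i + 1, total)    # branch: exclude item i

    return dfs(0, 0)
```

Only a portion of the 2^n subsets is ever visited, which is exactly the point made above about exploring the exponential tree on demand.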
- NP-hard problems are at least as hard as problems in NP. A problem is NP-hard if any problem in NP can be reduced to it in polynomial time.
- Cook's theorem states that if the SAT problem can be solved in polynomial time, then every problem in NP can be solved in polynomial time.
- Vertex cover problem is proven to be NP-hard by showing that independent set problem reduces to it in polynomial time, meaning there is a polynomial time algorithm that converts any instance of independent set into an instance of vertex cover.
- Therefore, if there were a polynomial-time algorithm for vertex cover, it could be used to solve independent set in polynomial time. Since independent set is NP-complete, no such algorithm is known, and vertex cover is therefore NP-hard as well.
The document discusses greedy algorithms and how they can be applied to solve optimization problems like the knapsack problem, activity selection problem, and job sequencing problem. It provides examples of greedy algorithms for each problem. A greedy algorithm works by making locally optimal choices at each step in the hope of finding a globally optimal solution. For the knapsack problem, the greedy approach sorts items by profit/weight ratio and fills the knapsack accordingly. For activity selection, it schedules activities based on earliest finish time. For job sequencing, it sorts jobs by profit and schedules the most profitable jobs that meet deadlines.
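The earliest-finish-time rule for activity selection mentioned above can be sketched as follows (a minimal illustration with a hypothetical function name, not the document's own pseudocode):

```python
def select_activities(activities):
    """Greedy activity selection: sort by finish time, then pick every
    activity that starts no earlier than the last selected one finishes."""
    selected = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:
            selected.append((start, finish))
            last_finish = finish
    return selected
```

Sorting dominates the cost, so the whole procedure runs in O(n log n) time.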
Catching race conditions is extremely difficult as it is an NP-hard problem. The document discusses several examples of race conditions that can occur between two groups of processes communicating through message passing. Different attempts are made to prevent race conditions by protecting shared memory locations, but each attempt has problems. The key lessons are that protection must be applied uniformly to all processes, and locks must only be released by their acquiring thread.
This presentation contains information about the divide and conquer algorithm. It includes discussion regarding its part, technique, skill, advantages and implementation issues.
The Bellman–Ford algorithm computes shortest paths from a single source vertex to all other vertices in a weighted digraph; unlike Dijkstra's algorithm, it also handles negative edge weights and can detect negative cycles.
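A compact sketch of the algorithm, assuming vertices numbered 0..n-1 and an edge list of (u, v, w) triples (the interface is my own, not from the document):

```python
def bellman_ford(n, edges, source):
    """Single-source shortest paths via repeated edge relaxation.
    Tolerates negative edge weights; raises on a reachable negative cycle."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):               # n-1 rounds of relaxation suffice
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:                  # early exit once distances settle
            break
    for u, v, w in edges:                # one extra pass: any improvement
        if dist[u] + w < dist[v]:        # now means a negative cycle
            raise ValueError("negative cycle detected")
    return dist
```

The n-1 relaxation rounds over all m edges give the algorithm its O(nm) running time.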
The branch-and-bound method is used to solve optimization problems by systematically evaluating potential solutions through traversing a state space tree. It improves on backtracking by not limiting the traversal order and using bounds to prune unpromising nodes. For the traveling salesperson problem, an initial tour provides an upper bound, local information at each node gives a lower bound, and nodes are expanded in best-first order until an optimal tour is found or proven impossible.
The document describes pushdown automata (PDA). A PDA has an input tape, a stack, a finite control, and a transition function. It accepts or rejects strings by reading symbols from the tape, pushing and popping symbols on the stack, and changing state according to the transition function, which defines the possible moves based on the current state, input symbol, and top-of-stack symbol. A string is accepted either when the PDA halts in a final state or when it empties its stack; the two acceptance conventions are equivalent in power. Nondeterministic PDAs recognize exactly the context-free languages. Examples are given of PDAs for specific languages.
The document discusses several algorithms for computing the convex hull of a set of points, including brute force, quick hull, divide and conquer, Graham's scan, and Jarvis march. It provides details on the time complexity of each algorithm, ranging from O(n^2) for brute force to O(n log n) for divide and conquer and Graham's scan (quick hull achieves O(n log n) on average but O(n^2) in the worst case). Jarvis march runs in O(nh) time, where h is the number of points on the convex hull.
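For a concrete O(n log n) hull routine, here is Andrew's monotone chain, a variant of Graham's scan that is not listed in the document but illustrates the same sort-then-scan idea (function names are my own):

```python
def convex_hull(points):
    """Andrew's monotone chain: sort the points, then build the lower and
    upper hulls with a stack, popping any point that makes a non-left turn."""
    def cross(o, a, b):   # z-component of (a-o) x (b-o); > 0 means left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]   # endpoints are shared, drop duplicates
```

The sort costs O(n log n); each point is pushed and popped at most once, so the two scans are linear.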
The document discusses the knapsack problem and greedy algorithms. It defines the knapsack problem as an optimization problem where, given constraints and an objective function, the goal is to find the feasible solution that maximizes or minimizes the objective. It describes the knapsack problem as having two versions: 0-1, where items are indivisible, and fractional, where items can be divided. The fractional knapsack problem can be solved using a greedy approach by sorting items by value-to-weight ratio and filling the knapsack accordingly until full.
A neural network maps a set of inputs to a set of outputs. It is composed of nodes or units connected by links with weights. A neural network can compute or approximate functions, perform pattern recognition, signal processing, and learn to do any of these. A perceptron is a basic type of neural network that uses a threshold activation function. It can be trained to learn functions using the perceptron learning rule, which adjusts the weights to minimize errors between the network's output and the target output.
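The perceptron learning rule mentioned above can be sketched as follows (a minimal illustration under my own interface, trained on the linearly separable AND function, where convergence is guaranteed):

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """Perceptron learning rule: for each misclassified sample, move the
    weights by the error (target - output) times the learning rate times
    the input. samples is a list of (inputs, target) with targets 0 or 1."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out               # -1, 0, or +1
            if err:
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

# AND is linearly separable, so the rule converges to a separating line
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
```

XOR, by contrast, is not linearly separable, so no single perceptron can learn it; that limitation is what motivates multi-layer networks.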
The document discusses divide and conquer algorithms. It describes divide and conquer as a design strategy that involves dividing a problem into smaller subproblems, solving the subproblems recursively, and combining the solutions. It provides examples of divide and conquer algorithms like merge sort, quicksort, and binary search. Merge sort works by recursively sorting halves of an array until it is fully sorted. Quicksort selects a pivot element and partitions the array into subarrays of smaller and larger elements, recursively sorting the subarrays. Binary search recursively searches half-intervals of a sorted array to find a target value.
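The merge sort description above maps directly onto code; this is a minimal sketch (not the document's own listing) showing the divide, conquer, and combine steps:

```python
def merge_sort(arr):
    """Divide and conquer: split in half, sort each half recursively,
    then merge the two sorted runs."""
    if len(arr) <= 1:                        # base case: already sorted
        return arr
    mid = len(arr) // 2                      # divide
    left = merge_sort(arr[:mid])             # conquer each half
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0                  # combine: merge two sorted runs
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

The recurrence T(n) = 2T(n/2) + O(n) gives the familiar O(n log n) bound.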
This document discusses pushdown automata (PDA) and how they can be used to accept context-free languages. It defines a PDA as a 7-tuple that includes a finite set of states, input symbols, stack symbols, initial state, initial stack symbol, set of final states, and a transition function. A context-free grammar is also defined as a 4-tuple that includes variables, terminals, production rules, and a start symbol. The document then shows how to construct a PDA that is equivalent to a given context-free grammar by defining transition rules based on the grammar productions. An example of converting a grammar to a PDA using this construction method is also provided.
Knapsack problem algorithm, greedy algorithmHoneyChintal
The document discusses the knapsack problem and algorithms to solve it. It describes the 0-1 knapsack problem, which does not allow breaking items, and the fractional knapsack problem, which does. It provides an example comparing the two. The document then explains the greedy algorithm approach to solve the fractional knapsack problem by calculating value to weight ratios and filling the knapsack with the highest ratio items first. Pseudocode for the greedy fractional knapsack algorithm is provided along with analysis of its time complexity.
The document provides an overview of perceptrons and neural networks. It discusses how neural networks are modeled after the human brain and consist of interconnected artificial neurons. The key aspects covered include the McCulloch-Pitts neuron model, Rosenblatt's perceptron, different types of learning (supervised, unsupervised, reinforcement), the backpropagation algorithm, and applications of neural networks such as pattern recognition and machine translation.
This document discusses NP-hard and NP-complete problems. It begins by defining the classes P, NP, NP-hard, and NP-complete. It then provides examples of NP-hard problems like the traveling salesperson problem, satisfiability problem, and chromatic number problem. It explains that to show a problem is NP-hard, one shows it is at least as hard as another known NP-hard problem. The document concludes by discussing how restricting NP-hard problems can result in problems that are solvable in polynomial time.
The document discusses the dynamic programming approach to solving the Fibonacci numbers problem and the rod cutting problem. It explains that dynamic programming formulations first express the problem recursively but then optimize it by storing results of subproblems to avoid recomputing them. This is done either through a top-down recursive approach with memoization or a bottom-up approach by filling a table with solutions to subproblems of increasing size. The document also introduces the matrix chain multiplication problem and how it can be optimized through dynamic programming by considering overlapping subproblems.
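The bottom-up tabular approach described above looks like this for rod cutting (a minimal sketch with my own function name; prices[i] is assumed to be the price of a piece of length i+1):

```python
def rod_cut(prices, n):
    """Bottom-up rod cutting: best[j] is the maximum revenue for a rod of
    length j, built from the already-computed solutions to shorter rods."""
    best = [0] * (n + 1)
    for j in range(1, n + 1):
        # try every length i for the first piece; the rest is a subproblem
        best[j] = max(prices[i - 1] + best[j - i] for i in range(1, j + 1))
    return best[n]
```

Each of the n table entries scans at most n piece lengths, so the table fills in O(n^2) time, versus the exponential cost of the naive recursion.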
This document discusses NP-complete problems and their properties. Some key points:
- NP-complete problems have an exponential upper bound on runtime but only a polynomial lower bound, making them appear intractable; however, their intractability has never been proven.
- NP-complete problems are reducible to each other in polynomial time. Solving one would solve all NP-complete problems.
- NP refers to problems that can be verified in polynomial time. P refers to problems that can be solved in polynomial time.
- A problem is NP-complete if it is in NP and all other NP problems can be reduced to it in polynomial time. Proving a problem is NP-complete involves showing that it is in NP and giving a polynomial-time reduction to it from a known NP-complete problem.
This document outlines a course on neural networks and fuzzy systems. The course is divided into two parts, with part one focusing on neural networks over 11 weeks, covering topics like perceptrons, multi-layer feedforward networks, and unsupervised learning. Part two focuses on fuzzy systems over 4 weeks, covering fuzzy set theory and fuzzy systems. The document also provides details on concepts like linear separability, decision boundaries, perceptron learning algorithms, and using neural networks to solve problems like AND, OR, and XOR gates.
This document discusses randomized algorithms. It begins by listing different categories of algorithms, including randomized algorithms. Randomized algorithms introduce randomness into the algorithm to avoid worst-case behavior and find efficient approximate solutions. Quicksort is presented as an example, where choosing the pivot at random avoids the quadratic worst case on any fixed input and yields O(n log n) expected runtime. The document also discusses the randomized closest pair algorithm and a randomized algorithm for primality testing. Both introduce randomness to improve efficiency compared to deterministic algorithms for the same problems.
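A minimal sketch of randomized quicksort (not the document's own listing); the only difference from the deterministic version is the random pivot choice:

```python
import random

def quicksort(arr):
    """Randomized quicksort: a uniformly random pivot makes the O(n^2)
    worst case vanishingly unlikely for any fixed input, so the expected
    running time is O(n log n)."""
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)               # the randomized step
    less    = [x for x in arr if x < pivot]
    equal   = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

With a fixed pivot rule (say, the first element), an adversary can supply an already-sorted input and force quadratic time; randomization removes that adversarial input entirely.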
The depth buffer method is used to determine visibility in 3D graphics by testing the depth (z-coordinate) of each surface to determine the closest visible surface. It involves using two buffers - a depth buffer to store the depth values and a frame buffer to store color values. For each pixel, the depth value is calculated and compared to the existing value in the depth buffer, and if closer the color and depth values are updated in the respective buffers. This method is implemented efficiently in hardware and processes surfaces one at a time in any order.
Problem | Problem v/s Algorithm v/s Program | Types of Problems | Computational complexity | P class v/s NP class Problems | Polynomial time v/s Exponential time | Deterministic v/s non-deterministic Algorithms | Functions of non-deterministic Algorithms | Non-deterministic searching Algorithm | Non-deterministic sorting Algorithm | NP - Hard and NP - Complete Problems | Reduction | properties of reduction | Satisfiability problem and Algorithm
The document discusses backtracking and branch and bound algorithms. Backtracking incrementally builds candidates and abandons them (backtracks) when they cannot lead to a valid solution. Branch and bound systematically enumerates solutions and discards branches that cannot produce a better solution than the best found so far based on upper bounds. Examples provided are the N-Queens problem solved with backtracking and the knapsack problem solved with branch and bound. Pseudocode is given for both algorithms.
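The N-Queens backtracking approach mentioned above can be sketched compactly (a minimal illustration with my own function name, not the document's pseudocode):

```python
def n_queens(n):
    """Backtracking: place one queen per row, abandoning any partial
    placement that attacks an earlier queen by column or diagonal."""
    solutions = []
    cols = []                      # cols[r] = column of the queen in row r

    def safe(row, col):
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def place(row):
        if row == n:
            solutions.append(list(cols))
            return
        for col in range(n):
            if safe(row, col):
                cols.append(col)
                place(row + 1)
                cols.pop()         # backtrack and try the next column

    place(0)
    return solutions
```

The infeasibility test prunes most of the n^n placements; for n = 8 only 92 full solutions survive.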
Amortized analysis allows analyzing the average performance of a sequence of operations on a data structure, even if some operations are expensive. There are three main methods for amortized analysis: aggregate analysis, accounting method, and potential method.
The accounting method assigns differing amortized costs to operations. When the amortized cost is higher than actual cost, the difference is stored as credit on objects. Later operations can use accumulated credits when their amortized cost is lower than actual cost.
The potential method associates extra "potential" with the data structure as a whole rather than with individual objects. The amortized cost of an operation is its actual cost plus the change in potential caused by the operation. Maintaining a nonnegative potential guarantees that the total amortized cost is an upper bound on the total actual cost of the sequence.
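The standard worked example for all three methods is the doubling dynamic array. This sketch (my own, not from the document) tallies the actual cost of n appends under aggregate analysis, counting one unit per insertion plus one unit per element copied on each resize:

```python
def doubling_append_costs(n):
    """Aggregate analysis of a doubling dynamic array: an append costs 1,
    plus a copy of all existing elements whenever capacity is exhausted.
    The total over n appends stays below 3n, so the amortized cost per
    append is O(1) even though a single append can cost O(n)."""
    capacity, size, total = 1, 0, 0
    for _ in range(n):
        if size == capacity:       # expensive append: copy, then double
            total += size
            capacity *= 2
        total += 1                 # the append itself
        size += 1
    return total
```

In the accounting view, charging 3 credits per append (1 to insert, 2 banked for future copies) covers the same total.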
The document discusses various greedy algorithms including knapsack problems, minimum spanning trees, shortest path algorithms, and job sequencing. It provides descriptions of greedy algorithms, examples to illustrate how they work, and pseudocode for algorithms like fractional knapsack, Prim's, Kruskal's, Dijkstra's, and job sequencing. Key aspects covered include choosing the best option at each step and building up an optimal solution incrementally using greedy choices.
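Of the algorithms listed, Kruskal's shows the greedy pattern most directly; here is a hedged sketch using a union-find structure to detect cycles (interface and edge format are my own assumptions):

```python
def kruskal(n, edges):
    """Kruskal's MST: scan edges in increasing weight order, keeping each
    edge that joins two different components. edges are (w, u, v) triples
    over vertices 0..n-1."""
    parent = list(range(n))

    def find(x):                       # union-find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):      # greedy order: cheapest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                   # different trees: edge is safe to add
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return total, mst
```

Sorting the edges dominates, giving O(m log m) overall; the union-find operations are nearly constant amortized.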
The greedy method constructs an optimal solution in stages by making locally optimal choices at each stage without reconsidering past decisions. It selects the choice that appears best at the current time without regard for its long-term consequences. The general greedy algorithm procedure selects the best choice from available inputs at each stage until a complete solution is reached. Examples demonstrate both when the greedy method succeeds in finding an optimal solution and when it fails to do so compared to alternative methods like dynamic programming.
The document discusses greedy algorithms and their applications. Greedy algorithms make locally optimal choices at each step to arrive at a global solution. They are used to solve optimization problems by considering inputs in order based on a selection measure. Applications mentioned include knapsack problem, minimum spanning tree, job sequencing, Huffman coding, and shortest path. Specific details are provided on the knapsack problem and job sequencing problem algorithms.
The document discusses the greedy method algorithmic approach. It provides an overview of greedy algorithms including that they make locally optimal choices at each step to find a global optimal solution. The document also provides examples of problems that can be solved using greedy methods like job sequencing, the knapsack problem, finding minimum spanning trees, and single source shortest paths. It summarizes control flow and applications of greedy algorithms.
This document outlines an algorithm design technique called the greedy method. It discusses several problems that can be solved using greedy algorithms, including the knapsack problem, job scheduling with deadlines, minimum cost spanning trees, and optimal storage on tapes. For each problem, it provides the general greedy approach, an algorithm to solve the problem greedily, and an example to illustrate the algorithm. It also compares the Prim's and Kruskal's algorithms for finding minimum cost spanning trees.
Dynamic programming is used to solve optimization problems by breaking them down into subproblems. It solves each subproblem only once, storing the results in a table to lookup when the subproblem recurs. This avoids recomputing solutions and reduces computation. The key is determining the optimal substructure of problems. It involves characterizing optimal solutions recursively, computing values in a bottom-up table, and tracing back the optimal solution. An example is the 0/1 knapsack problem to maximize profit fitting items in a knapsack of limited capacity.
The document discusses algorithms and the greedy method. It provides examples of problems that can be solved using greedy algorithms, including job sequencing with deadlines and finding minimum spanning trees. It then provides details of algorithms to solve these problems greedily. The job sequencing algorithm sequences jobs by deadline to maximize total profit. Prim's algorithm is described for finding minimum spanning trees by gradually building up the tree from the minimum cost edge at each step.
Unit-2 Branch & Bound Design of Algorithms.pptHarjotDhillon8
The document discusses several optimization searching strategies:
1) Branch and bound strategy uses two mechanisms - branching to generate solution space and bounding to prune branches where lower bound exceeds upper bound. It is efficient on average but worst case is exponential.
2) It describes applying branch and bound to the traveling salesman problem and 0/1 knapsack problem.
3) The A* algorithm is also discussed as it uses best-first search to find optimal solutions by estimating costs with heuristic functions.
The document discusses brute force algorithms and exhaustive search techniques. It provides examples of problems that can be solved using these approaches, such as computing powers and factorials, sorting, string matching, polynomial evaluation, the traveling salesman problem, knapsack problem, and the assignment problem. For each problem, it describes generating all possible solutions and evaluating them to find the best one. Most brute force algorithms have exponential time complexity, evaluating all possible combinations or permutations of the input.
The document discusses various math and string classes in Java. It covers:
- Constructing objects using the new operator and passing parameters.
- Using the Random class to generate random numbers.
- Declaring constants using final and static final.
- Basic arithmetic, increment/decrement, and math methods.
- Creating and manipulating strings using methods like length(), substring(), and concatenation.
- Drawing shapes on a frame using Graphics2D methods in a JComponent's paintComponent method.
The document discusses approximation algorithms for NP-complete problems. It introduces the concept of approximation ratios, which measure how close an approximate solution from a polynomial-time algorithm is to the optimal solution. The document then provides examples of approximation algorithms with a ratio of 2 for the vertex cover and traveling salesman problems. It also discusses using backtracking to find all possible solutions to the subset sum problem.
IRJET- Solving Quadratic Equations using C++ Application ProgramIRJET Journal
1) The document describes a C++ application program developed to solve quadratic equations. The program uses methods like factoring, completing the square, and the quadratic formula to find the solutions.
2) Field testing of the program showed students using it had an average score of 82.8% on a quadratic equations assessment, demonstrating the program's effectiveness.
3) Advantages of using such an application include reducing errors, supporting problem-solving processes, and creating awareness of mathematical concepts. It allows students to easily test conjectures and replay problem-solving steps.
The document summarizes various greedy algorithms and optimization problems that can be solved using greedy approaches. It discusses the greedy method, giving the definition that locally optimal decisions should lead to a globally optimal solution. Examples covered include picking numbers for largest sum, shortest paths, minimum spanning trees (using Kruskal's and Prim's algorithms), single-source shortest paths (using Dijkstra's algorithm), activity-on-edge networks, the knapsack problem, Huffman codes, and 2-way merging. Limitations of the greedy method are noted, such as how it does not always find the optimal solution for problems like shortest paths on a multi-stage graph.
The document discusses greedy algorithms and their use for optimization problems. It provides examples of using greedy approaches to solve scheduling and knapsack problems. Specifically, it describes how a greedy algorithm works by making locally optimal choices at each step in hopes of reaching a globally optimal solution. While greedy algorithms do not always find the true optimal, they often provide good approximations. The document also proves that certain greedy strategies, such as always selecting the item with the highest value to weight ratio for the knapsack problem, will find the true optimal solution.
The document discusses greedy algorithms and their properties. It describes how greedy algorithms work by making locally optimal choices at each step in the hope of reaching a globally optimal solution. Two examples are given: the activity selection problem and finding minimum spanning trees. Prim's algorithm for finding minimum spanning trees is described in detail, showing how it works by always selecting the lightest edge between the growing tree and remaining vertices.
The document discusses computer architecture and describes the basic components of a computer. It discusses the instruction cycle which involves fetching instructions from memory, decoding them, reading the effective address from memory, and executing the instruction. The basic computer has three types of instructions - memory reference, register reference, and input/output. Memory reference instructions refer to memory addresses and use direct or indirect addressing. Register reference instructions perform operations on registers. Input/output instructions are used for communication with external devices. The instruction cycle is then completed by fetching and executing the next instruction.
The document provides an overview of operating systems, including what they are, their goals and components. It describes how operating systems act as an intermediary between the user and computer hardware, executing programs and making resource allocation more efficient. It also summarizes the different types of operating systems like batch processing systems, time-sharing systems, personal computer systems, distributed systems, and real-time systems.
The document discusses various components of an Android application. It describes the four main types of app components: Activities, Services, Broadcast Receivers, and Content Providers. It provides details about what each component type represents and how it is implemented. It also discusses some additional concepts like fragments, views, layouts, intents and resources that are involved in building Android apps.
Mobile computing allows transmission of data, voice, and video through wireless devices without a fixed connection. It involves mobile communication infrastructure, mobile hardware devices like smartphones and tablets, and mobile software operating systems. The technology has advanced from 1G analog cellular to 2G digital cellular, 3G broadband cellular, 4G high-speed data, and upcoming 5G which will provide wireless internet speeds over 1 Gbps. Mobile computing provides benefits like location flexibility and enhanced productivity but also poses problems regarding security, authentication, health issues, and addiction.
A Monte Carlo simulation involves modeling a system with random variables to estimate outcomes. It repeats calculations using randomly generated values for the variables and averages the results. The document discusses using Monte Carlo simulations to model demand in business situations with uncertain variables. Examples show generating random numbers to simulate daily product demand over multiple days and calculating the average demand from the results.
The document discusses decision making under conditions of risk and uncertainty. It defines key terms like decision maker, alternatives, events, and payoff tables. It explains three types of decision making: decisions under certainty where outcomes are known; decisions under risk where outcomes are unknown but probabilities are known; and decisions under uncertainty where neither outcomes nor probabilities are known. It then discusses various decision making criteria that can be used under different conditions like maximax, maximin, minimax, Laplace, and Hurwicz criteria. Expected monetary value is introduced as a method to evaluate decisions under risk. Decision trees are also defined as a way to visually represent decision problems involving uncertainty.
This document provides information about project management applications including definitions of a project, project life cycle, and examples of projects. It also discusses network planning techniques such as Program Evaluation and Review Technique (PERT) and Critical Path Method (CPM). The key steps in CPM including forward and backward passes to determine earliest and latest start/finish times are explained. Formulas for calculating total float, free float, and independent float are provided. An example problem demonstrates drawing a network diagram and identifying the critical path and project duration.
The document describes a Monte Carlo simulation process for modeling uncertainty. It provides examples of simulating daily demand for a bakery and a car rental company using random numbers and probability distributions. For the bakery, the average daily demand over 5 days was calculated to be 17 units. For the car rental company, the average number of trips per week over 10 weeks was calculated to be 2.8 trips. The document demonstrates how Monte Carlo simulation can be used to model systems with uncertain variables and calculate average outcomes.
This document provides an overview of system dynamics and systems thinking. It defines key terms like systems, static vs dynamic systems, feedback loops, stocks and flows. Systems thinking focuses on how parts of a system interrelate and how systems work over time. System dynamics uses feedback loops and stocks/flows to model how complex systems change over time. It involves conceptualizing the system, formulating stock/flow diagrams, testing the model, and implementing it to test policies. Causal loop diagrams and stock/flow diagrams are introduced as tools to understand system structure and behavior.
This document discusses discrete event simulation and queueing systems. It provides definitions and explanations of key concepts in discrete event simulation including: entities and attributes, events, activities, system state, and components of discrete event simulation models. It also defines concepts related to queueing systems such as arrival processes, service times, queue disciplines, and how to simulate single and multiple server queueing systems. Simulation is presented as an important tool for analyzing complex, stochastic queueing systems when mathematical analysis is not possible.
Simulation involves developing a model of a real-world system over time to analyze its behavior and performance. The key aspects covered in this document include defining simulation as modeling the operation of a system over time through artificial history generation and observation. Simulation models can be used as analysis and design tools to predict the effects of changes to a system before actual implementation. Discrete event simulation is discussed as a common technique that models systems with state changes occurring at discrete points in time. The document also outlines the steps in a typical simulation study including problem formulation, model conceptualization, experimentation and analysis.
The document discusses Monte Carlo simulation methods. It begins by defining key terms like systems, models, simulation, random numbers, and Monte Carlo simulation. It then provides more details on Monte Carlo simulations, explaining that they are used to predict outcomes when random variables are present by running the model repeatedly with different random variable values and averaging the results. Several examples are given of fields that use Monte Carlo simulations. The document concludes by outlining the typical steps involved in a Monte Carlo simulation.
The document discusses the Java Abstract Window Toolkit (AWT). It describes that AWT is used to create graphical user interface applications in Java and its components are platform dependent. It then lists and describes various AWT components like containers, frames, panels, labels, buttons, checkboxes, lists, text fields, text areas, canvases and scroll bars. It also discusses how to create frames using inheritance and association. Finally, it provides examples of using buttons, text fields and text areas in AWT applications.
The document discusses Java packages and interfaces. It provides details about:
- What packages are and how they are used to prevent naming conflicts and organize classes.
- How to define a user-defined package with an example.
- What interfaces are and how they allow for multiple inheritance by implementing multiple interfaces.
- Examples of defining and implementing interfaces.
Inheritance is a mechanism in Java that allows one class to acquire the properties (fields and methods) of another class. The class that inherits is called the subclass, and the class being inherited from is called the superclass. This allows code reuse and establishes an is-a relationship between classes. There are three main types of inheritance in Java: single, multilevel, and hierarchical. Method overriding and dynamic method dispatch allow subclasses to provide their own implementation of methods defined in the superclass.
Classes and objects are fundamental concepts in object-oriented programming. A class defines common properties and behaviors of objects through fields and methods. An object is an instance of a class that represents a real-world entity with state (fields) and behavior (methods). Classes can inherit properties and behaviors from superclasses and implement interfaces. Objects are created from classes using constructors.
Unit 2-data types,Variables,Operators,Conitionals,loops and arraysDevaKumari Vijay
The document discusses various Java data types including primitive data types like byte, short, int, long, float, double, char, boolean and their ranges. It also explains variables in Java - local variables, instance variables, static variables. Different types of operators like arithmetic, assignment, comparison, logical, bitwise operators are defined along with examples. The document also covers conditional statements like if-else, switch case and different loops in Java - for, while, do-while loops along with examples. Break and continue statements in Java loops are also explained.
Java is an object-oriented programming language that was initially developed by James Gosling at Sun Microsystems in 1991. It is free to use, runs on all platforms, and is widely used for both desktop and mobile applications as well as large systems. Java code is compiled to bytecode that runs on a Java Virtual Machine, making Java programs platform independent. Key features of Java include being object-oriented, robust, secure, portable, high performance, and having a simple syntax. Java is commonly used to develop web applications, mobile apps, games, and for big data processing.
Introduction to design and analysis of algorithmDevaKumari Vijay
This document defines algorithms and describes how to analyze their efficiency. It states that an algorithm is a set of unambiguous instructions that accepts input and produces output within a finite number of steps. The document outlines criteria algorithms must satisfy like being definite, finite, and effective. It also describes different representations of algorithms like pseudocode and flowcharts. The document then discusses analyzing algorithms' time and space efficiency using asymptotic notations like Big-O, Big-Omega, and Big-Theta. It defines these notations and provides examples to classify algorithms' order of growth.
How to Build a Module in Odoo 17 Using the Scaffold MethodCeline George
Odoo provides an option for creating a module by using a single line command. By using this command the user can make a whole structure of a module. It is very easy for a beginner to make a module. There is no need to make each file manually. This slide will show how to create a module using the scaffold method.
Introduction to AI for Nonprofits with Tapp NetworkTechSoup
Dive into the world of AI! Experts Jon Hill and Tareq Monaur will guide you through AI's role in enhancing nonprofit websites and basic marketing strategies, making it easy to understand and apply.
Physiology and chemistry of skin and pigmentation, hairs, scalp, lips and nail, Cleansing cream, Lotions, Face powders, Face packs, Lipsticks, Bath products, soaps and baby product,
Preparation and standardization of the following : Tonic, Bleaches, Dentifrices and Mouth washes & Tooth Pastes, Cosmetics for Nails.
it describes the bony anatomy including the femoral head , acetabulum, labrum . also discusses the capsule , ligaments . muscle that act on the hip joint and the range of motion are outlined. factors affecting hip joint stability and weight transmission through the joint are summarized.
How to Fix the Import Error in the Odoo 17Celine George
An import error occurs when a program fails to import a module or library, disrupting its execution. In languages like Python, this issue arises when the specified module cannot be found or accessed, hindering the program's functionality. Resolving import errors is crucial for maintaining smooth software operation and uninterrupted development processes.
This presentation was provided by Steph Pollock of The American Psychological Association’s Journals Program, and Damita Snow, of The American Society of Civil Engineers (ASCE), for the initial session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session One: 'Setting Expectations: a DEIA Primer,' was held June 6, 2024.
This presentation includes basic of PCOS their pathology and treatment and also Ayurveda correlation of PCOS and Ayurvedic line of treatment mentioned in classics.
Assessment and Planning in Educational technology.pptxKavitha Krishnan
In an education system, it is understood that assessment is only for the students, but on the other hand, the Assessment of teachers is also an important aspect of the education system that ensures teachers are providing high-quality instruction to students. The assessment process can be used to provide feedback and support for professional development, to inform decisions about teacher retention or promotion, or to evaluate teacher effectiveness for accountability purposes.
A Strategic Approach: GenAI in EducationPeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
2. The General Method
The greedy method is the most straightforward design technique.
Most of these problems have n inputs and require us to obtain a subset that satisfies some constraints.
Any subset that satisfies the constraints is called a feasible solution.
3. We need to find a feasible solution that either maximizes or minimizes the given objective function.
The greedy method suggests an algorithm that works in stages, considering one input at a time.
At each stage a decision is made regarding whether the particular input belongs to an optimal solution.
4. This is done by selecting the inputs in a particular order determined by some selection procedure.
If the insertion of the next input into the partially constructed optimal solution would result in an infeasible solution, the input is not added to the solution. Otherwise, it is added.
6. The function Select selects an input from a[] and removes it; the selected input's value is assigned to x. Feasible is a Boolean-valued function that determines whether x can be included in the solution vector. The function Union combines x with the solution and updates the objective function.
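The staged procedure described above can be written as a generic skeleton. This is a minimal sketch, not code from the slides; the parameters select, feasible and union are hypothetical stand-ins for the problem-specific functions just described.

```python
# Generic greedy skeleton: the three problem-specific functions are
# passed in as parameters (hypothetical names mirroring the slides).
def greedy(inputs, select, feasible, union):
    solution = []
    candidates = list(inputs)
    while candidates:
        x = select(candidates)        # pick the best-looking remaining input
        candidates.remove(x)
        if feasible(solution, x):     # keep x only if the solution stays feasible
            solution = union(solution, x)
    return solution

# Toy usage: greedily pick the largest numbers whose sum stays within a limit.
picked = greedy([4, 7, 2, 5],
                select=max,
                feasible=lambda s, x: sum(s) + x <= 10,
                union=lambda s, x: s + [x])
# picked == [7, 2]
```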
7. Knapsack Problem
We are given n objects and a knapsack (or bag). Object i has a weight wi, and the knapsack has a capacity m. If a fraction xi, 0 ≤ xi ≤ 1, of object i is placed into the knapsack, then a profit of pi·xi is earned.
The objective is to obtain a filling of the knapsack that maximizes the total profit earned. Since the knapsack capacity is m, we require the total weight of all chosen objects to be at most m. Formally the problem can be stated as
maximize ∑ pi·xi (I)
subject to ∑ wi·xi ≤ m (II)
and 0 ≤ xi ≤ 1, 1 ≤ i ≤ n (III)
The profits and weights are positive numbers.
A feasible solution (or filling) is any vector (x1, x2, ..., xn) satisfying (II) and (III) above. An optimal solution is a feasible solution for which (I) is maximized.
10. Exercise
1. Find an optimal solution to the knapsack instance n = 4, m = 40, (p1,p2,p3,p4) = (20,40,35,45) and (w1,w2,w3,w4) = (20,25,10,15).
Strategy 1: consider the objects in increasing order of weight (w3,w4,w1,w2)

Remaining capacity   Object   Weight   Fraction xi included
40-10=30             3        10       1
30-15=15             4        15       1
15-15=0              1        20       15/20 = 3/4

Solution vector (x1,x2,x3,x4) = (3/4, 0, 1, 1)
Profit = ∑pi·xi = 20·3/4 + 40·0 + 35·1 + 45·1 = 95
11. Strategy 2: consider the objects in decreasing order of profit (p4,p2,p3,p1) = (45,40,35,20)

Remaining capacity   Object   Weight   Fraction xi included
40-15=25             4        15       1
25-25=0              2        25       1

Solution vector (x1,x2,x3,x4) = (0, 1, 0, 1)
Profit = ∑pi·xi = 20·0 + 40·1 + 35·0 + 45·1 = 85
12. Strategy 3: consider the objects in decreasing order of profit/weight ratio (pi/wi)
(p3/w3) > (p4/w4) > (p2/w2) > (p1/w1)

Remaining capacity   Object   Weight   Fraction xi included
40-10=30             3        10       1
30-15=15             4        15       1
15-15=0              2        25       15/25 = 3/5

Solution vector (x1,x2,x3,x4) = (0, 3/5, 1, 1)
Profit = ∑pi·xi = 20·0 + 40·3/5 + 35·1 + 45·1 = 104
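The winning ratio-based strategy above (Strategy 3) can be sketched in a few lines. This is a minimal illustration assuming profits p, weights w, and capacity m as in the exercise; the function name is ours, not from the slides.

```python
# Fractional knapsack by decreasing profit/weight ratio (Strategy 3).
def fractional_knapsack(p, w, m):
    # Consider objects in decreasing order of pi/wi.
    order = sorted(range(len(p)), key=lambda i: p[i] / w[i], reverse=True)
    x = [0.0] * len(p)          # solution vector of fractions
    remaining = m
    for i in order:
        if w[i] <= remaining:   # the object fits entirely
            x[i] = 1.0
            remaining -= w[i]
        else:                   # take only the fraction that fills the knapsack
            x[i] = remaining / w[i]
            break
    profit = sum(p[i] * x[i] for i in range(len(p)))
    return x, profit

x, profit = fractional_knapsack([20, 40, 35, 45], [20, 25, 10, 15], 40)
# x == [0.0, 0.6, 1.0, 1.0], profit == 104.0 -- matching the table above
```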
13. 0/1 Knapsack Problem
In this version an item cannot be broken: each object must be taken as a whole or not at all. Hence it is called the 0/1 knapsack problem.
Each item is either taken or not taken.
We cannot take a fractional amount of an item, or take an item more than once.
The greedy approach does not ensure an optimal solution.
Hence, in the 0/1 knapsack problem, the value of xi can be either 0 or 1, while the other constraints remain the same.
14. Find an optimal solution to the knapsack instance n = 4, m = 40, (p1,p2,p3,p4) = (20,40,35,45) and (w1,w2,w3,w4) = (20,25,10,15).
Strategy 1: consider the objects in increasing order of weight (w3,w4,w1,w2)

Remaining capacity   Object   Weight   Fraction xi considered
40-10=30             3        10       1
30-15=15             4        15       1

Even if the capacity of the knapsack is not full, we do not consider the next object (object 1), since it does not fit.
Solution vector (x1,x2,x3,x4) = (0, 0, 1, 1)
Profit = ∑pi·xi = 20·0 + 40·0 + 35·1 + 45·1 = 80
15. Strategy 2: consider the objects in decreasing order of profit (p4,p2,p3,p1)

Remaining capacity   Object   Weight   Fraction xi considered
40-15=25             4        15       1
25-25=0              2        25       1

The knapsack is now full, so the remaining objects (object 3 and object 1) are not considered.
Solution vector (x1,x2,x3,x4) = (0, 1, 0, 1)
Profit = ∑pi·xi = 20·0 + 40·1 + 35·0 + 45·1 = 85
16. Strategy 3: consider the objects in decreasing order of profit/weight ratio
(p3/w3, p4/w4, p2/w2, p1/w1)

Remaining capacity   Object   Weight   Fraction xi considered
40-10=30             3        10       1
30-15=15             4        15       1

Even if the capacity of the knapsack is not full, we do not consider the next object (object 2), since it does not fit.
Solution vector (x1,x2,x3,x4) = (0, 0, 1, 1)
Profit = ∑pi·xi = 20·0 + 40·0 + 35·1 + 45·1 = 80
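None of the three greedy strategies is guaranteed to be optimal for the 0/1 case; as noted earlier in the deck, the 0/1 knapsack problem is solved with dynamic programming. A minimal one-dimensional DP sketch (our own illustration, assuming integer weights) on the same instance:

```python
# 0/1 knapsack via bottom-up dynamic programming.
def knapsack_01(p, w, m):
    # dp[c] = best profit achievable with capacity c using items seen so far
    dp = [0] * (m + 1)
    for pi, wi in zip(p, w):
        # Iterate capacities downward so each item is used at most once.
        for c in range(m, wi - 1, -1):
            dp[c] = max(dp[c], dp[c - wi] + pi)
    return dp[m]

best = knapsack_01([20, 40, 35, 45], [20, 25, 10, 15], 40)
# best == 85 (objects 2 and 4), so greedy Strategy 1 and Strategy 3,
# which both yield 80, are suboptimal here
```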
17. MST – Minimum Spanning Tree
Given a connected undirected graph, we would like to find the "cheapest" connected version of that graph.
Remove all extra edges, leaving just enough to stay connected – the result is a tree.
A subgraph T = (V, E') of G = (V, E) is a spanning tree if T is a tree that includes all vertices of G and a subset E' of the edges.
Find the tree that has the smallest sum of edge lengths.
Given G = (V, E) and edge weights we, the tree T = (V, E') with E' ⊆ E that minimizes ∑e∈E' we is called a minimum spanning tree.
It is not necessarily unique.
Many applications – cheapest phone network, etc.
18. Prim's Algorithm
Prim's algorithm constructs a minimum spanning tree through a sequence of expanding subtrees.
It starts by selecting an arbitrary vertex of the graph.
On each iteration the tree expands in a greedy manner by attaching the nearest vertex not yet in the tree.
The cost adjacency matrix gives the distance from the present vertex to all other vertices in the graph.
If a vertex is not reachable from the current vertex, the distance is given as ∞.
19. Prim's Algorithm
// Assume that G is a connected, weighted graph
// Input: the cost adjacency matrix C and the number of vertices n
// Output: minimum-weight spanning tree T
Algorithm Prims(C, n)
{
    for i = 1 to n do
        visited[i] = 0
    u = 1
    visited[u] = 1
    while there are still unchosen vertices do
    {
        let (u, v) be the lightest edge between any chosen u and unchosen v
        visited[v] = 1
        T = union(T, <u, v>)
    }
    return T
}
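The pseudocode above can be made concrete with a priority queue over candidate edges. This is a minimal sketch, not the slides' code: it assumes a cost adjacency matrix C where C[i][j] is float('inf') when there is no edge, and uses Python's heapq instead of a linear scan for the lightest edge.

```python
import heapq

INF = float('inf')

# Prim's algorithm on a cost adjacency matrix C with n vertices (0-indexed).
def prim(C, n):
    visited = [False] * n
    visited[0] = True                       # start from an arbitrary vertex
    tree = []                               # edges (u, v, cost) of the MST
    heap = [(C[0][v], 0, v) for v in range(1, n) if C[0][v] < INF]
    heapq.heapify(heap)
    while heap and len(tree) < n - 1:
        cost, u, v = heapq.heappop(heap)    # lightest edge leaving the tree
        if visited[v]:
            continue
        visited[v] = True
        tree.append((u, v, cost))
        for w in range(n):                  # new candidate edges from v
            if not visited[w] and C[v][w] < INF:
                heapq.heappush(heap, (C[v][w], v, w))
    return tree

C = [[INF, 2,   3,   INF],
     [2,   INF, 1,   4],
     [3,   1,   INF, 5],
     [INF, 4,   5,   INF]]
tree = prim(C, 4)
# total weight == 7: edges (0,1,2), (1,2,1), (1,3,4)
```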
21. KRUSKAL'S ALGORITHM
This algorithm finds the minimum cost spanning tree of a connected undirected graph.
In the algorithm,
-> E is the set of edges in graph G
-> G has n vertices
-> cost[u,v] is the cost of edge (u,v)
-> T is the set of edges in the minimum cost spanning tree
22. Step 1: Arrange the edges in increasing order of weight.
Step 2: Consider each vertex as an independent component.
Step 3: Accept an edge if its endpoints belong to different components (so it does not form a cycle); reject it otherwise.
Step 4: Repeat step 3 until a single component containing all vertices is obtained.
23. Algorithm Kruskal(E, n)
{
    // E is the list of edges
    // n is the number of vertices in the given graph G
    sort E in increasing order of edge weight;
    initially T = ∅;
    while (T does not contain n-1 edges) do
    {
        find the minimum cost edge not yet considered in E and call it (u, v);
        if (u, v) does not form a cycle then
            T = T + (u, v)
        else
            delete (u, v)
    }
    return T
}
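The cycle test in the pseudocode is usually implemented with disjoint sets (union-find), as the worked example that follows suggests. A minimal sketch, ours rather than the slides' code, run here on the edge list from the later trace (vertices numbered 1..7):

```python
# Kruskal's algorithm with union-find. Edges are (u, v, cost) triples;
# n is the number of usable vertex indices (here 8, since vertex 0 is unused).
def kruskal(edges, n):
    parent = list(range(n))
    def find(x):                        # representative of x's component
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    T = []
    for u, v, cost in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:                    # different components: no cycle, accept
            parent[ru] = rv
            T.append((u, v, cost))
        # else: same component, edge rejected
    return T

edges = [(1, 2, 1), (2, 3, 2), (4, 5, 3), (6, 7, 3),
         (1, 4, 4), (2, 5, 4), (4, 7, 4), (3, 5, 5)]
T = kruskal(edges, 8)
# 6 edges accepted (|V| - 1 for 7 vertices); (2,5) and (3,5) rejected
```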
24. Kruskal’s Algorithm
[Figure: a weighted graph on vertices 1–7.]
Make a disjoint set for each vertex: {1} {2} {3} {4} {5} {6} {7}
33. Kruskal’s Algorithm
[Figure: the same weighted graph on vertices 1–7.]
Edges in increasing order of weight:
1: {1,2}
2: {2,3}
3: {4,5}
3: {6,7}
4: {1,4}
4: {2,5}
4: {4,7}
5: {3,5}
Components after each step:
{1,2} {3} {4} {5} {6} {7}
{1,2,3} {4} {5} {6} {7}
{1,2,3} {4,5} {6} {7}
{1,2,3} {4,5} {6,7}
{1,2,3,4,5} {6,7}
edge {2,5} rejected (both endpoints in the same component)
{1,2,3,4,5,6,7} done
Done when all vertices are in one set – then they are all connected, using exactly |V| - 1 edges.
34. Dijkstra’s Algorithm – Single Source Shortest Path
The algorithm finds the shortest path from a given vertex to all other vertices in a digraph.
The length of a path is the sum of the costs of the edges on the path.
The algorithm finds the shortest path from the source S to every other vertex to which there is a path.
It first finds the shortest path to the nearest vertex, then to the second nearest using intermediate nodes, and so on.
Before the ith iteration the algorithm has found the shortest paths to the (i-1) vertices nearest to the source.
35. // V = set of vertices
// C = cost adjacency matrix of digraph G(V, E)
// n = number of vertices in the given graph
// D[i] = current shortest distance to vertex i
// C[i][j] is the cost of going from vertex i to j; if there is no edge,
// assume C[i][j] = ∞, and C[i][i] = 0
Algorithm Dijkstra(V, C, D, n)
{
    S = {1}
    for i = 2 to n do
        D[i] = C[1, i]
    for i = 1 to n-1 do
    {
        choose a vertex w in V-S such that D[w] is minimum
        S = S ∪ {w}                       // add w to S
        for each vertex v in V-S do
            D[v] = min(D[v], D[w] + C[w][v])
    }
}
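The same idea is commonly implemented with a min-heap instead of scanning V-S for the minimum. A minimal sketch (our illustration, not the slides' code), assuming an adjacency dict of nonnegative edge costs; the small test graph is a hypothetical fragment loosely modelled on the A–G example that follows:

```python
import heapq

# Heap-based Dijkstra: graph is {u: [(v, cost), ...]} with nonnegative costs.
def dijkstra(graph, source):
    D = {source: 0}                 # best known distances
    S = set()                       # vertices whose distance is final
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in S:
            continue                # stale heap entry
        S.add(u)
        for v, cost in graph.get(u, []):
            if v not in D or d + cost < D[v]:
                D[v] = d + cost     # shorter path to v found through u
                heapq.heappush(heap, (D[v], v))
    return D

graph = {'A': [('B', 2), ('D', 1)],
         'D': [('C', 2), ('G', 4)],
         'G': [('F', 1)],
         'C': [('F', 5)]}
dist = dijkstra(graph, 'A')
# dist['F'] == 6, via A -> D -> G -> F (min(3+5, 5+1))
```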
36. Single-Source Shortest Path Problem
Single-source shortest path problem – the problem of finding shortest paths from a source vertex v to all other vertices in the graph.
38. Dijkstra's Algorithm
Dijkstra's algorithm is a solution to the single-source shortest path problem in graph theory.
It works on both directed and undirected graphs; however, all edges must have nonnegative weights.
Input: weighted graph G = (V, E) and source vertex v ∈ V, such that all edge weights are nonnegative.
Output: lengths of the shortest paths (or the shortest paths themselves) from the given source vertex v ∈ V to all other vertices.
39. Approach
The algorithm computes, for each vertex v, the distance to v from the start vertex, that is, the weight of a shortest path between the start vertex and v.
The algorithm keeps track of the set of vertices for which the distance has already been finalized, called S.
Every vertex has a label D associated with it. For any vertex v, D[v] stores an approximation of the distance from the start vertex to v. The algorithm updates D[v] whenever it finds a shorter path to v.
When a vertex w is added to S, its label D[w] equals the actual (final) distance between the start vertex and w.
40. Example: Initialization
[Figure: a weighted digraph on vertices A–G.]
V = {A, B, C, D, E, F, G}
Distance(source) = 0, S = {A}
Pick the vertex in the list with minimum distance: find the nearest vertex to the source, i.e., D.
42. Example: Consider the vertex with minimum distance
[Figure: the same graph; D has label 1.]
Pick the vertex in the list with minimum distance, i.e., D.
43. Example: Update neighbors
[Figure: the same graph with updated labels.]
Using D as intermediate:
Distance(C) = 1 + 2 = 3
Distance(E) = 1 + 2 = 3
Distance(F) = 1 + 8 = 9
Distance(G) = 1 + 4 = 5
44. Example: Continued...
[Figure: the same graph.]
Pick the vertex in the list with minimum distance (B) and update its neighbors.
Note: distance(D) is not updated since D is already known, and distance(E) is not updated since the new value would be larger than the previously computed one.
46. Example: Continued...
[Figure: the same graph.]
Pick the vertex in the list with minimum distance (C) and update its neighbors:
Distance(F) = 3 + 5 = 8
47. Example: Continued...
[Figure: the same graph.]
Pick the vertex in the list with minimum distance (G) and update its neighbors:
Distance(F) = min(8, 5 + 1) = 6   (8 was the previous distance)
48. Example (end)
[Figure: the final graph.]
Pick the vertex not in S with lowest cost (F) and update its neighbors.
49. Job Sequencing with Deadlines

Job       J1   J2    J3   J4   J5
Deadline  2    1     3    2    1
Profit    60   100   20   40   20

Solution
To solve this problem, the given jobs are sorted according to their profit in descending order. Hence, after sorting, the jobs are ordered as shown in the following table.
50.
Job       J2    J1   J4   J3   J5
Deadline  1     2    2    3    1
Profit    100   60   40   20   20

From this set of jobs, we first select J2, as it can be completed within its deadline and contributes the maximum profit.
Next, J1 is selected as it gives more profit than J4.
In the next clock slot, J4 cannot be selected as its deadline is over; hence J3 is selected, as it executes within its deadline.
Job J5 is discarded as it cannot be executed within its deadline.
Thus, the solution is the sequence of jobs (J2, J1, J3), which are executed within their deadlines and give the maximum profit.
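The procedure just described can be sketched directly: sort by profit, then place each job in the latest free time slot on or before its deadline. This is a minimal illustration of the greedy scheme on the example instance; the function name is ours.

```python
# Greedy job sequencing with deadlines.
def job_sequencing(jobs):
    # jobs: list of (name, deadline, profit); sort by profit, descending
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)
    max_deadline = max(d for _, d, _ in jobs)
    slot = [None] * (max_deadline + 1)      # slot[t] = job run in time slot t
    for name, deadline, profit in jobs:
        for t in range(deadline, 0, -1):    # try the latest free slot first
            if slot[t] is None:
                slot[t] = name
                break                       # job placed; otherwise discarded
    return [j for j in slot[1:] if j is not None]

seq = job_sequencing([('J1', 2, 60), ('J2', 1, 100), ('J3', 3, 20),
                      ('J4', 2, 40), ('J5', 1, 20)])
# seq == ['J2', 'J1', 'J3'], total profit 180, matching the slide
```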