This document discusses several algorithms for finding single-source shortest paths in graphs:
1) The Bellman-Ford algorithm handles graphs with negative edge weights and can also detect negative cycles. It runs in O(VE) time.
2) For directed acyclic graphs (DAGs), topological sorting followed by relaxation yields the shortest paths from a single source in O(V+E) time.
3) Dijkstra's algorithm uses a greedy approach to find single-source shortest paths in graphs with non-negative edge weights.
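As an illustrative sketch of item 3 (the graph representation and names below are my own, not from the document), Dijkstra's greedy relaxation with a binary-heap priority queue might look like:

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths; adj maps vertex -> [(neighbor, weight)].
    Assumes all edge weights are non-negative."""
    dist = {v: float('inf') for v in adj}
    dist[source] = 0
    pq = [(0, source)]                  # (distance estimate, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                 # stale queue entry; u already settled
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:         # relaxation step
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

adj = {'s': [('a', 2), ('b', 5)], 'a': [('b', 1)], 'b': []}
dijkstra(adj, 's')                      # {'s': 0, 'a': 2, 'b': 3}
```

Each vertex is settled when it is popped with its final distance, which is why the greedy choice is safe only for non-negative weights.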
The branch-and-bound method solves optimization problems by traversing a state space tree, computing a bound at each node to determine whether the node is promising. A naive approach traverses the tree breadth-first; a better approach, best-first search, always expands the most promising node as ranked by a bounding heuristic. The traveling salesperson problem is solved with branch-and-bound by finding an initial tour, defining a bounding heuristic as the actual cost so far plus a lower bound on the remaining cost, and expanding promising nodes in best-first order until the minimal tour is found.
The document discusses different single-source shortest path algorithms. It begins by defining shortest paths and different variants of shortest path problems. It then describes Dijkstra's algorithm and the Bellman-Ford algorithm for solving the single-source shortest paths problem, the latter even in graphs with negative edge weights. Dijkstra's algorithm uses relaxation and a priority queue to efficiently solve the problem in graphs with non-negative edge weights. Bellman-Ford can handle negative edge weights but requires multiple relaxation passes to converge. Pseudocode and examples are provided to illustrate the algorithms.
The document discusses the algorithm for constructing an optimal binary search tree. It involves calculating the cost table C and root table R based on the probabilities of the keys. The cost C[i,j] is calculated recursively as the minimum over all roots k (i ≤ k ≤ j) of C[i,k-1] + C[k+1,j] + the sum of the probabilities from i to j. This is done by iterating over subtrees of increasing size until the overall minimum cost C[1,n] is found, with the corresponding roots stored in R. The example computes the optimal binary search tree for keys A, B, C, D with probabilities 0.1, 0.2, 0.4, 0.3, finding a minimum cost of 1.7.
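The table-filling procedure just described can be sketched in Python (the function and variable names are my own, but the recurrence and the A–D example follow the summary):

```python
def optimal_bst(p):
    """Expected search cost of an optimal BST over keys 1..n with access
    probabilities p (1-indexed internally; the root is at depth 1)."""
    n = len(p)
    prob = [0.0] + p
    cost = [[0.0] * (n + 2) for _ in range(n + 2)]   # cost[i][i-1] = 0 (empty tree)
    root = [[0] * (n + 2) for _ in range(n + 2)]
    for length in range(1, n + 1):                   # subtrees of increasing size
        for i in range(1, n - length + 2):
            j = i + length - 1
            w = sum(prob[i:j + 1])                   # probability mass of keys i..j
            best, best_k = float('inf'), i
            for k in range(i, j + 1):                # try every root k
                c = cost[i][k - 1] + cost[k + 1][j] + w
                if c < best:
                    best, best_k = c, k
            cost[i][j], root[i][j] = best, best_k
    return cost[1][n], root

min_cost, roots = optimal_bst([0.1, 0.2, 0.4, 0.3])
# min_cost ≈ 1.7, and roots[1][4] == 3: key C ends up at the top, as in the example
```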
This document discusses the greedy algorithm approach and the knapsack problem. It defines greedy algorithms as choosing locally optimal solutions at each step in hopes of reaching a global optimum. The knapsack problem is described as packing items into a knapsack to maximize total value without exceeding weight capacity. An optimal knapsack algorithm is presented that sorts by value-to-weight ratio and fills highest ratios first. An example applies this to maximize profit of 440 by selecting full quantities of items B and A, and half of item C for a knapsack with capacity of 60.
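A minimal sketch of the greedy fill by value-to-weight ratio (the item data below is hypothetical, not the document's table of items A, B, and C):

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs. Greedily take the highest
    value/weight ratio first, splitting the last item if needed."""
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)       # whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total

# Hypothetical items (value, weight) with capacity 50: ratios are 6, 5, 4,
# so the first two are taken whole and 20/30 of the third is taken.
fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50)   # ≈ 240
```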
The document discusses the greedy method algorithmic approach. It provides an overview of greedy algorithms including that they make locally optimal choices at each step to find a global optimal solution. The document also provides examples of problems that can be solved using greedy methods like job sequencing, the knapsack problem, finding minimum spanning trees, and single source shortest paths. It summarizes control flow and applications of greedy algorithms.
The document describes the backtracking method for solving problems that require finding optimal solutions. Backtracking involves building a solution one component at a time and using bounding functions to prune partial solutions that cannot lead to an optimal solution. It then provides examples of applying backtracking to solve the 8 queens problem by placing queens on a chessboard with no attacks. The general backtracking method and a recursive backtracking algorithm are also outlined.
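The component-at-a-time construction and pruning described above can be sketched for the queens problem as follows (a generic backtracking counter, not the document's pseudocode):

```python
def solve_queens(n):
    """Enumerate placements of n non-attacking queens by backtracking.
    cols[r] is the column of the queen already placed in row r."""
    solutions = []

    def safe(cols, r, c):
        # bounding function: reject a column already used or a shared diagonal
        return all(c != cc and abs(r - rr) != abs(c - cc)
                   for rr, cc in enumerate(cols))

    def place(cols):
        r = len(cols)
        if r == n:                       # all rows filled: a full solution
            solutions.append(tuple(cols))
            return
        for c in range(n):
            if safe(cols, r, c):         # prune partial solutions that attack
                cols.append(c)
                place(cols)
                cols.pop()               # backtrack

    place([])
    return solutions

len(solve_queens(8))                     # 92 distinct solutions
```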
This document discusses algorithms analysis and recurrence relations. It begins by defining recurrences as equations that describe a function in terms of its value on smaller inputs. Solving recurrences is important for determining an algorithm's actual running time. Several methods for solving recurrences are presented, including iteration, substitution, recursion trees, and the master method. Examples are provided to demonstrate each technique. Overall, the document provides an overview of recurrences and their analysis to determine algorithmic efficiency.
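Where a worked instance helps, the master method mentioned above can be stated and applied to the merge-sort recurrence (this is the standard formulation, not quoted from the document):

```latex
\begin{aligned}
&T(n) = a\,T(n/b) + f(n), \qquad a \ge 1,\; b > 1.\\[4pt]
&\text{Case 1: } f(n) = O\!\left(n^{\log_b a - \epsilon}\right) \;\Rightarrow\; T(n) = \Theta\!\left(n^{\log_b a}\right)\\
&\text{Case 2: } f(n) = \Theta\!\left(n^{\log_b a}\right) \;\Rightarrow\; T(n) = \Theta\!\left(n^{\log_b a}\log n\right)\\
&\text{Case 3: } f(n) = \Omega\!\left(n^{\log_b a + \epsilon}\right) \text{ (with regularity)} \;\Rightarrow\; T(n) = \Theta(f(n))\\[4pt]
&\text{Example (merge sort): } T(n) = 2T(n/2) + \Theta(n),\;\; \log_2 2 = 1
\;\Rightarrow\; T(n) = \Theta(n \log n).
\end{aligned}
```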
This document discusses the concept of dynamic programming. It provides examples of dynamic programming problems including assembly line scheduling and matrix chain multiplication. The key steps of a dynamic programming problem are: (1) characterize the optimal structure of a solution, (2) define the problem recursively, (3) compute the optimal solution in a bottom-up manner by solving subproblems only once and storing results, and (4) construct an optimal solution from the computed information.
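Step (3), the bottom-up computation over subproblems, can be illustrated with matrix chain multiplication (a sketch with assumed dimensions, not the document's example):

```python
def matrix_chain_order(p):
    """p: dimension list; matrix i is p[i-1] x p[i]. Returns the minimum
    number of scalar multiplications to compute the full product."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j]: cost of chain i..j
    for length in range(2, n + 1):              # solve subchains shortest-first
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))  # best split point k
    return m[1][n]

# Assumed dimensions: A1 is 10x30, A2 is 30x5, A3 is 5x60.
matrix_chain_order([10, 30, 5, 60])              # 4500: parenthesize as (A1·A2)·A3
```

Each subproblem is solved exactly once and its result stored in `m`, which is the point of step (3).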
The document discusses the knapsack problem and greedy algorithms. It defines the knapsack problem as an optimization problem where, given constraints and an objective function, the goal is to find the feasible solution that maximizes or minimizes the objective. It describes the knapsack problem as having two versions: 0-1, where items are indivisible, and fractional, where items can be divided. The fractional knapsack problem can be solved using a greedy approach by sorting items by value-to-weight ratio and filling the knapsack accordingly until full.
1) The document describes the divide-and-conquer algorithm design paradigm. It can be applied to problems where the input can be divided into smaller subproblems, the subproblems can be solved independently, and the solutions combined to solve the original problem.
2) Binary search is provided as an example divide-and-conquer algorithm. It works by recursively dividing the search space in half and only searching the subspace containing the target value.
3) Finding the maximum and minimum elements in an array is also solved using divide-and-conquer. The array is divided into two halves, the max/min found for each subarray, and the overall max/min determined by comparing the subsolutions.
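The two divide-and-conquer examples above admit short sketches (the function names are my own):

```python
def binary_search(a, target, lo=0, hi=None):
    """Return the index of target in sorted list a, or -1 if absent."""
    if hi is None:
        hi = len(a) - 1
    if lo > hi:                          # empty subspace: not found
        return -1
    mid = (lo + hi) // 2
    if a[mid] == target:
        return mid
    if a[mid] < target:                  # recurse into the half that can contain it
        return binary_search(a, target, mid + 1, hi)
    return binary_search(a, target, lo, mid - 1)

def max_min(a, lo, hi):
    """Return (max, min) of a[lo..hi] by dividing, solving halves, combining."""
    if lo == hi:
        return a[lo], a[lo]
    mid = (lo + hi) // 2
    max1, min1 = max_min(a, lo, mid)
    max2, min2 = max_min(a, mid + 1, hi)
    return max(max1, max2), min(min1, min2)

binary_search([1, 3, 5, 7, 9], 7)        # 3
max_min([4, -2, 9, 1], 0, 3)             # (9, -2)
```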
The document describes external sorting techniques used when data is too large to fit in main memory. It discusses two-way sorting which uses two tape drive pairs to alternately write sorted runs. It also covers multi-way merging which merges multiple runs simultaneously using a heap. The techniques can improve performance over standard internal sorting.
Given two integer arrays val[0...n-1] and wt[0...n-1] that represent the values and weights of n items respectively, find the maximum-value subset of val[] such that the sum of the weights of the subset is smaller than or equal to the knapsack capacity W. Here the branch-and-bound algorithm for this problem is discussed.
Booth's algorithm is a method for multiplying two signed binary integers, represented in two's complement, using fewer additions and subtractions than the straightforward shift-and-add approach. The algorithm loads the multiplicand and multiplier into registers, initializes a third (accumulator) register to 0, and repeatedly examines adjacent bits of the multiplier to decide whether to add the multiplicand, subtract it, or do nothing, followed by an arithmetic right shift of the register pair. This process builds up the product one bit at a time.
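A hedged Python model of that register-level process (the register width n and the names A, Q, M are the conventional ones, not taken from the document):

```python
def booth_multiply(m, q, n=8):
    """Multiply two n-bit two's-complement integers with Booth's algorithm.
    A is the accumulator, Q holds the multiplier, M the multiplicand."""
    mask = (1 << n) - 1
    A, Q, M, q_1 = 0, q & mask, m & mask, 0
    for _ in range(n):
        pair = ((Q & 1) << 1) | q_1      # current multiplier bit and previous bit
        if pair == 0b10:                 # 1->0 transition: A = A - M
            A = (A - M) & mask
        elif pair == 0b01:               # 0->1 transition: A = A + M
            A = (A + M) & mask
        # arithmetic right shift of the combined register [A, Q, q_1]
        q_1 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (n - 1))) & mask
        sign = A >> (n - 1)
        A = ((A >> 1) | (sign << (n - 1))) & mask
    result = (A << n) | Q                # 2n-bit product in A:Q
    if result >> (2 * n - 1):            # reinterpret as signed
        result -= 1 << (2 * n)
    return result

booth_multiply(-3, 4, n=4)               # -12
```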
(1) Dynamic programming is an algorithm design technique that solves problems by breaking them down into smaller subproblems and storing the results of already solved subproblems. (2) It is applicable to problems where subproblems overlap and solving them recursively would result in redundant computations. (3) The key steps of a dynamic programming algorithm are to characterize the optimal structure, define the problem recursively in terms of optimal substructures, and compute the optimal solution bottom-up by solving subproblems only once.
The document discusses the divide and conquer algorithm design technique. It begins by explaining the basic approach of divide and conquer which is to (1) divide the problem into subproblems, (2) conquer the subproblems by solving them recursively, and (3) combine the solutions to the subproblems into a solution for the original problem. It then provides merge sort as a specific example of a divide and conquer algorithm for sorting a sequence. It explains that merge sort divides the sequence in half recursively until individual elements remain, then combines the sorted halves back together to produce the fully sorted sequence.
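The divide/conquer/combine steps just described for merge sort can be sketched as:

```python
def merge_sort(a):
    """Divide the list in half, recursively sort each half, merge."""
    if len(a) <= 1:                           # base case: already sorted
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # combine: merge two sorted runs
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append whichever run remains

merge_sort([5, 2, 4, 7, 1, 3, 2, 6])          # [1, 2, 2, 3, 4, 5, 6, 7]
```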
a. Concept and Definition ✓
b. Inserting and Deleting nodes ✓
c. Linked implementation of a stack (PUSH/POP) ✓
d. Linked implementation of a queue (Insert/Remove) ✓
e. Circular List
• Stack as a circular list (PUSH/POP) ✓
• Queue as a circular list (Insert/Remove) ✓
f. Doubly Linked List (Insert/Remove) ✓
For more course related material:
https://github.com/ashim888/dataStructureAndAlgorithm/
Personal blog
www.ashimlamichhane.com.np
Recursion is a process where an object is defined in terms of smaller versions of itself. It involves a base case, which is the simplest definition that cannot be further reduced, and a recursive case that defines the object for larger inputs in terms of smaller ones until the base case is reached. Examples where recursion is commonly used include defining mathematical functions, number sequences, data structures, and language grammars. While recursion can elegantly solve problems, iterative algorithms are generally more efficient.
Greedy algorithms work by making locally optimal choices at each step to arrive at a global optimal solution. They require that the problem exhibit the greedy choice property and optimal substructure. Examples that can be solved with greedy algorithms include the fractional knapsack problem, minimum spanning trees, and activity selection. The fractional knapsack problem is solved greedily by sorting items by value/weight ratio and filling the knapsack in that order, taking a fraction of the last item if needed. The 0/1 knapsack problem differs in that items are indivisible.
This document discusses strongly connected components (SCCs) in directed graphs. It defines an SCC as a maximal set of vertices where each vertex is mutually reachable from every other. The algorithm works by running DFS on the original graph and its transpose to find SCCs, processing vertices in decreasing finishing time order from the first DFS. Vertices belonging to the same DFS tree in the second graph traversal are in the same SCC.
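The two-pass DFS procedure described (Kosaraju's algorithm) might be sketched as follows; the example graph is my own:

```python
def kosaraju_scc(graph):
    """graph: dict vertex -> list of successors. Returns a list of SCCs (sets)."""
    order, visited = [], set()

    def dfs1(u):                          # first pass: record finishing order
        visited.add(u)
        for v in graph[u]:
            if v not in visited:
                dfs1(v)
        order.append(u)

    for u in graph:
        if u not in visited:
            dfs1(u)

    transpose = {u: [] for u in graph}    # reverse every edge
    for u in graph:
        for v in graph[u]:
            transpose[v].append(u)

    sccs, assigned = [], set()

    def dfs2(u, comp):                    # second pass on the transpose
        assigned.add(u)
        comp.add(u)
        for v in transpose[u]:
            if v not in assigned:
                dfs2(v, comp)

    for u in reversed(order):             # decreasing finishing time
        if u not in assigned:
            comp = set()
            dfs2(u, comp)
            sccs.append(comp)             # one DFS tree = one SCC
    return sccs

g = {'a': ['b'], 'b': ['c'], 'c': ['a', 'd'], 'd': []}
kosaraju_scc(g)                           # [{'a', 'b', 'c'}, {'d'}]
```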
Bit-pair recoding is a technique that halves the maximum number of additions needed for multiplication by recoding the multiplier bits into pairs. For each bit pair, a selection table indicates which multiple of the multiplicand (0, +M, -M, +2M, or -2M) to add to the partial product. This allows multiplication to be performed with at most n/2 additions, where n is the number of bits in the multiplier.
Strassen's algorithm improves upon the basic matrix multiplication algorithm. It can multiply two N x N matrices using only 7 multiplications instead of the 8 required by basic multiplication. This is accomplished by dividing the matrices into sub-matrices and recursively applying a divide-and-conquer approach to multiply the sub-matrices using fewer operations than basic multiplication through combinations of additions, subtractions, and multiplications. The algorithm provides an asymptotic improvement over basic matrix multiplication.
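For reference, the seven Strassen products and the recombination of the quadrants are the standard formulas consistent with the description above (not quoted from the document):

```latex
\begin{aligned}
M_1 &= (A_{11}+A_{22})(B_{11}+B_{22}) & M_5 &= (A_{11}+A_{12})\,B_{22}\\
M_2 &= (A_{21}+A_{22})\,B_{11}        & M_6 &= (A_{21}-A_{11})(B_{11}+B_{12})\\
M_3 &= A_{11}(B_{12}-B_{22})          & M_7 &= (A_{12}-A_{22})(B_{21}+B_{22})\\
M_4 &= A_{22}(B_{21}-B_{11})          & &\\[4pt]
C_{11} &= M_1+M_4-M_5+M_7 & C_{12} &= M_3+M_5\\
C_{21} &= M_2+M_4         & C_{22} &= M_1-M_2+M_3+M_6
\end{aligned}
```

The resulting recurrence is $T(n) = 7\,T(n/2) + \Theta(n^2) = \Theta(n^{\log_2 7}) \approx \Theta(n^{2.81})$, the asymptotic improvement the summary refers to.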
Artificial Intelligence - Search Algorithms, by Syed Ahmed
This document provides an overview of search techniques for problem solving. It discusses formulating problems as search tasks by defining states, operators, an initial state, and a goal test. It also covers uninformed search methods like breadth-first, depth-first, and iterative deepening, as well as informed search using heuristics. Example problems discussed include the vacuum world, 8-puzzle, 8-queens, and traveling salesman problem. State spaces and search graphs are used to represent problems formally.
A double-ended queue (DEQueue) allows nodes to be added and removed from both the front and rear ends. It supports four basic operations - insertion and deletion from the front and rear. The document describes the DEQueue data structure and provides pseudocode for handling cases when inserting or deleting from an empty versus non-empty queue.
This document provides information about priority queues and binary heaps. It defines a binary heap as a nearly complete binary tree where the root node has the maximum/minimum value. It describes heap operations like insertion, deletion of max/min, and increasing/decreasing keys. The time complexity of these operations is O(log n). Heapsort, which uses a heap data structure, is also covered and has overall time complexity of O(n log n). Binary heaps are often used to implement priority queues and for algorithms like Dijkstra's and Prim's.
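The heap operations listed can be exercised with Python's `heapq` module, which implements a binary min-heap (a sketch of the API usage, not the document's pseudocode):

```python
import heapq

# Priority-queue operations: insert and extract-min are both O(log n).
pq = []
for key in [7, 2, 9, 4]:
    heapq.heappush(pq, key)
smallest = heapq.heappop(pq)              # 2

# Heapsort: O(n) build-heap, then n extractions of O(log n) each -> O(n log n).
def heapsort(items):
    heapq.heapify(items)                  # build-heap in place
    return [heapq.heappop(items) for _ in range(len(items))]

heapsort([5, 1, 4, 2])                    # [1, 2, 4, 5]
```

This min-heap is the same structure Dijkstra's and Prim's algorithms use for their priority queues.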
This document provides an outline for a course on digital logic design. It includes the course title and credit hours, topics that will be covered such as Boolean algebra, logic gates, combinational and sequential circuits, programmable logic devices, and memory. It also lists recommended textbooks and provides the grading breakdown. Examples of analogue and digital quantities, signals, and number systems are given. Common logic gates such as AND, OR, NOT, NAND and NOR are described along with their truth tables and applications. Combinational circuits, functional devices, sequential circuits and memory are also introduced.
The document discusses shortest path problems and algorithms. It defines the shortest path problem as finding the minimum-weight path between two vertices in a weighted graph. It presents the Bellman-Ford algorithm, which can handle graphs with negative edge weights and also detects negative cycles. It also presents Dijkstra's algorithm, which only works for graphs with non-negative edge weights. Key steps of the algorithms include initialization, relaxation of edges to update distance estimates, and ensuring the shortest path property is satisfied.
The document discusses algorithms for solving single-source shortest path problems on directed graphs. It begins by defining the problem and describing the Bellman-Ford algorithm, which can handle graphs with negative edge weights but runs in O(VE) time. It then describes how Dijkstra's algorithm improves this to O(E log V) time using a priority queue, but requires non-negative edge weights. It also discusses how shortest paths can be found in O(V+E) time on directed acyclic graphs by relaxing edges in topological order. Finally, it discusses how systems of difference constraints can be modeled as shortest path problems on a corresponding constraint graph.
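The O(V+E) DAG case mentioned above can be sketched by combining a topological sort (Kahn's algorithm here, my choice, not the document's) with a single pass of edge relaxation:

```python
from collections import deque

def dag_shortest_paths(graph, source):
    """graph: dict u -> list of (v, w). Relax edges in topological order.
    Works even with negative weights, since the graph is acyclic."""
    indeg = {u: 0 for u in graph}
    for u in graph:
        for v, _ in graph[u]:
            indeg[v] += 1
    queue = deque(u for u in graph if indeg[u] == 0)   # Kahn's algorithm
    topo = []
    while queue:
        u = queue.popleft()
        topo.append(u)
        for v, _ in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)

    dist = {u: float('inf') for u in graph}
    dist[source] = 0
    for u in topo:                                     # one relaxation sweep
        if dist[u] < float('inf'):
            for v, w in graph[u]:
                dist[v] = min(dist[v], dist[u] + w)
    return dist

dag = {'s': [('a', 3), ('b', 6)], 'a': [('b', -2)], 'b': []}
dag_shortest_paths(dag, 's')                           # {'s': 0, 'a': 3, 'b': 1}
```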
Bellman-Ford is an algorithm for finding shortest paths in a graph with positive or negative edge weights. It uses dynamic programming to iteratively update the shortest path costs from a source node to all other nodes. If the shortest path costs have not converged after |V|-1 iterations, there must be a negative weight cycle present in the graph. The Bellman-Ford algorithm returns both the shortest path costs and a predecessor graph that implicitly represents the shortest paths.
Here, we look at the problem of going from a source s to possibly multiple destinations. The lemmas, theorems, and corollaries used to prove the properties of
1. Bellman-Ford
2. Dijkstra
are examined in detail.
Overview of Single Source Shortest Path
Types of Single Source Shortest Path Algorithm
Representation of Single Source Shortest Path
Initialization
Relaxation
Implementation of Dijkstra's Algorithm
Does Dijkstra’s Algorithm Always Work?
Implementation of Bellman-Ford Algorithm
Negative Weight Cycles in Bellman-Ford Algorithm
Graphs: Finding shortest paths
Topics:
Definitions
Floyd-Warshall algorithm
Dijkstra algorithm
Bellman-Ford algorithm
Teaching material for the course of "Tecniche di Programmazione" at Politecnico di Torino in year 2012/2013. More information: http://bit.ly/tecn-progr
The document discusses algorithms for finding shortest paths in graphs. It describes Dijkstra's algorithm and Bellman-Ford algorithm for solving the single-source shortest paths problem and Floyd-Warshall algorithm for solving the all-pairs shortest paths problem. Dijkstra's algorithm uses a priority queue to efficiently find shortest paths from a single source node to all others, assuming non-negative edge weights. Bellman-Ford handles graphs with negative edge weights but is slower. Floyd-Warshall finds shortest paths between all pairs of nodes in a graph.
This document provides an overview of the Bellman-Ford algorithm for finding single-source shortest paths in a weighted graph. It begins with an outline of the lecture topics, then provides context on weighted graphs and shortest path problems. The key properties of shortest paths and edge relaxation are described. The Bellman-Ford algorithm is presented as relaxing all edges |V|-1 times, since any shortest path that contains no cycle has at most |V|-1 edges. Examples are provided and it is shown that the algorithm runs in O(VE) time and O(V+E) space. Correctness of the algorithm is discussed based on earlier theorems regarding shortest path properties.
The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single source vertex to all of the other vertices in a weighted digraph. It is slower than Dijkstra's algorithm for the same problem, but more versatile, as it is capable of handling graphs in which some of the edge weights are negative numbers.
Algorithm Design and Complexity - Course 10Traian Rebedea
The document provides an overview of algorithms for finding shortest paths in graphs. It discusses Dijkstra's algorithm and the Bellman-Ford algorithm for solving the single-source shortest paths (SSSP) problem. Dijkstra's algorithm uses a greedy approach and only works for graphs with non-negative edge weights, while Bellman-Ford can handle graphs with negative edge weights but requires more iterations to relax all edges. The document also covers properties like optimal substructure, triangle inequality, and initialization procedures that are common to SSSP algorithms.
This document discusses algorithms for solving the single-source shortest path problem: Dijkstra's algorithm, the Bellman-Ford algorithm, and shortest paths in directed acyclic graphs (DAGs). It reviews the key idea of relaxation and how Bellman-Ford and Dijkstra's algorithm use relaxation to iteratively improve upper bounds on shortest-path distances. Running times are analyzed: Dijkstra's algorithm takes O(E log V) time and Bellman-Ford takes O(VE) time.
2. Outline
General Lemmas and Theorems.
CLRS now does this last. We’ll still do it first.
Bellman-Ford algorithm.
DAG algorithm.
Dijkstra’s algorithm.
We will skip Section 24.4.
3. General Results (Relaxation)
Corollary: Let p = SP from s to v, where p = s ⇝ u → v. Then,
δ(s, v) = δ(s, u) + w(u, v).
Lemma 24.1: Let p = ‹v1, v2, …, vk› be a SP from v1 to vk. Then,
pij = ‹vi, vi+1, …, vj› is a SP from vi to vj, where 1 ≤ i ≤ j ≤ k.
So, we have the optimal-substructure property.
Bellman-Ford's algorithm uses dynamic programming.
Dijkstra's algorithm uses the greedy approach.
Let δ(u, v) = weight of SP from u to v.
Lemma 24.10: Let s ∈ V. For all edges (u, v) ∈ E, we have
δ(s, v) ≤ δ(s, u) + w(u, v).
4. Relaxation
Algorithms keep track of d[v] and π[v], initialized as follows:
Initialize(G, s)
for each v ∈ V[G] do
  d[v] := ∞;
  π[v] := NIL
od;
d[s] := 0
These values are changed when an edge (u, v) is relaxed:
Relax(u, v, w)
if d[v] > d[u] + w(u, v) then
  d[v] := d[u] + w(u, v);
  π[v] := u
fi
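The two routines above translate directly into Python. This is a sketch, not from the slides: representing the edge weights as a dict keyed by (u, v) pairs is my assumption.

```python
import math

def initialize(vertices, s):
    """Initialize(G, s): d[v] := infinity and pi[v] := NIL for all v; d[s] := 0."""
    d = {v: math.inf for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    return d, pi

def relax(u, v, w, d, pi):
    """Relax(u, v, w): improve the estimate d[v] via the edge (u, v) if possible."""
    if d[v] > d[u] + w[(u, v)]:
        d[v] = d[u] + w[(u, v)]
        pi[v] = u

# Tiny illustration: relaxing (s, a) with w(s, a) = 3 sets d[a] = 3, pi[a] = s.
d, pi = initialize(["s", "a"], "s")
relax("s", "a", {("s", "a"): 3}, d, pi)
```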
5. Properties of Relaxation
d[v], if not ∞, is the length of some path from s to v.
d[v] either stays the same or decreases with time.
Therefore, if d[v] = δ(s, v) at any time, this holds thereafter.
Note that d[v] ≥ δ(s, v) always.
After i iterations of relaxing on all (u, v), if the shortest
path to v has i edges, then d[v] = δ(s, v).
Comp 122, Fall 2003 (Single-source SPs)
6. Properties of Relaxation
Consider any algorithm in which d[v] and π[v] are first initialized
by calling Initialize(G, s) [s is the source], and are only changed by
calling Relax. We have:
Lemma 24.11: (∀v :: d[v] ≥ δ(s, v)) is an invariant.
Implies d[v] doesn't change once d[v] = δ(s, v).
Proof:
Initialize(G, s) establishes the invariant. If a call to Relax(u, v, w)
changes d[v], then it establishes:
d[v] = d[u] + w(u, v)
     ≥ δ(s, u) + w(u, v) , invariant holds before the call
     ≥ δ(s, v) , by Lemma 24.10.
Corollary 24.12: If there is no path from s to v, then
d[v] = δ(s, v) = ∞ is an invariant.
7. More Properties
Lemma 24.13: Immediately after relaxing edge (u, v) by calling
Relax(u, v, w), we have d[v] ≤ d[u] + w(u, v).
Lemma 24.14: Let p = SP from s to v, where p = s ⇝ u → v.
If d[u] = δ(s, u) holds at any time prior to calling Relax(u, v, w),
then d[v] = δ(s, v) holds at all times after the call.
Proof:
After the call we have:
d[v] ≤ d[u] + w(u, v) , by Lemma 24.13
     = δ(s, u) + w(u, v) , d[u] = δ(s, u) holds
     = δ(s, v) , by the corollary to Lemma 24.1.
By Lemma 24.11, d[v] ≥ δ(s, v), so d[v] = δ(s, v).
8. Bellman-Ford returns a compact representation of the
set of shortest paths from s to all other vertices in the
graph reachable from s. This is contained in the
predecessor subgraph.
9. Predecessor Subgraph
Lemma 24.16: Assume the given graph G has no negative-weight cycles
reachable from s. Let Gπ = the predecessor subgraph. Gπ is always a
tree with root s (i.e., this property is an invariant).
Proof:
Two proof obligations:
(1) Gπ is acyclic.
(2) There exists a unique path from source s to each vertex in Vπ.
Proof of (1):
Suppose there exists a cycle c = ‹v0, v1, …, vk›, where v0 = vk.
We have π[vi] = vi-1 for i = 1, 2, …, k.
Assume relaxation of (vk-1, vk) created the cycle.
We show the cycle has negative weight.
Note: The cycle must be reachable from s. (Why?)
10. Proof of (1) (Continued)
Before the call to Relax(vk-1, vk, w):
π[vi] = vi-1 for i = 1, …, k–1.
Implies d[vi] was last updated by "d[vi] := d[vi-1] + w(vi-1, vi)"
for i = 1, …, k–1. [Because Relax updates π.]
Implies d[vi] ≥ d[vi-1] + w(vi-1, vi) for i = 1, …, k–1.
Because π[vk] is changed by the call, d[vk] > d[vk-1] + w(vk-1, vk). Thus,
  Σi=1..k d[vi] > Σi=1..k (d[vi-1] + w(vi-1, vi))
              = Σi=1..k d[vi-1] + Σi=1..k w(vi-1, vi).
Because Σi=1..k d[vi] = Σi=1..k d[vi-1] (the same vertices appear in both sums),
  Σi=1..k w(vi-1, vi) < 0,
i.e., a negative-weight cycle!
11. Comment on Proof
d[vi] ≥ d[vi-1] + w(vi-1, vi) for i = 1, …, k–1 because when
Relax(vi-1, vi, w) was called, there was an equality, and
d[vi-1] may have gotten smaller by further calls to Relax.
d[vk] > d[vk-1] + w(vk-1, vk) before the last call to Relax
because that last call changed d[vk].
12. Proof of (2)
Proof of (2):
(∀v : v ∈ Vπ :: (∃ a path from s to v)) is an invariant.
So, for any v in Vπ, there is at least 1 path from s to v.
Show there is at most 1 path.
Assume there are 2 paths.
[Figure: two distinct paths from s to v, one through x and one through y, re-converging at a vertex z, which is impossible, since z would then have two predecessors π[z].]
13. Lemma 24.17
Lemma 24.17: Same conditions as before. Call Initialize & repeatedly
call Relax until d[v] = δ(s, v) for all v in V. Then, Gπ is a shortest-path
tree rooted at s.
Proof:
Key proof obligation: For all v in Vπ, the unique simple path p from
s to v in Gπ (the path exists by Lemma 24.16) is a shortest path from s to v
in G.
Let p = ‹v0, v1, …, vk›, where v0 = s and vk = v.
We have d[vi] = δ(s, vi)
and d[vi] ≥ d[vi-1] + w(vi-1, vi) (reasoning as before),
which implies w(vi-1, vi) ≤ δ(s, vi) – δ(s, vi-1).
14. Proof (Continued)
w(p) = Σi=1..k w(vi-1, vi)
     ≤ Σi=1..k (δ(s, vi) – δ(s, vi-1))
     = δ(s, vk) – δ(s, v0)   (telescoping sum)
     = δ(s, vk)              (since δ(s, v0) = δ(s, s) = 0).
So, equality holds and p is a shortest path.
15. Bellman-Ford Algorithm
Can have negative-weight edges. Will "detect" reachable negative-weight
cycles.
Initialize(G, s);
for i := 1 to |V[G]| – 1 do
  for each (u, v) in E[G] do
    Relax(u, v, w)
  od
od;
for each (u, v) in E[G] do
  if d[v] > d[u] + w(u, v) then
    return false
  fi
od;
return true
Time complexity is O(VE).
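The pseudocode transcribes directly into Python. This is a sketch: the edge-list representation is my choice, and the example graph below is my reconstruction of the example worked on the following slides (source z), not something stated explicitly in the text.

```python
import math

def bellman_ford(vertices, edges, s):
    """Relax every edge |V|-1 times, then check for reachable negative cycles.

    edges is a list of (u, v, weight) triples. Returns (ok, d, pi), where
    ok is False iff a negative-weight cycle is reachable from s.
    """
    d = {v: math.inf for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    for _ in range(len(vertices) - 1):      # |V| - 1 passes over all edges
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
                pi[v] = u
    # One extra pass: any edge that can still be relaxed witnesses a negative cycle.
    for u, v, w in edges:
        if d[u] + w < d[v]:
            return False, d, pi
    return True, d, pi

# Example graph reconstructed from the slides that follow (source z):
edges = [("z", "u", 6), ("z", "x", 7), ("u", "v", 5), ("u", "x", 8),
         ("u", "y", -4), ("v", "u", -2), ("x", "v", -3), ("x", "y", 9),
         ("y", "z", 2), ("y", "v", 7)]
ok, d, pi = bellman_ford(["z", "u", "v", "x", "y"], edges, "z")
```

On this graph the algorithm converges to the distances tabulated on slide 22 and returns true (no reachable negative-weight cycle).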
16. So if Bellman-Ford has not converged after |V(G)| – 1
iterations, then there cannot be a shortest-path tree, so
there must be a negative-weight cycle.
17.–21. Example
[Figures: five snapshots of Bellman-Ford running on a graph with vertices z, u, v, x, y (source z) and edge weights 6, 5, –3, 9, 7, 7, 8, –2, –4, 2. The d values after successive passes are 0/∞/∞/∞/∞, then 0/6/∞/7/∞, 0/6/4/7/2, 0/2/4/7/2, and finally 0/2/4/7/–2 for z, u, v, x, y; the per-pass values are tabulated on the next slide.]
22. Another Look
Note: This is essentially dynamic programming.
Let d(i, j) = cost of the shortest path from s to i that is at most j hops.
d(i, j) = 0                                                        if i = s and j = 0
        = ∞                                                        if i ≠ s and j = 0
        = min({d(k, j–1) + w(k, i) : i ∈ Adj(k)} ∪ {d(i, j–1)})    if j > 0

 j\i |  z   u   v   x   y
 ----+--------------------
  0  |  0   ∞   ∞   ∞   ∞
  1  |  0   6   ∞   7   ∞
  2  |  0   6   4   7   2
  3  |  0   2   4   7   2
  4  |  0   2   4   7  –2
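The recurrence can be computed row by row to reproduce the table. A sketch follows; the edge list is my reconstruction of the example graph (source z) and is an assumption, not given explicitly in the text.

```python
import math

# Edge list reconstructed from the example graph (source z); an assumption.
EDGES = [("z", "u", 6), ("z", "x", 7), ("u", "v", 5), ("u", "x", 8),
         ("u", "y", -4), ("v", "u", -2), ("x", "v", -3), ("x", "y", 9),
         ("y", "z", 2), ("y", "v", 7)]

def hop_limited(vertices, edges, s, max_hops):
    """table[(i, j)] = weight of a shortest path from s to i using at most j edges."""
    table = {(i, 0): (0 if i == s else math.inf) for i in vertices}
    for j in range(1, max_hops + 1):
        for i in vertices:
            best = table[(i, j - 1)]              # the {d(i, j-1)} term
            for k, i2, w in edges:                # the {d(k, j-1) + w(k, i)} terms
                if i2 == i:
                    best = min(best, table[(k, j - 1)] + w)
            table[(i, j)] = best
    return table

table = hop_limited(["z", "u", "v", "x", "y"], EDGES, "z", 4)
```

Row j of the computed table matches the state of Bellman-Ford after j passes over the edges (for this edge ordering), which is exactly the correspondence the slide draws.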
23. Lemma 24.2
Lemma 24.2: Assuming no negative-weight cycles are reachable from
s, d[v] = δ(s, v) holds upon termination for all vertices v reachable
from s.
Proof:
Consider a SP p = ‹v0, v1, …, vk›, where v0 = s and vk = v.
Assume k ≤ |V| – 1; otherwise p has a cycle.
Claim: d[vi] = δ(s, vi) holds after the ith pass over the edges.
The proof follows by induction on i.
By Lemma 24.11, once d[vi] = δ(s, vi) holds, it continues to hold.
24. Correctness
Claim: The algorithm returns the correct value.
(Part of Theorem 24.4. Other parts of the theorem follow easily from earlier results.)
Case 1: There is no reachable negative-weight cycle.
Upon termination, we have for all (u, v):
d[v] = δ(s, v) , by Lemma 24.2 (last slide) if v is reachable;
d[v] = δ(s, v) = ∞ otherwise.
     ≤ δ(s, u) + w(u, v) , by Lemma 24.10
     = d[u] + w(u, v).
So, the algorithm returns true.
25. Case 2
Case 2: There exists a reachable negative-weight cycle
c = ‹v0, v1, …, vk›, where v0 = vk.
We have Σi=1..k w(vi-1, vi) < 0. (*)
Suppose the algorithm returns true. Then, d[vi] ≤ d[vi-1] + w(vi-1, vi) for
i = 1, …, k (because Relax didn't change any d[vi]). Thus,
  Σi=1..k d[vi] ≤ Σi=1..k d[vi-1] + Σi=1..k w(vi-1, vi).
But Σi=1..k d[vi] = Σi=1..k d[vi-1].
Can show no d[vi] is infinite. Hence, 0 ≤ Σi=1..k w(vi-1, vi).
Contradicts (*). Thus, the algorithm returns false.
26. Shortest Paths in DAGs
Topologically sort the vertices of G;
Initialize(G, s);
for each u in V[G] (in topological order) do
  for each v in Adj[u] do
    Relax(u, v, w)
  od
od
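A runnable sketch of the DAG algorithm, using a depth-first topological sort. The adjacency-dict representation is my choice, and the example graph is my reconstruction of the DAG on the slides that follow (source s).

```python
import math

def dag_shortest_paths(vertices, adj, s):
    """Topologically sort, then relax each vertex's outgoing edges once: O(V + E)."""
    order, seen = [], set()

    def visit(u):                       # depth-first topological sort
        seen.add(u)
        for v, _ in adj.get(u, []):
            if v not in seen:
                visit(v)
        order.append(u)

    for u in vertices:
        if u not in seen:
            visit(u)
    order.reverse()                     # vertices now in topological order

    d = {v: math.inf for v in vertices}
    d[s] = 0
    for u in order:                     # single pass in topological order
        for v, w in adj.get(u, []):
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    return d

# DAG reconstructed from the example slides (source s); an assumption:
adj = {"r": [("s", 5), ("t", 3)], "s": [("t", 2), ("u", 6)],
       "t": [("u", 7), ("v", 4), ("w", 2)], "u": [("v", -1), ("w", 1)],
       "v": [("w", -2)]}
d = dag_shortest_paths(["r", "s", "t", "u", "v", "w"], adj, "s")
```

Note that negative edge weights are fine here: because each vertex is processed after all of its predecessors, one relaxation pass suffices. Vertex r, which precedes the source in topological order, keeps d[r] = ∞.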
27.–33. Example
[Figures: seven snapshots of the DAG algorithm on vertices r, s, t, u, v, w (drawn in topological order, source s) with edge weights 5, 2, 7, –1, –2, 6, 1, 3, 2, 4. Processing the vertices in order, the d values for s, t, u, v, w evolve as 0/–/–/–/–, then 0/2/6/–/–, 0/2/6/6/4, 0/2/6/5/4, and finally 0/2/6/5/3; r stays at ∞ since it precedes the source.]
34. Dijkstra's Algorithm
Assumes no negative-weight edges.
Maintains a set S of vertices whose SP from s has been determined.
Repeatedly selects u in V–S with minimum SP estimate (greedy choice).
Stores V–S in a priority queue Q.
Initialize(G, s);
S := ∅;
Q := V[G];
while Q ≠ ∅ do
  u := Extract-Min(Q);
  S := S ∪ {u};
  for each v ∈ Adj[u] do
    Relax(u, v, w)
  od
od
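A sketch in Python, with the standard-library heapq standing in for Extract-Min. Instead of a decrease-key operation it pushes duplicate entries and discards stale ones ("lazy deletion"), a common implementation choice. The example graph is my reconstruction of the one on the slides that follow (source s).

```python
import heapq
import math

def dijkstra(vertices, adj, s):
    """Greedy SSSP for non-negative weights: repeatedly settle the closest vertex."""
    d = {v: math.inf for v in vertices}
    d[s] = 0
    pq = [(0, s)]          # priority queue Q of (distance estimate, vertex)
    settled = set()        # the set S of finished vertices
    while pq:
        du, u = heapq.heappop(pq)       # Extract-Min
        if u in settled:
            continue                    # stale entry: u was already settled
        settled.add(u)
        for v, w in adj.get(u, []):     # relax u's outgoing edges
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return d

# Graph reconstructed from the example slides (source s); an assumption:
adj = {"s": [("u", 10), ("x", 5)], "u": [("v", 1), ("x", 2)],
       "v": [("y", 4)], "x": [("u", 3), ("v", 9), ("y", 2)],
       "y": [("s", 7), ("v", 6)]}
d = dijkstra(["s", "u", "v", "x", "y"], adj, "s")
```

With a binary heap this runs in O(E log V) time, matching the running time cited earlier in the document.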
35.–40. Example
[Figures: six snapshots of Dijkstra's algorithm on a graph with vertices s, u, v, x, y (source s) and edge weights 10, 1, 9, 2, 4, 6, 5, 2, 3, 7. The d values for s, u, v, x, y after successive Extract-Min and relaxation steps are 0/∞/∞/∞/∞, then 0/10/∞/5/∞, 0/8/14/5/7, 0/8/13/5/7, 0/8/9/5/7, and finally 0/8/9/5/7.]