Unit II PPT
2. I. OPTIMIZATION PROBLEM
Learning Objectives:
Ability to formulate a linear program.
Ability to represent the decision space of a linear program graphically.
Ability to find the optimal solution of the linear program from the graphical representation.
Objectives:
Translate word problems into mathematical functions.
Determine the absolute maximum and minimum values of a function over an interval [a, b].
Use a graphing calculator to represent and solve optimization problems.
3. Optimization: The Idea
Transform the program to improve efficiency:
Performance: faster execution.
Size: smaller executable, smaller memory footprint.
Steps to Optimization:
Read the problem.
Reread the problem.
Draw a picture or graph if appropriate.
Identify the given information.
Ask: what quantity needs to be maximized or minimized?
Find an appropriate equation for what needs to be maximized or minimized, and reduce it to one variable.
Reread the question and make sure you have answered what was asked.
4. Optimization Problem:
An optimization problem is the problem of finding the best solution from all feasible solutions. It is specified by a set of instances of the problem and, for each instance, a set of valid solutions.
e.g.
Traveling Salesman Problem (TSP)
Minimum Spanning Tree (MST)
Shortest Path (SP)
Linear Programming (LP)
5. Ingredients:
An optimization problem is specified by defining instances, solutions, and costs.
Instances: The instances are the possible inputs to the problem.
Solutions for an Instance: Each instance has an (often exponentially large) set of solutions. A solution is valid if it meets a set of criteria determined by the instance at hand.
Measure of Success: Each solution has an easy-to-compute cost, value, or measure of success that is to be minimized or maximized.
Specification of an Optimization Problem:
Preconditions: The input is one instance.
Postconditions: The output is one of the valid solutions for this instance with optimal (minimum or maximum, as the case may be) measure of success. (The solution to be output need not be unique.)
6. II. GRAPH SEARCH ALGORITHM:
A "graph" in this context is made up of "vertices" or
"nodes" and lines called edges that connect them.
A graph G = (V,E) is composed of:
V: set of vertices
E: set of edges connecting the vertices in V
An edge e = (u,v) is a pair of vertices
Example: a graph with vertices a, b, c, d, e (figure omitted).
7. Types of graphs
Undirected: edge (u, v) = (v, u); for all v, (v, v) ∉ E (no self-loops).
Directed: (u, v) is an edge from u to v, denoted u → v. Self-loops are allowed.
Weighted: each edge has an associated weight, given by a weight function w : E → R.
Dense: |E| ≈ |V|².
Sparse: |E| ≪ |V|².
8. Oriented (Directed) Graph:
A graph where edges are directed
Directed vs. Undirected Graph:
An undirected graph is one in which the pair of vertices in an edge is unordered: (v0, v1) = (v1, v0).
A directed graph is one in which each edge is a directed pair
of vertices, <v0, v1> != <v1,v0>
9. III. Generic Search Algorithm
This algorithm searches for a solution path in a graph and is independent of any particular graph.
The Reachability Problem:
Preconditions: The input is a graph G (either directed or
undirected) and a source node s.
Postconditions: The output consists of all the nodes u that
are reachable by a path in G from s.
10. Code:
algorithm GenericSearch(G, s)
  <pre-cond>: G is a (directed or undirected) graph, and s is one of its nodes.
  <post-cond>: The output consists of all the nodes u that are reachable by a path in G from s.
begin
  foundHandled = ∅
  foundNotHandled = {s}
  loop
    <loop-invariant>: See LI1, LI2.
    exit when foundNotHandled = ∅
    let u be some node from foundNotHandled
    for each v connected to u
      if v has not previously been found then
        add v to foundNotHandled
      end if
    end for
    move u from foundNotHandled to foundHandled
  end loop
  return foundHandled
end algorithm
11. Procedure Search(G, S, goal)
  Inputs:
    G: graph with nodes N and arcs A
    S: set of start nodes
    goal: Boolean function of states
  Output:
    path from a member of S to a node for which goal is true,
    or ⊥ if there are no solution paths
  Local:
    Frontier: set of paths
  Frontier ← {⟨s⟩ : s ∈ S}
  while (Frontier ≠ {})
    select and remove ⟨s0, ..., sk⟩ from Frontier
    if (goal(sk)) then
      return ⟨s0, ..., sk⟩
    Frontier ← Frontier ∪ {⟨s0, ..., sk, s⟩ : ⟨sk, s⟩ ∈ A}
  return ⊥
12. IV.Breadth-First Search
In breadth-first search the frontier is implemented as a FIFO
(first-in, first-out) queue. Thus, the path that is selected from
the frontier is the one that was added earliest.
13. Pseudocode
Input: A graph G and a root v of G
procedure BFS(G, v):
  create a queue Q
  create a set V
  enqueue v onto Q
  add v to V
  while Q is not empty:
    t ← Q.dequeue()
    if t is what we are looking for:
      return t
    for all edges e in G.adjacentEdges(t) do
      u ← G.adjacentVertex(t, e)
      if u is not in V:
        add u to V
        enqueue u onto Q
  return none
14. Breadth-first search is useful when
• space is not a problem;
• you want to find the solution containing the fewest arcs;
• few solutions may exist, and at least one has a short path length; and
• infinite paths may exist, because it explores all of the search space systematically, level by level.
It is a poor method when all solutions have a long path length or there is some heuristic knowledge available. It is not used very often because of its space complexity.
15. V. Dijkstra’s Algorithm: Finding shortest paths in order
Find shortest paths from source s to all other destinations:
the closest node to s is 1 hop away;
the 2nd closest node to s is 1 hop away from s or w″;
the 3rd closest node to s is 1 hop away from s, w″, or x.
(Figure omitted: example network with nodes s, w, w′, w″, x, x′, z, z′.)
17. Pseudocode
function Dijkstra(Graph, source):
  for each vertex v in Graph:            // Initializations
    dist[v] := infinity                  // Unknown distance from source to v
    previous[v] := undefined             // Previous node in optimal path from source
  end for
  dist[source] := 0                      // Distance from source to source
  Q := the set of all nodes in Graph     // All nodes are unoptimized, thus in Q
  while Q is not empty:                  // The main loop
    u := vertex in Q with smallest distance in dist[]   // Source node in first case
    remove u from Q
    if dist[u] = infinity:
      break                              // All remaining vertices are inaccessible from source
    end if
    for each neighbor v of u:            // Where v has not yet been removed from Q
      alt := dist[u] + dist_between(u, v)
      if alt < dist[v]:                  // Relax (u, v)
        dist[v] := alt
        previous[v] := u
        decrease-key v in Q              // Reorder v in the queue
      end if
    end for
  end while
  return dist
end function
18. VI. Depth-First Search
In depth-first search, the frontier acts like a last-in first-out (LIFO) stack: elements are added to the stack one at a time, and the one selected and taken off the frontier at any time is the last element that was added.
19. Depth-First Search
Algorithm DFS(v)
Input: A vertex v in a graph
Output: A labeling of the edges as “discovery” edges and “back edges”
for each edge e incident on v do
  if edge e is unexplored then
    let w be the other endpoint of e
    if vertex w is unexplored then
      label e as a discovery edge
      recursively call DFS(w)
    else
      label e as a back edge
20. Depth-first search is appropriate when either
• space is restricted;
• many solutions exist, perhaps with long path lengths, particularly for the case
where nearly all paths lead to a solution; or
• the order in which the neighbors of a node are added to the stack can be tuned so that
solutions are found on the first try.
It is a poor method when
• it is possible to get caught in infinite paths; this occurs when the graph is
infinite or when there are cycles in the graph; or
• solutions exist at shallow depth, because in this case the search may look at
many long paths before finding the short solutions.
21. DFS vs. BFS
(Figure omitted: example graph with vertices A through G; start = A, destination = G.)
DFS process: DFS is called on A, then on B, then on C (a dead end), then on D, and finally on G, where the destination is found.
The path is implicitly stored in the DFS recursion.
Path is: A, B, D, G
22. DFS vs. BFS Contd…
(Figure omitted: the same graph; start = A, destination = G.)
BFS process: the initial call adds A to the queue. Dequeue A and add B; dequeue B and add C and D; dequeue C (nothing to add); dequeue D and add G, where the destination is found.
Unlike DFS, the path must be stored separately.
23. VII. Recursive Depth-First Search
Recursive:
Recursion is the process of repeating items in a self-similar way. For instance, when the surfaces of two mirrors are exactly parallel with each other, the nested images that occur are a form of infinite recursion.
A classic example of recursion is the definition of the factorial function, given here in C code:
unsigned int factorial(unsigned int n)
{
    if (n == 0) {
        return 1;
    } else {
        return n * factorial(n - 1);
    }
}