1. Advanced Optimization Theory
MED573
Single Variable Optimization Algorithms
Dr. Aditi Sengupta
Department of Mechanical Engineering
IIT (ISM) Dhanbad
Email: aditi@iitism.ac.in
2. Introduction
Single-variable algorithms involve one variable and are the building blocks
for more complex multivariable algorithms.
Two distinct types of algorithms: (i) Direct search methods and (ii) Gradient-based optimization methods.
Direct search methods use only values of the objective function to locate the minimum (and hence, the optimal point).
Gradient-based methods use the first and/or second derivatives of the objective function to locate the minimum.
3. (i) Local optimal point: x* is a local minimum if no point in its neighbourhood has a function value smaller than f(x*).
(ii) Global optimal point: x** is the global minimum if no point in the entire search space has a function value smaller than f(x**).
(iii) Inflection point: x* is an inflection point if the function value increases locally as x increases and decreases locally as x decreases.
Optimality Criteria
4. • For x* to be a local minimum: f′(x*) = 0 (first condition) and f″(x*) > 0 (second condition).
• The first condition alone suggests that x* is either a minimum, a maximum or an inflection point.
• Both conditions together mean that x* is a minimum.
Identifying Local, Global Minima and Inflection Points
5. Suppose that at point x* the first derivative is zero and the order of the first nonzero higher-order derivative is n; then:
• If n is odd, x* is an inflection point.
• If n is even, x* is a local optimum:
a. If that derivative is positive, x* is a local minimum.
b. If that derivative is negative, x* is a local maximum.
Conditions of Optimality
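These conditions can be checked numerically. Below is a minimal Python sketch (my own illustration, not from the slides), using f(x) = x^2 + 54/x, the running example minimized later in these slides, with hand-derived derivatives:

```python
# Optimality check for f(x) = x**2 + 54/x at x* = 3.
# The derivatives f'(x) = 2x - 54/x**2 and f''(x) = 2 + 108/x**3
# are derived by hand from f.
def f_prime(x):
    return 2 * x - 54 / x**2

def f_double_prime(x):
    return 2 + 108 / x**3

x_star = 3.0
print(f_prime(x_star))         # 0.0 -> first condition: stationary point
print(f_double_prime(x_star))  # 6.0 -> second condition: positive, so x* is a local minimum
```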
8. • The minimum of a function is found in two phases:
(i) First, a crude method is used to find lower and upper bounds of the minimum.
(ii) Afterwards, a sophisticated method searches within these limits for the optimal solution with the desired accuracy.
• Two methods: (a) Exhaustive Search Method and (b) Bounding Phase Method.
Bracketing Methods
9. • The optimum of the function is bracketed by calculating function values at a number of equally spaced points.
• The search begins from the lower bound, and three consecutive function values are compared based on the unimodality assumption on the function.
• Based on the comparison, the search is either terminated or continued by replacing one of the three points with a new point.
• The process continues till the minimum is bracketed.
Exhaustive Search Method
f(x) is unimodal in the interval a ≤ x ≤ b if and only if it is monotonic on either side of the optimal point x* in the interval.
10. Step 1: Set x1 = a, Δx = (b − a)/n (n is the number of intermediate points), x2 = x1 + Δx, x3 = x2 + Δx.
Step 2: If f(x1) ≥ f(x2) ≤ f(x3), the minimum point lies in (x1, x3). Terminate!
Else set x1 = x2, x2 = x3, x3 = x2 + Δx and go to Step 3.
Step 3: Is x3 ≤ b? If yes, go to Step 2;
Else no minimum exists in (a, b), or a boundary point (a or b) is the minimum point.
The final interval obtained by this algorithm has an accuracy of 2(b − a)/n, for which (n/2 + 2) function evaluations are necessary.
Exhaustive Search Method: Algorithm
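The steps above can be sketched in Python (a minimal illustration of my own; function and variable names are assumptions, not from the slides):

```python
import math

def f(x):
    """Example objective from the slides: f(x) = x^2 + 54/x (f(0) treated as infinity)."""
    return x * x + 54 / x if x != 0 else math.inf

def exhaustive_search(f, a, b, n):
    """Bracket the minimum of a unimodal f on (a, b) with n intermediate points."""
    dx = (b - a) / n
    x1, x2, x3 = a, a + dx, a + 2 * dx
    while x3 <= b:
        if f(x1) >= f(x2) <= f(x3):       # Step 2: minimum bracketed in (x1, x3)
            return (x1, x3)
        x1, x2, x3 = x2, x3, x3 + dx      # shift the three-point window by dx
    return None                           # no interior minimum in (a, b)

print(exhaustive_search(f, 0, 5, 10))     # (2.5, 3.5), matching the worked example
```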
11.
Exhaustive Search Method: Example
Minimize f(x) = x^2 + 54/x in the interval (0, 5)
The plot shows that the minimum lies at x* = 3, for which f(x*) = 27, f′(x*) = 0 and f″(x*) = 6. Thus, x* = 3 is a local minimum as per the optimality conditions.
Let us bracket the minimum point by evaluating 11 different function values, thus n = 10.
13.
Exhaustive Search Method: Example
Step 1: x1 = a = 0 and b = 5, then Δx = (5 − 0)/10 = 0.5. We set x2 = 0.5 and x3 = 1.
Step 2: Computing function values at x1 to x3:
f(0) = ∞, f(0.5) = 108.25, f(1) = 55
f(x1) > f(x2) > f(x3), thus the minimum does not lie in the interval (0, 1). So, we set x1 = 0.5, x2 = 1, x3 = 1.5 and go to Step 3.
14.
Exhaustive Search Method: Example
Step 3: We can see that x3 < 5, so we go to Step 2. This completes one iteration of the exhaustive search method.
Step 2: We calculate function values again; f(x3) = 38.25.
Again, f(x1) > f(x2) > f(x3), so the minimum does not lie in the interval (0.5, 1.5).
Set x1 = 1, x2 = 1.5, x3 = 2 and move to Step 3.
Step 3: Again x3 < 5, so we go back to Step 2.
15.
Exhaustive Search Method: Example
Step 2: The function value at x3 = 2 is f(x3) = 31. Since f(x1) > f(x2) > f(x3), we continue with Step 3 by setting x1 = 1.5, x2 = 2, x3 = 2.5.
Step 3: At this iteration, x3 < 5, so we go to Step 2.
Step 2: The function value at x3 = 2.5 is f(x3) = 27.85. As before, we find f(x1) > f(x2) > f(x3) and thus we go to Step 3. The new set of points is x1 = 2, x2 = 2.5, x3 = 3.
Step 3: Once again, x3 < 5. Thus, go to Step 2.
16.
Exhaustive Search Method: Example
Step 2: The function value at x3 = 3 is f(x3) = 27. Since f(x1) > f(x2) > f(x3), we go to Step 3 by setting x1 = 2.5, x2 = 3, x3 = 3.5.
Step 3: At this iteration, x3 < 5, so we go to Step 2.
Step 2: Here, f(x3 = 3.5) = 27.68. At this iteration, we have f(x1) > f(x2) < f(x3). We can terminate the algorithm!
Thus, the bound for the minimum x* is (2.5, 3.5). For n = 10, the accuracy of the solution is 2(5 − 0)/10 = 1.
For n = 10,000, the obtained interval is (2.9995, 3.0005).
17. • The bounding phase method is used to bracket the minimum of a unimodal function.
• The algorithm begins with an initial guess and finds a search direction based on two or more function evaluations in the vicinity of the initial guess.
• After this, an exponential search strategy is adopted to reach the optimum.
• Faster than the exhaustive search method.
Bounding Phase Method
18. Step 1: Choose an initial guess x(0) and an increment Δ. Set k = 0.
Step 2: If f(x(0) − |Δ|) ≥ f(x(0)) ≥ f(x(0) + |Δ|), then Δ is positive;
Else if f(x(0) − |Δ|) ≤ f(x(0)) ≤ f(x(0) + |Δ|), then Δ is negative;
Else go to Step 1.
Step 3: Set x(k+1) = x(k) + 2^k Δ.
Step 4: If f(x(k+1)) < f(x(k)), set k = k + 1 and go to Step 3;
Else the minimum lies in the interval (x(k−1), x(k+1)). Terminate!
Bounding Phase Method: Algorithm
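The algorithm above can be sketched in Python (my own minimal illustration; one assumption: where the slides restart with a new guess in Step 2, this sketch simply returns the bracket around x(0), which is valid for a unimodal function):

```python
import math

def f(x):
    """Example objective from the slides: f(x) = x^2 + 54/x."""
    return x * x + 54 / x if x != 0 else math.inf

def bounding_phase(f, x0, delta):
    """Bracket the minimum of a unimodal f starting from an initial guess x0."""
    # Step 2: choose the sign of delta from three function values around x0.
    if f(x0 - abs(delta)) >= f(x0) >= f(x0 + abs(delta)):
        delta = abs(delta)
    elif f(x0 - abs(delta)) <= f(x0) <= f(x0 + abs(delta)):
        delta = -abs(delta)
    else:
        return (x0 - abs(delta), x0 + abs(delta))  # x0 already brackets the minimum
    # Steps 3-4: take exponentially growing steps until f starts to rise.
    k = 0
    x_prev, x = x0, x0                # x_prev holds x(k-1), x holds x(k)
    x_next = x + 2**k * delta         # x(k+1) = x(k) + 2^k * delta
    while f(x_next) < f(x):
        k += 1
        x_prev, x = x, x_next
        x_next = x + 2**k * delta
    return (min(x_prev, x_next), max(x_prev, x_next))

print(bounding_phase(f, 0.6, 0.5))    # approximately (2.1, 8.1), as in the example
```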
19.
Bounding Phase Method: Example
Minimize f(x) = x^2 + 54/x in the interval (0, 5)
The plot shows that the minimum lies at x* = 3, for which f(x*) = 27, f′(x*) = 0 and f″(x*) = 6. Thus, x* = 3 is a local minimum as per the optimality conditions.
Let us bracket the minimum point using the bounding phase method.
20.
Bounding Phase Method: Example
Step 1: We choose the initial guess x(0) = 0.6 and an increment Δ = 0.5. We also set k = 0.
Step 2: Next, we calculate function values to proceed with the algorithm:
f(0.6 − 0.5) = 540.01, f(0.6) = 90.36, f(0.6 + 0.5) = 50.301.
We observe that f(0.1) > f(0.6) > f(1.1). Thus, we set Δ = +0.5.
Step 3: We compute the next guess: x(1) = x(0) + 2^0 Δ = 1.1.
21.
Bounding Phase Method: Example
Step 4: The function value at x(1) is 50.301
which is less than f(x(0)). Next, we set k = 1 and
go to Step 3. One iteration of the bounding
phase algorithm is complete.
Step 3: The next guess is x(2) = x(1) + 2^1 Δ = 2.1.
Step 4: The function value at x(2) is 30.124, which is smaller than f(x(1)). Thus, we set k = 2 and move to Step 3.
Step 3: The next guess is x(3) = x(2) + 2^2 Δ = 4.1.
Step 4: f(x(3)) = 29.981 < f(x(2)) = 30.124, so we set k = 3.
22.
Bounding Phase Method: Example
Step 3: The next guess is x(4) = x(3) + 2^3 Δ = 8.1.
Step 4: The function value at x(4) is 72.277, which is larger than f(x(3)) = 29.981. Thus, we terminate with the obtained interval (2.1, 8.1).
With Δ = 0.5, the bracketing obtained is poor, but the number of function evaluations is only 7.
The bounding phase method approaches the optimum exponentially, but its accuracy is poor.
For the exhaustive search method, the number of iterations required for accurate solutions is large.
23. Sophisticated algorithms are needed after the minimum point is bracketed.
Here, we discuss three such algorithms that work on the principle of region elimination and require fewer function evaluations.
Region-Elimination Methods
Let us consider points x1 and x2 which lie in the interval (a, b) and satisfy x1 < x2. For minimizing unimodal functions, note the following:
(i) If f(x1) > f(x2), then the minimum does not lie in (a, x1).
(ii) If f(x1) < f(x2), then the minimum does not lie in (x2, b).
(iii) If f(x1) = f(x2), then the minimum does not lie in (a, x1) or (x2, b).
24.
Region-Elimination Methods
Consider a unimodal function, as in the figure.
If f(x1) > f(x2), the minimum point x* cannot lie on the left-hand side of x1. Thus, we can eliminate the region (a, x1), and our interval of interest reduces from (a, b) to (x1, b).
If f(x1) < f(x2), the minimum point x* cannot lie on the right-hand side of x2. Thus, we can eliminate the region (x2, b).
If f(x1) = f(x2), we can conclude that the regions (a, x1) and (x2, b) can both be eliminated, with the assumption that there exists only one local minimum in (a, b).
25. • Function values at three equidistant points are considered, which divide the search space into four regions.
• If f(x1) < f(xm), the minimum cannot lie beyond xm. So the interval reduces from (a, b) to (a, xm); the search space reduces by 50%.
• If f(x1) > f(xm), the minimum cannot lie in the interval (a, x1). This reduces the search space only by 25%.
• Next, we compare function values at xm and x2 to further eliminate 25% of the search space.
• The process continues till a small enough interval is obtained.
Interval Halving Method
26.
Interval Halving Method: Algorithm
Step 1: Choose a lower bound a, an upper bound b and a small number ε. Let xm = (a + b)/2, L0 = L = b − a. Compute f(xm).
Step 2: Set x1 = a + L/4, x2 = b − L/4. Compute f(x1) and f(x2).
Step 3: If f(x1) < f(xm), set b = xm and xm = x1; go to Step 5;
Else go to Step 4.
Step 4: If f(x2) < f(xm), set a = xm and xm = x2; go to Step 5;
Else set a = x1, b = x2; go to Step 5.
Step 5: Calculate L = b − a. If |L| < ε, Terminate!
Else go to Step 2.
At every iteration, two function evaluations are performed and the interval reduces by half.
Thus, the interval reduces to (0.5)^(n/2) L0 after n function evaluations.
To achieve an accuracy of ε, the required number of function evaluations n is found from (0.5)^(n/2) (b − a) = ε.
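The algorithm above can be sketched in Python (a minimal illustration of my own, with assumed names, applied to the running example):

```python
def f(x):
    """Example objective from the slides: f(x) = x^2 + 54/x."""
    return x * x + 54 / x

def interval_halving(f, a, b, eps):
    """Shrink the bracket (a, b) by half per iteration using three interior points."""
    xm = (a + b) / 2
    fm = f(xm)
    L = b - a
    while abs(L) >= eps:
        x1, x2 = a + L / 4, b - L / 4
        f1, f2 = f(x1), f(x2)
        if f1 < fm:                # Step 3: minimum lies in (a, xm)
            b, xm, fm = xm, x1, f1
        elif f2 < fm:              # Step 4: minimum lies in (xm, b)
            a, xm, fm = xm, x2, f2
        else:                      # otherwise the minimum lies in (x1, x2)
            a, b = x1, x2
        L = b - a
    return (a, b)

lo, hi = interval_halving(f, 0, 5, 1e-3)
print(lo, hi)                      # a bracket of width < 1e-3 around x* = 3
```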
27.
Interval Halving Method: Example
Minimize f(x) = x^2 + 54/x in the interval (0, 5)
The plot shows that the minimum lies at x* = 3, for which f(x*) = 27, f′(x*) = 0 and f″(x*) = 6. Thus, x* = 3 is a local minimum as per the optimality conditions.
Let us solve this unimodal, single-variable problem using the interval halving method.
28.
Interval Halving Method: Example
Step 1: We choose a = 0, b = 5 and ε = 10^-3, where xm is the mid-point of the search interval. Thus, xm = (0 + 5)/2 = 2.5. The initial interval length is L0 = L = 5 − 0 = 5. The function value at xm is f(xm) = 27.85.
Step 2: Set x1 = 0 + 5/4 = 1.25 and x2 = 5 − 5/4 = 3.75. Function values are f(x1) = 44.76 and f(x2) = 28.46.
Step 3: We see that f(x1) > f(xm), so we go to Step 4.
Step 4: Again, we see that f(x2) > f(xm), so the intervals (0, 1.25) and (3.75, 5) are dropped. Next, we set a = 1.25 and b = 3.75.
29.
Interval Halving Method: Example
Step 5: The new interval length is L = 3.75 − 1.25 = 2.5. Since |L| is not small enough, we continue with Step 2. This completes one iteration of the interval halving method.
Step 2: We now compute new x1 and x2:
x1 = 1.25 + 2.5/4 = 1.875, x2 = 3.75 − 2.5/4 = 3.125
Function values are f(x1) = 32.32 and f(x2) = 27.05.
Step 3: We see that f(x1) > f(xm), so we go to Step 4.
Step 4: Here, f(x2) < f(xm), so we eliminate the interval (1.25, 2.5) and set a = 2.5 and xm = 3.125.
Step 5: At the end of the second iteration, the new interval length is L = 3.75 − 2.5 = 1.25. Since |L| is not smaller than ε, we go to Step 2 again.
30.
Interval Halving Method: Example
Step 2: We compute x1 = 2.8125 and x2 = 3.4375 for
which function values are f(x1) = 27.11 and f(x2) =
27.53.
Step 3: We observe that f(x1) > f(xm), so we go to Step
4.
Step 4: Here, f(x2) > f(xm), so we drop the boundary intervals and set a = 2.8125, b = 3.4375.
Step 5: The new interval length is L = 0.625, which is still larger than ε, so the process has to be continued.
31. • The search interval is reduced according to Fibonacci numbers.
• From two consecutive Fibonacci numbers, Fn−2 and Fn−1, the next number is calculated as Fn = Fn−1 + Fn−2, where n = 2, 3, 4, …
• A search algorithm which needs only one function evaluation per iteration.
• The principle of Fibonacci search is that, of the two points needed for region elimination, one is always carried over from the previous iteration.
• Leads to a 38.2% reduction of the search space per iteration, greater than the 25% in the interval halving method.
Fibonacci Search Method
32. • At iteration k, two intermediate points x1 and x2 are chosen, each a distance L*k from either end of the search space (L = b − a).
• When region elimination removes a portion of the search space depending on the function values at x1 or x2, the remaining space is Lk.
• Define L*k = (Fn−k+1/Fn+1)L and Lk = (Fn−k+2/Fn+1)L, such that Lk − L*k = L*(k+1), i.e. one of the points of iteration k remains for iteration (k+1).
• If (a, x2) is eliminated in the kth iteration, x1 is at a distance (Lk − L*k), or L*(k+1), from x2 in the (k+1)th iteration.
• The algorithm usually starts at k = 2.
Fibonacci Search Method
33. Step 1: Choose a lower bound a and an upper bound b. Set L = b − a. Assume the desired number of function evaluations to be n. Set k = 2.
Step 2: Compute L*k = (Fn−k+1/Fn+1)L. Set x1 = a + L*k and x2 = b − L*k.
Step 3: Compute either f(x1) or f(x2) and use the region-elimination rules to eliminate a region. Set new a and b.
Step 4: Is k = n? If no, set k = k + 1 and go to Step 2;
Else Terminate!
The interval reduces to (2/Fn+1)L after n function evaluations. Thus, for a desired accuracy ε, the required number of function evaluations is calculated from 2(b − a)/Fn+1 = ε.
Fibonacci Search Method: Algorithm
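The steps above can be sketched in Python (my own minimal illustration; for clarity it re-evaluates both points each iteration rather than carrying one over, which the real method exploits to save one evaluation per iteration):

```python
def f(x):
    """Example objective from the slides: f(x) = x^2 + 54/x."""
    return x * x + 54 / x

def fibonacci_search(f, a, b, n):
    """Reduce the bracket (a, b) with iterations k = 2..n using Fibonacci fractions."""
    fib = [1, 1]                               # F0, F1
    while len(fib) < n + 2:
        fib.append(fib[-1] + fib[-2])          # build up to F(n+1)
    L = b - a                                  # L is the original interval length
    for k in range(2, n + 1):
        Lk = fib[n - k + 1] / fib[n + 1] * L   # L*_k, measured from either end
        x1, x2 = a + Lk, b - Lk                # x1 is the left point, x2 the right
        if f(x1) > f(x2):
            a = x1                             # eliminate (a, x1)
        else:
            b = x2                             # eliminate (x2, b)
    return (a, b)

print(fibonacci_search(f, 0, 5, 3))            # (2.0, 4.0), as in the worked example
```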
34. Minimize f(x) = x^2 + 54/x
Step 1: We choose a = 0 and b = 5. Thus, the initial interval is L = 5. Set the number of function evaluations as n = 3 and k = 2.
Step 2: We compute L*2 as:
L*2 = (F3−2+1/F3+1)L = (F2/F4) × 5 = (2/5) × 5 = 2
Set x1 = 0 + 2 = 2 and x2 = 5 − 2 = 3.
Step 3: We compute function values, f(x1) = 31 and f(x2) = 27. Since f(x1) > f(x2), we eliminate the region (0, 2). Next, we set a = 2 and b = 5.
Fibonacci Search Method: Example
35. Step 4: Since k = 2 ≠ n = 3, we set k = 3 and go to Step 2. This completes one iteration of the Fibonacci search method.
Step 2: We compute L*3 = (F1/F4)L = (1/5) × 5 = 1. Set x1 = 2 + 1 = 3 and x2 = 5 − 1 = 4.
Step 3: One of the points (x1 = 3) was evaluated in the previous iteration. We only need f(x2 = 4) = 29.5.
We see that f(x1) < f(x2), so we set a = 2 and b = x2 = 4.
Step 4: At this iteration, k = n = 3 and we terminate the algorithm. The final interval is (2, 4).
Fibonacci Search Method: Example
36. • The golden section search method overcomes two problems of the Fibonacci search method:
(i) the calculation and storage of Fibonacci numbers at every iteration;
(ii) the proportion of the eliminated region is uneven.
• The search space (a, b) is linearly mapped onto the unit interval (0, 1).
Golden Section Search Method
• Next, two points, each at a distance τ from either end of the normalized search space, are chosen, so that in every iteration the interval retained is τ times that of the previous iteration and a fraction (1 − τ) is eliminated.
• This is achieved by equating (1 − τ) with τ × τ, giving τ = 0.618.
37. Step 1: Choose a lower bound a, an upper bound b and a small number ε. Normalize x by using w = (x − a)/(b − a). Thus, aw = 0, bw = 1 and Lw = 1. Set k = 1.
Step 2: Set w1 = aw + 0.618 Lw and w2 = bw − 0.618 Lw. Compute f(w1) or f(w2), depending on which of the two was not evaluated earlier. Use the fundamental region-elimination rules to eliminate a region. Set new aw and bw.
Golden Section Search Method: Algorithm
Step 3: Is |Lw| < ε? If no, set k = k + 1 and go to Step 2;
Else Terminate!
The interval reduces to (0.618)^(n−1) after n function evaluations. To achieve an accuracy of ε, we need (0.618)^(n−1) (b − a) = ε.
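The algorithm above can be sketched in Python (my own minimal illustration with assumed names; like the method itself, it reuses the surviving point so only one new function evaluation is needed per iteration):

```python
TAU = 0.618   # golden-section fraction used in the slides (more precisely 0.6180339887...)

def f(x):
    """Example objective from the slides: f(x) = x^2 + 54/x."""
    return x * x + 54 / x

def golden_section(f, a, b, eps):
    """Golden section search on (a, b) in the normalized w-space (0, 1)."""
    fw = lambda w: f(a + w * (b - a))        # evaluate f at the un-normalized point
    aw, bw, Lw = 0.0, 1.0, 1.0
    w1, w2 = aw + TAU * Lw, bw - TAU * Lw    # two interior points, w2 < w1
    f1, f2 = fw(w1), fw(w2)
    while abs(Lw) >= eps:
        if f1 < f2:                          # eliminate (aw, w2)
            aw = w2
            w2, f2 = w1, f1                  # old w1 is reused as the new w2
            Lw = bw - aw
            w1 = aw + TAU * Lw
            f1 = fw(w1)
        else:                                # eliminate (w1, bw)
            bw = w1
            w1, f1 = w2, f2                  # old w2 is reused as the new w1
            Lw = bw - aw
            w2 = bw - TAU * Lw
            f2 = fw(w2)
    return (a + aw * (b - a), a + bw * (b - a))

lo, hi = golden_section(f, 0, 5, 1e-3)
print(lo, hi)                                # a narrow bracket around x* = 3
```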
38. Minimize f(x) = x^2 + 54/x
Step 1: We choose a = 0 and b = 5, so the transformation equation becomes w = x/5. Thus, aw = 0, bw = 1 and Lw = 1. For the transformed variable w, the function is f(w) = 25w^2 + 54/(5w).
In the w-space, the minimum is at w* = 3/5 = 0.6. The iteration counter k is set to 1.
Step 2: Set w1 = 0 + 0.618 = 0.618 and w2 = 1 − 0.618 = 0.382, for which f(w1) = 27.02 and f(w2) = 31.92. As f(w1) < f(w2), the minimum cannot lie below w = 0.382. Thus, we eliminate (aw, w2) or (0, 0.382).
Set aw = 0.382 and bw = 1. At this stage, Lw = 1 − 0.382 = 0.618.
Golden Section Search Method: Example
39. Step 3: Since |Lw| is larger than ε, we set k = 2 and move to Step 2. One iteration of the golden section search method is complete.
Step 2: We set w1 = 0.382 + (0.618)(0.618) = 0.764 and w2 = 1 − (0.618)(0.618) = 0.618.
We only need to calculate f(w1) = 28.73, as f(w2) was calculated in the previous iteration. Since f(w1) > f(w2), we eliminate the region (0.764, 1). Thus, the new bounds are:
aw = 0.382, bw = 0.764 and Lw = 0.764 − 0.382 = 0.382.
Step 3: Since Lw > ε, we go to Step 2 after setting k = 3.
Golden Section Search Method: Example
40. Step 2: We set w1 = 0.618 and w2 = 0.528, of which f(w1) has been calculated before.
We only need to calculate f(w2) = 27.43. Since f(w1) < f(w2), we eliminate the region (0.382, 0.528). Thus, the new bounds are:
aw = 0.528, bw = 0.764 and Lw = 0.764 − 0.528 = 0.236.
Step 3: At the end of the third iteration, Lw = 0.236, which is larger than the prescribed accuracy ε. Steps 2 and 3 have to be continued until the desired accuracy is achieved.
Golden Section Search Method: Example
41. • The methods discussed so far worked directly with function values; the algorithms here require derivative information.
• Gradient-based methods are predominantly used and are found to be effective despite the difficulty of obtaining derivatives in certain situations.
• The optimality property of the gradient going to zero at a local or global optimum is used to terminate the search process.
Gradient-based Methods
42. • The goal of unconstrained local optimization methods is to reach a point where the derivative goes to zero.
• In this method, a linear approximation to the first derivative of the function is made using a Taylor series expansion. This is equated to zero to find the next guess.
• If the current point at iteration t is x(t), the point in the next iteration is governed by the following (only linear terms of the Taylor expansion are retained):
x(t+1) = x(t) − f′(x(t))/f″(x(t))
Newton-Raphson Method
43. Step 1: Choose an initial guess x(1) and a small number ε. Set k = 1. Compute f′(x(1)).
Step 2: Compute f″(x(k)).
Step 3: Calculate x(k+1) = x(k) − f′(x(k))/f″(x(k)). Compute f′(x(k+1)).
Step 4: If |f′(x(k+1))| < ε, Terminate!
Else set k = k + 1 and go to Step 2.
Newton-Raphson Method: Algorithm
44. At x(t), the first and second derivatives are computed with the central difference method as:
f′(x(t)) = [f(x(t) + Δx(t)) − f(x(t) − Δx(t))] / (2Δx(t))   Eq. (1)
f″(x(t)) = [f(x(t) + Δx(t)) − 2f(x(t)) + f(x(t) − Δx(t))] / (Δx(t))^2   Eq. (2)
The parameter Δx(t) is a small value, usually about 1% of x(t):
Δx(t) = 0.01|x(t)|, if |x(t)| > 0.01; 0.0001, otherwise   Eq. (3)
Newton-Raphson Method: Algorithm
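The algorithm, with the numerical derivatives of Eqs. (1)-(3), can be sketched in Python (a minimal illustration of my own; names are assumptions):

```python
def f(x):
    """Example objective from the slides: f(x) = x^2 + 54/x."""
    return x * x + 54 / x

def derivatives(f, x):
    """Central-difference first and second derivatives, Eqs. (1)-(3)."""
    dx = 0.01 * abs(x) if abs(x) > 0.01 else 1e-4    # Eq. (3)
    d1 = (f(x + dx) - f(x - dx)) / (2 * dx)          # Eq. (1)
    d2 = (f(x + dx) - 2 * f(x) + f(x - dx)) / dx**2  # Eq. (2)
    return d1, d2

def newton_raphson(f, x, eps):
    """Iterate x <- x - f'(x)/f''(x) until |f'(x)| < eps."""
    d1, d2 = derivatives(f, x)
    while abs(d1) >= eps:
        x = x - d1 / d2
        d1, d2 = derivatives(f, x)
    return x

print(newton_raphson(f, 1.0, 1e-3))   # converges to x* close to 3, as in the example
```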
45. Minimize f(x) = x^2 + 54/x
Step 1: We choose the initial guess x(1) = 1 and a termination factor ε = 10^-3. Set k = 1. We compute f′(x(1)) using Eq. (1), with Δx(1) computed using Eq. (3) as 0.01.
The computed derivative is −52.005, whereas the exact derivative is −52. So, the computed derivative is accepted and we proceed to Step 2.
Step 2: The exact value of f″(x(1)) is 110, while the value computed using Eq. (2) is 110.011.
Step 3: We compute the next guess as:
x(2) = x(1) − f′(x(1))/f″(x(1)) = 1 − (−52.005)/(110.011) = 1.473.
The derivative f′(x(2)) is computed using Eq. (1) and found to be −21.944.
Newton-Raphson Method: Example
46. Step 4: Since |f′(x(2))| > ε, we set k = 2 and go to Step 2. This completes one iteration of the Newton-Raphson method.
Step 2: We begin the second iteration by computing the second derivative numerically at x(2), which is found to be 35.796.
Step 3: The next guess is computed using the Taylor expansion and found to be x(3) = 2.086. The first derivative is computed using Eq. (1) and is f′(x(3)) = −8.239.
Step 4: Since |f′(x(3))| > ε, we set k = 3 and go to Step 2. This marks the end of the second iteration.
Newton-Raphson Method: Example
47. Step 2: The second derivative at x(3) is f″(x(3)) = 13.899.
Step 3: The next guess is computed as x(4) = 2.679 and the derivative is f′(x(4)) = −2.167.
Step 4: The absolute value of the derivative is not smaller than ε, so the search continues with Step 2.
After three more iterations, it is found that x(7) = 3.0001 and f′(x(7)) = −4 × 10^-8. This is small enough to terminate the algorithm.
Newton-Raphson Method: Example
48. • Computation of the second derivative is avoided; only the first derivative is used. Applicable only to unimodal functions.
• The function value and the sign of the first derivative at two points are used to eliminate a certain portion of the search space.
• The minimum is bracketed in the interval (a, b) using derivative information if two conditions, (i) f′(a) < 0 and (ii) f′(b) > 0, are satisfied.
• Two initial boundary points bracketing the minimum are required, and two consecutive points whose derivatives have opposite signs are chosen for the next iteration.
Bisection Method
49.
Bisection Method: Algorithm
Step 1: Choose two points a and b such that f′(a) < 0 and f′(b) > 0. Also choose a small number ε. Set x1 = a and x2 = b.
Step 2: Calculate z = (x1 + x2)/2 and evaluate f′(z).
Step 3: If |f′(z)| ≤ ε, Terminate!
Else if f′(z) < 0, set x1 = z and go to Step 2;
Else if f′(z) > 0, set x2 = z and go to Step 2.
The sign of the first derivative at the mid-point of the search region is used to eliminate half the region:
If the derivative < 0, the minimum cannot lie in the left half of the search region;
If the derivative > 0, the minimum cannot lie in the right half of the search region.
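The steps above can be sketched in Python (my own minimal illustration; for simplicity it uses the exact hand-derived derivative of the example function, whereas the slides compute f′ numerically via Eq. (1)):

```python
def fprime(x):
    """Exact derivative of the example f(x) = x^2 + 54/x."""
    return 2 * x - 54 / x**2

def bisection(fprime, a, b, eps):
    """Requires f'(a) < 0 < f'(b); halves the bracket using the sign of f' at the midpoint."""
    x1, x2 = a, b
    z = (x1 + x2) / 2
    while abs(fprime(z)) > eps:
        if fprime(z) < 0:
            x1 = z          # minimum lies to the right of z
        else:
            x2 = z          # minimum lies to the left of z
        z = (x1 + x2) / 2
    return z

print(bisection(fprime, 2, 5, 1e-3))   # close to 3, the minimum of f
```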
50.
Bisection Method: Example
Minimize f(x) = x^2 + 54/x
Step 1: Choose two points a = 2 and b = 5, such that f′(a) = −9.501 and f′(b) = 7.841. Also choose a small number ε = 10^-3. Set x1 = a and x2 = b.
Step 2: Calculate z = (x1 + x2)/2 = 3.5 and evaluate f′(z) = 2.591.
Step 3: Since f′(z) > 0, the right half of the search region is eliminated. Next, we set x1 = 2 and x2 = z = 3.5 and go to Step 2. This completes one iteration of the bisection method.
Step 2: We compute z = (x1 + x2)/2 = 2.75 and evaluate f′(z) = −1.641.
51.
Step 3: Since f′(z) < 0, we set x1 = z = 2.75 and x2 = 3.5 and go to Step 2.
Step 2: The new mid-point z is the average of the two bounds: z = 3.125, for which f′(z) = 0.72.
Step 3: Since f′(z) > 0, we set x1 = 2.75 and x2 = z = 3.125 and go to Step 2.
Step 2: The new point z is calculated as z = (x1 + x2)/2 = 2.9375, for which f′(z) = −0.38303.
Step 3: Since f′(z) < 0, we set x1 = z = 2.9375 and x2 = 3.125, and continue till |f′(z)| ≤ ε.
Bisection Method: Example
52. • In the secant method, both the magnitude and the sign of derivatives are used to create a new point.
• As the boundary points have derivatives with opposite signs, and the derivative is assumed to vary linearly between them, there exists a point between these two points with a zero derivative.
• For x1 and x2 with f′(x1) · f′(x2) ≤ 0, the point z with zero derivative of the linear approximation is given by
z = x2 − f′(x2) / [(f′(x2) − f′(x1))/(x2 − x1)]   Eq. (4)
• In one iteration, more than half of the search space may be eliminated, depending on the gradient values.
Secant Method
53.
Secant Method: Algorithm
Step 1: Choose two points a and b such that f′(a) < 0 and f′(b) > 0. Also choose a small number ε. Set x1 = a and x2 = b.
Step 2: Calculate z based on Eq. (4) and evaluate f′(z).
Step 3: If |f′(z)| ≤ ε, Terminate!
Else if f′(z) < 0, set x1 = z and go to Step 2;
Else if f′(z) > 0, set x2 = z and go to Step 2.
The sign of the first derivative at z is used to eliminate a portion of the search region:
If the derivative < 0, the minimum cannot lie to the left of z;
If the derivative > 0, the minimum cannot lie to the right of z.
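The steps above can be sketched in Python (my own minimal illustration; as with the bisection sketch, it uses the exact hand-derived derivative of the example function rather than the numerical Eq. (1)):

```python
def fprime(x):
    """Exact derivative of the example f(x) = x^2 + 54/x."""
    return 2 * x - 54 / x**2

def secant(fprime, a, b, eps):
    """Requires f'(a) < 0 < f'(b); the new point z is the root of a linear fit of f'."""
    x1, x2 = a, b
    # Eq. (4): root of the straight line through (x1, f'(x1)) and (x2, f'(x2))
    z = x2 - fprime(x2) * (x2 - x1) / (fprime(x2) - fprime(x1))
    while abs(fprime(z)) > eps:
        if fprime(z) < 0:
            x1 = z          # minimum lies to the right of z
        else:
            x2 = z          # minimum lies to the left of z
        z = x2 - fprime(x2) * (x2 - x1) / (fprime(x2) - fprime(x1))
    return z

print(secant(fprime, 2, 5, 1e-3))   # close to 3, the minimum of f
```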
54.
Secant Method: Example
Minimize f(x) = x^2 + 54/x
Step 1: We begin with initial points a = 2 and b = 5, having derivatives f′(a) = −9.501 and f′(b) = 7.841 with opposite signs. Also choose a small number ε = 10^-3. Set x1 = a and x2 = b.
Step 2: We now calculate z using Eq. (4) as z = 5 − f′(5)/[(f′(5) − f′(2))/(5 − 2)] = 3.644. Next, we evaluate f′(z) = 3.221.
Step 3: Since f′(z) > 0, we eliminate the right part (z, b) of the search region.
The eliminated region has length (b − z) = 1.356, which is less than half of the search space, (b − a)/2 = 1.5.
Next, we set x1 = 2 and x2 = z = 3.644 and go to Step 2. This completes one iteration of the secant method.
55.
Secant Method: Example
Step 2: The next point is computed using Eq. (4) as z = 3.228. The derivative is evaluated as f′(z) = 1.127.
Step 3: Since f′(z) > 0, we eliminate the right part of the search region, i.e. (3.228, 3.644).
The eliminated region has length 0.416, which is less than half of the previous search space, (3.644 − 2)/2 = 0.822.
Next, we set x1 = 2 and x2 = z = 3.228 and go to Step 2.
Step 2: The new point is z = 3.101. The derivative is evaluated as f′(z) = 0.586.
Step 3: Since |f′(z)| > ε, we continue to Step 2 until the desired accuracy is achieved.