Advanced Optimization Theory
MED573
Single Variable Optimization Algorithms
Dr. Aditi Sengupta
Department of Mechanical Engineering
IIT (ISM) Dhanbad
Email: aditi@iitism.ac.in
Introduction
Single-variable algorithms involve one variable and are the building blocks
for more complex multivariable algorithms.
Two distinct types of algorithms: (i) Direct search methods and (ii) Gradient-based optimization methods.
Direct search methods use values of the objective function to locate the
minimum (and hence, optimal point).
Gradient-based methods use the first and/or second derivatives of objective
function to locate the minimum.
Optimality Criteria
(i) Local optimal point: x* is a local minimum if no point in its neighbourhood has a function value smaller than f(x*).
(ii) Global optimal point: x** is the global minimum if no point in the entire search space has a smaller function value than f(x**).
(iii) Inflection point: x* is an inflection point if the function value increases locally as x increases and decreases locally as x decreases.
Identifying Local, Global Minima and Inflection Points
• For x* to be a local minimum: (i) f'(x*) = 0 and (ii) f''(x*) > 0.
• The first condition alone suggests that x* is either a minimum, a maximum or an inflection point.
• Both conditions together mean x* is a minimum.
Conditions of Optimality
Suppose that at point x* the first derivative is zero and the first nonzero higher-order derivative is of order n; then:
• If n is odd, x* is an inflection point.
• If n is even, x* is a local optimum:
a. If that derivative is positive, x* is a local minimum.
b. If that derivative is negative, x* is a local maximum.
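For example, f(x) = x³ has f'(0) = f''(0) = 0 and its first nonzero derivative is the third (f'''(0) = 6), so n = 3 is odd and x* = 0 is an inflection point. In contrast, f(x) = x⁴ has its first nonzero derivative at order n = 4, with value 24 > 0, so x* = 0 is a local minimum.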
Bracketing Methods
• Minimum of function is found in two phases:
(i) First, a crude method is used to find lower and upper bounds of the minimum.
(ii) Afterwards, a sophisticated method is used to search within these limits for the optimal solution with the desired accuracy.
• Two methods: (a) Exhaustive Search Method and (b) Bounding Phase Method.
Exhaustive Search Method
• Optimum of the function is bracketed by calculating function values at a number of equally spaced points.
• Search begins from the lower bound, and three consecutive function values are compared, based on the unimodality assumption on the function.
• Based on the comparison, the search is either terminated or continued by replacing one of the three points with a new point.
• Process continues till the minimum is bracketed.
f(x) is unimodal in the interval a ≤ x ≤ b iff it is monotonic on either side of the optimal point x* in the interval.
Exhaustive Search Method: Algorithm
Step 1: Set x1 = a, Δx = (b - a)/n (n is the number of intermediate points), x2 = x1 + Δx, x3 = x2 + Δx.
Step 2: If f(x1) ≥ f(x2) ≤ f(x3), the minimum point lies in (x1, x3); Terminate!
Else set x1 = x2, x2 = x3, x3 = x2 + Δx, and go to Step 3.
Step 3: Is x3 ≤ b? If yes, go to Step 2;
Else no minimum exists in (a, b), or a boundary point (a or b) is the minimum point.
The final interval obtained by this algorithm has an accuracy of 2(b - a)/n, for which (n/2 + 2) function evaluations are necessary.
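A minimal MATLAB sketch of this bracketing loop (the function handle f, the bounds and n are inputs; names are illustrative, not from the original slides; for clarity the sketch re-evaluates f instead of caching values, so it uses more evaluations than the (n/2 + 2) count quoted above):

    function [x1, x3] = exhaustive_search(f, a, b, n)
    % Bracket the minimum of a unimodal f on (a, b) using n intermediate points.
    dx = (b - a) / n;
    x1 = a; x2 = x1 + dx; x3 = x2 + dx;
    while x3 <= b
        if f(x1) >= f(x2) && f(x2) <= f(x3)
            return;                          % minimum lies in (x1, x3)
        end
        x1 = x2; x2 = x3; x3 = x2 + dx;      % shift the three points right
    end
    error('No interior minimum in (a, b); a boundary point may be optimal.');
    end

For f = @(x) x.^2 + 54./x with a = 0, b = 5 and n = 10, this returns the bracket (2.5, 3.5), matching the worked example below.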
Exhaustive Search Method: Example
Minimize f(x) = x² + 54/x in the interval (0, 5).
The plot shows that the minimum lies at x* = 3, for which f(x*) = 27, f'(x*) = 0 and f''(x*) = 6. Thus, x* = 3 is a local minimum as per the minimality conditions.
Let us bracket the minimum point by evaluating 11 different function values, i.e. n = 10.
How to Plot a Function in Matlab
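A minimal MATLAB sketch for plotting the running example (not necessarily the slide's original code, which is a screenshot):

    % Plot f(x) = x^2 + 54/x on the interval (0, 5)
    f = @(x) x.^2 + 54./x;          % objective function (vectorized)
    x = linspace(0.1, 5, 500);      % start slightly above 0, where f blows up
    plot(x, f(x), 'LineWidth', 1.5);
    xlabel('x'); ylabel('f(x)');
    title('f(x) = x^2 + 54/x');
    grid on;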
Exhaustive Search Method: Example
Step 1: x1 = a = 0 and b = 5, then Δx = (5 - 0)/10 = 0.5. We set x2 = 0.5 and x3 = 1.
Step 2: Computing function values at x1 to x3: f(0) = ∞, f(0.5) = 108.25, f(1) = 55.
Since f(x1) > f(x2) > f(x3), the minimum does not lie in the interval (0, 1). So, we set x1 = 0.5, x2 = 1, x3 = 1.5 and go to Step 3.
Step 3: We can see that x3 < 5, so we go to
Step 2. This completes one iteration of the
exhaustive search method.
Step 2: We calculate function values again,
f(x3) = 38.25.
Again, f(x1) > f(x2) > f(x3) so minimum does
not lie in interval (0.5, 1.5).
Set x1 = 1, x2 = 1.5, x3 = 2 and move to Step 3.
Step 3: Again x3 < 5, so we have to go back to
Step 2.
Step 2: Function value at x3 = 2 is f(x3) = 31.
Since, f(x1) > f(x2) > f(x3), we continue with
Step 3 by setting x1 = 1.5, x2 = 2, x3 = 2.5.
Step 3: At this iteration, x3 < 5 so we go to
Step 2.
Step 2: Function value at x3 = 2.5 is f(x3) =
27.85. As before, we find f(x1) > f(x2) > f(x3)
and thus we go to Step 3.
New set of points is x1 = 2, x2 = 2.5, x3 = 3
Step 3: Once again, x3 < 5. Thus, go to Step 2.
Step 2: Function value at x3 = 3 is f(x3) = 27.
Since, f(x1) > f(x2) > f(x3), we go to Step 3 by
setting x1 = 2.5, x2 = 3, x3 = 3.5.
Step 3: At this iteration, x3 < 5 so we go to
Step 2.
Step 2: Here, f(x3=3.5) = 27.68. At this
iteration, we have f(x1) > f(x2) < f(x3). We can
terminate the algorithm!
Thus, the bound for the minimum x* is (2.5, 3.5). For n = 10, the accuracy of the solution is 2(5 - 0)/10 = 1.
For n = 10,000, the obtained interval is (2.9995, 3.0005).
Bounding Phase Method
• Bounding phase method is used to bracket the minimum of a unimodal function.
• The algorithm begins with an initial guess and finds a search direction based on two or more function evaluations in the vicinity of the initial guess.
• After this, an exponential search strategy is adopted to reach the optimum.
• Faster than the exhaustive search method.
Bounding Phase Method: Algorithm
Step 1: Choose an initial guess x(0) and an increment Δ. Set k = 0.
Step 2: If f(x(0) - |Δ|) ≥ f(x(0)) ≥ f(x(0) + |Δ|), then Δ is positive;
Else if f(x(0) - |Δ|) ≤ f(x(0)) ≤ f(x(0) + |Δ|), then Δ is negative;
Else go to Step 1.
Step 3: Set x(k+1) = x(k) + 2^k Δ.
Step 4: If f(x(k+1)) < f(x(k)), set k = k+1 and go to Step 3;
Else the minimum lies in interval (x(k-1), x(k+1)) and Terminate!
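A minimal MATLAB sketch of this algorithm (names illustrative; where the slide's Step 2 says "go to Step 1", the sketch simply returns the immediate bracket, since f(x(0)) is then already lower than both neighbours):

    function [lo, hi] = bounding_phase(f, x0, delta)
    % Bracket the minimum of a unimodal f starting from the guess x0.
    d = abs(delta);
    if f(x0 - d) >= f(x0) && f(x0) >= f(x0 + d)
        delta = d;                        % function decreases to the right
    elseif f(x0 - d) <= f(x0) && f(x0) <= f(x0 + d)
        delta = -d;                       % function decreases to the left
    else
        lo = x0 - d; hi = x0 + d;         % x0 already sits below both neighbours
        return;
    end
    k = 0; xprev = x0; xcurr = x0;
    xnext = xcurr + 2^k * delta;
    while f(xnext) < f(xcurr)             % keep doubling the step
        k = k + 1;
        xprev = xcurr; xcurr = xnext;
        xnext = xcurr + 2^k * delta;
    end
    lo = min(xprev, xnext); hi = max(xprev, xnext);
    end

With f = @(x) x.^2 + 54./x, x0 = 0.6 and delta = 0.5, this returns (2.1, 8.1), as in the example that follows.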
Bounding Phase Method: Example
Minimize f(x) = x² + 54/x in the interval (0, 5).
As before, the minimum lies at x* = 3 with f(x*) = 27. Let us bracket the minimum point by using the bounding phase method.
Step 1: We choose initial guess x(0) = 0.6 and an increment Δ = 0.5. We also set k = 0.
Step 2: Next, we calculate function values to
proceed with the algorithm:
f(0.6-0.5) = 540.01, f(0.6) = 90.36, f(0.6+0.5) =
50.301.
We observe that f(0.1) > f(0.6) > f(1.1). Thus, we set Δ = +0.5.
Step 3: We compute the next guess: x(1) = x(0) + 2^0 Δ = 1.1.
Step 4: The function value at x(1) is 50.301
which is less than f(x(0)). Next, we set k = 1 and
go to Step 3. One iteration of the bounding
phase algorithm is complete.
Step 3: The next guess is x(2) = x(1) + 2^1 Δ = 2.1.
Step 4: Function value at x(2) is 30.124 which is
smaller than f(x(1)). Thus, we set k = 2 and
move to Step 3.
Step 3: The next guess is x(3) = x(2) + 2^2 Δ = 4.1.
Step 4: f(x(3)) = 29.981 < f(x(2)) = 30.124, so we set k = 3.
Step 3: The next guess is x(4) = x(3) + 2^3 Δ = 8.1.
Step 4: Function value at x(4) is 72.277 which is
larger than f(x(3)) = 29.981. Thus, we terminate
with the interval obtained as (2.1, 8.1).
With Δ = 0.5, the bracketing obtained is poor, but the number of function evaluations is only 7. The bounding phase method approaches the optimum exponentially, but its accuracy is not good; for the exhaustive search method, the number of iterations required for accurate solutions is large.
Region-Elimination Methods
Sophisticated algorithms are needed after the minimum point is bracketed. Here, we discuss three such algorithms that work on the principle of region elimination and require fewer function evaluations.
Let us consider points x1 and x2 which lie in the interval (a, b) and satisfy x1 < x2. For minimizing unimodal functions, note the following:
(i) If f(x1) > f(x2), the minimum does not lie in (a, x1).
(ii) If f(x1) < f(x2), the minimum does not lie in (x2, b).
(iii) If f(x1) = f(x2), the minimum does not lie in (a, x1) or (x2, b).
Consider a unimodal function f(x) on (a, b) with interior points x1 < x2.
If f(x1) > f(x2), the minimum point x* cannot lie on the l.h.s of
x1. Thus, we can eliminate region (a, x1) and our interval of
interest reduces from (a, b) to (x1, b).
If f(x1) < f(x2), the minimum point x* cannot lie to the r.h.s. of
x2. Thus, we can eliminate region (x2, b).
If f(x1) = f(x2), we can conclude that regions (a, x1) and (x2, b)
can be eliminated with the assumption that there exists only
one local minimum in (a, b).
Interval Halving Method
• Function values at three equidistant points are considered, which divide the search space into four regions.
• If f(x1) < f(xm), the minimum cannot lie beyond xm, so the interval reduces from (a, b) to (a, xm). The search space reduces by 50%.
• If f(x1) > f(xm), the minimum cannot lie in the interval (a, x1). This reduces the search space only by 25%.
• Next, we compare function values at xm and x2 to further eliminate 25% of the search space.
• Process continues till a small enough interval is obtained.
Interval Halving Method: Algorithm
Step 1: Choose a lower bound a, an upper bound b and a small number ε. Let xm = (a + b)/2, Lo = L = b - a. Compute f(xm).
Step 2: Set x1 = a + L/4, x2 = b – L/4. Compute f(x1) and f(x2).
Step 3: If f(x1) < f(xm) set b = xm and xm = x1; go to Step 5
Else go to Step 4.
Step 4: If f(x2) < f(xm) set a = xm and xm = x2; go to Step 5
Else set a = x1, b = x2; go to Step 5.
Step 5: Calculate L = b - a. If |L| < ε, Terminate!
Else go to Step 2.
At every iteration, two function evaluations are performed and the interval is halved. Thus, the interval reduces to (0.5)^(n/2) Lo after n function evaluations.
To achieve an accuracy of ε, the required number of function evaluations n satisfies (0.5)^(n/2) (b - a) = ε.
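A minimal MATLAB sketch of the method (function and variable names are illustrative):

    function [a, b] = interval_halving(f, a, b, eps)
    % Shrink (a, b) by half per iteration using three equidistant interior points.
    xm = (a + b) / 2; fm = f(xm); L = b - a;
    while abs(L) >= eps
        x1 = a + L/4; x2 = b - L/4;
        f1 = f(x1); f2 = f(x2);
        if f1 < fm
            b = xm; xm = x1; fm = f1;     % minimum in (a, xm): drop right half
        elseif f2 < fm
            a = xm; xm = x2; fm = f2;     % minimum in (xm, b): drop left half
        else
            a = x1; b = x2;               % drop both boundary quarters
        end
        L = b - a;
    end
    end

Calling interval_halving(@(x) x.^2 + 54./x, 0, 5, 1e-3) reproduces the iterates of the example below.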
Interval Halving Method: Example
Minimize f(x) = x² + 54/x in the interval (0, 5).
As before, the minimum lies at x* = 3 with f(x*) = 27. Let us solve this unimodal, single-variable problem using the interval halving method.
Step 1: We choose a = 0, b = 5 and ε = 10^-3, where xm is the mid-point of the search interval. Thus, xm = (0 + 5)/2 = 2.5. The initial interval length is Lo = L = 5 - 0 = 5. The function value at xm is f(xm) = 27.85.
Step 2: Set x1 = 0 + 5/4 = 1.25 and x2 = 5 – 5/4 = 3.75.
Function values are f(x1) = 44.76 and f(x2) = 28.46.
Step 3: We see that f(x1) > f(xm), so we go to Step 4.
Step 4: Again, we see that f(x2) > f(xm), so the intervals (0,
1.25) and (3.75, 5) are dropped. Next, we set a = 1.25 and b
= 3.75.
Step 5: The new interval is L = 3.75 – 1.25 = 2.5. Since, |L|
is not small, we continue to Step 2. This completes one
iteration of the interval halving method.
Step 2: We now compute new x1 and x2:
x1 = 1.25 + 2.5/4 = 1.875, x2 = 3.75 – 2.5/4 = 3.125
Function values are f(x1) = 32.32 and f(x2) = 27.05.
Step 3: We see that f(x1) > f(xm), so we go to Step 4.
Step 4: Here, f(x2) < f(xm) so we eliminate interval (1.25,
2.5) and set a = 2.5 and xm = 3.125.
Step 5: At the end of the second iteration, the new interval length is L = 3.75 - 2.5 = 1.25. Since |L| is not smaller than ε, we go to Step 2 again.
Step 2: We compute x1 = 2.8125 and x2 = 3.4375 for
which function values are f(x1) = 27.11 and f(x2) =
27.53.
Step 3: We observe that f(x1) > f(xm), so we go to Step
4.
Step 4: Here, f(x2) > f(xm) so we drop the boundary
intervals and set a = 2.8125, b = 3.4375.
Step 5: The new interval is L = 0.625, which is still larger than ε, so the process has to be continued.
Fibonacci Search Method
• Search interval is reduced according to Fibonacci numbers.
• Starting from F0 = F1 = 1, successive Fibonacci numbers are calculated as Fn = Fn-1 + Fn-2, where n = 2, 3, 4, …
• The algorithm needs only one new function evaluation per iteration.
• The principle of Fibonacci search is that, of the two points needed for region elimination, one is always carried over from the previous iteration.
• This leads to a 38.2% reduction of the search space – greater than the 25% of the interval halving method.
• At iteration k, two intermediate points (x1 and x2), each a distance L*k from either end of the search space (L = b - a), are chosen.
• When region elimination removes a portion of the search space depending on the function values at x1 or x2, the remaining space is Lk.
• Define L*k = (Fn-k+1/Fn+1)L and Lk = (Fn-k+2/Fn+1)L, such that Lk - L*k = L*k+1; thus one of the points of iteration k remains for iteration (k+1).
• If (a, x2) is eliminated in the kth iteration, x1 is at a distance (Lk - L*k), i.e. L*k+1, from x2 in the (k+1)th iteration.
• The algorithm usually starts at k = 2.
Fibonacci Search Method: Algorithm
Step 1: Choose a lower bound a and an upper bound b. Set L = b - a. Assume the desired number of function evaluations to be n. Set k = 2.
Step 2: Compute L*k = (Fn-k+1/Fn+1)L . Set x1 = a + L*k and x2 = b – L*k.
Step 3: Compute either f(x1) or f(x2) and use region-elimination rules to eliminate
a region. Set new a and b.
Step 4: Is k = n? If no, set k = k+1 and go to Step 2
Else Terminate!
The interval reduces to (2/Fn+1)L after n function evaluations. Thus, for a desired accuracy ε, the required number of function evaluations n satisfies 2(b - a)/Fn+1 = ε.
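A minimal MATLAB sketch (with F0 = F1 = 1 stored in F(1) and F(2); for brevity the sketch re-evaluates the reused point instead of carrying its value over, so it spends two evaluations per iteration rather than the single one noted above):

    function [a, b] = fibonacci_search(f, a, b, n)
    % Region elimination driven by Fibonacci numbers; F(i+1) holds F_i.
    F = ones(1, n + 2);
    for i = 3:n + 2
        F(i) = F(i-1) + F(i-2);
    end
    L = b - a;                               % length of the original interval
    for k = 2:n
        Lk = (F(n - k + 2) / F(n + 2)) * L;  % L*k = (Fn-k+1/Fn+1) L
        x1 = a + Lk; x2 = b - Lk;
        if f(x1) < f(x2)
            b = x2;                          % minimum cannot lie in (x2, b)
        elseif f(x1) > f(x2)
            a = x1;                          % minimum cannot lie in (a, x1)
        else
            a = x1; b = x2;                  % keep only (x1, x2)
        end
    end
    end

For the running example, fibonacci_search(@(x) x.^2 + 54./x, 0, 5, 3) returns (2, 4), as derived next.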
Fibonacci Search Method: Example
Minimize f(x) = x² + 54/x
Step 1: We choose a = 0 and b = 5. Thus, initial
interval is L = 5. Set number of function evaluations as
n = 3 and k = 2.
Step 2: We compute L*2 as:
L*2 = (F2/F4)L = (2/5) × 5 = 2.
Set x1 = 0 + 2 = 2 and x2 = 5 – 2 = 3.
Step 3: We compute function values, f(x1) = 31 and
f(x2) = 27. Since f(x1) > f(x2), we eliminate region (0,
2). Next, we set a = 2 and b = 5.
Step 4: Since k = 2 ≠ n = 3, we set k = 3 and go to Step 2. This completes one iteration of the Fibonacci search method.
Step 2: We compute L*3 = (F1/F4)L = (1/5)*5 = 1. Set x1 =
2 + 1 = 3 and x2 = 5 – 1 = 4.
Step 3: One of the points (x1 = 3) was evaluated in
previous iteration. We only need f(x2 = 4) = 29.5.
We see that f(x1) < f(x2), so we set a = 2 and b = x2 = 4.
Step 4: At this iteration, k = n = 3 and we terminate the
algorithm. The final interval is (2,4).
Golden Section Search Method
• Golden section search method overcomes two problems of the Fibonacci search method:
(i) calculation and storage of Fibonacci numbers at every iteration;
(ii) the proportion of the eliminated region is uneven.
• Search space (a, b) is linearly mapped to the unit interval (0, 1).
• Next, two points, each at a distance T × L from either end of the search space, are chosen, so that the region eliminated in every iteration is a fraction (1 - T) of the interval of the previous iteration.
• This is achieved by equating (1 - T) with T × T, i.e. T² + T - 1 = 0, which gives T = 0.618 (the golden section ratio).
Golden Section Search Method: Algorithm
Step 1: Choose a lower bound a, an upper bound b and a small number ε. Normalize x by using w = (x - a)/(b - a). Thus, aw = 0, bw = 1 and Lw = 1. Set k = 1.
Step 2: Set w1 = aw + 0.618Lw and w2 = bw – 0.618Lw. Compute f(w1) or f(w2), depending on
which of the two was not evaluated earlier. Use fundamental region elimination rules to
eliminate a region. Set new aw and bw.
Step 3: Is |Lw| < ε? If no, set k = k + 1 and go to Step 2. Else Terminate!
The interval reduces to (0.618)^(n-1) of the original after n function evaluations. To achieve an accuracy of ε, we need (0.618)^(n-1) (b - a) = ε.
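A minimal MATLAB sketch, written on the original variable rather than the normalized w (equivalent, since the mapping is linear; for brevity both points are re-evaluated each iteration instead of reusing the one saved evaluation):

    function [a, b] = golden_section(f, a, b, eps)
    % Golden section search: each iteration keeps 0.618 of the interval.
    tau = 0.618;
    L = b - a;
    while abs(L) >= eps
        x1 = a + tau * L;                % right-hand point
        x2 = b - tau * L;                % left-hand point (x2 < x1)
        if f(x1) < f(x2)
            a = x2;                      % minimum cannot lie in (a, x2)
        else
            b = x1;                      % minimum cannot lie in (x1, b)
        end
        L = b - a;
    end
    end

golden_section(@(x) x.^2 + 54./x, 0, 5, 1e-3) converges to a small interval around x* = 3.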
Golden Section Search Method: Example
Minimize f(x) = x² + 54/x
Step 1: We choose a = 0 and b = 5, so the transformation
equation becomes w = x/5. Thus, aw = 0, bw = 1 and Lw = 1.
For the transformed variable w, the function is f(w) = 25w² + 54/(5w). In the w-space, the minimum is at w* = 3/5 = 0.6. The iteration counter k is set to 1.
Step 2: Set w1 = 0 + 0.618 = 0.618 and w2 = 1 – 0.618 =
0.382 for which f(w1) = 27.02 and f(w2) = 31.92. As f(w1) <
f(w2), minimum cannot lie below w = 0.382. Thus, we
eliminate (a, w2) or (0, 0.382).
Set aw = 0.382 and bw = 1. At this stage, Lw = 1 – 0.382 =
0.618.
Step 3: Since |Lw| is larger than ε, we set k = 2 and move to Step 2. One iteration of the golden section search method is complete.
Step 2: We set w1 = 0.382 + 0.618 × 0.618 = 0.764 and w2 = 1 - 0.618 × 0.618 = 0.618.
We only need to calculate f(w1) = 28.73 as f(w2) was
calculated in previous iteration. Since, f(w1) > f(w2) we
eliminate region (0.764,1). Thus, new bounds are:
aw = 0.382, bw = 0.764 and Lw = 0.764 – 0.382 = 0.382.
Step 3: Since Lw > ε, we go to Step 2 after setting k = 3.
Step 2: We set w1 = 0.618 and w2 = 0.528, of which f(w1) has been calculated before.
We only need to calculate f(w2) = 27.43. Since, f(w1) <
f(w2) we eliminate region (0.382, 0.528). Thus, new
bounds are:
aw = 0.528, bw = 0.764 and Lw = 0.764 – 0.528 = 0.236.
Step 3: At the end of the third iteration, Lw = 0.236, which is larger than the prescribed accuracy ε. Steps 2 and 3 have to be continued until the desired accuracy is achieved.
Gradient-based Methods
• The methods discussed so far worked directly with function values; the following algorithms require derivative information.
• Gradient-based methods are predominantly used and are found to be effective despite difficulties in obtaining derivatives in certain situations.
• The optimality property that the gradient goes to zero at a local or global optimum is used to terminate the search process.
Newton-Raphson Method
• Goal of unconstrained local optimization methods is to achieve a point having derivative values going to zero.
• In this method, a linear approximation to the first derivative of the function is made using a Taylor series expansion. This is equated to zero to find the next guess.
• If the current point at iteration t is x(t), the point in the next iteration is governed by the following (only linear terms of the Taylor expansion are retained):
x(t+1) = x(t) - f'(x(t))/f''(x(t))
Newton-Raphson Method: Algorithm
Step 1: Choose an initial guess x(1) and a small number ε. Set k = 1. Compute f'(x(1)).
Step 2: Compute f''(x(k)).
Step 3: Calculate x(k+1) = x(k) - f'(x(k))/f''(x(k)). Compute f'(x(k+1)).
Step 4: If |f'(x(k+1))| < ε, Terminate!
Else set k = k + 1 and go to Step 2.
43
Newton-Raphson Method: Algorithm
At x(t), the first and second derivatives are computed with the central difference method as:

f'(x(t)) = [f(x(t) + Δx(t)) - f(x(t) - Δx(t))] / (2Δx(t))   Eq. (1)

f''(x(t)) = [f(x(t) + Δx(t)) - 2f(x(t)) + f(x(t) - Δx(t))] / (Δx(t))²   Eq. (2)

The parameter Δx(t) is a small value, usually about 1% of x(t):

Δx(t) = 0.01|x(t)| if |x(t)| > 0.01, and 0.0001 otherwise.   Eq. (3)
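A minimal MATLAB sketch combining the algorithm with Eqs. (1)-(3) (only a function handle for f is needed; names are illustrative, and no safeguard is included for a vanishing second derivative):

    function x = newton_raphson(f, x, eps)
    % Newton-Raphson with central-difference derivatives, Eqs. (1)-(3).
    while true
        if abs(x) > 0.01
            dx = 0.01 * abs(x);                              % Eq. (3)
        else
            dx = 1e-4;
        end
        fp = (f(x + dx) - f(x - dx)) / (2 * dx);             % Eq. (1)
        if abs(fp) < eps
            return;                                          % |f'(x)| small enough
        end
        fpp = (f(x + dx) - 2 * f(x) + f(x - dx)) / dx^2;     % Eq. (2)
        x = x - fp / fpp;                                    % next guess
    end
    end

Starting from newton_raphson(@(x) x.^2 + 54./x, 1, 1e-3) reproduces the iterates 1.473, 2.086, 2.679, …, 3.0001 of the example below.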
Newton-Raphson Method: Example
Minimize f(x) = x² + 54/x
Step 1: We choose initial guess x(1) = 1 and a termination factor ε = 10^-3. Set k = 1. We compute f'(x(1)) using Eq. (1), with Δx(t) computed from Eq. (3) as 0.01.
The computed derivative is -52.005, whereas the exact derivative is -52. So, the computed
derivative is accepted and we proceed to Step 2.
Step 2: The exact value of f''(x(1)) is 110, while the computed value using Eq. (2) is 110.011.
Step 3: We compute the next guess as:
x(2) = x(1) - f'(x(1))/f''(x(1)) = 1 - (-52.005)/(110.011) = 1.473.
The derivative f'(x(2)) is computed using Eq. (1) and found to be -21.944.
45
Newton-Raphson Method: Example
Step 4: Since |f'(x(2))| > ε, we set k = 2 and go to Step 2. This completes one iteration of the Newton-Raphson method.
Step 2: We begin the second iteration by computing the second derivative numerically at x(2),
which is found to be 35.796.
Step 3: The next guess is computed using the Taylor expansion and is found to be x(3) = 2.086.
The first derivative is computed using Eq. (1) and is f'(x(3)) = -8.239.
Step 4: Since |f'(x(3))| > ε, we set k = 3 and go to Step 2. This marks the end of the second iteration.
Step 2: The second derivative at x(3) is f''(x(3)) = 13.899.
Step 3: The next guess is computed as x(4) = 2.679 and the derivative is f'(x(4)) = -2.167.
Step 4: The absolute value of the derivative is not smaller than ε, so the search continues with Step 2.
After three more iterations, it is found that x(7) = 3.0001 and f'(x(7)) = -4 × 10^-8. This is small enough to terminate the algorithm.
Bisection Method
• Computation of the second derivative is avoided – only the first derivative is used. The method applies only to unimodal functions.
• The function value and the sign of the first derivative at two points are used to eliminate a certain portion of the search space.
• The minimum is bracketed in the interval (a, b) using derivative information if two conditions, (i) f'(a) < 0 and (ii) f'(b) > 0, are satisfied.
• Two initial boundary points bracketing the minimum are required, and two consecutive points with derivatives having opposite signs are chosen for the next iteration.
Bisection Method: Algorithm
Step 1: Choose two points a and b such that f'(a) < 0 and f'(b) > 0. Also choose a small number ε. Set x1 = a and x2 = b.
Step 2: Calculate z = (x1 + x2)/2 and evaluate f'(z).
Step 3: If |f'(z)| ≤ ε, Terminate!
Else if f'(z) < 0, set x1 = z and go to Step 2;
Else if f'(z) > 0, set x2 = z and go to Step 2.
The sign of the first derivative at the mid-point of the search region is used to eliminate half the region:
If the derivative < 0, the minimum cannot lie in the left half of the search region.
If the derivative > 0, the minimum cannot lie in the right half of the search region.
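A minimal MATLAB sketch (it takes the derivative f' directly as a function handle fp; names are illustrative):

    function z = bisection_min(fp, a, b, eps)
    % Find the minimum by bisecting on the sign of f'; needs fp(a) < 0 < fp(b).
    x1 = a; x2 = b;
    z = (x1 + x2) / 2;
    while abs(fp(z)) > eps
        if fp(z) < 0
            x1 = z;              % minimum cannot lie in the left half
        else
            x2 = z;              % minimum cannot lie in the right half
        end
        z = (x1 + x2) / 2;
    end
    end

For the running example, bisection_min(@(x) 2*x - 54./x.^2, 2, 5, 1e-3) converges to z ≈ 3.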
Bisection Method: Example
Minimize f(x) = x² + 54/x
Step 1: Choose two points a = 2 and b = 5, such that f'(a) = -9.501 and f'(b) = 7.841. Also choose a small number ε = 10^-3. Set x1 = a and x2 = b.
Step 2: Calculate z = (x1 + x2)/2 = 3.5 and evaluate f'(z) = 2.591.
Step 3: Since f'(z) > 0, the right half of the search region is eliminated. Next, we set x1 = 2 and x2 = z = 3.5 and go to Step 2. This completes one iteration of the bisection method.
Step 2: We compute z = (x1 + x2)/2 = 2.75 and evaluate f'(z) = -1.641.
Step 3: Since f'(z) < 0, we set x1 = z = 2.75 and x2 = 3.5 and go to Step 2.
Step 2: The new mid-point z is the average of the two bounds: z = 3.125, for which f'(z) = 0.72.
Step 3: Since f'(z) > 0, we set x1 = 2.75 and x2 = z = 3.125 and go to Step 2.
Step 2: The new point z is calculated as z = (x1 + x2)/2 = 2.9375, for which f'(z) = -0.38303.
Step 3: Since f'(z) < 0, we set x1 = z = 2.9375 and x2 = 3.125, and continue till |f'(z)| ≤ ε.
Secant Method
• In the secant method, both the magnitude and the sign of derivatives are used to create a new point.
• As the boundary points have derivatives with opposite signs, and the derivative is assumed to vary linearly between them, there exists a point between these two points with a zero derivative.
• For x1 and x2 with f'(x1) × f'(x2) ≤ 0, the point z with an estimated zero derivative is given by:

z = x2 - f'(x2) / [(f'(x2) - f'(x1)) / (x2 - x1)]   Eq. (4)
• In one iteration, more than half of the search space may be eliminated, depending on the gradient values.
Secant Method: Algorithm
Step 1: Choose two points a and b such that f'(a) < 0 and f'(b) > 0. Also choose a small number ε. Set x1 = a and x2 = b.
Step 2: Calculate z using Eq. (4) and evaluate f'(z).
Step 3: If |f'(z)| ≤ ε, Terminate!
Else if f'(z) < 0, set x1 = z and go to Step 2;
Else if f'(z) > 0, set x2 = z and go to Step 2.
The sign of the first derivative at z is used to eliminate a portion of the search region:
If the derivative < 0, the minimum cannot lie to the left of z.
If the derivative > 0, the minimum cannot lie to the right of z.
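A minimal MATLAB sketch, identical to the bisection sketch except that z comes from Eq. (4) instead of the mid-point (names are illustrative):

    function z = secant_min(fp, a, b, eps)
    % Find the minimum via the secant formula on f'; needs fp(a) < 0 < fp(b).
    x1 = a; x2 = b;
    z = x2 - fp(x2) / ((fp(x2) - fp(x1)) / (x2 - x1));       % Eq. (4)
    while abs(fp(z)) > eps
        if fp(z) < 0
            x1 = z;              % minimum lies to the right of z
        else
            x2 = z;              % minimum lies to the left of z
        end
        z = x2 - fp(x2) / ((fp(x2) - fp(x1)) / (x2 - x1));   % Eq. (4)
    end
    end

secant_min(@(x) 2*x - 54./x.^2, 2, 5, 1e-3) produces z = 3.644, 3.228, … as in the example below.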
Secant Method: Example
Minimize f(x) = x² + 54/x
Step 1: We begin with initial points a = 2 and b = 5, having derivatives f'(a) = -9.501 and f'(b) = 7.841 with opposite signs. Also choose a small number ε = 10^-3. Set x1 = a and x2 = b.
Step 2: We now calculate z using Eq. (4): z = 5 - f'(5)/[(f'(5) - f'(2))/(5 - 2)] = 3.644. Next, we evaluate f'(z) = 3.221.
Step 3: Since f'(z) > 0, we eliminate the right part (z, b) of the search region.
The eliminated region has length (b – z) = 1.356 which is less
than half of search space (b – a)/2 = 2.5.
Next, we set x1 = 2 and x2 = z = 3.644 and go to Step 2. This
completes one iteration of the secant method.
Step 2: The next point is computed using Eq. (4) as z = 3.228. The derivative is evaluated as f'(z) = 1.127.
Step 3: Since f'(z) > 0, we eliminate the right part of the search region, i.e. (3.228, 3.644).
The eliminated region has length of 0.416 which is less than
half of previous search space (3.644 – 2)/2 = 0.822.
Next, we set x1 = 2 and x2 = z = 3.228 and go to Step 2.
Step 2: The new point is z = 3.101. The derivative is evaluated as f'(z) = 0.586.
Step 3: Since f'(z) > ε, we continue with Step 2 until the desired accuracy is achieved.

More Related Content

Similar to AOT2 Single Variable Optimization Algorithms.pdf

Scientific Computing II Numerical Tools & Algorithms - CEI40 - AGA
Scientific Computing II Numerical Tools & Algorithms - CEI40 - AGAScientific Computing II Numerical Tools & Algorithms - CEI40 - AGA
Scientific Computing II Numerical Tools & Algorithms - CEI40 - AGAAhmed Gamal Abdel Gawad
Β 
Opt simple single_000
Opt simple single_000Opt simple single_000
Opt simple single_000sheetslibrary
Β 
designanalysisalgorithm_unit-v-part2.pptx
designanalysisalgorithm_unit-v-part2.pptxdesignanalysisalgorithm_unit-v-part2.pptx
designanalysisalgorithm_unit-v-part2.pptxarifimad15
Β 
Introduction to comp.physics ch 3.pdf
Introduction to comp.physics ch 3.pdfIntroduction to comp.physics ch 3.pdf
Introduction to comp.physics ch 3.pdfJifarRaya
Β 
Synthetic and Remainder Theorem of Polynomials.ppt
Synthetic and Remainder Theorem of Polynomials.pptSynthetic and Remainder Theorem of Polynomials.ppt
Synthetic and Remainder Theorem of Polynomials.pptMarkVincentDoria1
Β 
optimization methods by using matlab.pptx
optimization methods by using matlab.pptxoptimization methods by using matlab.pptx
optimization methods by using matlab.pptxabbas miry
Β 
Numerical analysis stationary variables
Numerical analysis  stationary variablesNumerical analysis  stationary variables
Numerical analysis stationary variablesSHAMJITH KM
Β 
Numerical method
Numerical methodNumerical method
Numerical methodKumar Gaurav
Β 
Introduction to Functions
Introduction to FunctionsIntroduction to Functions
Introduction to FunctionsMelanie Loslo
Β 

Similar to AOT2 Single Variable Optimization Algorithms.pdf (20)

Scientific Computing II Numerical Tools & Algorithms - CEI40 - AGA
Scientific Computing II Numerical Tools & Algorithms - CEI40 - AGAScientific Computing II Numerical Tools & Algorithms - CEI40 - AGA
Scientific Computing II Numerical Tools & Algorithms - CEI40 - AGA
Β 
Chapter 3
Chapter 3Chapter 3
Chapter 3
Β 
Opt simple single_000
Opt simple single_000Opt simple single_000
Opt simple single_000
Β 
03 optimization
03 optimization03 optimization
03 optimization
Β 
CI_L01_Optimization.pdf
CI_L01_Optimization.pdfCI_L01_Optimization.pdf
CI_L01_Optimization.pdf
Β 
designanalysisalgorithm_unit-v-part2.pptx
designanalysisalgorithm_unit-v-part2.pptxdesignanalysisalgorithm_unit-v-part2.pptx
designanalysisalgorithm_unit-v-part2.pptx
Β 
Introduction to comp.physics ch 3.pdf
Introduction to comp.physics ch 3.pdfIntroduction to comp.physics ch 3.pdf
Introduction to comp.physics ch 3.pdf
Β 
Assignment5
Assignment5Assignment5
Assignment5
Β 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Β 
Singlevaropt
SinglevaroptSinglevaropt
Singlevaropt
Β 
Synthetic and Remainder Theorem of Polynomials.ppt
Synthetic and Remainder Theorem of Polynomials.pptSynthetic and Remainder Theorem of Polynomials.ppt
Synthetic and Remainder Theorem of Polynomials.ppt
Β 
optimization methods by using matlab.pptx
optimization methods by using matlab.pptxoptimization methods by using matlab.pptx
optimization methods by using matlab.pptx
Β 
Term paper
Term paperTerm paper
Term paper
Β 
Remainder theorem
Remainder theoremRemainder theorem
Remainder theorem
Β 
Chapter 3
Chapter 3Chapter 3
Chapter 3
Β 
Numerical methods generating polynomial
Numerical methods generating polynomialNumerical methods generating polynomial
Numerical methods generating polynomial
Β 
Numerical analysis stationary variables
Numerical analysis  stationary variablesNumerical analysis  stationary variables
Numerical analysis stationary variables
Β 
Numerical method
Numerical methodNumerical method
Numerical method
Β 
Introduction to Functions
Introduction to FunctionsIntroduction to Functions
Introduction to Functions
Β 
Lecture6
Lecture6Lecture6
Lecture6
Β 

Recently uploaded

IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024Mark Billinghurst
Β 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
Β 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSSIVASHANKAR N
Β 
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINEDJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINEslot gacor bisa pakai pulsa
Β 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
Β 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...Soham Mondal
Β 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxAsutosh Ranjan
Β 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
Β 
Model Call Girl in Narela Delhi reach out to us at πŸ”8264348440πŸ”
Model Call Girl in Narela Delhi reach out to us at πŸ”8264348440πŸ”Model Call Girl in Narela Delhi reach out to us at πŸ”8264348440πŸ”
Model Call Girl in Narela Delhi reach out to us at πŸ”8264348440πŸ”soniya singh
Β 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxwendy cai
Β 
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)Suman Mia
Β 
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130Suhani Kapoor
Β 
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escortsranjana rawat
Β 
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).pptssuser5c9d4b1
Β 
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZTE
Β 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )Tsuyoshi Horigome
Β 
Processing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxProcessing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxpranjaldaimarysona
Β 

Recently uploaded (20)

IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
Β 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
Β 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
Β 
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINEDJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
Β 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
Β 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
Β 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptx
Β 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
Β 
Model Call Girl in Narela Delhi reach out to us at πŸ”8264348440πŸ”
Model Call Girl in Narela Delhi reach out to us at πŸ”8264348440πŸ”Model Call Girl in Narela Delhi reach out to us at πŸ”8264348440πŸ”
Model Call Girl in Narela Delhi reach out to us at πŸ”8264348440πŸ”
Β 
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCRCall Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Β 
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
Β 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptx
Β 
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)
Β 
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
Β 
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
Β 
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
Β 
β˜… CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
β˜… CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCRβ˜… CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
β˜… CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
Β 
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
Β 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )
Β 
Processing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxProcessing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptx
Β 

AOT2 Single Variable Optimization Algorithms.pdf

  • 1. Advanced Optimization Theory MED573 Single Variable Optimization Algorithms Dr. Aditi Sengupta Department of Mechanical Engineering IIT (ISM) Dhanbad Email: aditi@iitism.ac.in 1
  • 2. Introduction 2 Single-variable algorithms involve one variable and are the building blocks for more complex multivariable algorithms. Two distinct types of algorithms: (i) Direct search methods and (ii) Gradient- based optimization methods. Direct search methods use values of the objective function to locate the minimum (and hence, optimal point). Gradient-based methods use the first and/or second derivatives of objective function to locate the minimum.
  • 3. (i) Local optimal point: x* is the local minimum if no point in the neighbourhood has a function value smaller than f(x*). (ii) Global optimal point: x** is global minimum if no point in the entire search has a smaller function value than f(x**). (iii) Inflection point: x* is inflection point if function value increases locally as x* increases and decreases locally as x* reduces. 3 Optimality Criteria
  • 4. β€’ For x* to be a local minimum: β€’ First condition alone suggests that x* is either a minimum, maximum or an inflection point. β€’ Both conditions together means x* is a minimum. 4 Identifying Local, Global Minima and Inflection Points
  • 5. Suppose at point x*, the first derivative is zero and first nonzero higher order derivative is n; then β€’ If n is odd, x* is an inflection point. β€’ If n is even, x* is a local optimum. a. If the derivative is positive, x* is local minimum b. If the derivative is negative, x* is local maximum 5 Conditions of Optimality
  • 6. 6
  • 7. 7
  • 8. β€’ Minimum of function is found in two phases: (i) First, a crude method is used to find lower and upper bounds of the minimum. (ii) Afterwards, a sophisticated method is used to search within the limits for optimal solution with the desired accuracy. β€’ Two methods: (a) Exhaustive Search Method and (b) Bounding Phase Method. 8 Bracketing Methods
  • 9. β€’ Optimum of function is bracketed by calculating function values at number of equally spaced points. β€’ Search begins from lower bound and three consecutive function values are compared based on unimodality assumption on the function. β€’ Based on the comparison, search is either terminated or continued by replacing one of the three points with a new point. β€’ Process continues till minimum is achieved. 9 Exhaustive Search Method a b x1 x2 x3 f(x) is unimodal in interval a ≀ x ≀ b i.f.f. it is monotonic on either side of the optimal point x* in the interval.
  • 10. Step 1: Set x1 = a, Ξ”x = (b-a)/n (n is number of intermediate points), x2 = x1 + Ξ”x, x3 = x2 + Ξ”x. Step 2: If 𝑓𝑓 π‘₯π‘₯1 β‰₯ 𝑓𝑓 π‘₯π‘₯2 ≀ 𝑓𝑓 π‘₯π‘₯3 , the minimum point lies in (x1, x3), Terminate! Else x1 = x2, x2 = x3, x3 = x2 + Ξ”x, go to Step 3. Step 3: Is π‘₯π‘₯3 ≀ 𝑏𝑏? If yes, go to Step 2; Else no minimum exists in (a,b) or a boundary point (a or b) is the minimum point. Final interval obtained by using this algorithm has accuracy of 2(b-a)/n for which (n/2 + 2) number of function evaluations are necessary. 10 Exhaustive Search Method: Algorithm a b x1 x2 x3
  • 11. 11 Exhaustive Search Method: Example Minimize f(x) = x2 + 54/x in the interval (0,5) Plot shows that minimum lies at x* = 3 for which f(x*) = 27, f’(x*) = 0 and f’’(x*) = 6. Thus, x* = 3 is local minimum as per minimality conditions. Let us bracket the minimum point by evaluating 11 different function values, thus n = 10.
  • 12. 12 How to Plot a Function in Matlab
  • 13. 13 Exhaustive Search Method: Example Step 1: x1 = a = 0 and b = 5, then Ξ”x = (5- 0)/10 = 0.5. We set x2 = 0.5 and x3 = 1. Step 2: Computing function values at x1 to x3 as f(0) = ∞, f(0.5) = 108.25, f(1) = 55 f(x1) > f(x2) > f(x3), thus minimum does not lie in interval (0,1). So, we set x1 = 0.5, x2 = 1, x3 = 1.5 and go to Step 3.
  • 14. 14 Exhaustive Search Method: Example Step 3: We can see that x3 < 5, so we go to Step 2. This completes one iteration of the exhaustive search method. Step 2: We calculate function values again, f(x3) = 38.25. Again, f(x1) > f(x2) > f(x3) so minimum does not lie in interval (0.5, 1.5). Set x1 = 1, x2 = 1.5, x3 = 2 and move to Step 3. Step 3: Again x3 < 5, so we have to go back to Step 2.
  • 15. 15 Exhaustive Search Method: Example Step 2: Function value at x3 = 2 is f(x3) = 31. Since, f(x1) > f(x2) > f(x3), we continue with Step 3 by setting x1 = 1.5, x2 = 2, x3 = 2.5. Step 3: At this iteration, x3 < 5 so we go to Step 2. Step 2: Function value at x3 = 2.5 is f(x3) = 27.85. As before, we find f(x1) > f(x2) > f(x3) and thus we go to Step 3. New set of points is x1 = 2, x2 = 2.5, x3 = 3 Step 3: Once again, x3 < 5. Thus, go to Step 2.
  • 16. 16 Exhaustive Search Method: Example Step 2: Function value at x3 = 3 is f(x3) = 27. Since, f(x1) > f(x2) > f(x3), we go to Step 3 by setting x1 = 2.5, x2 = 3, x3 = 3.5. Step 3: At this iteration, x3 < 5 so we go to Step 2. Step 2: Here, f(x3=3.5) = 27.68. At this iteration, we have f(x1) > f(x2) < f(x3). We can terminate the algorithm! Thus, bound for minimum, x* is (2.5, 3.5). For n = 10, accuracy of solution is 2(5-0)/10 = 1. For n = 10,000, obtained interval is (2.9995, 3.0005).
  • 17. β€’ Bounding phase method is used to bracket the minimum of a unimodal function. β€’ Algorithm begins with an initial guess, finds a search direction based on two or more function evaluations in vicinity of initial guess. β€’ After this, an exponential search strategy is adopted to reach optimum. β€’ Faster than exhaustive search method. 17 Bounding Phase Method
  • 18. Step 1: Choose an initial guess x(0) and an increment Ξ”. Set k = 0. Step 2: If f(x(0) - |Ξ”|) β‰₯ f(x(0)) β‰₯ f(x(0) + |Ξ”|), then Ξ” is positive; Else if f(x(0) - |Ξ”|) ≀ f(x(0)) ≀ f(x(0) + |Ξ”|), then Ξ” is negative; Else go to Step 1. Step 3: Set x(k+1) = x(k) + 2k Ξ”. Step 4: If f(x(k+1)) < f(x(k)), set k = k+1 and go to Step 3; Else the minimum lies in interval (x(k-1), x(k+1)) and Terminate! 18 Bounding Phase Method: Algorithm
  • 19. 19 Bounding Phase Method: Example Minimize f(x) = x2 + 54/x in the interval (0,5) Plot shows that minimum lies at x* = 3 for which f(x*) = 27, f’(x*) = 0 and f’’(x*) = 6. Thus, x* = 3 is local minimum as per minimality conditions. Let us bracket the minimum point by using bounding phase method.
  • 20. 20 Bounding Phase Method: Example Step 1: We choose initial guess x(0) = 0.6 and an increment Ξ” = 0.5. We also set k = 0. Step 2: Next, we calculate function values to proceed with the algorithm: f(0.6-0.5) = 540.01, f(0.6) = 90.36, f(0.6+0.5) = 50.301. We observe that f(0.1) > f(0.6) > f(1.1). Thus, we set Ξ” = +0.5. Step 3: We compute the next guess: x(1) = x(0) + 20 Ξ” = 1.1.
  • 21. 21 Bounding Phase Method: Example Step 4: The function value at x(1) is 50.301 which is less than f(x(0)). Next, we set k = 1 and go to Step 3. One iteration of the bounding phase algorithm is complete. Step 3: The next guess is x(2) = x(1) + 21 Ξ” = 2.1. Step 4: Function value at x(2) is 30.124 which is smaller than f(x(1)). Thus, we set k = 2 and move to Step 3. Step 3: The next guess is x(3) = x(3) + 22 Ξ” = 4.1. Step 4: f(x(3)) = 29.981 < f(x(2)) = 31.124, so we set k = 3.
  • 22. 22 Bounding Phase Method: Example Step 3: The next guess is x(4) = x(3) + 23 Ξ” = 8.1. Step 4: Function value at x(4) is 72.277 which is larger than f(x(3)) = 29.981. Thus, we terminate with the interval obtained as (2.1, 8.1). With Ξ” = 0.5, the bracketing obtained is poor but number of function evaluations is only 7. Bounding phase method approaches optimum exponentially but accuracy is not good. For exhaustive search method, number of iterations required for accurate solutions is large.
  • 23. Sophisticated algorithms are needed after the minimum point is bracketed. Here, we discuss three such algorithms that work on principle of region elimination and require smaller function evaluations. 23 Region-Elimination Methods a b x1 x2 f(x) x Let us consider points x1 and x2 which lie in interval (a,b) and satisfy x1 < x2. For minimizing unimodal functions, note the following: (i) If f(x1) > f(x2) then minimum does not lie in (a,x1). (ii) If f(x1) < f(x2) then minimum does not lie in (x2, b). (iii)If f(x1) = f(x2) then minimum does not lie in (a,x1) and (x2,b).
  • 24. 24 a b x1 x2 f(x) x Region-Elimination Methods Consider a unimodal function, as in the figure. If f(x1) > f(x2), the minimum point x* cannot lie on the l.h.s of x1. Thus, we can eliminate region (a, x1) and our interval of interest reduces from (a, b) to (x1, b). If f(x1) < f(x2), the minimum point x* cannot lie to the r.h.s. of x2. Thus, we can eliminate region (x2, b). If f(x1) = f(x2), we can conclude that regions (a, x1) and (x2, b) can be eliminated with the assumption that there exists only one local minimum in (a, b).
  • 25. β€’ Function values at three equidistant points are considered which divide the search space into four regions. β€’ If f(x1) < f(xm), minimum cannot lie beyond xm. So the interval reduces from (a, b) to (a, xm). Search space reduces by 50%. β€’ If f(x1) > f(xm), minimum cannot lie in interval (a, x1). This reduces search space only by 25%. β€’ Next, we compare function values at xm and x2 to further eliminate 25% of search space. β€’ Process continues till small enough interval is obtained. 25 Interval Halving Method a b x1 x2 f(x) x xm
  • 26. 26 Interval Halving Method: Algorithm a b x1 x2 f(x) x xm Step 1: Choose a lower bound a, an upper bound b and a small number, Ξ΅. Let xm = (a+b)/2, Lo = L = b – a. Compute f(xm). Step 2: Set x1 = a + L/4, x2 = b – L/4. Compute f(x1) and f(x2). Step 3: If f(x1) < f(xm) set b = xm and xm = x1; go to Step 5 Else go to Step 4. Step 4: If f(x2) < f(xm) set a = xm and xm = x2; go to Step 5 Else set a = x1, b = x2; go to Step 5. Step 5: Calculate L = b – a. If |L| < Ξ΅, Terminate! Else go to Step 2. At every iteration, two function evaluations are performed and interval reduces by half. Thus, interval reduces to 0.5n/2 Lo after n function evaluations. To achieve accuracy of Ξ΅, function evaluations needed are (0.5)n/2 (b-a) = Ξ΅.
  • 27. 27 Interval Halving Method: Example Minimize f(x) = x2 + 54/x in the interval (0,5) Plot shows that minimum lies at x* = 3 for which f(x*) = 27, f’(x*) = 0 and f’’(x*) = 6. Thus, x* = 3 is local minimum as per minimality conditions. Let us solve this unimodal, single-variable function using the interval halving method.
  • 28. 28 Interval Halving Method: Example Step 1: We choose a = 0, b = 5 and Ξ΅ = 10-3, where xm is the mid-point of the search interval. Thus, xm = (0+5)/2 = 2.5. Initial interval length Lo = L = 5-0 = 5. Function value at xm is f(xm) = 27.85. Step 2: Set x1 = 0 + 5/4 = 1.25 and x2 = 5 – 5/4 = 3.75. Function values are f(x1) = 44.76 and f(x2) = 28.46. Step 3: We see that f(x1) > f(xm), so we go to Step 4. Step 4: Again, we see that f(x2) > f(xm), so the intervals (0, 1.25) and (3.75, 5) are dropped. Next, we set a = 1.25 and b = 3.75.
  • 29. 29 Interval Halving Method: Example Step 5: The new interval is L = 3.75 – 1.25 = 2.5. Since, |L| is not small, we continue to Step 2. This completes one iteration of the interval halving method. Step 2: We now compute new x1 and x2: x1 = 1.25 + 2.5/4 = 1.875, x2 = 3.75 – 2.5/4 = 3.125 Function values are f(x1) = 32.32 and f(x2) = 27.05. Step 3: We see that f(x1) > f(xm), so we go to Step 4. Step 4: Here, f(x2) < f(xm) so we eliminate interval (1.25, 2.5) and set a = 2.5 and xm = 3.125. Step 5: At end of second iteration, new interval length is L = 3.75 – 2.5 = 1.25. Since |L| is not smaller than Ξ΅, so we go to Step 2 again.
  • 30. 30 Interval Halving Method: Example Step 2: We compute x1 = 2.8125 and x2 = 3.4375 for which function values are f(x1) = 27.11 and f(x2) = 27.53. Step 3: We observe that f(x1) > f(xm), so we go to Step 4. Step 4: Here, f(x2) > f(xm) so we drop the boundary intervals and set a = 2.8125, b = 3.4375. Step 5: The new interval L = 0.625 which is still larger than Ξ΅. So the process has to be continued.
  • 31. β€’ Search interval is reduced according to Fibonacci numbers. β€’ For two consecutive Fibonacci numbers, Fn-2 and Fn-1, the third number is calculated as Fn = Fn-1 + Fn-2 where n = 2, 3, 4 … β€’ Search algorithm which only needs one function evaluation per iteration. β€’ Principle of Fibonacci search is that out of two points needed for region elimination, one is always the previous point. β€’ Leads to 38.2% reduction of search space – greater than 25% in interval halving method. 31 Fibonacci Search Method
  • 32. β€’ At iteration k, two intermediate points (x1 and x2), each L* k away from the either end of the search space (L = b – a) are chosen. β€’ When region elimination removes a portion of search space depending on function values at x1 or x2, remaining space is Lk. β€’ Define L*k = (Fn-k+1/Fn+1)L and Lk = (Fn-k+2/Fn+1)L, such that Lk – L*k = L*k+1 β†’ one of points in iteration k remains for iteration (k+1). β€’ If (a, x2) is eliminated in kth iteration, x1 is at distance (Lk – L*k ) or L*k+1 from x2 in (k+1)th iteration. β€’ Algorithm usually starts at k = 2. 32 Fibonacci Search Method a b x2 x1 L*k L*k Lk
  • 33. Step 1: Choose a lower bound a and upper bound b. Set L = b – a. Assume the desired number of functions evaluations to be n. Set k = 2. Step 2: Compute L*k = (Fn-k+1/Fn+1)L . Set x1 = a + L*k and x2 = b – L*k. Step 3: Compute either f(x1) or f(x2) and use region-elimination rules to eliminate a region. Set new a and b. Step 4: Is k = n? If no, set k = k+1 and go to Step 2 Else Terminate! Interval reduces to (2/Fn+1)L after n function evaluations. Thus, for desired accuracy, Ξ΅ the required function evaluations is calculated as 2(b-a)/Fn+1 = Ξ΅. 33 Fibonacci Search Method: Algorithm
  • 34. Minimize f(x) = x2 + 54/x Step 1: We choose a = 0 and b = 5. Thus, initial interval is L = 5. Set number of function evaluations as n = 3 and k = 2. Step 2: We compute L*2 as: L*2 = (F3-2+1/F3+1)L = (F2/F4).5 = (2/5)*5 = 2 Set x1 = 0 + 2 = 2 and x2 = 5 – 2 = 3. Step 3: We compute function values, f(x1) = 31 and f(x2) = 27. Since f(x1) > f(x2), we eliminate region (0, 2). Next, we set a = 2 and b = 5. 34 Fibonacci Search Method: Example
  • 35. Step 4: Since k = 2 β‰  n = 3, we set k = 3 and go to Step 2. This completes one iteration of the Fibonacci search method. Step 2: We compute L*3 = (F1/F4)L = (1/5)*5 = 1. Set x1 = 2 + 1 = 3 and x2 = 5 – 1 = 4. Step 3: One of the points (x1 = 3) was evaluated in previous iteration. We only need f(x2 = 4) = 29.5. We see that f(x1) < f(x2), so we set a = 2 and b = x2 = 4. Step 4: At this iteration, k = n = 3 and we terminate the algorithm. The final interval is (2,4). 35 Fibonacci Search Method: Example
  • 36. β€’ Golden section search method overcomes two problems of the Fibonacci search method: (i) calculation and storage of Fibonacci numbers at every iteration. (ii) proportion of eliminated region is uneven. β€’ Search space (a, b) is linearly mapped to unit interval search space (0, 1). 36 Golden Section Search Method β€’ Next, two points at distance T from either end of search space are chosen so that eliminated region is (1 - T) to that in previous iteration. β€’ Achieved by equating (1 – T) with (T x T) to get T = 0.618.
  • 37. Step 1: Choose a lower bound a, upper bound b and a small number, Ξ΅. Normalize x by using w = (x – a)/ (b – a). Thus, aw = 0, bw = 1 and Lw = 1. Set k = 1. Step 2: Set w1 = aw + 0.618Lw and w2 = bw – 0.618Lw. Compute f(w1) or f(w2), depending on which of the two was not evaluated earlier. Use fundamental region elimination rules to eliminate a region. Set new aw and bw. 37 Golden Section Search Method: Algorithm Step 3: Is |Lw| < Ξ΅? If no, set k = k + 1, go to Step 2. Else Terminate! Interval reduces to (0.618)n-1 after n function evaluations. To achieve accuracy of Ξ΅, we need: (0.618)n-1 (b – a) = Ξ΅.
  • 38. Minimize f(x) = x2 + 54/x Step 1: We choose a = 0 and b = 5, so the transformation equation becomes w = x/5. Thus, aw = 0, bw = 1 and Lw = 1. For the transformed variable w, the function is f(w) = 25w2 + 54/(5w). In the w-space, minimum is at w* = 3/5 = 0.6. Iteration counter, k is set to 1. Step 2: Set w1 = 0 + 0.618 = 0.618 and w2 = 1 – 0.618 = 0.382 for which f(w1) = 27.02 and f(w2) = 31.92. As f(w1) < f(w2), minimum cannot lie below w = 0.382. Thus, we eliminate (a, w2) or (0, 0.382). Set aw = 0.382 and bw = 1. At this stage, Lw = 1 – 0.382 = 0.618. 38 Golden Section Search Method: Example
  • 39. Step 3: Since |Lw| is larger than Ξ΅, we set k = 2 and move to Step 2. One iteration of golden section search method is complete. Step 2: We set w1 = 0.382 + (0.618)0.618 = 0.764 and w2 = 1 – (0.618)0.618 = 0.618. We only need to calculate f(w1) = 28.73 as f(w2) was calculated in previous iteration. Since, f(w1) > f(w2) we eliminate region (0.764,1). Thus, new bounds are: aw = 0.382, bw = 0.764 and Lw = 0.764 – 0.382 = 0.382. Step 3: Since Lw > Ξ΅, we go to Step 2 after setting k = 3. 39 Golden Section Search Method: Example
  • 40. Step 2: We set w1 = 0.618 and w2 = 0.528, of which f(w1) has been calculated before . We only need to calculate f(w2) = 27.43. Since, f(w1) < f(w2) we eliminate region (0.382, 0.528). Thus, new bounds are: aw = 0.528, bw = 0.764 and Lw = 0.764 – 0.528 = 0.236. Step 3: At the end of the third iteration, Lw = 0.236 which is larger than prescribed accuracy, Ξ΅. Steps 2 and 3 have to be continued until desired accuracy is achieved. 40 Golden Section Search Method: Example
  • 41. β€’ Methods discussed so far worked with direct function values, here algorithms require derivative information. β€’ Gradient-based methods are predominantly used and are found to be effective despite difficulties in obtaining derivatives in certain situations. β€’ Optimality property of gradient going to zero for local or global optimum is used to terminate the search process. 41 Gradient-based Methods
• 42. • The goal of unconstrained local optimization methods is to reach a point at which the derivative goes to zero.
• In this method, a linear approximation to the first derivative of the function is made using a Taylor series expansion, and this approximation is equated to zero to find the next guess: f'(x(t+1)) ≈ f'(x(t)) + f''(x(t))(x(t+1) - x(t)) = 0.
• Thus, if the current point at iteration t is x(t), the point at the next iteration is governed by the following (only linear terms of the Taylor expansion are retained):
x(t+1) = x(t) - f'(x(t))/f''(x(t))
42 Newton-Raphson Method
• 43. Step 1: Choose an initial guess x(1) and a small number ε. Set k = 1. Compute f'(x(1)).
Step 2: Compute f''(x(k)).
Step 3: Calculate x(k+1) = x(k) - f'(x(k))/f''(x(k)). Compute f'(x(k+1)).
Step 4: If |f'(x(k+1))| < ε, Terminate! Else set k = k + 1 and go to Step 2.
43 Newton-Raphson Method: Algorithm
• 44. At x(t), the first and second derivatives are computed with the central difference method as:
f'(x(t)) = [f(x(t) + Δx(t)) - f(x(t) - Δx(t))] / (2Δx(t))   Eq. (1)
f''(x(t)) = [f(x(t) + Δx(t)) - 2f(x(t)) + f(x(t) - Δx(t))] / (Δx(t))^2   Eq. (2)
The parameter Δx(t) is a small value, usually about 1% of x(t):
Δx(t) = 0.01|x(t)| if |x(t)| > 0.01, and 0.0001 otherwise.   Eq. (3)
44 Newton-Raphson Method: Algorithm
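Putting the algorithm of the previous slide together with Eqs. (1) to (3), a minimal MATLAB sketch follows; the objective, starting point and tolerance mirror the example on the next slide, and the iteration cap of 100 is an added safeguard, not part of the original algorithm.

% Newton-Raphson with central-difference derivatives, Eqs. (1)-(3)
f = @(x) x.^2 + 54./x;                       % objective from the running example
x = 1; eps_ = 1e-3;                          % initial guess and tolerance
for k = 1:100                                % cap to guard against divergence
    dx = max(0.01*abs(x), 1e-4);             % Eq. (3)
    f1 = (f(x + dx) - f(x - dx)) / (2*dx);   % Eq. (1), first derivative
    if abs(f1) < eps_, break; end            % Step 4 termination check
    f2 = (f(x + dx) - 2*f(x) + f(x - dx)) / dx^2;   % Eq. (2), second derivative
    x = x - f1/f2;                           % Step 3, Newton-Raphson update
end
fprintf('x* is approximately %.4f after %d iterations\n', x, k)

For this function, the loop reproduces the sequence x(2) = 1.473, x(3) = 2.086, x(4) = 2.679 worked out on the following slides.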
• 45. Minimize f(x) = x^2 + 54/x.
Step 1: We choose an initial guess x(1) = 1 and a termination factor ε = 10^-3. Set k = 1. We compute f'(x(1)) using Eq. (1), with Δx(1) = 0.01 from Eq. (3). The computed derivative is -52.005, whereas the exact derivative is -52. The computed derivative is accepted and we proceed to Step 2.
Step 2: The exact value of f''(x(1)) is 110, while the value computed using Eq. (2) is 110.011.
Step 3: We compute the next guess as x(2) = x(1) - f'(x(1))/f''(x(1)) = 1 - (-52.005)/(110.011) = 1.473. The derivative f'(x(2)) is computed using Eq. (1) and found to be -21.944.
45 Newton-Raphson Method: Example
• 46. Step 4: Since |f'(x(2))| > ε, we set k = 2 and go to Step 2. This completes one iteration of the Newton-Raphson method.
Step 2: We begin the second iteration by computing the second derivative numerically at x(2); it is found to be 35.796.
Step 3: The next guess, computed from the Taylor-expansion-based update, is x(3) = 2.086. The first derivative, computed using Eq. (1), is f'(x(3)) = -8.239.
Step 4: Since |f'(x(3))| > ε, we set k = 3 and go to Step 2. This marks the end of the second iteration.
46 Newton-Raphson Method: Example
• 47. Step 2: The second derivative at x(3) is f''(x(3)) = 13.899.
Step 3: The next guess is computed as x(4) = 2.679 and the derivative is f'(x(4)) = -2.167.
Step 4: The absolute value of the derivative is not smaller than ε, so the search continues with Step 2. After three more iterations, it is found that x(7) = 3.0001 and f'(x(7)) = -4 × 10^-8. This is small enough to terminate the algorithm.
47 Newton-Raphson Method: Example
• 48. • Computation of the second derivative is avoided; only the first derivative is used. The method works only for unimodal functions.
• The function value and the sign of the first derivative at two points are used to eliminate a portion of the search space.
• The minimum is bracketed in the interval (a, b) using derivative information if two conditions are satisfied: (i) f'(a) < 0 and (ii) f'(b) > 0.
• Two initial boundary points bracketing the minimum are required, and two consecutive points whose derivatives have opposite signs are chosen for the next iteration.
48 Bisection Method
• 49. 49 Bisection Method: Algorithm
Step 1: Choose two points a and b such that f'(a) < 0 and f'(b) > 0. Also choose a small number ε. Set x1 = a and x2 = b.
Step 2: Calculate z = (x1 + x2)/2 and evaluate f'(z).
Step 3: If |f'(z)| ≤ ε, Terminate! Else if f'(z) < 0, set x1 = z and go to Step 2. Else if f'(z) > 0, set x2 = z and go to Step 2.
The sign of the first derivative at the mid-point of the search region is used to eliminate half the region:
If the derivative is < 0, the minimum cannot lie in the left half of the search region.
If the derivative is > 0, the minimum cannot lie in the right half of the search region.
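The algorithm translates directly into a few lines of MATLAB. The sketch below uses the analytical derivative of the running example for simplicity; a central-difference estimate as in Eq. (1) could be substituted when the derivative is not available in closed form. The bracket and tolerance follow the example on the next slide.

% Bisection method on f(x) = x^2 + 54/x with bracket (2, 5)
f  = @(x) x.^2 + 54./x;          % objective from the running example
df = @(x) 2*x - 54./x.^2;        % analytical first derivative
x1 = 2; x2 = 5; eps_ = 1e-3;     % bracket with df(x1) < 0 < df(x2), and tolerance
z = 0.5*(x1 + x2);               % mid-point of the current bracket
while abs(df(z)) > eps_
    if df(z) < 0                 % minimum cannot lie in the left half
        x1 = z;
    else                         % minimum cannot lie in the right half
        x2 = z;
    end
    z = 0.5*(x1 + x2);           % new mid-point
end
fprintf('x* is approximately %.4f\n', z)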
• 50. 50 Bisection Method: Example
Minimize f(x) = x^2 + 54/x.
Step 1: Choose two points a = 2 and b = 5, such that f'(a) = -9.501 and f'(b) = 7.841. Also choose a small number ε = 10^-3. Set x1 = a and x2 = b.
Step 2: Calculate z = (x1 + x2)/2 = 3.5 and evaluate f'(z) = 2.591.
Step 3: Since f'(z) > 0, the right half of the search region is eliminated. Next, we set x1 = 2 and x2 = z = 3.5 and go to Step 2. This completes one iteration of the bisection method.
Step 2: We compute z = (x1 + x2)/2 = 2.75 and evaluate f'(z) = -1.641.
• 51. Step 3: Since f'(z) < 0, we set x1 = z = 2.75 and x2 = 3.5 and go to Step 2.
Step 2: The new mid-point z is the average of the two bounds: z = 3.125, for which f'(z) = 0.72.
Step 3: Since f'(z) > 0, we set x1 = 2.75 and x2 = z = 3.125 and go to Step 2.
Step 2: The new point z is calculated as z = (x1 + x2)/2 = 2.9375, for which f'(z) = -0.38303.
Step 3: Since f'(z) < 0, we set x1 = z = 2.9375 and x2 = 3.125 and continue until |f'(z)| ≤ ε.
51 Bisection Method: Example
• 52. • In the secant method, both the magnitude and the sign of the derivatives are used to create a new point.
• Since the boundary points have derivatives with opposite signs, and the derivative is assumed to vary linearly between them, there exists a point between these two points with a zero derivative.
• For x1 and x2 such that f'(x1) · f'(x2) ≤ 0, the point z at which this linear approximation of the derivative is zero is given by
z = x2 - f'(x2) / [(f'(x2) - f'(x1))/(x2 - x1)]   Eq. (4)
• In one iteration, more than half of the search space may be eliminated, depending on the gradient values.
52 Secant Method
• 53. 53 Secant Method: Algorithm
Step 1: Choose two points a and b such that f'(a) < 0 and f'(b) > 0. Also choose a small number ε. Set x1 = a and x2 = b.
Step 2: Calculate z based on Eq. (4) and evaluate f'(z).
Step 3: If |f'(z)| ≤ ε, Terminate! Else if f'(z) < 0, set x1 = z and go to Step 2. Else if f'(z) > 0, set x2 = z and go to Step 2.
The sign of the first derivative at the new point z is used to eliminate a portion of the search region (not necessarily half):
If the derivative is < 0, the minimum cannot lie to the left of z.
If the derivative is > 0, the minimum cannot lie to the right of z.
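In MATLAB, the secant iteration differs from the bisection sketch only in how the new point z is computed. The following is a minimal version under the same assumptions as before (analytical derivative for simplicity, bracket and tolerance from the example on the next slide).

% Secant method on f(x) = x^2 + 54/x with bracket (2, 5)
f  = @(x) x.^2 + 54./x;          % objective from the running example
df = @(x) 2*x - 54./x.^2;        % analytical first derivative
x1 = 2; x2 = 5; eps_ = 1e-3;     % bracket with df(x1) < 0 < df(x2), and tolerance
z = x2 - df(x2) / ((df(x2) - df(x1)) / (x2 - x1));     % Eq. (4)
while abs(df(z)) > eps_
    if df(z) < 0                 % minimum cannot lie to the left of z
        x1 = z;
    else                         % minimum cannot lie to the right of z
        x2 = z;
    end
    z = x2 - df(x2) / ((df(x2) - df(x1)) / (x2 - x1)); % new point from Eq. (4)
end
fprintf('x* is approximately %.4f\n', z)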
• 54. 54 Secant Method: Example
Minimize f(x) = x^2 + 54/x.
Step 1: We begin with initial points a = 2 and b = 5, having derivatives f'(a) = -9.501 and f'(b) = 7.841 with opposite signs. Also choose a small number ε = 10^-3. Set x1 = a and x2 = b.
Step 2: We now calculate z using Eq. (4) as z = 5 - f'(5)/[(f'(5) - f'(2))/(5 - 2)] = 3.644. Next, we evaluate f'(z) = 3.221.
Step 3: Since f'(z) > 0, we eliminate the right part (z, b) of the search region. The eliminated region has length (b - z) = 1.356, which is less than half of the search space, (b - a)/2 = 1.5. Next, we set x1 = 2 and x2 = z = 3.644 and go to Step 2. This completes one iteration of the secant method.
• 55. 55 Secant Method: Example
Step 2: The next point is computed using Eq. (4) as z = 3.228. The derivative is evaluated as f'(z) = 1.127.
Step 3: Since f'(z) > 0, we eliminate the right part of the search region, i.e. (3.228, 3.644). The eliminated region has a length of 0.416, which is less than half of the previous search space, (3.644 - 2)/2 = 0.822. Next, we set x1 = 2 and x2 = z = 3.228 and go to Step 2.
Step 2: The new point is z = 3.101. The derivative is evaluated as f'(z) = 0.586.
Step 3: Since |f'(z)| > ε, we continue with Step 2 until the desired accuracy is achieved.