2. Multivariable Functions
□ Functions depend on more than one variable.
□ For a multivariable function, the gradient is not a scalar quantity; it is a vector quantity.
□ The objective function is a function of N variables, represented by x1, x2, . . . , xN.
□ The gradient vector at any point x(t) is represented by ∇f(x(t)), which is an N-dimensional vector given as follows:
∇f(x(t)) = (∂f/∂x1, ∂f/∂x2, . . . , ∂f/∂xN)T, evaluated at x(t).
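When analytical derivatives are unavailable, the gradient vector can be approximated numerically by central differences. A minimal Python sketch (the test function and the step size h are illustrative assumptions, not from the slides):

```python
import numpy as np

def gradient(f, x, h=1e-6):
    """Approximate the N-dimensional gradient of f at x by central differences."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)  # ~ ∂f/∂xi
    return g

# Illustrative function: f(x) = (x1 - 3)^2 + (x2 - 2)^2 has gradient (2(x1 - 3), 2(x2 - 2))
f = lambda x: (x[0] - 3)**2 + (x[1] - 2)**2
print(gradient(f, [0.0, 0.0]))  # approximately [-6, -4]
```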
3. Gradient of a Multivariable Function
□ Geometrically, the gradient vector is normal to the tangent plane at the point x*.
□ It also points in the direction of maximum increase in the function value.
5. Unidirectional Search
□ Many multivariable optimization techniques use successive unidirectional searches to find the minimum point along a particular search direction.
□ A unidirectional search is a search performed by comparing function values only along a specified direction.
□ A unidirectional search is performed from a point x(t) in a specified direction s(t).
□ Any arbitrary point on that line can be expressed as follows:
x(α) = x(t) + α·s(t)
□ The parameter α is a scalar quantity specifying the distance of the point x(α) from x(t) along the direction s(t).
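Since the search along s(t) is a single-variable problem in α, any single-variable method applies. A minimal sketch using golden-section search over an assumed bracket [a, b] (the bracket and tolerance are illustrative assumptions):

```python
import numpy as np

GOLDEN = (np.sqrt(5) - 1) / 2  # golden-section ratio, ~0.618

def line_search(f, x, s, a=0.0, b=1.0, tol=1e-6):
    """Minimize g(alpha) = f(x + alpha * s) over [a, b] by golden-section search."""
    g = lambda alpha: f(np.asarray(x) + alpha * np.asarray(s))
    while b - a > tol:
        c = b - GOLDEN * (b - a)   # interior points of the current bracket
        d = a + GOLDEN * (b - a)
        if g(c) < g(d):
            b = d                  # minimum lies in [a, d]
        else:
            a = c                  # minimum lies in [c, b]
    return (a + b) / 2             # best alpha along direction s
```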
8. Direct Search Methods
□ Direct search methods use function values only.
□ In single-variable function optimization, there are only two search directions in which a point can be modified: the positive x-direction or the negative x-direction.
□ In multivariable function optimization, each variable can be modified in either the positive or the negative direction, thereby totaling 2^N different ways (see the sketch below).
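A quick way to see the 2^N count is to enumerate the sign choices, one + or − per variable (the perturbation size `delta` is an illustrative assumption):

```python
from itertools import product

def neighbours(x, delta=0.5):
    """All 2**N points reachable by perturbing each variable by +delta or -delta."""
    return [[xi + s * delta for xi, s in zip(x, signs)]
            for signs in product((+1, -1), repeat=len(x))]

print(len(neighbours([0.0, 0.0, 0.0])))  # 2**3 = 8 different ways for N = 3
```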
9. Box’s Evolutionary Optimization Method
□ Developed by G. E. P. Box in 1957.
□ The algorithm requires (2^N + 1) points, of which 2^N are corner points of an N-dimensional hypercube centred on the other point.
□ All (2^N + 1) function values are compared and the best point is identified.
□ In the next iteration, another hypercube is formed around this best point.
□ If at any iteration an improved point is not found, the size of the hypercube is reduced.
□ This process continues until the hypercube becomes small enough (i.e., ∥Δ∥ < ϵ).
10. Algorithm for Box’s Evolutionary Optimization Method
⚫ Step 1 Choose an initial point x(0) and size reduction parameters Δi for all design variables, i = 1, 2, . . . , N. Choose a termination parameter ϵ. Set x̄ = x(0).
⚫ Step 2 If ∥Δ∥ < ϵ, Terminate; Else create 2^N points by adding and subtracting Δi/2 from each variable at the point x̄.
⚫ Step 3 Compute function values at all (2^N + 1) points. Find the point having the minimum function value. Designate the minimum point to be x̄.
⚫ Step 4 If x̄ = x(0), reduce size parameters Δi = Δi/2 and go to Step 2; Else set x(0) = x̄ and go to Step 2.
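A minimal sketch of these steps in Python (the test function and initial Δ are illustrative assumptions; ties are broken by requiring strict improvement, which avoids cycling between equally good corners):

```python
import numpy as np
from itertools import product

def box_evolutionary(f, x0, delta, eps=1e-6):
    """Box's evolutionary optimization: shrink a hypercube around the best point."""
    x0 = np.asarray(x0, dtype=float)
    delta = np.asarray(delta, dtype=float)
    while np.linalg.norm(delta) >= eps:            # Step 2: termination check
        corners = [x0 + np.array(s) * delta / 2    # the 2**N hypercube corners
                   for s in product((+1, -1), repeat=x0.size)]
        best = min(corners, key=f)                 # Step 3: best corner point
        if f(best) < f(x0):
            x0 = best                              # Step 4: recentre on the best point
        else:
            delta = delta / 2                      # Step 4: no improvement, shrink
    return x0

# Illustrative run: minimum of (x1 - 3)^2 + (x2 - 2)^2 is at (3, 2)
f = lambda x: (x[0] - 3)**2 + (x[1] - 2)**2
print(box_evolutionary(f, [0.0, 0.0], [2.0, 2.0]))  # approximately [3, 2]
```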
11. Box’s Evolutionary Optimization Method
⚫ In the above algorithm, x(0) is always set as the current best point.
⚫ Thus, at the end of the simulation, x(0) becomes the obtained optimum point.
⚫ It is evident from the algorithm that at most 2^N function evaluations are made at each iteration.
⚫ Thus, the required number of function evaluations increases exponentially with N.
13. ⚫ It is interesting to note that although the minimum point is found, the algorithm does not terminate at this step.
⚫ Since the current point is the minimum, no other point can be found better than x(0) = (3, 2)T.
⚫ Therefore, in subsequent iterations, the value of the size parameter will continue to decrease (according to Step 4 of the algorithm).
⚫ When the value ∥Δ∥ becomes smaller than ϵ, the algorithm terminates.
14. Simplex Search Method
□ The number of points in the initial simplex is much smaller than in Box’s evolutionary optimization method.
□ This reduces the number of function evaluations required in each iteration.
□ For N variables, only (N + 1) points are used in the initial simplex.
□ It is important that the points chosen for the initial simplex do not form a zero-volume N-dimensional simplex.
□ Thus, in a function with two variables, the three points chosen for the simplex should not lie along a line (see the degeneracy check below).
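The zero-volume condition can be checked directly: the N + 1 chosen points are degenerate exactly when the N edge vectors from one vertex are linearly dependent. A small sketch (the sample points are illustrative):

```python
import numpy as np

def is_degenerate(points, tol=1e-12):
    """True if N+1 points in N dimensions span (near-)zero simplex volume."""
    p = np.asarray(points, dtype=float)
    edges = p[1:] - p[0]               # N edge vectors from the first vertex
    return abs(np.linalg.det(edges)) < tol

print(is_degenerate([[0, 0], [1, 1], [2, 2]]))  # True: three collinear points
print(is_degenerate([[0, 0], [1, 0], [0, 1]]))  # False: a proper triangle
```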
15. Simplex Search Method
□ At each iteration, the worst point in the simplex is found first.
□ Then, a new simplex is formed from the old simplex by fixed rules that steer the search away from the worst point in the simplex.
□ The extent of steering depends on the relative function values of the simplex. Four different situations may arise depending on the function values.
16. Simplex Search Method
□ This algorithm was originally proposed by Spendley et al. (1962) and later modified by Nelder and Mead (1965).
□ First, the centroid (xc) of all points except the worst point is determined.
□ Thereafter, the worst point in the simplex is reflected about the centroid, and a new point xr is found.
□ If the function value at this point is better than the best point in the simplex, the reflection is considered to have taken the simplex to a good region.
□ Thus, an expansion along the direction from the centroid to the reflected point is made.
17. Simplex Search Method
□ If the function value at the reflected point is worse than the worst point in the simplex, the reflection is considered to have taken the simplex to a bad region of the search space.
□ Thus, a contraction in the direction from the centroid to the reflected point is made. The amount of contraction is controlled by a factor β (a negative value of β is used).
□ If the function value at the reflected point is better than the worst point in the simplex, a contraction is made with a positive β value.
□ In the default scenario, the new point is the reflected point itself.
□ The obtained new point replaces the worst point in the simplex, and the algorithm continues with the new simplex.
18. Algorithm: Simplex Search Method
⚫ Step 1 Choose γ > 1, β ∈ (0, 1), and a termination parameter ϵ. Create an initial simplex.
⚫ Step 2 Find xh (the worst point), xl (the best point), and xg (the point next to the worst). Calculate the centroid of all points except the worst: xc = (1/N) Σi≠h xi.
⚫ Step 3 Calculate the reflected point xr = 2xc − xh. Set xnew = xr.
□ If f(xr) < f(xl), set xnew = (1 + γ)xc − γxh (expansion);
□ Else if f(xr) ≥ f(xh), set xnew = (1 − β)xc + βxh (contraction);
□ Else if f(xg) < f(xr) < f(xh), set xnew = (1 + β)xc − βxh (contraction).
□ Calculate f(xnew) and replace xh by xnew.
⚫ Step 4 If √( Σi (f(xi) − f(xc))² / (N + 1) ) ≤ ϵ, Terminate; Else go to Step 2.
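A minimal sketch of this algorithm (γ, β, the initial simplex, and the iteration cap are illustrative assumptions; the termination test uses the standard deviation of the function values, matching Step 4 up to the choice of reference value):

```python
import numpy as np

def simplex_search(f, simplex, gamma=1.5, beta=0.5, eps=1e-6, max_iter=500):
    """Simplex search with the reflection/expansion/contraction rules of Step 3."""
    pts = np.asarray(simplex, dtype=float)          # (N+1) x N array of points
    for _ in range(max_iter):
        vals = np.array([f(p) for p in pts])
        h, l = np.argmax(vals), np.argmin(vals)     # worst (xh) and best (xl)
        xh, fg = pts[h], sorted(vals)[-2]           # fg = f at next-to-worst (xg)
        xc = (pts.sum(axis=0) - xh) / (len(pts) - 1)  # centroid excluding xh
        xr = 2 * xc - xh                            # reflected point
        if f(xr) < vals[l]:
            xnew = (1 + gamma) * xc - gamma * xh    # expansion
        elif f(xr) >= vals[h]:
            xnew = (1 - beta) * xc + beta * xh      # contraction (toward xh)
        elif f(xr) > fg:
            xnew = (1 + beta) * xc - beta * xh      # contraction (reflected side)
        else:
            xnew = xr                               # default: keep the reflection
        pts[h] = xnew                               # replace the worst point
        if np.std([f(p) for p in pts]) <= eps:      # Step 4 termination test
            break
    return pts[np.argmin([f(p) for p in pts])]

# Illustrative run on f(x) = (x1 - 3)^2 + (x2 - 2)^2
f = lambda x: (x[0] - 3)**2 + (x[1] - 2)**2
print(simplex_search(f, [[0, 0], [1, 0], [0, 1]]))  # approximately [3, 2]
```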
21. Hooke-Jeeves Pattern Search Method
⚫ The pattern search method works by iteratively creating a set of search directions. The created search directions should completely span the search space.
⚫ In an N-dimensional problem, this requires at least N linearly independent search directions.
⚫ In the Hooke-Jeeves method, a combination of exploratory moves and pattern moves is made iteratively.
⚫ An exploratory move is performed systematically in the vicinity of the current point to find the best point around the current point.
⚫ Thereafter, two such points are used to make a pattern move.
22. 1. Algorithm of Exploratory Move
Assume that the current solution (the base point) is denoted by xc. Assume also that each variable xi is perturbed by Δi. Set i = 1 and x = xc. (A sketch in code follows the description of this move.)
□ Step 1 Calculate f = f(x), f+ = f(xi + Δi), and f− = f(xi − Δi).
□ Step 2 Find fmin = min(f, f+, f−). Set x to the point that corresponds to fmin.
□ Step 3 Is i = N? If no, set i = i + 1 and go to Step 1; Else x is the result, go to Step 4.
□ Step 4 If x ≠ xc, success; Else failure.
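A direct transcription of these steps (the perturbation sizes `delta` are an assumed input):

```python
import numpy as np

def exploratory_move(f, xc, delta):
    """Perturb each variable by +/- delta_i in turn, keeping the best point."""
    x = np.asarray(xc, dtype=float).copy()
    for i in range(x.size):                     # Steps 1-3: one variable at a time
        candidates = [x.copy()]
        for d in (+delta[i], -delta[i]):
            y = x.copy()
            y[i] += d
            candidates.append(y)
        x = min(candidates, key=f)              # keep the point giving fmin
    success = not np.allclose(x, np.asarray(xc, dtype=float))  # Step 4
    return x, success
```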
23. Exploratory Move
□ In the exploratory move, the current point is perturbed in the positive and negative directions along each variable, one at a time, and the best point is recorded.
□ The current point is changed to the best point at the end of each variable perturbation.
□ If the point found at the end of all variable perturbations is different from the original point, the exploratory move is a success; otherwise the exploratory move is a failure.
□ In either case, the best point is considered to be the outcome of the exploratory move.
24. 2. Pattern Move
⚫ A new point is found by jumping from the current best point x(k) along the direction connecting the previous best point x(k−1) and the current base point x(k), as follows:
xp(k+1) = x(k) + (x(k) − x(k−1))
⚫ The Hooke-Jeeves method comprises an iterative application of an exploratory move in the locality of the current point and a subsequent jump using the pattern move.
⚫ If the pattern move does not take the solution to a better region, the pattern move is not accepted and the extent of the exploratory search is reduced.
25. Algorithm of Pattern Move
⚫ Step 1 Choose a starting point x(0), variable increments Δi (i = 1, 2, . . . , N), a step reduction factor α > 1, and a termination parameter ϵ. Set k = 0.
⚫ Step 2 Perform an exploratory move with x(k) as the base point. Say x is the outcome of the exploratory move. If the exploratory move is a success, set x(k+1) = x and go to Step 4; Else go to Step 3.
⚫ Step 3 Is ∥Δ∥ < ϵ? If yes, Terminate; Else set Δi = Δi/α for i = 1, 2, . . . , N and go to Step 2.
⚫ Step 4 Set k = k + 1 and perform the pattern move: xp(k+1) = x(k) + (x(k) − x(k−1)).
⚫ Step 5 Perform another exploratory move using xp(k+1) as the base point. Let the result be x(k+1).
⚫ Step 6 Is f(x(k+1)) < f(x(k))? If yes, go to Step 4; Else go to Step 3.
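Combining the two moves per Steps 1-6, reusing `exploratory_move` from the earlier sketch (α, the starting Δ, and the test function are illustrative assumptions):

```python
import numpy as np

def hooke_jeeves(f, x0, delta, alpha=2.0, eps=1e-6):
    """Hooke-Jeeves pattern search: exploratory moves plus pattern jumps."""
    xk = np.asarray(x0, dtype=float)                  # Step 1
    delta = np.asarray(delta, dtype=float)
    while True:
        x, ok = exploratory_move(f, xk, delta)        # Step 2
        if ok:
            x_old, xk = xk, x
            while True:
                xp = xk + (xk - x_old)                # Step 4: pattern move
                x_new, _ = exploratory_move(f, xp, delta)  # Step 5
                if f(x_new) < f(xk):                  # Step 6: accept and repeat
                    x_old, xk = xk, x_new
                else:
                    break                             # fall through to Step 3
        if np.linalg.norm(delta) < eps:               # Step 3: shrink or stop
            return xk
        delta = delta / alpha

# Illustrative run on f(x) = (x1 - 3)^2 + (x2 - 2)^2
f = lambda x: (x[0] - 3)**2 + (x[1] - 2)**2
print(hooke_jeeves(f, [0.0, 0.0], [0.5, 0.5]))  # approximately [3, 2]
```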
27. Gradient-based Methods
□ The direct search methods described above require many function evaluations to converge to the minimum point.
□ Gradient-based methods exploit the derivative information of the function and are usually faster search methods.
□ Where the derivative information is easily available, gradient-based methods are very efficient.
28. Search Direction
□ The first derivative ∇f(x(t)) at any point x(t) represents the direction of maximum increase of the function value.
29. Search Direction
□ To find a point with a smaller function value, we should ideally search opposite to the first-derivative direction, that is, along the −∇f(x(t)) direction.
□ Any search direction s(t) leading to a point with a smaller function value than that at the current point x(t) is useful. Thus, a search direction s(t) that satisfies the following relation is a descent direction.
Descent direction: A search direction s(t) is a descent direction at point x(t) if the condition ∇f(x(t)) · s(t) ≤ 0 is satisfied in the vicinity of the point x(t).
Successive iterates are generated as x(k+1) = x(k) + α·s(k).
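For example, the direction s(t) = −∇f(x(t)) always satisfies this condition, since ∇f · (−∇f) = −∥∇f∥² ≤ 0. A quick check (using the numerical `gradient` helper sketched earlier; the function is illustrative):

```python
f = lambda x: (x[0] - 3)**2 + (x[1] - 2)**2   # illustrative function
g = gradient(f, [0.0, 0.0])                   # `gradient` from the earlier sketch
print(float(g @ -g) <= 0)                     # True: -grad f is a descent direction
```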
30. Cauchy’s (steepest descent) Method
□ The steepest descent method uses the gradient vector at each point as the search direction for each iteration.
□ The search direction used in Cauchy’s method is the negative of the gradient at any particular point x(t):
s(k) = −∇f(x(k)).
31. Cauchy’s (steepest descent) Method
□ Since this direction gives the maximum descent in function values, it is also known as the steepest descent method.
□ At every iteration, the derivative is computed at the current point and a unidirectional search is performed along the negative of this derivative direction to find the minimum point along that direction.
□ The minimum point becomes the current point and the search is continued from there.
□ The algorithm continues until a point having a small enough gradient vector is found. This algorithm guarantees improvement in the function value at every iteration.
32. Algorithm: Cauchy’s (steepest descent) Method
□ Step 1 Choose a maximum number of iterations M to be performed, an initial point x(0), two termination parameters ϵ1, ϵ2, and set k = 0.
□ Step 2 Calculate ∇f(x(k)), the first derivative at the point x(k).
□ Step 3 If ∥∇f(x(k))∥ ≤ ϵ1, Terminate; Else if k ≥ M, Terminate; Else go to Step 4.
□ Step 4 Perform a unidirectional search to find α(k) using ϵ2 such that f(x(k+1)) = f(x(k) − α(k)∇f(x(k))) is minimum. One criterion for terminating this search is |∇f(x(k+1)) · ∇f(x(k))| ≤ ϵ2.
□ Step 5 Is ∥x(k+1) − x(k)∥ / ∥x(k)∥ ≤ ϵ1? If yes, Terminate; Else set k = k + 1 and go to Step 2.
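A compact sketch of Steps 1-5, reusing the `gradient` and `line_search` helpers from the earlier sketches (the bracket for α and the tolerances are illustrative assumptions):

```python
import numpy as np

def cauchy(f, x0, M=100, eps1=1e-6, eps2=1e-6):
    """Cauchy's steepest descent: unidirectional search along -grad f."""
    x = np.asarray(x0, dtype=float)               # Step 1
    for k in range(M):                            # Step 3: stop after M iterations
        g = gradient(f, x)                        # Step 2
        if np.linalg.norm(g) <= eps1:             # Step 3: small enough gradient
            return x
        alpha = line_search(f, x, -g, a=0.0, b=1.0, tol=eps2)  # Step 4
        x_new = x - alpha * g
        # Step 5 (guarded against division by zero at the origin):
        if np.linalg.norm(x_new - x) / max(np.linalg.norm(x), eps1) <= eps1:
            return x_new
        x = x_new
    return x

# Illustrative run on f(x) = (x1 - 3)^2 + (x2 - 2)^2
f = lambda x: (x[0] - 3)**2 + (x[1] - 2)**2
print(cauchy(f, [0.0, 0.0]))  # approximately [3, 2]
```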