2. INTRODUCTION
Optimization is the process of finding the conditions that give the maximum or minimum value of a function. It consists of finding the "best available" values of some objective function over a defined domain, covering a variety of types of objective functions and types of domains.
3. The elements of the mathematical statement of optimization are:
• Objective function
• Constraints
  • Equality constraints
  • Inequality constraints
• Optimization goal
  • Maximizing (ex: cooling, production...)
  • Minimizing (ex: cost)
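To make these elements concrete, a generic mathematical statement of an optimization problem can be written out; this is the standard textbook form, not a formula taken from these slides:

$$\min_{x} f(x) \quad \text{subject to} \quad h_i(x) = 0,\; i = 1,\dots,m, \qquad g_j(x) \le 0,\; j = 1,\dots,p$$

Here $f$ is the objective function, the $h_i$ are the equality constraints, and the $g_j$ are the inequality constraints; a maximization goal is handled by minimizing $-f$.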
4. Unconstrained Optimization (without using derivatives)
• Line search (one variable)
  • Dichotomous Search
  • Golden Section Method
• Multi-dimensional search
  • Cyclic Coordinate Method
  • Hooke-Jeeves Method
Multi-Dimensional Constrained Optimization
• Using a penalty function
5. Main Goal of Optimization Techniques
[Figure: graph of f(x), with the minimizer x* and the minimum value f(x*) marked]
The optimization methods are used to find the minimizer x* of f(x).
6. Main Goal of Optimization Techniques
Find Xsol so that F(Xsol) is minimum.
Xsol may be a single variable or a vector of more than one variable.
7. Dichotomous Search
• One-dimensional optimization approach
• Derivative-free method
• Applies only to unimodal functions
• Sequential search method
• Minimizes the objective function over a certain interval
8. Dichotomous Search
• One-dimensional optimization approach
Works on an objective function that depends on only one variable.
Examples
Objective functions dependent on 1 variable:
$f(x) = e^{-x} - \cos x + 0.5$
$f(x) = x\left(1 - \tfrac{2}{3}x\right)$
Objective functions dependent on 2 variables:
$f(x_1, x_2) = (x_1 - 2)^4 + (x_1 - 2x_2)^2$
$f(x_1, x_2) = 3x_1^2 - 2x_1 x_2 + x_2^2 + 4x_1 + 3x_2$
9. Dichotomous Search
• One-dimensional optimization approach
• Derivative-free method
There is no need to compute the derivatives of the objective function.
10. Dichotomous Search
• One-dimensional optimization approach
• Derivative-free method
• Applies only to unimodal functions
A unimodal function on an interval [a, b] has exactly one point in the interval where a maximum or a minimum occurs.
14. Dichotomous Search
• One-dimensional optimization approach
• Derivative-free method
• Applies only to unimodal functions
• Sequential search method
The same sequence of steps is repeated until the desired accuracy is achieved; the result of each evaluation influences the location of the subsequent test points.
15. Dichotomous Search
• One-dimensional optimization approach
• Derivative-free method
• Applies only to unimodal functions
• Sequential search method
• Minimizes the objective function over a certain interval
[Figure: graph of f(x), with x* and f(x*) marked]
Find the value x* in the interval so that f(x*) is minimum.
16. Dichotomous Search
1. Consider a unimodal function f which is known to have a minimum in the interval [a, b].
2. The interval [a, b] is called the range of uncertainty.
3. The solution Xsol can be located by repeatedly reducing the range of uncertainty by half until a sufficiently small range is obtained.
17. Dichotomous Search
[Figure: plot of f(x) for 0.6 ≤ x ≤ 2.2, with a1 and b1 marked on the x-axis]
1. Consider a unimodal function f which is known to have a minimum in the interval [a1, b1].
The function is $f(x) = \frac{x^3}{3} - \frac{x^2}{2} - x + 2$, with $x \in [a, b]$ and $[a_1, b_1] = [1, 2]$.
18. Dichotomous Search
[Figure: the same plot of f(x), with the interval [a1, b1] highlighted as the range of uncertainty]
1. Consider a unimodal function f which is known to have a minimum in the interval [a1, b1].
2. The interval [a1, b1] is called the range of uncertainty.
19. Dichotomous Search
[Figure: the same plot of f(x), with [a1, b1] marked as the range of uncertainty]
1. Consider a unimodal function f which is known to have a minimum in the interval [a1, b1].
2. The interval [a1, b1] is called the range of uncertainty.
3. Xsol can be located by repeatedly reducing the range of uncertainty by half until a sufficiently small range is obtained.
20. Dichotomous Search
[Figure: the same plot of f(x); the 1st range of uncertainty [a1, b1] and the desired final range of uncertainty [an, bn] are marked]
1. Consider a unimodal function f which is known to have a minimum in the interval [a, b].
2. The interval [a, b] is called the range of uncertainty.
3. Xsol can be located by repeatedly reducing the range of uncertainty by half until a sufficiently small range is obtained:
$x_{sol} = \frac{a_n + b_n}{2}$
How to repeatedly reduce the range of uncertainty by half?
21. Dichotomous Search
[Figure: the same plot of f(x); the test points c1 and d1 are placed a distance 2ε apart around the center of [a1, b1]]
How to repeatedly reduce the range of uncertainty by half?
1. Place the two first test points c and d symmetrically on both sides of the centerline, at a distance 2ε from each other:
$c_1 = \frac{a_1 + b_1}{2} - \varepsilon$ ; $d_1 = \frac{a_1 + b_1}{2} + \varepsilon$
22. Dichotomous Search
[Figure: the same plot of f(x); f(c1) and f(d1) are marked, and the 2nd range of uncertainty [a2, b2] replaces the 1st]
How to repeatedly reduce the range of uncertainty by half?
2. Check whether f(c) < f(d). Here f(c1) > f(d1), so the range of uncertainty is subdivided to the right. The half of the range corresponding to the higher function value is eliminated, which leaves the new interval [a2, b2] = [c1, b1].
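As a numeric check of this step, using the function from slide 17 and the value $\varepsilon = 10^{-3}$ that appears later on slide 33 (these particular numbers are computed here; they are not printed on the slides):

$c_1 = \frac{1+2}{2} - 0.001 = 1.499$, $\quad d_1 = \frac{1+2}{2} + 0.001 = 1.501$

$f(c_1) \approx 0.50025 > f(d_1) \approx 0.49975$

so the left half is eliminated and $[a_2, b_2] = [1.499, 2]$, matching the subdivision to the right described above.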
23. Dichotomous Search
[Figure: the same plot of f(x); the new test points c2 and d2 are placed around the center of the 2nd range of uncertainty [a2, b2]]
How to repeatedly reduce the range of uncertainty by half?
3. The new interval is then divided into two equal parts and two new points are placed on both sides of the centerline.
24. Dichotomous Search
[Figure: the same plot of f(x); f(c2) < f(d2), and the 3rd range of uncertainty [a3, b3] replaces the 2nd]
How to repeatedly reduce the range of uncertainty by half?
4. Repeat step 2. Here f(c2) < f(d2), so the range of uncertainty is subdivided to the left: [a3, b3] = [a2, d2].
25. Dichotomous Search
The steps of this method are summarized as follows:
1. The initial interval [a, b] is divided into two equal parts.
2. The two first test points c and d are placed symmetrically on both sides of the centerline, at a distance 2ε from each other: $c = \frac{a+b}{2} - \varepsilon$ ; $d = \frac{a+b}{2} + \varepsilon$.
3. The function values corresponding to both points, f(c) and f(d), are calculated and compared.
4. The half of the range corresponding to the higher function value is eliminated, which leaves a new, smaller interval [a, b].
5. The new interval is then divided into two equal parts and two new points are placed on both sides of the centerline.
6. This sequence is continued, always eliminating the half corresponding to the higher function value, until the desired range of uncertainty is reached.
27. Dichotomous Search
The algorithm for the above steps is the following:
1. Initialize:
   i. Choose a small ε > 0.
   ii. Choose the desired length of uncertainty Δ, with 0 ≤ Δ ≤ b1 − a1.
Note: Reaching the desired length of uncertainty, (bn − an) < Δ, is the end condition, and you should choose Δ greater than 2ε to be able to reach it.
Why? Given a < c < d < b with (d − c) = 2ε, the condition (b − a) < 2ε can't be satisfied at any iteration; thus choose Δ > 2ε.
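The note above can be backed by a short derivation (it is not on the slides, but follows directly from the update rule): each iteration keeps either $[a, d]$ or $[c, b]$, and both have length $\frac{b-a}{2} + \varepsilon$, so the interval length $L_k = b_k - a_k$ obeys

$$L_{k+1} = \frac{L_k}{2} + \varepsilon \quad\Longrightarrow\quad L_n = 2\varepsilon + \frac{L_1 - 2\varepsilon}{2^{\,n-1}}$$

Thus $L_n$ decreases toward $2\varepsilon$ but never reaches it, and the end condition $L_n < \Delta$ is attainable only if $\Delta > 2\varepsilon$.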
28. Dichotomous Search
The algorithm for the above steps is the following:
1. Initialize:
   i. Choose a small ε > 0.
   ii. Choose the desired length of uncertainty Δ, with 0 ≤ Δ ≤ b − a.
   Note: You should choose Δ greater than 2ε to reach the end condition.
2. If $b_k - a_k \le \Delta$, stop; the solution is $x_{sol} = \frac{a_k + b_k}{2}$. Otherwise compute $c_k = \frac{a_k + b_k}{2} - \varepsilon$ and $d_k = \frac{a_k + b_k}{2} + \varepsilon$, and go to step 3.
3. If $f(c_k) < f(d_k)$, then $a_{k+1} = a_k$ and $b_{k+1} = d_k$. Else $a_{k+1} = c_k$ and $b_{k+1} = b_k$.
4. Replace k by k+1 and repeat steps 2 to 4.
29. Dichotomous Search
This algorithm is used to find a minimum. The same method can be used to find a maximum by finding the minimum of the negative of the function.
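Stated as a formula (a standard identity, consistent with this slide):

$$\max_{x \in [a,b]} f(x) = -\min_{x \in [a,b]} \big({-f(x)}\big)$$

so the maximizer of $f$ is exactly the minimizer of $-f$, and the algorithm above can be reused unchanged on $-f$.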
32. How?
1. Any function in MATLAB needs inputs and outputs.
   Inputs:
   a. Objective function
   b. [a b] interval
   c. Δ
   d. ε
   Outputs:
   a. Xsol
   b. Number of iterations
2. You need a loop that breaks at the stopping condition (a for/while loop).
3. You need an if condition inside the loop (2 different choices); there is no need to store all values of a and b.
4. Update a or b at each iteration.
5. Define c and d at each iteration.
6. Evaluate f(c) and f(d) at each iteration (use feval; 2 function evaluations per iteration).
Recall the algorithm above; a sketch along these lines is shown below.
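Following this checklist, here is a minimal MATLAB sketch. The function name dichotomous_search and the parameter names are illustrative choices (the slides do not prescribe them); feval is used as the slide suggests.

```matlab
function [xsol, itr] = dichotomous_search(f, a, b, delta, epsilon)
% Dichotomous search for the minimizer of a unimodal f on [a, b].
% Inputs : f       - objective function handle, e.g. @(x) x.^3/3 - x.^2/2 - x + 2
%          a, b    - endpoints of the initial range of uncertainty
%          delta   - desired length of uncertainty (choose delta > 2*epsilon)
%          epsilon - half the distance between the two test points
% Outputs: xsol    - midpoint of the final range of uncertainty
%          itr     - number of iterations performed
itr = 0;
while (b - a) > delta                  % end condition: (b_k - a_k) <= delta
    c = (a + b)/2 - epsilon;           % test point left of the centerline
    d = (a + b)/2 + epsilon;           % test point right of the centerline
    if feval(f, c) < feval(f, d)       % 2 function evaluations per iteration
        b = d;                         % eliminate right part: [a, b] <- [a, d]
    else
        a = c;                         % eliminate left part:  [a, b] <- [c, b]
    end                                % only the current a and b are stored
    itr = itr + 1;
end
xsol = (a + b)/2;                      % xsol = (a_n + b_n)/2
end
```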
33. Apply it on the example shown in the previous slides:
$f(x) = \frac{x^3}{3} - \frac{x^2}{2} - x + 2$, $x \in [a, b]$, $[a_1, b_1] = [1, 2]$
$\varepsilon = 10^{-3}$, $\Delta = 10^{-2}$ (recall that Δ > 2ε)
34. Apply it on the example shown in the previous slides:
$f(x) = \frac{x^3}{3} - \frac{x^2}{2} - x + 2$, $x \in [a, b]$, $[a_1, b_1] = [1, 2]$
$\varepsilon = 10^{-3}$, $\Delta = 10^{-2}$ (recall that Δ > 2ε)
Result: Xsol = 1.6209, itr = 7
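These numbers can be reproduced with the hypothetical dichotomous_search sketch from slide 32:

```matlab
f = @(x) x.^3/3 - x.^2/2 - x + 2;              % objective from slide 17
[xsol, itr] = dichotomous_search(f, 1, 2, 1e-2, 1e-3)
% xsol = 1.6209, itr = 7   (matches the slide)
```

As a sanity check, solving $f'(x) = x^2 - x - 1 = 0$ gives the exact minimizer $x^* = \frac{1+\sqrt{5}}{2} \approx 1.6180$, which lies within $\Delta/2 = 0.005$ of the returned Xsol.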