2. TOPICS TO BE COVERED
1) Optimization and its application
2) Different types of extremes
3) Unconstrained minimization
4) Convergence criteria
5) One-dimensional line search
3. INTRODUCTION
The process of optimization is the process of obtaining the ‘best’, provided it is possible
to measure and change what is ‘good’ or ‘bad’. In practice, one wants the
‘most’ or ‘maximum’ (e.g., salary) or the ‘least’ or ‘minimum’ (e.g., expenses).
Optimization practice is, thus, the collection of techniques, methods, procedures, and
algorithms that can be used to find the optima.
4. APPLICATIONS OF OPTIMIZATION
1) Modeling, characterization, and design of devices, circuits, and systems
2) Design of tools, instruments, and equipment
3) Design of structures and buildings
4) Approximation theory, curve fitting, solution of systems of equations
5) Forecasting, production scheduling, quality control
6) Neural networks and adaptive systems
7) Inventory control, accounting, budgeting
5. DIFFERENT TYPES OF EXTREMES IN OBJECTIVE FUNCTION CURVE
(Labels A–F refer to points marked on the objective-function curve in the original slide's figure.)
Local minima: A, C, F
Local maxima: B, E
Global minimum: C
Global maximum: E
Inflection point: D
6. With functions of two variables, a new type of critical point appears: the saddle point.
The required conditions for it are:
1) ∂f/∂x = ∂f/∂y = 0
2) (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)² < 0
The figure on the original slide is a graph of the function f(x, y) = x² − y², which has a saddle point at the origin.
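As a quick numerical check of these two conditions (a minimal sketch, not from the slides; the step size h is an arbitrary small choice), the partial derivatives of f(x, y) = x² − y² can be approximated by central finite differences at the origin:

# Numerical second-derivative test for f(x, y) = x**2 - y**2 at the origin.
def f(x, y):
    return x**2 - y**2

h = 1e-4                       # finite-difference step (arbitrary choice)
x0, y0 = 0.0, 0.0

fx  = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
fy  = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)
fxx = (f(x0 + h, y0) - 2 * f(x0, y0) + f(x0 - h, y0)) / h**2
fyy = (f(x0, y0 + h) - 2 * f(x0, y0) + f(x0, y0 - h)) / h**2
fxy = (f(x0 + h, y0 + h) - f(x0 + h, y0 - h)
       - f(x0 - h, y0 + h) + f(x0 - h, y0 - h)) / (4 * h**2)

print("condition 1 (zero gradient):", abs(fx) < 1e-6 and abs(fy) < 1e-6)
print("condition 2 (fxx*fyy - fxy**2):", fxx * fyy - fxy**2)  # -4 < 0: saddle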
10. NEWTON-RAPHSON METHOD
It considers a linear approximation to the first derivative of the function using its
Taylor series expansion. This expression is then equated to zero to obtain
the next point. If the current point at iteration t is x^t, the point in the next
iteration is given by the following expression.
x^(t+1) = x^t − f′(x^t) / f″(x^t)
The iteration process is assumed to have converged when the derivative is close to zero:
| f′(x^t) | ≤ ε
where ε is a small quantity.
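A minimal Python sketch of this iteration (the test function, starting point, and tolerance are illustrative choices, not taken from the slides):

# Newton-Raphson search for a stationary point of f, i.e. a root of f'.
def fp(x):                         # f'(x) for the example f(x) = x**4 - 3*x**2
    return 4 * x**3 - 6 * x

def fpp(x):                        # f''(x)
    return 12 * x**2 - 6

def newton_raphson(x0, eps=1e-8, max_iter=100):
    x = x0
    for _ in range(max_iter):
        if abs(fp(x)) <= eps:      # converged: |f'(x_t)| <= eps
            return x
        x = x - fp(x) / fpp(x)     # x_(t+1) = x_t - f'(x_t) / f''(x_t)
    return x

print(newton_raphson(2.0))         # ~ 1.2247, a local minimum of the example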
15. STEEPEST DESCENT (CAUCHY’S) METHOD
This method is generally used to optimize multi-variable functions. The
search direction used in this method is the negative of the gradient at the
current point X^t.
Since this direction provides the maximum descent in function value, it is
called the steepest descent method. At every iteration, the derivative is
computed at the current point and a unidirectional search is performed
along the negative of this derivative to find the minimum point in that
direction. The minimum point becomes the current point and the
search is continued from there. The procedure continues until a point
having a small enough gradient vector is found.
16. The steps followed in this method (compute the gradient at the current point, search along its negative direction, update, and repeat) are illustrated in the sketch below.
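A minimal sketch of these steps, with a simple backtracking rule standing in for the exact unidirectional search (the test function and all parameters are illustrative choices, not taken from the slides):

import numpy as np

# Steepest descent: at each iteration, move along the negative gradient.
def f(x):
    return (x[0] - 1)**2 + 10 * (x[1] + 2)**2

def grad_f(x):
    return np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])

def steepest_descent(x0, eps=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < eps:             # small enough gradient: stop
            return x
        alpha = 1.0                             # backtracking line search
        while f(x - alpha * g) > f(x) - 1e-4 * alpha * (g @ g):
            alpha *= 0.5
        x = x - alpha * g                       # accept the step; repeat
    return x

print(steepest_descent([0.0, 0.0]))             # ~ [1, -2]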
20. OTHER LINE SEARCH ALGORITHMS (UNCONSTRAINED)
1) One-dimensional line search
Powell's quadratic interpolation algorithm (see the sketch after this list)
2) First order line search
Conjugate gradient methods
3) Second order line search descent methods
Modified Newton's method
Quasi-Newton methods
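As an illustration of the first item, a minimal sketch of a one-dimensional search by successive quadratic (parabolic) interpolation, in the spirit of Powell's algorithm but not the full method (the bracket, test function, and tolerance are illustrative choices):

# Quadratic interpolation line search: fit a parabola through three
# bracketing points (f(b) < f(a) and f(b) < f(c)) and jump to its vertex.
def quadratic_search(f, a, b, c, tol=1e-8, max_iter=100):
    for _ in range(max_iter):
        fa, fb, fc = f(a), f(b), f(c)
        num = (b - a)**2 * (fb - fc) - (b - c)**2 * (fb - fa)
        den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if den == 0:                  # degenerate parabola: give up
            break
        x = b - 0.5 * num / den       # vertex of the fitted parabola
        if abs(x - b) < tol:
            break
        if x < b:                     # keep three points bracketing the minimum
            if f(x) < fb: c, b = b, x
            else:         a = x
        else:
            if f(x) < fb: a, b = b, x
            else:         c = x
    return b

print(quadratic_search(lambda t: (t - 0.7)**2 + 1, 0.0, 0.6, 1.0))   # ~ 0.7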
Editor's Notes
Suppose that at a point x*, ∂f/∂x = 0 and the first non-vanishing higher-order derivative, ∂ⁿf/∂xⁿ, is of order n.
If n is odd, x* is an inflection point, while if n is even, x* is a local optimum. In the latter case there are two possibilities: if that derivative is positive, x* is a local minimum; if it is negative, x* is a local maximum.
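A small symbolic sketch of this test (the classify helper and the example functions are illustrative, not from the notes):

import sympy as sp

# Classify a stationary point x* by the order n of the first
# non-vanishing derivative: n odd -> inflection point; n even -> local
# optimum, with the sign of that derivative deciding minimum or maximum.
x = sp.Symbol('x')

def classify(f, x_star, max_order=8):
    for n in range(2, max_order + 1):
        d = sp.diff(f, x, n).subs(x, x_star)
        if d != 0:
            if n % 2 == 1:
                return "inflection point"
            return "local minimum" if d > 0 else "local maximum"
    return "undetermined"

print(classify(x**3, 0))    # inflection point (n = 3, odd)
print(classify(x**4, 0))    # local minimum   (n = 4, even, derivative > 0)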
It is assumed that F(λ) is unimodal over the interval [a, b], i.e., that it has a minimum λ* within the interval and that F(λ) is strictly descending for λ < λ* and strictly ascending for λ > λ*.
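Under exactly this unimodality assumption, an interval-reduction line search is guaranteed to converge; a minimal golden-section sketch (one such method, with an illustrative F and tolerance):

# Golden-section search: shrinks [a, b] while keeping the minimum
# bracketed; valid precisely under the unimodality assumption above.
def golden_section(F, a, b, tol=1e-6):
    r = (5 ** 0.5 - 1) / 2            # golden ratio conjugate, ~0.618
    x1 = b - r * (b - a)
    x2 = a + r * (b - a)
    while b - a > tol:
        if F(x1) < F(x2):             # minimum lies in [a, x2]
            b, x2 = x2, x1
            x1 = b - r * (b - a)
        else:                         # minimum lies in [x1, b]
            a, x1 = x1, x2
            x2 = a + r * (b - a)
    return (a + b) / 2

print(golden_section(lambda lam: (lam - 0.3)**2, 0.0, 1.0))   # ~ 0.3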