Optimization Methods
in Engineering Design
Day-6
Course Materials
• Arora, Introduction to Optimum Design, 3e, Elsevier (https://www.researchgate.net/publication/273120102_Introduction_to_Optimum_design)
• Parkinson, Optimization Methods for Engineering Design, Brigham Young University (http://apmonitor.com/me575/index.php/Main/BookChapters)
• Iqbal, Fundamental Engineering Optimization Methods, BookBoon (https://bookboon.com/en/fundamental-engineering-optimization-methods-ebook)
Direct Search Methods
• The direct search methods are gradient-free methods that solve the
optimization problem based on function evaluations.
– Nelder-Mead Simplex Algorithm. Originally derived for solving
parameter estimation problems, the Nelder-Mead algorithm also
solves unconstrained optimization problems.
– Stochastic methods. Simulated annealing is the most common.
– Evolutionary algorithms. These algorithms are modeled after
biological evolution, e.g., genetic algorithm (GA).
– Swarm intelligence. These methods model the flocking behavior in
intelligent species, e.g., particle swarm optimization (PSO), ant
colony optimization (ACO), etc.
– Metaheuristics. General population-based heuristic methods, e.g., harmony search.
Nelder-Mead Algorithm
• The Nelder-Mead algorithm finds the minimum by enclosing it in a
simplex, i.e., a convex hull of 𝑛 + 1 non-degenerate vertices, and
gradually shrinking it.
– The algorithm is implemented in MATLAB ‘fminsearch’ function.
• Let 𝑥0, 𝑥1, … , 𝑥𝑛 define the vertices of the simplex with associated
function values 𝑓𝑗 = 𝑓 𝑥𝑗 , 𝑗 = 0, . . , 𝑛; the NM method evaluates
one or two additional points in each iteration, followed by one of
the following transformations on the simplex:
– Reflection away from the worst vertex, i.e., the one with highest
function value.
– Shrinkage towards the best vertex, i.e., the one with least value.
– Expansion if the function value improves.
– Contraction in the neighborhood of a minimum.
Nelder-Mead Transformations
• Reflect
• Expand
• Contract
• Shrink
Nelder-Mead Algorithm
1. Initialize: given a point 𝑥0, compute 𝑥𝑗 = 𝑥0 + ℎ𝑗𝑒𝑗, 𝑗 = 1, … , 𝑛 as
the vertices of simplex 𝑆. Choose constants 𝛼, 𝛽, 𝛾, 𝛿 to satisfy
𝛾 > 1, 𝛾 > 𝛼 > 0, 0 < 𝛽 < 1, 0 < 𝛿 < 1; for example, choose
𝛼 = 1, 𝛽 = 0.5, 𝛾 = 2, 𝛿 = 0.5
2. Check termination. Exit if marginal improvement in the function
value is below tolerance, or if the simplex size falls below a certain
minimum criterion.
3. Ordering. Rank the vertices of 𝑆 in the order of function value
𝑓0 ≤ 𝑓1 ≤ ⋯ ≤ 𝑓𝑛
4. Find centroid. Let 𝑓ℎ = max_𝑗 𝑓𝑗 , 𝑓𝑠 = max_(𝑗≠ℎ) 𝑓𝑗 , 𝑓𝑙 = min_𝑗 𝑓𝑗 ; compute the centroid of all vertices except the worst one:
𝑐 = (1/𝑛) Σ_(𝑗≠ℎ) 𝑥𝑗
Nelder-Mead Algorithm
5. Reflect. Compute the reflection point, 𝑥𝑟 = 𝑐 + 𝛼(𝑐 − 𝑥ℎ).
– Expand. If 𝑓𝑟 < 𝑓𝑙, compute the expansion point, 𝑥𝑒 = 𝑐 + 𝛾(𝑥𝑟 − 𝑐); if 𝑓𝑒 < 𝑓𝑟, replace 𝑥ℎ by 𝑥𝑒, otherwise replace 𝑥ℎ by 𝑥𝑟
– Replace. If 𝑓𝑙 ≤ 𝑓𝑟 < 𝑓𝑠, replace 𝑥ℎ by 𝑥𝑟
– Contract outside. If 𝑓𝑠 ≤ 𝑓𝑟 < 𝑓ℎ, compute the contraction point, 𝑥𝑐 = 𝑐 + 𝛽(𝑥𝑟 − 𝑐); if 𝑓𝑐 < 𝑓𝑟, replace 𝑥ℎ by 𝑥𝑐, otherwise go to 6
– Contract inside. If 𝑓𝑟 > 𝑓ℎ, compute the contraction point, 𝑥𝑐 = 𝑐 + 𝛽(𝑥ℎ − 𝑐); if 𝑓𝑐 < 𝑓ℎ, replace 𝑥ℎ by 𝑥𝑐, otherwise go to 6
6. Shrink. If no point was accepted in step 5, compute 𝑛 new vertices as: 𝑥𝑗 = 𝑥𝑙 + 𝛿(𝑥𝑗 − 𝑥𝑙)
7. Go to 2
7. Go to 2
Nelder-Mead Algorithm
%Nelder-Mead unconstrained optimization algorithm
%inputs: starting point x0; objective function handle @f
%f=@(x) 1/2*x'*[4 -1;-1 3]*x+[3 2]*x;
%x0=[0 0]';
a=1;b=.5;c=2;d=.5; %reflection, contraction, expansion, shrinkage coefficients
nvar=2; %number of variables
I=eye(nvar);
tol=1e-8;
pts=cell(1,nvar+1); %simplex vertices
fpts=zeros(1,nvar+1); %function values at the vertices
pts{1}=x0;
fpts(1)=f(x0);
xsum=x0; %running sum of all vertices
for i=1:nvar,
pts{i+1}=x0+I(:,i); %unit steps along the coordinate directions
fpts(i+1)=f(pts{i+1});
xsum=xsum+pts{i+1};
end
Nelder-Mead Algorithm
while max(abs(diff(fpts)))>tol,
[fsort,ix]=sort(fpts);
xb=pts{ix(1)}; %best vertex
xh=pts{ix(end)}; %worst vertex
xsum=xsum-xh;
xc=xsum/nvar; %centroid of the remaining vertices
xr=xc+a*(xc-xh); %reflection point
bin=discretize(f(xr),[-Inf fsort([1 end-1 end]) Inf]);
switch bin
case 1, xe=xc+c*(xr-xc); if f(xe)<f(xr), xr=xe; end %expand
case 3, xco=xc+b*(xr-xc); if f(xco)<f(xr), xr=xco; end %contract outside
case 4, xci=xc+b*(xh-xc); %contract inside
if f(xci)<f(xh), xr=xci;
else %shrink all vertices towards the best vertex
for i=1:nvar+1, pts{i}=xb+d*(pts{i}-xb); fpts(i)=f(pts{i}); end
xr=pts{ix(end)}; xsum=d*xsum+(1-d)*nvar*xb;
end
end
fpts(ix(end))=f(xr); xsum=xsum+xr;
pts{ix(end)}=xr; disp(min(fpts))
end
disp([xsum'/(nvar+1) f(xsum/(nvar+1))])
Design Example: Insulated Spherical Tank
Problem: choose the insulation thickness (𝑡) to minimize the life-cycle
costs of a spherical tank of radius 𝑅.
Life-cycle cost: 𝑐2𝐴𝑡 + 𝑐3𝐺 + 𝑐4𝐺 ⋅ 𝑝𝑤𝑓
Annual heat gain: 𝐺 = 365 × 24 × Δ𝑇 × 𝐴/(𝜌𝑡) [Wh]
Surface area: 𝐴 = 4𝜋𝑅² [m²]
Thermal resistivity: 𝜌 [m⋅s⋅°C/J]
Equipment insulation cost: 𝑐2 [$/m³]
Equipment refrigeration cost: 𝑐3 [$/Wh]
Annual operating cost: 𝑐4 [$/Wh]
Present worth factor: 𝑝𝑤𝑓 = (1/𝑖)[1 − 1/(1 + 𝑖)ⁿ]
Note, there are no constraints in this problem
Design Example: Insulated Spherical Tank
% spherical insulated tank lifecycle cooling costs, Arora p.26
% objective: min life-cycle cost; variable: thickness (t)
R=3; %radius [m]
c1=10e3; %thermal resistivity [Cm/W]
c2=1e3; %insulation cost/m3
c3=1; %installation cost/Whr
c4=.01; %operating cost/Whr
dT=5; %temp difference
ir=.05; %interest rate
n=10; %life in years
A=4*pi*R^2;
G=@(t) 365*24*dT*A/(c1*t); %annual heat gain [Whr]
pwf=(1-1/(1+ir)^n)/ir; %present worth factor
LC=@(t) c2*A*t+(c3+pwf*c4)*G(t); %life-cycle cost
f=LC; %objective
fminsearch(f,.1) %use Nelder-Mead algorithm
ans =
0.0687
Hooke-Jeeves Pattern Search
• The pattern search works by locally evaluating a set of points along
N linearly independent search directions and polling the results.
• It uses a combination of exploratory moves and pattern moves to find the optimum
– An exploratory move is performed in the vicinity of the current point along the search directions
– The results of the exploratory moves are polled to find an improved objective and the new design point
– Two successive successful moves along the same direction trigger a pattern move that jumps to a new location
Pattern Search Algorithm
• Initialize: choose an initial point 𝑥⁰, mesh sizes Δ𝑖, 𝑖 = 1, … , 𝑛, expansion factor 𝛼 > 1, and termination parameter 𝜖
• For 𝑘 = 0, 1, …
– Check termination. If Δ < 𝜖, quit
– Perform a set of exploratory moves as: 𝑥^𝑘 ± Δ𝑖, 𝑖 = 1, … , 𝑛
– Poll (check the objective at) the perturbed points and compare with the current point. If the poll is successful, i.e., if an improved objective is found, move to that point and increase the mesh size by the factor 𝛼
– If the poll is unsuccessful, set Δ𝑖 = Δ𝑖/𝛼 and repeat the exploratory moves
– If two successful polls result in moves along the same direction, make a pattern move as 𝑥𝑝^(𝑘+1) = 𝑥^𝑘 + (𝑥^𝑘 − 𝑥^(𝑘−1))
– Set 𝑘 = 𝑘 + 1
Pattern Search
• For example, assume that the initial point is: x0 = [2.1 1.7]
• Using a mesh size of one, the mesh points are selected as:
[1 0] + x0 = [3.1 1.7]
[0 1] + x0 = [2.1 2.7]
[-1 0] + x0 = [1.1 1.7]
[0 -1] + x0 = [2.1 0.7]
• The next point is x1 = [1.1 1.7]
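• A minimal sketch of the exploratory-move/polling logic is given below; the test objective and starting point are assumed for illustration, and the pattern move and per-coordinate mesh sizes are omitted for brevity, so this is not a full Hooke-Jeeves implementation:
%minimal coordinate pattern search sketch (f and x0 are assumed)
f=@(x) (x(1)-1)^2+2*(x(2)+0.5)^2; %assumed test objective
x=[2.1 1.7]; del=1; alpha=2; tol=1e-6;
D=[1 0; 0 1; -1 0; 0 -1]; %exploratory directions
while del>tol
fx=f(x); success=false;
for i=1:4 %poll the four perturbed points
if f(x+del*D(i,:))<fx, x=x+del*D(i,:); success=true; break, end
end
if success, del=alpha*del; else, del=del/alpha; end %expand or contract the mesh
end
disp([x f(x)])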
Simulated Annealing
• Simulated annealing (SA) is modeled after the annealing of solids, i.e., heating a solid to its liquid state and slowly cooling it while maintaining thermal equilibrium.
• During annealing, the atoms undergo random displacements. A move with a negative change in the energy state is accepted; a positive change is accepted with probability 𝑃 = 𝑒^(−Δ𝐸/𝑘𝑇), where 𝑘 is the Boltzmann constant and 𝑇 is the absolute temperature.
• When applied to engineering problems, the objective function is analogous to the energy, and the Boltzmann constant is replaced by the average change in the objective function.
• The algorithm is started at some initial temperature parameter 𝑇0, which is gradually reduced to simulate the annealing process.
Simulated Annealing
• At each setting of the temperature variable, random design changes are introduced; a change with a lower objective value is accepted; a change with a higher objective value is accepted with probability 𝑃 = 𝑒^(−Δ𝐸/(Δ𝐸_avg 𝑇)).
• Once steady-state is reached, or after a certain number of changes,
the temperature is reduced and the process repeated.
• Although simulated annealing can be used for continuous
problems, it is especially effective when applied to combinatorial
problems.
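• As a one-step illustration of this acceptance rule (all values below are assumed; Δ𝐸_avg would be maintained as a running average in a full implementation):
%illustrative acceptance test for one perturbation (f, x, xnew, dEavg, T are assumed values)
f=@(x) x.^2; x=1.0; xnew=1.2; dEavg=0.5; T=1.0;
dE=f(xnew)-f(x); %change in the objective ("energy")
if dE<0 || rand()<exp(-dE/(dEavg*T)) %accept improvements; accept worse moves with probability P
x=xnew;
end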
Simulated Annealing
• Let 𝑇(𝑘) describe the schedule for annealing the temperature 𝑇; then the probability of acceptance of a design change is given as:
ℎ(Δ𝐸) = 𝑒^(−𝐸_(𝑘+1)/𝑇) / (𝑒^(−𝐸_(𝑘+1)/𝑇) + 𝑒^(−𝐸_𝑘/𝑇)) ≅ 1/(1 + 𝑒^(Δ𝐸/𝑇)), where Δ𝐸 = 𝐸_(𝑘+1) − 𝐸_𝑘
• The probability distribution of the design perturbations is assumed to be normal, i.e., 𝑔(Δ𝑥) = (2𝜋𝑇)^(−𝑛/2) 𝑒^(−Δ𝑥²/(2𝑇))
• Theoretically, the global minimum of the energy function 𝐸(𝑥) can be reached if 𝑇0 is selected large enough and 𝑇(𝑘) is selected to decrease no faster than 𝑇𝑘 = 𝑇0/ln 𝑘
• For faster quenching, the above schedule may be replaced by: 𝑇𝑘 = 𝑇0/𝑘
Simulated Annealing
• A schedule for 𝑇(𝑘) can be based on the acceptance probability of the worst-case design: let 𝑃𝑠 and 𝑃𝑓 denote the desired probability at the beginning and at termination; then a schedule for 𝑇 is developed as:
𝑇𝑠 = −1/ln 𝑃𝑠 ; 𝑇𝑓 = −1/ln 𝑃𝑓 ; 𝐹 = (𝑇𝑓/𝑇𝑠)^(1/(𝑁−1)) ; 𝑇_(𝑛+1) = 𝐹𝑇𝑛
For example, let 𝑃𝑠 = 0.5, 𝑃𝑓 = 10⁻⁸, 𝑁 = 100; then 𝑇𝑠 = 1.4426, 𝑇𝑓 = 0.054278, 𝐹 = 0.9674.
• An exponential schedule using a factor 𝐹 < 1 can also be drawn, where 𝑇𝑘 = 𝑇0𝑒^((𝐹−1)𝑘)
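• The schedule above is easy to reproduce; the sketch below simply evaluates the formulas for the example values and generates the full temperature sequence:
%reproduce the worst-case acceptance schedule for Ps=0.5, Pf=1e-8, N=100
Ps=0.5; Pf=1e-8; N=100;
Ts=-1/log(Ps); Tf=-1/log(Pf); %Ts = 1.4427, Tf = 0.0543
F=(Tf/Ts)^(1/(N-1)); %F = 0.9674
T=Ts*F.^(0:N-1); %geometric schedule: T(1)=Ts, T(N)=Tf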
Simulated Annealing
1. Pick an initial design; start at a high value of temperature variable
(𝑇); pick 𝑁𝑆, number of cycles before temperature reduction, and
optionally N, the total number of perturbations.
2. Start a cycle. Perturb one variable at a time; accept the new point
if perturbation results in a lower value of the objective function. If
perturbation results in a higher objective, accept it with
probability: 𝑃 = 𝑒^(−Δ𝐸/(Δ𝐸_avg 𝑇)), where Δ𝐸_avg is the running average of the accepted objective variations.
3. After completing 𝑁𝑆 cycles (or once steady-state has been reached), lower the temperature as per the desired schedule, e.g., 𝑇_(𝑛+1) = 𝐹𝑇𝑛
4. Go to 2
Simulated Annealing
• Simulated annealing was developed for unconstrained problems. In
the case of constrained problems, possible approaches are:
– Reject the infeasible solutions generated in the process
– Use a penalty function to add the constraints to the objective
• Simulated annealing is particularly suited to discrete problems. In
the case of continuous problems, SA is more effective when
constraint surface is highly irregular with multiple local minima.
• For general continuous problems, gradient based methods (e.g.,
GRG) are much faster and hence the preferred choice.
Simulated Annealing Code
%initialize: specify nvar, xl, xu
d=xu-xl; x=(xu+xl)/2; %initial design
xopt=x; kx=zeros(1,nvar); %current optimal, count
T0=1; T=T0*ones(1,nvar); %set temperature
Pf=1e-6; m=10; %final acceptance probability; cycle length
while any(T > -1/log(Pf)) %anneal until all temperatures reach Tf = -1/ln(Pf)
x=xopt;
for j=1:m %start of cycle
dx=2*T.*(rand(1,nvar)-.5); %apply random variation
dX=diag(dx);
for k=1:nvar
dx=limits(x,dX(k,:),xl,xu); %adjust limits
fx=f(x+dx); gx=g(x+dx); %objective & constraints
px=exp(-fx/T(k))/(exp(-fx/T(k))+exp(-f(x)/T(k)));
%acceptance probability
Simulated Annealing Code
if any(gx>0), continue %constraint violation
elseif fx>f(x) && rand()>px, continue %random accept
else x=x+dx;
if f(x)<f(xopt), xopt=x; end %record current opt
kx=kx+(dx~=0); %acceptance count
end, end, end
T=T0./log(kx); %adjust temperature
if all(kx>100*nvar), break, end %exceed count
end
function dx = limits (x,dx,xl,xu)
while any(x+dx<xl)
idl=find(x+dx<xl);
dx(idl)=(1-rand(size(idl))).*dx(idl); %adjust lower bound
end
while any(x+dx>xu),
idu=find(x+dx>xu);
dx(idu)=(1-rand(size(idu))).*dx(idu); %adjust upper bound
end, end
Design Example: Symmetric Two-Bar Truss
Problem: design a symmetrical two-bar truss of minimum mass to support a fixed load 𝑃. The truss has height 𝐻 and span 𝐵.
Design variables: diameter (𝑑), height (𝐻)
Member length and area: 𝑙 = √((𝐵/2)² + 𝐻²), 𝐴 = 𝜋𝑑𝑡
Total weight: 𝑊 = 2𝜌𝑙𝐴
Constraints:
Axial stress: 𝜎 = 𝑃𝑙/(2𝜋𝑑𝑡𝐻) ≤ 𝜎𝑎
Buckling stress: 𝜎𝑏 = 𝜋²(𝑑² + 𝑡²)𝐸/(8𝑙²) ≤ 𝜎𝑎
Deflection: 𝜀 = 𝑃𝑙³/(2𝜋𝑑𝑡𝐻²𝐸) ≤ 𝜀𝑚𝑎𝑥
Design Example: Symmetric Two-Bar Truss
Let the design variables be: diameter 𝑑 and height 𝐻
Then, the design optimization problem is defined as:
Objective: min_(𝑑,𝐻) 2𝜋𝑑𝑡𝜌√((𝐵/2)² + 𝐻²)
Subject to: 𝜎𝑏/𝜎 − 1 ≤ 0, 𝜎/𝜎𝑎 − 1 ≤ 0, 𝜀/𝜀𝑚𝑎𝑥 − 1 ≤ 0
For a particular problem, let:
𝑃 = 66 kips; 𝐵 = 60 in; 𝑡 = 0.15 in; 𝜌 = 0.3 lb/in³; 𝐸 = 30 × 10⁶ lb/in²; 𝜎𝑎 = 1 × 10⁵ psi; 𝜀𝑚𝑎𝑥 = 0.25 in
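• The formulation maps directly into code; the sketch below defines the weight and the constraint ratios exactly as written on this slide, with x = [d H] as an assumed variable ordering (no solver call is included):
%two-bar truss model functions, x=[d H] (assumed ordering); units: lb, in
P=66e3; B=60; t=0.15; rho=0.3; E=30e6; siga=1e5; epsmax=0.25;
l=@(x) sqrt((B/2)^2+x(2)^2); %member length
W=@(x) 2*rho*l(x)*pi*x(1)*t; %total weight (objective)
sig=@(x) P*l(x)/(2*pi*x(1)*t*x(2)); %axial stress
sigb=@(x) pi^2*E*(x(1)^2+t^2)/(8*l(x)^2); %buckling stress
defl=@(x) P*l(x)^3/(2*pi*x(1)*t*x(2)^2*E); %deflection
g=@(x) [sigb(x)/sig(x)-1; sig(x)/siga-1; defl(x)/epsmax-1]; %constraint ratios as on the slide
%check against the tabulated SA design: W([1.0441 27.14]) is about 11.94 lb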
SA Example: Two-bar Truss
• A selection of SA results for the two-bar truss (the two rightmost columns are the per-variable temperatures 𝑇):
 𝑑        𝐻         𝑊         𝑇
 1.2500   27.5000   14.3835   1.0000   1.0000
 1.0441   27.6401   12.0417   1.0000   1.0000
 1.0441   27.2462   11.9632   1.4427   0.4343
 1.0441   27.1396   11.9421   0.9102   0.3693
 1.0441   27.1367   11.9415   0.6213   0.2507
 1.0441   27.1297   11.9401   0.5139   0.2354
 1.0404   27.3274   11.9377   0.2423   0.1573
 1.0404   27.3126   11.9348   0.2404   0.1555
 1.0404   27.3104   11.9343   0.2387   0.1552
 1.0404   27.3048   11.9332   0.2378   0.1547
 1.0404   27.3038   11.9330   0.2276   0.1495
 1.0404   27.3028   11.9328   0.2269   0.1493
 1.0348   27.5877   11.9247   0.2039   0.1368
 1.0348   27.5817   11.9235   0.2036   0.1367
 1.0348   27.5758   11.9223   0.2036   0.1365
 1.0348   27.5754   11.9222   0.1910   0.1306
SA Example: Two-bar Truss
• Design with three variables: 𝐻, 𝑑, 𝑡
Objective: min_(𝑑,𝐻,𝑡) 2𝜋𝑑𝑡𝜌√((𝐵/2)² + 𝐻²)
Subject to: 𝜎𝑏/𝜎 − 1 ≤ 0, 𝜎/𝜎𝑎 − 1 ≤ 0, 𝜀/𝜀𝑚𝑎𝑥 − 1 ≤ 0
• SA results: 𝐻 = 29.9010 in, 𝑑 = 1.8631 in, 𝑡 = 0.0799 in; 𝑓 = 11.8801 lb
• Note, the problem has multiple optima
Genetic Algorithm
• GA is inspired by the process of natural selection in biological evolution.
• GA is characterized by three basic operations that guide reproduction:
– Selection of the fittest for mating
– Crossover of genetic information during mating
– Mutation, i.e., introduction of random changes during reproduction
• When applied to optimization problems, the design variables are termed genes, a chromosome represents a trial solution to the problem, and a population is a collection of chromosomes.
• Members of the population are chosen for mating based on their fitness. Application of crossover and mutation yields a new generation with better average fitness than the previous one.
• The process continues until the improvement becomes negligible.
Genetic Algorithm
The steps in the application of a GA are:
• Determine a coding scheme (genetic representation) of variables; two
possible choices are value representation and binary representation.
• Pick a crossover and mutation rate; typical values for binary
representation are 0.8 and 0.001-0.01, respectively.
• Develop an initial population of (20-100) design choices represented by
chromosomes evenly spread in the design space.
• Use a fitness function to evaluate and rank the chromosomes.
• Select a mating pool from the population using one of the following:
– Roulette selection. The probability of a chromosome being picked is in
proportion to its fitness.
– Tournament selection. A subset of the population is randomly selected, and the members with the highest fitness are included in the mating pool.
Genetic Algorithm
• Use crossover among pairs of parents to generate two children for the
next generation:
– Binary coding. Use a crossover point to divide the chromosome. Copy the
first part and cross the second one among children.
– Value coding. Each gene is separately considered for crossover. Single-
point, uniform, or blend crossover can be considered.
• Occasionally, perform mutation to randomly change the design:
– Change individual bits in binary coding.
– Change parameter values (genes) in value coding.
• Evaluate the new generation for fitness. Retain individuals with higher
fitness for reproduction.
• The parent generation also competes in the selection process (elitism).
• Continue for a specified number of generations or until the improvement in the average fitness value falls below a specified tolerance.
Binary Coding
• Binary coding was originally used to represent design choices
– Precision: (𝑈𝑖 − 𝐿𝑖)/(2ⁿ − 1) (the smallest representable change in a variable)
– Base-10 integer value: 𝑥_int10 = (2ⁿ − 1)/(𝑈 − 𝐿) ⋅ (𝑥 − 𝐿)
– Real value: 𝑥 = (𝑈 − 𝐿)/(2ⁿ − 1) ⋅ 𝑥_int10 + 𝐿
– Example: let 𝑥 = 3.567 with a range of 0 to 10; then for an 8-bit representation, 𝑥_int10 = (255/10) × 3.567 = 91 (a short sketch follows below)
• A chromosome is created by combining the binary strings of the design variables together.
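• A short sketch of the encode/decode arithmetic for this 8-bit example (rounding to the nearest integer is assumed):
%binary coding of x=3.567 on [L,U]=[0,10] with n=8 bits
L=0; U=10; n=8; x=3.567;
xint=round((2^n-1)/(U-L)*(x-L)); %integer value: 91
bits=dec2bin(xint,n); %8-bit string: '01011011'
xdec=(U-L)/(2^n-1)*bin2dec(bits)+L; %decoded value: 3.5686 (precision 10/255 = 0.0392)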
Value Coding
• Design variables are assembled together in a chromosome using
numbers. In MATLAB, this can be done using a structure array.
𝑥 = {𝑔𝑒𝑛𝑒1, 𝑔𝑒𝑛𝑒2, … , 𝑔𝑒𝑛𝑒𝑛}
• Scaling. For best results, objective function and constraints are
scaled by their maximum values, i.e., values attained when the
design parameters are at their maximum.
Fitness
• If there are no constraints, fitness equals the value of the objective
function 𝑓.
• If constraints are present, we may use a penalty parameter to write
𝑓𝑖𝑡𝑛𝑒𝑠𝑠 = 𝑓 + 𝑃𝑔, where 𝑔 is the maximum constraint violation
given as: 𝑔 = max 0, 𝑔1, 𝑔2, … , 𝑔𝑚 , where 𝑔 = 0 indicates a
feasible design.
• Alternatively, fitness may be based on the maximum objective value among the feasible designs in the current population, i.e., 𝑓𝑖𝑡𝑛𝑒𝑠𝑠 = 𝑓_max,feas + 𝑔.
Crossover
• Let the crossover probability = 0.8. Generate a random number to
determine if crossover is to be performed.
• Single-point crossover. Generate a random integer between 1 and 𝑛
to determine the crossover point at gene 𝑖
• Uniform crossover. Generate a random number 𝑟 for each of the 𝑛
genes; perform crossover for individual genes
• Blend crossover. Generate a random number 𝑟 for each of the 𝑛
genes, then obtain the children genes as:
𝑦1 = 𝑟𝑥1 + 1 − 𝑟 𝑥2, 𝑦2 = 1 − 𝑟 𝑥1 + 𝑟𝑥2
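• A sketch of the three value-coded crossover variants for one pair of parents (the parent gene values are assumed for illustration):
%value-coded crossover sketch for two parents with n genes (parent values assumed)
x1=[0.28 0.14 0.90]; x2=[0.05 0.41 0.30]; n=numel(x1);
i=randi(n-1); %single-point: crossover after gene i
y1=[x1(1:i) x2(i+1:n)]; y2=[x2(1:i) x1(i+1:n)];
m=rand(1,n)<0.5; %uniform: swap individual genes where m is true
y1u=x1; y1u(m)=x2(m); y2u=x2; y2u(m)=x1(m);
r=rand(1,n); %blend: convex combination of each gene
y1b=r.*x1+(1-r).*x2; y2b=(1-r).*x1+r.*x2;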
Mutation
• Mutation: pick a mutation parameter 0 ≤ 𝛽 < 1 (e.g., 𝛽 = 0.5).
– For 𝛽 = 0, the mutation probability is uniform in successive generations
– For 𝛽 > 0, the mutation probability gradually decreases
• Compute the uniformity parameter 𝛼 as: 𝛼 = (1 − (𝑗 − 1)/𝑀)^𝛽, where 𝑗 is the current generation number and 𝑀 is the total number of generations.
• Pick a random number 𝑟 between 𝑥min and 𝑥max; then perform the mutation as:
If 𝑟 ≤ 𝑥, then 𝑦 = 𝑥min + (𝑟 − 𝑥min)^𝛼 (𝑥 − 𝑥min)^(1−𝛼)
If 𝑟 > 𝑥, then 𝑦 = 𝑥max − (𝑥max − 𝑟)^𝛼 (𝑥max − 𝑥)^(1−𝛼)
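• A sketch of this mutation rule for a single gene (the generation counter 𝑗, total generations 𝑀, and the variable range are assumed for illustration):
%dynamic mutation of one gene x on [xmin,xmax] (j, M, beta assumed for illustration)
xmin=0; xmax=1; x=0.6; j=10; M=50; beta=0.5;
alpha=(1-(j-1)/M)^beta; %uniformity parameter
r=xmin+rand()*(xmax-xmin); %random number between xmin and xmax
if r<=x, y=xmin+(r-xmin)^alpha*(x-xmin)^(1-alpha);
else, y=xmax-(xmax-r)^alpha*(xmax-x)^(1-alpha);
end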
Dynamic Mutation
• Mutation parameter: 𝛽 = 1 means uniform mutation; 𝛽 = 0 means
no mutation.
• Uniformity parameter: 𝛼 = 1 means mutated variable is picked
uniformly over its range; 𝛼 < 1 favors values near the current value
of the variable.
Elitism
• Combine the N children with N parents to obtain 2N designs
• Sort the designs by fitness values and pick the N most fit designs
Design Example: Three-bar Truss
• Three-bar truss (Ref: Parkinson, p.5-4)
Design variables: 𝑥1 = 𝐴1, 𝑥2 = 𝐴2
The normalized objective and constraints are obtained as:
𝑓 = 1.429𝑥1 + 0.57𝑥2
𝑔1: 0.3386 − 1.354𝑥1 − 1.323𝑥2 ≤ 0
𝑔2: 0.2463 − 1.261𝑥1 − 1.232𝑥2 ≤ 0
𝑔3: −2𝑥1 ≤ 0, 𝑔4: −2𝑥2 ≤ 0
GA Example
• First generation:
Design   𝑥1       𝑥2       𝑓        𝑔        𝑓𝑖𝑡𝑛𝑒𝑠𝑠
1        0.2833   0.1408   0.4852   0        0.4852
2        0.0248   0.0316   0.0535   0.2632   0.2632 + 0.8657
3        0.1384   0.4092   0.4314   0        0.4314
4        0.3229   0.1386   0.5406   0        0.5406
5        0.0481   0.1625   0.1615   0.0585   0.0585 + 0.8657
6        0.4921   0.2845   0.8657   0        0.8657
• Roulette selection of parents; let 𝛾 = 1.5 (fitness pressure)
Design   𝑓𝑖𝑡𝑛𝑒𝑠𝑠   (1/𝑓𝑖𝑡𝑛𝑒𝑠𝑠)^𝛾   Normalized   Cumulative
1        0.4852    2.9588          0.2424       0.2424
2        1.1289    0.8337          0.0683       0.3107
3        0.4314    3.5293          0.2892       0.5999
4        0.5406    2.5159          0.2061       0.8061
5        0.9242    1.1255          0.0922       0.8983
6        0.8657    1.2415          0.1017       1.0
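• The roulette columns in the second table can be reproduced in a few lines; the sketch below uses the fitness values listed above with the pressure exponent 𝛾 = 1.5:
%roulette-wheel selection probabilities for the first generation (gamma=1.5)
fit=[0.4852 1.1289 0.4314 0.5406 0.9242 0.8657]; %fitness values from the table
gam=1.5;
w=(1./fit).^gam; %selection weights: 2.9588 0.8337 3.5293 2.5159 1.1255 1.2415
p=w/sum(w); %normalized: 0.2424 0.0683 0.2892 0.2061 0.0922 0.1017
c=cumsum(p); %cumulative: 0.2424 0.3107 0.5999 0.8061 0.8983 1.0
parent=find(rand()<=c,1); %spin the wheel to pick one parent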
GA Example
Design Example: Coil Spring
Problem: design a minimum-mass spring to carry a given axial load 𝑃 without material failure, while satisfying minimum deflection and minimum surge wave frequency requirements.
Design variables: mean coil diameter (𝐷), wire diameter (𝑑), number of active coils (𝑁)
Design equations:
Spring mass: 𝑚 = (𝜋²/4)(𝑁 + 𝑄)𝐷𝑑²𝜌
Load deflection: 𝑃 = 𝐾𝛿, where 𝐾 = 𝑑⁴𝐺/(8𝐷³𝑁)
Shear stress: 𝜏 = 8𝑘𝑃𝐷/(𝜋𝑑³)
Stress concentration factor: 𝑘 = (4𝐷 − 𝑑)/(4(𝐷 − 𝑑)) + 0.615𝑑/𝐷
Frequency of surge waves: 𝜔 = (𝑑/(2𝜋𝑁𝐷²))√(𝐺/(2𝜌))
Design Example: Coil Spring
The optimization problem is formulated as:
Objective: min 𝑓(𝑁, 𝑑, 𝐷) = (𝑁 + 𝑄)𝐷𝑑²
Constraints: 𝜏 ≤ 𝜏𝑎, 𝜔 ≥ 𝜔0, 𝐷 + 𝑑 ≤ 𝐷0, 𝛿 = 𝑃/𝐾 ≥ Δ
Variable bounds: 𝑑𝑚𝑖𝑛 ≤ 𝑑 ≤ 𝑑𝑚𝑎𝑥, 𝐷𝑚𝑖𝑛 ≤ 𝐷 ≤ 𝐷𝑚𝑎𝑥, 𝑁𝑚𝑖𝑛 ≤ 𝑁 ≤ 𝑁𝑚𝑎𝑥
Assume the following parameter values:
𝑃 = 10 lb, Δ = 0.5 in, 𝛾 = 0.285 lb/in³, 𝜔0 = 100 Hz, 𝐷0 = 1.5 in, 𝜏𝑎 = 80,000 lb/in², 𝐺 = 1.15 × 10⁷ lb/in², 𝑄 = 2
GA Example: Coil Spring
% Coil spring model (Arora, p. 43)
% Design variables: coil diameter (D), wire diameter (d), number of active coils (N)
xl=[.5 .01 1]; %lower limits
xu=[1.5 .15 11]; %upper limits
x=(xl+xu)/2; %trial design
%parameters
P=10; %load [lb]
Q=2; %inactive coils
Del=.5; %min deflection [in]
Dmax=1.5; %max diameter [in]
gam=.285; %weight density [lb/in3]
gr=386; %gravity [in/sec2]
oml=100; %min frequency [Hz]
G=1.15e7; %shear modulus [lb/in2]
taumax=80e3; %max shear stress [lb/in2]
rho=gam/gr; %mass density
GA Example: Coil Spring
m=@(x) pi^2/4*(x(3)+Q)*x(1)*x(2)^2*rho; %spring mass
K=@(x) x(2)^4*G/(8*x(1)^3*x(3)); %spring constant
k=@(x) (x(1)-x(2)/4)/(x(1)-x(2))+.615*x(2)/x(1);
%stress concentration factor
tau=@(x) 8*k(x)*P*x(1)/(pi*x(2)^3); %shear stress
om=@(x) x(2)/(2*pi*x(1)^2*x(3))*sqrt(G*gr/(2*gam));
%surge frequency
del=@(x) P/K(x); %deflection
f=@(x) x(1)*x(2)*x(2)*(x(3)+Q); %objective, x=[D,d,N]
g=@(x) [tau(x)/taumax-1; oml/om(x)-1; Del/del(x)-1;
(x(1)+x(2))/Dmax-1]; %constraints
MATLAB Optimization Problem Structure
• Problem structure, specified as a structure with the following fields:
– objective — Objective function
– fitnessfcn — Fitness function
– nvars — number of variables
– x0 — Starting point
– Aineq — Matrix for linear inequality constraints
– bineq — Vector for linear inequality constraints
– Aeq — Matrix for linear equality constraints
– beq — Vector for linear equality constraints
– lb — Lower bound for x
– ub — Upper bound for x
– nonlcon — Nonlinear constraint function
– solver — ‘ga'
– options — Options created with optimoptions or psoptimset
– rngstate — Optional field to reset the state of the RNG
GA Example: Coil Spring
Opt.nvars=3; %number of variables
Opt.fitnessfcn=f; %fitness function
Opt.nonlcon=@(x) deal(g(x),[]); %nonlinear constraints
Opt.lb=xl; %lower bounds
Opt.ub=xu; %upper bounds
Opt.solver='ga'; %solver
Opt.IntCon=[3] %integer variables
Opt.x0=[1 .08 6]; %initial guess
Opt.options=gaoptimset(@ga) %GA options
>> [x,fval]=ga(Opt) %Solve the problem using GA
Optimization terminated: average change in the penalty fitness
value less than options.FunctionTolerance
and constraint violation is less than options.ConstraintTolerance.
x =
0.5601 0.0590 5.0000
fval =
0.0136
Swarm Intelligence
• Swarm intelligence models the collective behavior of species in the biological kingdom. Examples include ant and termite colonies, schools of fish, flocks of birds, herds of animals, etc.
• Swarm intelligence manifests in artificial systems composed of
intelligent agents that coordinate using decentralized control and
self-organization.
• A typical swarm intelligence system has the following
characteristics:
– It is composed of many individuals that are relatively
homogeneous;
– The interactions among the individuals are based on simple
behavioral rules that exploit only local information;
– The overall behavior of the group emerges from the interactions of
individuals with each other and with their environment.
Particle Swarm Optimization
• The design space is initialized with a random population of
solutions (particles) with associated fitness values. Particles move
around the search space with designated velocities.
• Each particle’s position is iteratively updated based on:
– its known best location (pbest), and
– the overall best location achieved by any particle (gbest).
• The update equations are:
v[] = v[] + c1*rand()*(pbest[]- present[]) + c2*rand()*(gbest[] - present[]);
present[] = present[] + v[]
where v[] is the particle velocity, present[] is the current position, pbest[]
and gbest[] are defined above, rand() is a random number between (0,1),
and c1, c2 are learning factors in the range [0,4]. Usually c1 = c2 = 2.
http://www.swarmintelligence.org/tutorials.php
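• A minimal sketch of these update equations is given below; the test function, swarm size, bounds, and iteration count are assumptions, and this is not MATLAB's particleswarm implementation:
%minimal PSO sketch: minimize an assumed test function on [-5,5]^2
f=@(x) sum(x.^2,2); n=20; d=2; lb=-5; ub=5; c1=2; c2=2;
x=lb+(ub-lb)*rand(n,d); v=zeros(n,d); %random initial positions, zero velocities
pbest=x; fp=f(x); [fg,ig]=min(fp); gbest=x(ig,:); %personal and global bests
for it=1:100
v=v+c1*rand(n,d).*(pbest-x)+c2*rand(n,d).*(repmat(gbest,n,1)-x); %velocity update
x=max(min(x+v,ub),lb); %position update with bounds enforced
fx=f(x); m=fx<fp; pbest(m,:)=x(m,:); fp(m)=fx(m); %update personal bests
[fg,ig]=min(fp); gbest=pbest(ig,:); %update global best
end
disp([gbest fg])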
Particle Swarm Optimization
• Parameters that need to be tuned in PSO include:
– The number of particles: the typical range is 20 - 40. More particles
may be included for difficult problems.
– Vmax: it determines the maximum change in particle position in an
iteration. The range of the particle may be used as Vmax. For
example, if a particle has a range [-10,10], then Vmax = 20.
– Learning factors: c1 and c2 are usually both equal to 2. Other values have been suggested, with c1 equal to c2 and in the range [0, 4].
– The stopping condition: the maximum number of iterations the PSO executes and/or the minimum error requirement.
PSO Algorithm
1. Initialize: choose neighborhood size N, inertia W, stall counter c=0,
the self-adjustment weight, c1, and social adjustment weight, c2.
2. Create an initial population of particles; set initial velocities in the
range [-r, r].
3. Compute objective function value of each particle. Record the
current best position p(i) of each particle, and the global best
position g(i).
4. Iterate: choose a random subset S of N particles; find fopt(S), the
best local fitness value, and g(S), the position of the neighbor with
best fitness.
– Update particle velocity: v = W*v + c1*u1.*(p-x) + c2*u2.*(g-x)
– Update particle position x = x + v
– Enforce the bounds: if any particle is outside the bound, set it
equal to the bound.
PSO Algorithm
5. Evaluate the objective function f(x)
– If f(x)<f(p), set p=x
– If f(x)<f(g), then
• Set c = max(0, c-1).
• If c < 2, then set W = 2*W.
• If c > 5, then set W = W/2.
– Otherwise, Set c=c+1
6. Stop if max number of iterations is exceeded, or if the relative
change in the best objective function value g over the last M
iterations is less than a tolerance parameter.
7. Go to 3.
Design Example: Three-bar Truss
• Three-bar truss (Ref: Parkinson, p.5-4)
Design variables: 𝑥1 = 𝐴1, 𝑥2 = 𝐴2
• For 𝑥1 = 𝑥2 = 0.5, we have 𝑓 = 70, 𝑔1 = 28350, 𝑔2 = 60900, 𝑔3 = 𝑔4 =
0.5; hence the scaled objective and constraints are obtained as:
𝑓 = 1.429𝑥1 + 0.57𝑥2
𝑔1: 0.3386 − 1.354𝑥1 − 1.323𝑥2 ≤ 0
𝑔2: 0.2463 − 1.261𝑥1 − 1.232𝑥2 ≤ 0
𝑔3: −2𝑥1 ≤ 0, 𝑔4: −2𝑥2 ≤ 0
MATLAB Example: Three-bar Truss
MATLAB commands
P=20e3; %load
h=40; b=60; %dimensions
l1=sqrt(h^2+b^2/4); %length
xu=[.5,.5]; %upper limit for variables
fu=abs([2*l1 h]*xu(:)); %objective fcn
cf=[2*l1 h]/fu; %scale coefficients
f=@(x) cf*x(:); %define objective function
gu=abs([9600;15e3;0;0]-[384e2 375e2; 768e2 75e3; 1 0; 0 1]*xu(:)); %maximum constraint values
cg0=[9600;15e3;0;0]./gu; %scale constraints
cg=[384e2 375e2; 768e2 75e3; 1 0; 0 1]./[gu gu];
g=@(x) cg0-cg*x(:); %define constraint function
Three-bar Truss Design Using GA
MATLAB commands
Opt=struct('solver','ga','fitnessfcn',f,'nvars',2);
Opt.x0=[.01,.01];
Opt.Aineq=-cg;
Opt.bineq=-cg0;
Opt.lb=[0,0];
Opt.ub=[.5,.5];
Opt.options=[];
[x,fval]=ga(Opt); %Solve the problem using GA
Optimization terminated: average change in the fitness
value less than options.FunctionTolerance.
>> x,fval
x =
0.0006 0.2546
fval =
0.1463
Three-bar Truss Design Using PSO
MATLAB Commands
Opt.solver='particleswarm'
Opt.objective=f
[x,fval]=particleswarm(Opt); %try PSO
Optimization ended: relative change in the objective
value over the last OPTIONS.MaxStallIterations
iterations is less than OPTIONS.FunctionTolerance.
>> x,fval
x =
0 0
fval =
0
• Note: 'particleswarm' handles only bound constraints, so the linear inequality constraints in the problem structure are ignored and the unconstrained minimum at the lower bound is returned.
Three-bar Truss Using Pattern Search
MATLAB Commands
Opt.solver='patternsearch'
[x,fval]=patternsearch(Opt); %try pattern search
Optimization terminated: mesh size less than
options.MeshTolerance.
>> x,fval
x =
0 0.2552
fval =
0.1459
Three-bar Truss Design Using SA
MATLAB Commands
Opt.solver='simulannealbnd'
[x,fval]=simulannealbnd(Opt); %try simulated annealing
Optimization terminated: change in best function value
less than options.FunctionTolerance.
>> x,fval
x =
1.0e-05 *
0.0159 0.2206
fval =
1.4882e-06
• However, using our own SA code,
x = 0.0000 0.2588
f = 0.1479
Example: Minimum Thrust Design
• Problem: Select an engine for a business jet keeping in view the
thrust requirements
• Background: aircraft thrust requirements are dictated by the
minimum thrust requirements during:
– Take off
– Climb
– Cruise
– Sustained turn
– Service ceiling
Example: Minimum Thrust Design
• Thrust requirement during cruise and constant-velocity turn:
𝑇 = 𝑞𝑆[𝐶𝐷𝑚𝑖𝑛 + 𝑘(𝑛𝑊/(𝑞𝑆))²] + Δ𝑃/𝑉
where
– 𝑞 = dynamic pressure
– 𝑆 = surface area
– 𝐶𝐷𝑚𝑖𝑛 = minimum drag coefficient
– 𝑛 = load factor
– 𝑘 = lift-induced drag coefficient
– Δ𝑃 = excess power
Example: Minimum Thrust Design
• Thrust requirement during the takeoff run: 𝑆𝐺 = 𝑉𝐿𝑂𝐹²/(2𝑎), where 𝑎 = 𝑔[(𝑇 − 𝑞𝑆𝐶𝐷𝑇𝑂)/𝑊 − 𝜇(1 − 𝑞𝑆𝐶𝐿𝑇𝑂/𝑊)]
• Thrust requirement during climb:
𝑇 = (𝑉𝑉/𝑉)𝑊 + 𝑞𝑆[𝐶𝐷𝑚𝑖𝑛 + 𝑘(𝑊/(𝑞𝑆))²]
• Thrust requirement at the service ceiling:
𝑇 = (1.667/𝑉)𝑊 + 𝑞𝑆[𝐶𝐷𝑚𝑖𝑛 + 𝑘(𝑊/(𝑞𝑆))²], where 𝑞 = (𝑊/𝑆)√(𝑘/(3𝐶𝐷𝑚𝑖𝑛))
Example: Minimum Thrust Design
gwt=38875; %gross weight[lb]
Tmin=.1799*gwt; %minimum thrust
roc=50; %rate of climb [fps]@Vcl
Sg=5000; %take off run [ft]@Vlo/sqrt(2)
Vcr=533.4; %cruise velocity [KTAS]@Acr
Vcl=171; %climb velocity [KCAS]
Vlo=112; %lift-off velocity [KCAS]
Vst=102; %stall speed [KCAS]
Acr=43000; %cruise altitude [ft]
Aceil=45000; %service ceiling [ft]
mu=.04; %ground friction
CDmin=.0225; %minimum drag coeff
CLto=.8; %lift coefficient @TO
CDto=.0325; %drag coefficient @TO
Ps=0; %energy state
dsl=.002378; %air density @SL [slug/ft3]
kd=.68756e-5; %air density variation constant
gr=32.174; %gravity
n=1; %load factor
Example: Minimum Thrust Design
e=@(x) 1.78*(1-.045*x(3)^.68)-.64; %span efficiency
k=@(x) 1/(pi*x(3)*e(x)); %lift-induced drag coefficient
S=@(x) x(2)^2/x(3);
WSR=@(x) gwt*x(3)/x(2)^2;
%TO run
rho=dsl; %sea-level
qto=1/2*rho*(1.688*Vlo)^2/2; %dynamic pressure [lb/ft2]
TWRto=@(x) (1.688*Vlo)^2/(2*gr*Sg)+qto*CDto/WSR(x)+mu*(1-qto*CLto/WSR(x)); %thrust-to-weight ratio
%climb
qcl=1/2*rho*(1.688*Vcl)^2; %dynamic pressure [lb/ft2]
TWRcl=@(x) roc/(1.688*Vcl)+qcl*(CDmin/WSR(x)+k(x)*(n/qcl)^2*WSR(x))+Ps/Vcr; %thrust-to-weight ratio
%stall
qst=1/2*rho*(1.688*Vst)^2;
CLmax=@(x) gwt/(qst*S(x)); %max lift coefficient
Example: Minimum Thrust Design
%cruise
drho=(1-kd*Acr)^4.2561;
rho=dsl*drho; %air density
rho=5.09e-4;
q=1/2*rho*(1.688*Vcr)^2; %dynamic pressure [lb/ft2]
CL=@(x) gwt/(q*S(x));
CD=@(x) CDmin+CL(x)^2/(pi*x(3)*e(x));
Tcr=@(x) q*S(x)*CD(x);
TWRcr=@(x) q*(CDmin/WSR(x)+k(x)*(n/q)^2*WSR(x))+Ps/Vcr;
%thrust-weight ratio
%CV turn
n=2; %load factor
CDtn=@(x) CDmin+k(x)*(n*gwt/(q*S(x)))^2;
Ttn=@(x) q*S(x)*CDtn(x); %thrust needed
%TWRtn=@(x) q*(CDmin/WSR(x)+k(x)*(n/q)^2*WSR(x))+Ps/Vcr;
%thrust-weight ratio
Example: Minimum Thrust Design
%ceiling
Vv=1.667; %[fps]
drho=(1-kd*Aceil)^4.2561;
rho=dsl*drho; %air density
qsc=@(x) gwt/S(x)*sqrt(k(x)/(3*CDmin));
CDsc=@(x) CDmin+k(x)*(gwt/(qsc(x)*S(x)))^2;
Tsc=@(x) Vv/sqrt(2*qsc(x)/rho)*gwt+qsc(x)*S(x)*CDsc(x);
TWRsc=@(x) Vv/sqrt(2/rho*WSR(x)*sqrt(k(x)/(3*CDmin)))+4*sqrt(k(x)*CDmin/3); %thrust-to-weight ratio
f=@(x) (10*S(x)+x(1))/gwt; %objective
g=@(x) [TWRto(x)/x(1)*gwt-1; TWRcl(x)/x(1)*gwt-1;
Tcr(x)/x(1)-1; Ttn(x)/x(1)-1; Tsc(x)/x(1)-1;
CLmax(x)/2.5-1]; %constraints
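% The deck stops at the model definition. One possible way to run it, following the
% problem-structure pattern used earlier for the coil spring, is sketched below; the
% variable ordering x = [T b AR] is inferred from the code above, and the bounds are
% assumptions, not values from the original slides.
Opt=struct('solver','ga','fitnessfcn',f,'nvars',3);
Opt.lb=[3e3 40 6]; Opt.ub=[15e3 90 12]; %assumed bounds on [thrust T(lb), span b(ft), aspect ratio AR]
Opt.nonlcon=@(x) deal(g(x),[]); %inequality constraints only
Opt.options=gaoptimset(@ga);
[x,fval]=ga(Opt)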
View publication stats
View publication stats

More Related Content

Similar to Optimum engineering design - Day 6. Classical optimization methods

2Multi_armed_bandits.pptx
2Multi_armed_bandits.pptx2Multi_armed_bandits.pptx
2Multi_armed_bandits.pptxZhiwuGuo1
 
daa-unit-3-greedy method
daa-unit-3-greedy methoddaa-unit-3-greedy method
daa-unit-3-greedy methodhodcsencet
 
Paper study: Attention, learn to solve routing problems!
Paper study: Attention, learn to solve routing problems!Paper study: Attention, learn to solve routing problems!
Paper study: Attention, learn to solve routing problems!ChenYiHuang5
 
CS8451 - Design and Analysis of Algorithms
CS8451 - Design and Analysis of AlgorithmsCS8451 - Design and Analysis of Algorithms
CS8451 - Design and Analysis of AlgorithmsKrishnan MuthuManickam
 
A machine learning method for efficient design optimization in nano-optics
A machine learning method for efficient design optimization in nano-optics A machine learning method for efficient design optimization in nano-optics
A machine learning method for efficient design optimization in nano-optics JCMwave
 
Response surface method
Response surface methodResponse surface method
Response surface methodIrfan Hussain
 
Paper Study: Melding the data decision pipeline
Paper Study: Melding the data decision pipelinePaper Study: Melding the data decision pipeline
Paper Study: Melding the data decision pipelineChenYiHuang5
 
Ch19_Response_Surface_Methodology.pptx
Ch19_Response_Surface_Methodology.pptxCh19_Response_Surface_Methodology.pptx
Ch19_Response_Surface_Methodology.pptxSriSusilawatiIslam
 
Vehicle Routing Problem using PSO (Particle Swarm Optimization)
Vehicle Routing Problem using PSO (Particle Swarm Optimization)Vehicle Routing Problem using PSO (Particle Swarm Optimization)
Vehicle Routing Problem using PSO (Particle Swarm Optimization)Niharika Varshney
 
BeyondClassicalSearch.ppt
BeyondClassicalSearch.pptBeyondClassicalSearch.ppt
BeyondClassicalSearch.pptGauravWani20
 
BeyondClassicalSearch.ppt
BeyondClassicalSearch.pptBeyondClassicalSearch.ppt
BeyondClassicalSearch.pptjpradha86
 
Chapter 5.pptx
Chapter 5.pptxChapter 5.pptx
Chapter 5.pptxTekle12
 
13Kernel_Machines.pptx
13Kernel_Machines.pptx13Kernel_Machines.pptx
13Kernel_Machines.pptxKarasuLee
 
A machine learning method for efficient design optimization in nano-optics
A machine learning method for efficient design optimization in nano-opticsA machine learning method for efficient design optimization in nano-optics
A machine learning method for efficient design optimization in nano-opticsJCMwave
 
Numerical Techniques
Numerical TechniquesNumerical Techniques
Numerical TechniquesYasir Mahdi
 

Similar to Optimum engineering design - Day 6. Classical optimization methods (20)

Unit 2 in daa
Unit 2 in daaUnit 2 in daa
Unit 2 in daa
 
2Multi_armed_bandits.pptx
2Multi_armed_bandits.pptx2Multi_armed_bandits.pptx
2Multi_armed_bandits.pptx
 
daa-unit-3-greedy method
daa-unit-3-greedy methoddaa-unit-3-greedy method
daa-unit-3-greedy method
 
Paper study: Attention, learn to solve routing problems!
Paper study: Attention, learn to solve routing problems!Paper study: Attention, learn to solve routing problems!
Paper study: Attention, learn to solve routing problems!
 
CS8451 - Design and Analysis of Algorithms
CS8451 - Design and Analysis of AlgorithmsCS8451 - Design and Analysis of Algorithms
CS8451 - Design and Analysis of Algorithms
 
A machine learning method for efficient design optimization in nano-optics
A machine learning method for efficient design optimization in nano-optics A machine learning method for efficient design optimization in nano-optics
A machine learning method for efficient design optimization in nano-optics
 
Response surface method
Response surface methodResponse surface method
Response surface method
 
Neural Networks
Neural NetworksNeural Networks
Neural Networks
 
Paper Study: Melding the data decision pipeline
Paper Study: Melding the data decision pipelinePaper Study: Melding the data decision pipeline
Paper Study: Melding the data decision pipeline
 
Ch19_Response_Surface_Methodology.pptx
Ch19_Response_Surface_Methodology.pptxCh19_Response_Surface_Methodology.pptx
Ch19_Response_Surface_Methodology.pptx
 
Vehicle Routing Problem using PSO (Particle Swarm Optimization)
Vehicle Routing Problem using PSO (Particle Swarm Optimization)Vehicle Routing Problem using PSO (Particle Swarm Optimization)
Vehicle Routing Problem using PSO (Particle Swarm Optimization)
 
BeyondClassicalSearch.ppt
BeyondClassicalSearch.pptBeyondClassicalSearch.ppt
BeyondClassicalSearch.ppt
 
BeyondClassicalSearch.ppt
BeyondClassicalSearch.pptBeyondClassicalSearch.ppt
BeyondClassicalSearch.ppt
 
Chapter 5.pptx
Chapter 5.pptxChapter 5.pptx
Chapter 5.pptx
 
13Kernel_Machines.pptx
13Kernel_Machines.pptx13Kernel_Machines.pptx
13Kernel_Machines.pptx
 
Scalable k-means plus plus
Scalable k-means plus plusScalable k-means plus plus
Scalable k-means plus plus
 
A machine learning method for efficient design optimization in nano-optics
A machine learning method for efficient design optimization in nano-opticsA machine learning method for efficient design optimization in nano-optics
A machine learning method for efficient design optimization in nano-optics
 
DAA Notes.pdf
DAA Notes.pdfDAA Notes.pdf
DAA Notes.pdf
 
Daa unit 1
Daa unit 1Daa unit 1
Daa unit 1
 
Numerical Techniques
Numerical TechniquesNumerical Techniques
Numerical Techniques
 

More from SantiagoGarridoBulln

Genetic Algorithms. Algoritmos Genéticos y cómo funcionan.
Genetic Algorithms. Algoritmos Genéticos y cómo funcionan.Genetic Algorithms. Algoritmos Genéticos y cómo funcionan.
Genetic Algorithms. Algoritmos Genéticos y cómo funcionan.SantiagoGarridoBulln
 
OptimumEngineeringDesign-Day2a.pdf
OptimumEngineeringDesign-Day2a.pdfOptimumEngineeringDesign-Day2a.pdf
OptimumEngineeringDesign-Day2a.pdfSantiagoGarridoBulln
 
OptimumEngineeringDesign-Day-1.pdf
OptimumEngineeringDesign-Day-1.pdfOptimumEngineeringDesign-Day-1.pdf
OptimumEngineeringDesign-Day-1.pdfSantiagoGarridoBulln
 
Lecture_Slides_Mathematics_06_Optimization.pdf
Lecture_Slides_Mathematics_06_Optimization.pdfLecture_Slides_Mathematics_06_Optimization.pdf
Lecture_Slides_Mathematics_06_Optimization.pdfSantiagoGarridoBulln
 
CI L11 Optimization 3 GlobalOptimization.pdf
CI L11 Optimization 3 GlobalOptimization.pdfCI L11 Optimization 3 GlobalOptimization.pdf
CI L11 Optimization 3 GlobalOptimization.pdfSantiagoGarridoBulln
 
complete-manual-of-multivariable-optimization.pdf
complete-manual-of-multivariable-optimization.pdfcomplete-manual-of-multivariable-optimization.pdf
complete-manual-of-multivariable-optimization.pdfSantiagoGarridoBulln
 
slides-linear-programming-introduction.pdf
slides-linear-programming-introduction.pdfslides-linear-programming-introduction.pdf
slides-linear-programming-introduction.pdfSantiagoGarridoBulln
 

More from SantiagoGarridoBulln (14)

Genetic Algorithms. Algoritmos Genéticos y cómo funcionan.
Genetic Algorithms. Algoritmos Genéticos y cómo funcionan.Genetic Algorithms. Algoritmos Genéticos y cómo funcionan.
Genetic Algorithms. Algoritmos Genéticos y cómo funcionan.
 
OptimumEngineeringDesign-Day2a.pdf
OptimumEngineeringDesign-Day2a.pdfOptimumEngineeringDesign-Day2a.pdf
OptimumEngineeringDesign-Day2a.pdf
 
OptimumEngineeringDesign-Day-1.pdf
OptimumEngineeringDesign-Day-1.pdfOptimumEngineeringDesign-Day-1.pdf
OptimumEngineeringDesign-Day-1.pdf
 
CI_L01_Optimization.pdf
CI_L01_Optimization.pdfCI_L01_Optimization.pdf
CI_L01_Optimization.pdf
 
CI_L02_Optimization_ag2_eng.pdf
CI_L02_Optimization_ag2_eng.pdfCI_L02_Optimization_ag2_eng.pdf
CI_L02_Optimization_ag2_eng.pdf
 
Lecture_Slides_Mathematics_06_Optimization.pdf
Lecture_Slides_Mathematics_06_Optimization.pdfLecture_Slides_Mathematics_06_Optimization.pdf
Lecture_Slides_Mathematics_06_Optimization.pdf
 
OptimumEngineeringDesign-Day7.pdf
OptimumEngineeringDesign-Day7.pdfOptimumEngineeringDesign-Day7.pdf
OptimumEngineeringDesign-Day7.pdf
 
CI_L11_Optimization_ag2_eng.pptx
CI_L11_Optimization_ag2_eng.pptxCI_L11_Optimization_ag2_eng.pptx
CI_L11_Optimization_ag2_eng.pptx
 
CI L11 Optimization 3 GlobalOptimization.pdf
CI L11 Optimization 3 GlobalOptimization.pdfCI L11 Optimization 3 GlobalOptimization.pdf
CI L11 Optimization 3 GlobalOptimization.pdf
 
optmizationtechniques.pdf
optmizationtechniques.pdfoptmizationtechniques.pdf
optmizationtechniques.pdf
 
complete-manual-of-multivariable-optimization.pdf
complete-manual-of-multivariable-optimization.pdfcomplete-manual-of-multivariable-optimization.pdf
complete-manual-of-multivariable-optimization.pdf
 
slides-linear-programming-introduction.pdf
slides-linear-programming-introduction.pdfslides-linear-programming-introduction.pdf
slides-linear-programming-introduction.pdf
 
bv_cvxslides (1).pdf
bv_cvxslides (1).pdfbv_cvxslides (1).pdf
bv_cvxslides (1).pdf
 
Optim_methods.pdf
Optim_methods.pdfOptim_methods.pdf
Optim_methods.pdf
 

Recently uploaded

High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...
High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...
High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...Call Girls in Nagpur High Profile
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxDeepakSakkari2
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxpurnimasatapathy1234
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024hassan khalil
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024Mark Billinghurst
 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSCAESB
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)Suman Mia
 
Current Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLCurrent Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLDeelipZope
 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escortsranjana rawat
 
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSHARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSRajkumarAkumalla
 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxupamatechverse
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxwendy cai
 
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Serviceranjana rawat
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Dr.Costas Sachpazis
 

Recently uploaded (20)

Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptxExploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
 
High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...
High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...
High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptx
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptx
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentation
 
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
 
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)
 
Current Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLCurrent Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCL
 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
 
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSHARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptx
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptx
 
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
 

Optimum engineering design - Day 6. Classical optimization methods

  • 2. Course Materials • Arora, Introduction to Optimum Design, 3e, Elsevier, (https://www.researchgate.net/publication/273120102_Introductio n_to_Optimum_design) • Parkinson, Optimization Methods for Engineering Design, Brigham Young University (http://apmonitor.com/me575/index.php/Main/BookChapters) • Iqbal, Fundamental Engineering Optimization Methods, BookBoon (https://bookboon.com/en/fundamental-engineering-optimization- methods-ebook)
  • 3. Direct Search Methods • The direct search methods are gradient-free methods that solve the optimization problem based on function evaluations. – Nelder-Mead Simplex Algorithm. Originally derived for solving parameter estimation problems, the Nelder-Mead algorithm also solves unconstrained optimization problems. – Stochastic methods. Simulated annealing is the most common. – Evolutionary algorithms. These algorithms are modeled after biological evolution, e.g., genetic algorithm (GA). – Swarm intelligence. These methods model the flocking behavior in intelligent species, e.g., particle swarm optimization (PSO), ant colony optimization (ACO), etc. – Metaheuristics. General population behavior based methods, e.g. harmony search.
  • 4. Nelder-Mead Algorithm • The Nelder-Mead algorithm finds the minimum by enclosing it in a simplex, i.e., a convex hull of 𝑛 + 1 non-degenerate vertices, and gradually shrinking it. – The algorithm is implemented in MATLAB ‘fminsearch’ function. • Let 𝑥0, 𝑥1, … , 𝑥𝑛 define the vertices of the simplex with associated function values 𝑓𝑗 = 𝑓 𝑥𝑗 , 𝑗 = 0, . . , 𝑛; the NM method evaluates one or two additional points in each iteration, followed by one of the following transformations on the simplex: – Reflection away from the worst vertex, i.e., the one with highest function value. – Shrinkage towards the best vertex, i.e., the one with least value. – Expansion if the function value improves. – Contraction in the neighborhood of a minimum.
  • 5. Nelder-Mead Transformations • Reflect • Expand • Contract • Shrink
  • 6. Nelder-Mead Algorithm 1. Initialize: given a point 𝑥0, compute 𝑥𝑗 = 𝑥0 + ℎ𝑗𝑒𝑗, 𝑗 = 1, … , 𝑛 as the vertices of simplex 𝑆. Choose constants 𝛼, 𝛽, 𝛾, 𝛿 to satisfy 1 < 𝛾 > 𝛼 > 0, 0 < 𝛽 < 1, 0 < 𝛿 < 1; for example, choose 𝛼 = 1, 𝛽 = 0.5, 𝛾 = 2, 𝛿 = 0.5 2. Check termination. Exit if marginal improvement in the function value is below tolerance, or if the simplex size falls below a certain minimum criterion. 3. Ordering. Rank the vertices of 𝑆 in the order of function value 𝑓0 ≤ 𝑓1 ≤ ⋯ ≤ 𝑓𝑛 4. Find centroid. Let 𝑓ℎ = max 𝑗 𝑓𝑗 , 𝑓𝑠 = max 𝑗≠ℎ 𝑓𝑗 , 𝑓𝑙 = min 𝑗 𝑓𝑗 ; compute 𝑐 = 1 𝑛 𝑥𝑗 𝑗≠ℎ
  • 7. Nelder-Mead Algorithm 5. Reflect. Compute the reflection point, 𝑥𝑟 = 𝑐 + 𝛼(𝑐 − 𝑥ℎ). – Expand. If 𝑓𝑟 < 𝑓𝑙, compute the expansion point, 𝑥𝑒 = 𝑐 + 𝛾(𝑥𝑟 − 𝑐); if 𝑓𝑒 < 𝑓𝑟, replace 𝑥ℎ by 𝑥𝑒, otherwise replace 𝑥ℎ by 𝑥𝑟 – Replace. If 𝑓𝑙 ≤ 𝑓𝑟 < 𝑓𝑠, replace 𝑥ℎ by 𝑥𝑟 – Contract outside. if 𝑓𝑠 ≤ 𝑓𝑟 < 𝑓ℎ, compute the contraction point, 𝑥𝑐 = 𝑐 + 𝛽(𝑥𝑟 − 𝑐); if 𝑓𝑐 < 𝑓𝑟, replace 𝑥ℎ by 𝑥𝑐, otherwise go to 6 – Contract inside. If 𝑓𝑟 > 𝑓ℎ, compute the contraction point, 𝑥𝑐 = 𝑐 + 𝛽(𝑥ℎ − 𝑐); if 𝑓𝑐 < 𝑓ℎ, replace 𝑥ℎ by 𝑥𝑐, otherwise go to 6 6. Shrink. In case of no result from step 5, compute 𝑛 new vertices as: 𝑥𝑗 = 𝑥𝑙 + 𝛿 𝑥𝑗 − 𝑥𝑙 7. Go to 2
  • 8. Nelder-Mead Algorithm %nelder-mead unconstrained optimization algorithm %input: x0; funciton: @f %f=@(x) 1/2*x'*[4 -1;-1 3]*x+[3 2]*x; %x0=[0 0]'; a=1;b=.5;c=2;d=.5; nvar=2; I=eye(nvar); tol=1e-8; pts=cell(1,nvar+1); fpts=zeros(1,nvar+1); pts{1}=x0; fpts(1)=f(x0); xsum=[0;0]; for i=1:nvar, pts{i+1}=x0+I(:,i); fpts(i+1)=f(pts{i+1}); xsum=xsum+pts{i+1}; end
  • 9. Nelder-Mead Algorithm while max(abs(diff(fpts)))>tol, [fsort,ix]=sort(fpts); xh=pts{ix(end)}; xsum=xsum-xh; xc=xsum/nvar; %centroid xr=xc+a*(xc-xh); %reflect discretize(f(xr), [-Inf fsort([1 end-1 end]) Inf]); switch ans case 1, xe=xc+c*(xr-xc); if f(xe)<f(xr), xr=xe; end case 3, xco=xc+b*(xr-xc); if f(xco)<f(xr), xr=xco; end case 4, xci=xc+b*(xh-xc); if f(xci)<f(xh), xr=xci; else for i=1:nvar+1, pts{i}=d*pts{i}; fpts(i)=f(pts{i}); end xr=d*xh; sum=d*sum; end end fpts(ix(end))=f(xr); xsum=xsum+xr; pts{ix(end)}=xr; disp(min(fpts)) end disp([xsum'/3 f(xsum/3)])
  • 10. Design Example: Insulated Spherical Tank Problem: choose the insulation thickness (𝑡) to minimize the life-cycle costs of a spherical tank of radius 𝑅. Life cycle costs: 𝑐2𝐴𝑡 + 𝑐3𝐺 + 𝑐4𝐺 ∗ 𝑝𝑤𝑓 Annual heat gain: 𝐺 = 365 × 24 × Δ𝑇 × 𝐴 𝜌×𝑡 Surface area: 𝐴 = 4𝜋𝑅2 [𝑚2 ] Thermal resistivity: 𝜌 𝑚 ⋅ 𝑠𝑒𝑐 ⋅ °𝐶/𝐽 Equipment insulation cost: 𝑐2 $/𝑚3 Equipment refrigeration cost: 𝑐3 $/𝑊ℎ Annual operating cost: 𝑐4 $/𝑊ℎ Present worth factor: 𝑝𝑤𝑓 = 𝐴 𝑖 1 − 1 1+𝑖 𝑛 Note, there are no constraints in this problem
  • 11. Design Example: Insulated Spherical Tank % spherical insulated tank lifecycle cooling costs, Arora p.26 % objective: min life-cycle cost; variable: thickness (t) R=3; %radius [m] c1=10e3; %thermal resistivity [Cm/W] c2=1e3; %insulation cost/m3 c3=1; %installation cost/Whr c4=.01; %operating cost/Whr dT=5; %temp difference ir=.05; %interest rate n=10; %life in years A=4*pi*R^2; G=365*24*dT*A/(c1*t); %heat gain [Whr] pwf=(1-1/(1+ir)^n)/ir; %present worth factor LC=c2*A*t+(c3+pwf*c4)*G; %life-cycle cost f=@(t) c2*A*t+(c3+pwf*c4)*365*24*dT*A/(c1*t); %objective fminsearch(f,.1) %use Nelder-Mead algorithm ans = 0.0687
  • 12. Hooke-Jeeves Pattern Search • The pattern search works by locally evaluating a set of points along N linearly independent search directions and polling the results. • It uses a combination of exploratory moves and pattern moves to find the optimum – An exploratory move is performed in the vicinity of current point along search directions – The results of exploratory moves are polled to find an improved objective and the new design point – Two local moves are used to make a pattern move to jump to a new location
  • 13. Pattern Search Algorithm • Initialize: choose initial point 𝑥0 , mesh size Δ𝑖, 𝑖 = 1, … , 𝑛, expansion factor 𝛼 > 1, termination parameter 𝜖 • For 𝑘 = 0,1, … – Check termination. If Δ < 𝜖, quit – Perform a set of exploratory moves as: 𝑥𝑘 ± Δ𝑖, 𝑖 = 1, … , 𝑛. – Poll (check objective at) the perturbed points and compare with the current point. If the poll is successful, i.e., if an improved objective is found, move to that point and increase the mesh size by 𝛼 – If poll is unsuccessful, set Δ𝑖 = Δ𝑖/𝛼 and repeat exploratory moves – If two successful polls result in moves along the same direction, make a pattern move as 𝑥𝑝 𝑘+1 = 𝑥𝑘 + (𝑥𝑘 − 𝑥𝑘−1 ) – Set 𝑘 = 𝑘 + 1
  • 14. Pattern Search • For example, assume that the initial point is: x0 = [2.1 1.7] • Using a mesh size of one, the mesh points are selected as: [1 0] + x0 = [3.1 1.7] [0 1] + x0 = [2.1 2.7] [-1 0] + x0 = [1.1 1.7] [0 -1] + x0 = [2.1 0.7] • The next point is x1 = [1.1 1.7]
  • 15. Simulated Annealing • Simulated annealing (SA) is modeled after annealing of solids, i.e., heating it to liquid state and slowly cooling it while maintaining thermal equilibrium. • During annealing, the atoms undertake random displacements. A move with negative change in energy state is accepted; a positive change is accepted with probability: 𝑃 = 𝑒−Δ𝐸/𝑘𝑇 , where 𝑘 is Boltzmann constant and 𝑇 is absolute temperature. • When applied to engineering problems, the objective function is analogous to energy, and Boltzmann constant is replaced by average change in the objective function. • The algorithm is started at some initial temperature parameter 𝑇0, that is gradually reduced to simulate the annealing process.
• 16. Simulated Annealing
• At each setting of the temperature variable, random design changes are introduced; a change with a lower objective value is accepted; a change with a higher objective value is accepted with probability P = e^(−ΔE/(ΔE_avg·T)).
• Once steady state is reached, or after a certain number of changes, the temperature is reduced and the process is repeated.
• Although simulated annealing can be used for continuous problems, it is especially effective when applied to combinatorial problems.
• 18. Simulated Annealing
• Let T(k) describe the schedule for annealing the temperature T; then the probability of acceptance of a design change is given as:
  h(ΔE) = e^(−E_(k+1)/T) / (e^(−E_(k+1)/T) + e^(−E_k/T)) ≅ 1/(1 + e^(ΔE/T)), where ΔE = E_(k+1) − E_k
• The probability distribution of the design perturbations is assumed to be normal, i.e., g(Δx) = (2πT)^(−n/2) e^(−Δx²/(2T))
• Theoretically, the global minimum of the energy function E(x) can be reached if T0 is selected large enough and T(k) is selected to decrease no faster than T_k = T0/ln k
• For faster quenching, the above schedule may be replaced by: T_k = T0/k
• 19. Simulated Annealing
• A schedule for T(k) can be based on the acceptance probability of the worst-case design: let Ps and Pf denote the desired acceptance probabilities at the beginning and at termination; then a schedule for T is developed as:
  Ts = −1/ln Ps;  Tf = −1/ln Pf;  F = (Tf/Ts)^(1/(N−1));  T_(n+1) = F·T_n
  For example, let Ps = 0.5, Pf = 10⁻⁸, N = 100; then Ts = 1.4426, Tf = 0.054278, F = 0.9674.
• An exponential schedule using a factor F < 1 can also be drawn, where T_k = T0·e^((F−1)k)
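This schedule is easy to generate in MATLAB; the short sketch below simply reproduces the example values above (up to rounding):
Ps=0.5; Pf=1e-8; N=100;      %desired acceptance probabilities and number of steps
Ts=-1/log(Ps);               %starting temperature (= 1.4427)
Tf=-1/log(Pf);               %final temperature    (= 0.0543)
F=(Tf/Ts)^(1/(N-1));         %reduction factor     (= 0.9674)
T=Ts*F.^(0:N-1);             %geometric schedule with T(1)=Ts and T(N)=Tf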
• 20. Simulated Annealing
1. Pick an initial design; start at a high value of the temperature variable (T); pick NS, the number of cycles before a temperature reduction, and optionally N, the total number of perturbations.
2. Start a cycle. Perturb one variable at a time; accept the new point if the perturbation results in a lower value of the objective function. If the perturbation results in a higher objective, accept it with probability P = e^(−ΔE/(ΔE_avg·T)), where ΔE_avg is the running average of the accepted objective variations.
3. After completing NS cycles (or once steady state has been reached), lower the temperature per the desired schedule, e.g., T_(n+1) = F·T_n.
4. Go to 2.
  • 23. Simulated Annealing • Simulated annealing was developed for unconstrained problems. In the case of constrained problems, possible approaches are: – Reject the infeasible solutions generated in the process – Use a penalty function to add the constraints to the objective • Simulated annealing is particularly suited to discrete problems. In the case of continuous problems, SA is more effective when constraint surface is highly irregular with multiple local minima. • For general continuous problems, gradient based methods (e.g., GRG) are much faster and hence the preferred choice.
• 24. Simulated Annealing Code
%initialize: specify nvar, xl, xu
d=xu-xl; x=(xu+xl)/2;              %initial design
xopt=x; kx=zeros(1,nvar);          %current optimum, acceptance count
T0=1; T=T0*ones(1,nvar);           %set per-variable temperature
Pf=1e-6; m=10;                     %final acceptance probability, cycle length
while any(T>-1/log(Pf))            %loop until the temperatures fall below the final value
    x=xopt;
    for j=1:m                      %start of cycle
        dx=2*T.*(rand(1,nvar)-.5); %apply random variation
        dX=diag(dx);
        for k=1:nvar
            dx=limits(x,dX(k,:),xl,xu);   %adjust to variable limits
            fx=f(x+dx); gx=g(x+dx);       %objective & constraints
            px=exp(-fx/T(k))/(exp(-fx/T(k))+exp(-f(x)/T(k))); %acceptance probability
• 25. Simulated Annealing Code
            if any(gx>0), continue                %constraint violation: reject
            elseif fx>f(x) && rand()>px, continue %worse design: reject at random
            else
                x=x+dx;
                if f(x)<f(xopt), xopt=x; end      %record current optimum
                kx=kx+(dx~=0);                    %acceptance count
            end
        end                                       %end for k
    end                                           %end for j (cycle)
    T=T0./log(kx);                                %adjust temperature
    if all(kx>100*nvar), break, end               %exceed count
end                                               %end while

function dx = limits(x,dx,xl,xu)
while any(x+dx<xl)
    idl=find(x+dx<xl);
    dx(idl)=(1-rand(size(idl))).*dx(idl);         %adjust to lower bound
end
while any(x+dx>xu)
    idu=find(x+dx>xu);
    dx(idu)=(1-rand(size(idu))).*dx(idu);         %adjust to upper bound
end
end
• 26. Design Example: Symmetric Two-Bar Truss
Problem: design a symmetrical two-bar truss of minimum mass to support a fixed load P. The truss has height H and span B.
Design variables: diameter (d), height (H)
Member length: l = √((B/2)² + H²);  cross-sectional area: A = πdt
Total weight: W = 2ρlA
Constraints:
Axial stress: σ = Pl/(2πdtH) ≤ σ_a
Buckling stress: σ_b = π²(d² + t²)E/(8l²) ≤ σ_a
Deflection: ε = Pl³/(2πdtH²E) ≤ ε_max
• 27. Design Example: Symmetric Two-Bar Truss
Let the design variables be: diameter (d) and height (H).
Then, the design optimization problem is defined as:
Objective: min_(d,H)  f = 2πdtρ √((B/2)² + H²)
Subject to: σ_b/σ − 1 ≤ 0,  σ/σ_a − 1 ≤ 0,  ε/ε_max − 1 ≤ 0
For a particular problem, let: P = 66 kips; B = 60 in; t = 0.15 in; ρ = 0.3 lb/in³; E = 30×10⁶ lb/in²; σ_a = 1×10⁵ psi; ε_max = 0.25 in
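A minimal MATLAB sketch of this model (x = [d H]); the parameter values are taken from the slide, and the constraint form follows the expressions above:
P=66e3; B=60; t=0.15; rho=0.3; E=30e6; siga=1e5; epsmax=0.25;
l=@(x) sqrt((B/2)^2+x(2)^2);                      %member length
W=@(x) 2*rho*l(x)*pi*x(1)*t;                      %total weight (objective)
sig=@(x) P*l(x)/(2*pi*x(1)*t*x(2));               %axial stress
sigb=@(x) pi^2*(x(1)^2+t^2)*E/(8*l(x)^2);         %buckling stress
defl=@(x) P*l(x)^3/(2*pi*x(1)*t*x(2)^2*E);        %deflection
g=@(x) [sigb(x)/sig(x)-1; sig(x)/siga-1; defl(x)/epsmax-1]; %constraints, g <= 0
%quick check at the SA optimum reported on the next slide: W([1.04 27.3]) is about 11.9 lb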
• 28. SA Example: Two-bar Truss
• A selection of SA results for the two-bar truss (T(1), T(2) are the per-variable temperatures):
     d        H          W        T(1)     T(2)
  1.2500   27.5000   14.3835   1.0000   1.0000
  1.0441   27.6401   12.0417   1.0000   1.0000
  1.0441   27.2462   11.9632   1.4427   0.4343
  1.0441   27.1396   11.9421   0.9102   0.3693
  1.0441   27.1367   11.9415   0.6213   0.2507
  1.0441   27.1297   11.9401   0.5139   0.2354
  1.0404   27.3274   11.9377   0.2423   0.1573
  1.0404   27.3126   11.9348   0.2404   0.1555
  1.0404   27.3104   11.9343   0.2387   0.1552
  1.0404   27.3048   11.9332   0.2378   0.1547
  1.0404   27.3038   11.9330   0.2276   0.1495
  1.0404   27.3028   11.9328   0.2269   0.1493
  1.0348   27.5877   11.9247   0.2039   0.1368
  1.0348   27.5817   11.9235   0.2036   0.1367
  1.0348   27.5758   11.9223   0.2036   0.1365
  1.0348   27.5754   11.9222   0.1910   0.1306
• 29. SA Example: Two-bar Truss
• Design with three variables: H, d, t
Objective: min_(d,H,t)  f = 2πdtρ √((B/2)² + H²)
Subject to: σ_b/σ − 1 ≤ 0,  σ/σ_a − 1 ≤ 0,  ε/ε_max − 1 ≤ 0
• SA results: H = 29.9010 in, d = 1.8631 in, t = 0.0799 in; f = 11.8801 lbs
• Note, the problem has multiple optima.
• 30. Genetic Algorithm
• The genetic algorithm (GA) is inspired by the process of natural selection in biological evolution.
• GA is characterized by three basic operations that guide reproduction:
– Selection of the fittest for mating
– Crossover of genetic information during mating
– Mutation, i.e., introduction of random changes during reproduction
• When applied to optimization problems, the design variables are termed genes, and a chromosome represents a trial solution to the problem. A population is a collection of chromosomes.
• Members of the population are chosen for mating based on their fitness. Application of crossover and mutation yields a new generation with better average fitness than the previous generation.
• The process continues until the improvement becomes negligible.
  • 31. Genetic Algorithm The steps in the application of a GA are: • Determine a coding scheme (genetic representation) of variables; two possible choices are value representation and binary representation. • Pick a crossover and mutation rate; typical values for binary representation are 0.8 and 0.001-0.01, respectively. • Develop an initial population of (20-100) design choices represented by chromosomes evenly spread in the design space. • Use a fitness function to evaluate and rank the chromosomes. • Select a mating pool from the population using one of the following: – Roulette selection. The probability of a chromosome being picked is in proportion to its fitness. – Tournament selection. A subset of population is randomly selected and those with highest fitness are included in the mating pool.
• 32. Genetic Algorithm
• Use crossover among pairs of parents to generate two children for the next generation:
– Binary coding. Use a crossover point to divide the chromosome. Copy the first part and cross the second part among the children.
– Value coding. Each gene is separately considered for crossover. Single-point, uniform, or blend crossover can be considered.
• Occasionally, perform mutation to randomly change the design:
– Change individual bits in binary coding.
– Change parameter values (genes) in value coding.
• Evaluate the new generation for fitness. Retain individuals with higher fitness for reproduction.
• The parent generation also competes in the selection process (elitism).
• Continue for a specified number of generations, or until the improvement in the average fitness value falls below a specified tolerance.
• 33. Binary Coding
• Binary coding was originally used to represent design choices
– Precision = (U_i − L_i)/(2ⁿ − 1)  (smallest change in the variable)
– Base-10 integer value: x_int10 = (2ⁿ − 1)/(U − L) · (x − L)
– Real value: x = (U − L)/(2ⁿ − 1) · x_int10 + L
– Example: let x = 3.567 with a range of 0 to 10; then for an 8-bit representation, x_int10 = (255/10)(3.567) ≈ 91
• A chromosome is created by combining the binary strings of the design variables together.
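A quick MATLAB check of the 8-bit example above:
L=0; U=10; n=8; x=3.567;
xint=round((2^n-1)/(U-L)*(x-L));      %base-10 integer value -> 91
bits=dec2bin(xint,n);                 %binary gene string -> '01011011'
xback=(U-L)/(2^n-1)*bin2dec(bits)+L;  %decoded value -> 3.5686 (precision = 10/255)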
• 34. Value Coding
• Design variables are assembled together in a chromosome as numbers; in MATLAB, this can be done using a cell or structure array:
  x = {gene1, gene2, …, gene_n}
• Scaling. For best results, the objective function and constraints are scaled by their maximum values, i.e., the values attained when the design parameters are at their maximum.
• 35. Fitness
• If there are no constraints, fitness equals the value of the objective function f.
• If constraints are present, we may use a penalty parameter P to write fitness = f + P·g, where g is the maximum constraint violation, given as g = max(0, g1, g2, …, gm); g = 0 indicates a feasible design.
• Alternatively, the fitness of an infeasible design may be based on the maximum objective value among the feasible designs in the current population, i.e., fitness = f_max,feas + g.
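A small sketch of this fitness assignment, using the objective and violation values from the first-generation table on slide 41; infeasible designs receive the worst feasible objective plus their maximum violation:
fvals=[0.4852 0.0535 0.4314 0.5406 0.1615 0.8657];  %objective values
gmax =[0      0.2632 0      0      0.0585 0     ];  %maximum constraint violations
feas=gmax==0;                                       %feasible designs
fitness=fvals;
fitness(~feas)=max(fvals(feas))+gmax(~feas);        %penalize infeasible designs
disp(fitness)   % -> 0.4852  1.1289  0.4314  0.5406  0.9242  0.8657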
• 36. Crossover
• Let the crossover probability = 0.8. Generate a random number to determine if crossover is to be performed.
• Single-point crossover. Generate a random integer between 1 and n to determine the crossover point at gene i.
• Uniform crossover. Generate a random number r for each of the n genes; perform crossover for individual genes.
• Blend crossover. Generate a random number r for each of the n genes, then obtain the children genes as: y1 = r·x1 + (1 − r)·x2,  y2 = (1 − r)·x1 + r·x2
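A short sketch of blend crossover for value-coded chromosomes; the parent vectors and the 0.8 crossover probability are assumptions for illustration:
p1=[0.2833 0.1408]; p2=[0.4921 0.2845];   %parent chromosomes (assumed)
c1=p1; c2=p2;                             %children default to copies of the parents
if rand()<0.8                             %perform crossover with probability 0.8
    r=rand(size(p1));                     %one random number per gene
    c1=r.*p1+(1-r).*p2;                   %child 1
    c2=(1-r).*p1+r.*p2;                   %child 2
end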
• 37. Mutation
• Mutation: pick a mutation parameter 0 ≤ β < 1 (e.g., β = 0.5).
– For β = 0, the mutation probability is uniform in successive generations
– For β > 0, the mutation probability gradually decreases
• Compute the uniformity parameter α as: α = 1 − ((j − 1)/M)^β, where j is the current generation number and M is the total number of generations.
• Pick a random number r between x_min and x_max; then perform the mutation as:
  If r ≤ x:  y = x_min + (r − x_min)^α (x − x_min)^(1−α)
  If r > x:  y = x_max − (x_max − r)^α (x_max − x)^(1−α)
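A sketch of this mutation in MATLAB; the α formula is written as read from the slide above, and the variable range, current value, and parameter choices are assumptions for illustration:
xmin=0; xmax=10; x=3.567;       %variable range and current value (assumed)
M=100; j=20; beta=0.5;          %total generations, current generation, mutation parameter
alpha=1-((j-1)/M)^beta;         %uniformity parameter (as read from the slide)
r=xmin+(xmax-xmin)*rand();      %random number between xmin and xmax
if r<=x
    y=xmin+(r-xmin)^alpha*(x-xmin)^(1-alpha);
else
    y=xmax-(xmax-r)^alpha*(xmax-x)^(1-alpha);
end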
  • 38. Dynamic Mutation • Mutation parameter: 𝛽 = 1 means uniform mutation; 𝛽 = 0 means no mutation. • Uniformity parameter: 𝛼 = 1 means mutated variable is picked uniformly over its range; 𝛼 < 1 favors values near the current value of the variable.
  • 39. Elitism • Combine the N children with N parents to obtain 2N designs • Sort the designs by fitness values and pick the N most fit designs
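A minimal elitism sketch (lower fitness is better here; the small example arrays are assumptions):
N=3;
parents=rand(N,2);  fit_parents=rand(N,1);    %parent designs and fitness (assumed)
children=rand(N,2); fit_children=rand(N,1);   %child designs and fitness (assumed)
all_x=[parents; children];                    %2N designs
all_fit=[fit_parents; fit_children];          %2N fitness values
[~,idx]=sort(all_fit);
survivors=all_x(idx(1:N),:);                  %N most fit designs for the next generation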
  • 40. Design Example: Three-bar Truss • Three-bar truss (Ref: Parkinson, p.5-4) Design variables: 𝑥1 = 𝐴1, 𝑥2 = 𝐴2 The normalized objective and constraints are obtained as: 𝑓 = 1.429𝑥1 + 0.57𝑥2 𝑔1: 0.3386 − 1.354𝑥1 − 1.323𝑥2 ≤ 0 𝑔2: 0.2463 − 1.261𝑥1 − 1.232𝑥2 ≤ 0 𝑔3: −2𝑥1 ≤ 0, 𝑔4: −2𝑥2 ≤ 0
• 41. GA Example
• First generation:
  Design     x1       x2       f        g         fitness
    1      0.2833   0.1408   0.4852   0         0.4852
    2      0.0248   0.0316   0.0535   0.2632    0.2632 + 0.8657
    3      0.1384   0.4092   0.4314   0         0.4314
    4      0.3229   0.1386   0.5406   0         0.5406
    5      0.0481   0.1625   0.1615   0.0585    0.0585 + 0.8657
    6      0.4921   0.2845   0.8657   0         0.8657
• Roulette selection of parents; let γ = 1.5 (fitness pressure):
  Design   fitness   (1/fitness)^γ   Normalized   Cumulative
    1      0.4852       2.9588         0.2424       0.2424
    2      1.1289       0.8337         0.0683       0.3107
    3      0.4314       3.5293         0.2892       0.5999
    4      0.5406       2.5159         0.2061       0.8061
    5      0.9242       1.1255         0.0922       0.8983
    6      0.8657       1.2415         0.1017       1.0
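A roulette-wheel selection sketch reproducing the normalized and cumulative columns above:
fitness=[0.4852 1.1289 0.4314 0.5406 0.9242 0.8657];
gamma=1.5;                        %fitness pressure
w=(1./fitness).^gamma;            %selection weights (minimization problem)
p=w/sum(w);                       %normalized probabilities -> 0.2424 0.0683 0.2892 0.2061 0.0922 0.1017
cumP=cumsum(p);                   %cumulative distribution
pick=find(rand()<=cumP,1);        %index of the selected parent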
• 43. Design Example: Coil Spring
Problem: design a minimum-mass spring to carry a given axial load P without material failure, while satisfying minimum deflection and minimum surge wave frequency requirements.
Design variables: mean coil diameter (D), wire diameter (d), number of active coils (N)
Design equations:
Spring mass: m = (1/4)(N + Q)π²Dd²ρ
Load deflection: P = Kδ, where K = d⁴G/(8D³N)
Shear stress: τ = 8kPD/(πd³)
Stress concentration factor: k = (4D − d)/(4(D − d)) + 0.615 d/D
Frequency of surge waves: ω = (d/(2πND²)) √(G/(2ρ))
• 44. Design Example: Coil Spring
The optimization problem is formulated as:
Objective: min f(N, d, D) = (N + Q)Dd²
Constraints: τ ≤ τ_a,  ω ≥ ω_0,  D + d ≤ D_0,  δ = P/K ≥ Δ
Variable bounds: d_min ≤ d ≤ d_max,  D_min ≤ D ≤ D_max,  N_min ≤ N ≤ N_max
Assume the following parameter values: P = 10 lb, Δ = 0.5 in, γ = 0.285 lb/in³, ω_0 = 100 Hz, D_0 = 1.5 in, τ_a = 80,000 lb/in², G = 1.15×10⁷ lb/in², Q = 2
• 45. GA Example: Coil Spring
% Coil spring model (Arora, p. 43)
% Design variables: coil diameter (D), wire diameter (d), number of active coils (N)
xl=[.5 .01 1];   %lower limits
xu=[1.5 .15 11]; %upper limits
x=(xl+xu)/2;     %trial design
%parameters
P=10;            %load [lb]
Q=2;             %inactive coils
Del=.5;          %min deflection [in]
Dmax=1.5;        %max diameter [in]
gam=.285;        %weight density [lb/in3]
gr=386;          %gravity [in/sec2]
oml=100;         %min frequency [Hz]
G=1.15e7;        %shear modulus [lb/in2]
taumax=80e3;     %max shear stress [lb/in2]
rho=gam/gr;      %mass density
• 46. GA Example: Coil Spring
m=@(x) pi^2/4*(x(3)+Q)*x(1)*x(2)^2*rho;             %spring mass
K=@(x) x(2)^4*G/(8*x(1)^3*x(3));                    %spring constant
k=@(x) (x(1)-x(2)/4)/(x(1)-x(2))+.615*x(2)/x(1);    %stress concentration factor
tau=@(x) 8*k(x)*P*x(1)/(pi*x(2)^3);                 %shear stress
om=@(x) x(2)/(2*pi*x(1)^2*x(3))*sqrt(G*gr/(2*gam)); %surge frequency
del=@(x) P/K(x);                                    %deflection
f=@(x) x(1)*x(2)*x(2)*(x(3)+Q);                     %objective, x=[D,d,N]
g=@(x) [tau(x)/taumax-1; oml/om(x)-1; Del/del(x)-1; (x(1)+x(2))/Dmax-1]; %constraints
  • 47. MATLAB Optimization Problem Structure • Problem structure, specified as a structure with the following fields: – objective — Objective function – fitnessfcn — Fitness function – nvars — number of variables – x0 — Starting point – Aineq — Matrix for linear inequality constraints – bineq — Vector for linear inequality constraints – Aeq — Matrix for linear equality constraints – beq — Vector for linear equality constraints – lb — Lower bound for x – ub — Upper bound for x – nonlcon — Nonlinear constraint function – solver — ‘ga' – options — Options created with optimoptions or psoptimset – rngstate — Optional field to reset the state of the RNG
• 48. GA Example: Coil Spring
Opt.nvars=3;                    %number of variables
Opt.fitnessfcn=f;               %fitness function
Opt.nonlcon=@(x) deal(g(x),[]); %nonlinear constraints
Opt.lb=xl;                      %lower bounds
Opt.ub=xu;                      %upper bounds
Opt.solver='ga';                %solver
Opt.IntCon=[3];                 %integer variables
Opt.x0=[1 .08 6];               %initial guess
Opt.options=gaoptimset(@ga);    %GA options
>> [x,fval]=ga(Opt)             %solve the problem using GA
Optimization terminated: average change in the penalty fitness value less than options.FunctionTolerance and constraint violation is less than options.ConstraintTolerance.
x =
    0.5601    0.0590    5.0000
fval =
    0.0136
• 49. Swarm Intelligence
• Swarm intelligence models the collective behavior of species in the biological kingdom. Examples include ant and termite colonies, schools of fish, flocks of birds, herds of animals, etc.
• Swarm intelligence manifests in artificial systems composed of intelligent agents that coordinate using decentralized control and self-organization.
• A typical swarm intelligence system has the following characteristics:
– It is composed of many individuals that are relatively homogeneous;
– The interactions among the individuals are based on simple behavioral rules that exploit only local information;
– The overall behavior of the group emerges from the interactions of individuals with each other and with their environment.
• 50. Particle Swarm Optimization
• The design space is initialized with a random population of solutions (particles) with associated fitness values. Particles move around the search space with designated velocities.
• Each particle’s position is iteratively updated based on:
– its own best known location (pbest), and
– the overall best location achieved by any particle (gbest).
• The update equations are:
v[] = v[] + c1*rand()*(pbest[] - present[]) + c2*rand()*(gbest[] - present[])
present[] = present[] + v[]
where v[] is the particle velocity, present[] is the current position, pbest[] and gbest[] are defined above, rand() is a random number in (0,1), and c1, c2 are learning factors in the range [0,4]. Usually c1 = c2 = 2.
http://www.swarmintelligence.org/tutorials.php
• 51. Particle Swarm Optimization
• Parameters that need to be tuned in PSO include:
– The number of particles: the typical range is 20-40. More particles may be included for difficult problems.
– Vmax: it determines the maximum change in particle position in an iteration. The range of the particle may be used as Vmax. For example, if a particle has a range of [-10, 10], then Vmax = 20.
– Learning factors: c1 and c2 are usually set equal to 2. Other values have been suggested, with c1 equal to c2 and in the range [0, 4].
– The stopping condition: the maximum number of iterations the PSO executes and the minimum error requirement.
• 52. PSO Algorithm
1. Initialize: choose the neighborhood size N, inertia W, stall counter c=0, the self-adjustment weight y1, and the social adjustment weight y2.
2. Create an initial population of particles; set the initial velocities in the range [-r, r].
3. Compute the objective function value of each particle. Record the current best position p(i) of each particle, and the global best position g.
4. Iterate: choose a random subset S of N particles; find fopt(S), the best local fitness value, and g(S), the position of the neighbor with the best fitness.
– Update the particle velocity: v = W*v + y1*u1.*(p-x) + y2*u2.*(g-x)
– Update the particle position: x = x + v
– Enforce the bounds: if any particle is outside a bound, set it equal to that bound.
  • 53. PSO Algorithm 5. Evaluate the objective function f(x) – If f(x)<f(p), set p=x – If f(x)<f(g), then • Set c = max(0, c-1). • If c < 2, then set W = 2*W. • If c > 5, then set W = W/2. – Otherwise, Set c=c+1 6. Stop if max number of iterations is exceeded, or if the relative change in the best objective function value g over the last M iterations is less than a tolerance parameter. 7. Go to 3.
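A minimal global-best PSO sketch (simplified: fixed inertia, no neighborhoods or stall counter; the objective, bounds, and parameter values are assumptions):
f=@(x) sum(x.^2,2);                       %example objective, evaluated row-wise (assumed)
n=2; np=20; lb=-5; ub=5;                  %dimensions, particles, bounds (assumed)
W=0.7; y1=2; y2=2; maxit=100;             %inertia, learning factors, iterations
x=lb+(ub-lb)*rand(np,n);                  %initial positions
v=(ub-lb)*(rand(np,n)-.5);                %initial velocities
p=x; fp=f(x);                             %personal bests
[fg,ig]=min(fp); g=p(ig,:);               %global best
for it=1:maxit
    u1=rand(np,n); u2=rand(np,n);
    v=W*v+y1*u1.*(p-x)+y2*u2.*(g-x);      %velocity update
    x=min(max(x+v,lb),ub);                %position update with bound clipping
    fx=f(x);
    better=fx<fp;
    p(better,:)=x(better,:); fp(better)=fx(better);
    [fmin,imin]=min(fp);
    if fmin<fg, fg=fmin; g=p(imin,:); end %update the global best
end
disp([g fg])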
  • 54. Design Example: Three-bar Truss • Three-bar truss (Ref: Parkinson, p.5-4) Design variables: 𝑥1 = 𝐴1, 𝑥2 = 𝐴2 • For 𝑥1 = 𝑥2 = 0.5, we have 𝑓 = 70, 𝑔1 = 28350, 𝑔2 = 60900, 𝑔3 = 𝑔4 = 0.5; hence the scaled objective and constraints are obtained as: 𝑓 = 1.429𝑥1 + 0.57𝑥2 𝑔1: 0.3386 − 1.354𝑥1 − 1.323𝑥2 ≤ 0 𝑔2: 0.2463 − 1.261𝑥1 − 1.232𝑥2 ≤ 0 𝑔3: −2𝑥1 ≤ 0, 𝑔4: −2𝑥2 ≤ 0
• 55. MATLAB Example: Three-bar Truss
MATLAB commands
P=20e3;                 %load
h=40; b=60;             %dimensions
l1=sqrt(h^2+b^2/4);     %member length
xu=[.5,.5];             %upper limit for variables
fu=abs([2*l1 h]*xu(:)); %objective fcn value at xu
cf=[2*l1 h]/fu;         %scale coefficients
f=@(x) cf*x(:);         %define objective function
gu=abs([9600;15e3;0;0]-[384e2 375e2; 768e2 75e3; 1 0; 0 1]*xu(:)); %maximum constraint values
cg0=[9600;15e3;0;0]./gu;                         %scale constraint limits
cg=[384e2 375e2; 768e2 75e3; 1 0; 0 1]./[gu gu]; %scale constraint coefficients
g=@(x) cg0-cg*x(:);     %define constraint function
• 56. Three-bar Truss Design Using GA
MATLAB commands
Opt=struct('solver','ga','fitnessfcn',f,'nvars',2);
Opt.x0=[.01,.01];
Opt.Aineq=-cg; Opt.bineq=-cg0;
Opt.lb=[0,0]; Opt.ub=[.5,.5];
Opt.options=[];
[x,fval]=ga(Opt); %solve the problem using GA
Optimization terminated: average change in the fitness value less than options.FunctionTolerance.
>> x,fval
x =
    0.0006    0.2546
fval =
    0.1463
• 57. Three-bar Truss Design Using PSO
MATLAB Commands
Opt.solver='particleswarm';
Opt.objective=f;
[x,fval]=particleswarm(Opt); %try PSO
Optimization ended: relative change in the objective value over the last OPTIONS.MaxStallIterations iterations is less than OPTIONS.FunctionTolerance.
>> x,fval
x =
     0     0
fval =
     0
(Note: particleswarm handles only bound constraints, so the linear constraints are ignored and the unconstrained minimum is returned.)
• 58. Three-bar Truss Using Pattern Search
MATLAB Commands
Opt.solver='patternsearch';
[x,fval]=patternsearch(Opt); %try pattern search
Optimization terminated: mesh size less than options.MeshTolerance.
>> x,fval
x =
         0    0.2552
fval =
    0.1459
• 59. Three-bar Truss Design Using SA
MATLAB Commands
Opt.solver='simulannealbnd';
[x,fval]=simulannealbnd(Opt); %try simulated annealing
Optimization terminated: change in best function value less than options.FunctionTolerance.
>> x,fval
x =
   1.0e-05 *
    0.0159    0.2206
fval =
   1.4882e-06
(Like particleswarm, simulannealbnd handles only bound constraints, so the linear constraints are ignored here.)
• However, using our own SA code:
x = 0.0000  0.2588
f = 0.1479
  • 60. Example: Minimum Thrust Design • Problem: Select an engine for a business jet keeping in view the thrust requirements • Background: aircraft thrust requirements are dictated by the minimum thrust requirements during: – Take off – Climb – Cruise – Sustained turn – Service ceiling
• 61. Example: Minimum Thrust Design
• Thrust requirement during cruise and a constant-velocity turn:
  T = qS[C_Dmin + k(nW/(qS))²] + ΔP/V
  where
– q = dynamic pressure
– S = surface area
– C_Dmin = minimum drag coefficient
– n = load factor
– k = lift-induced drag coefficient
– ΔP = excess power
• 62. Example: Minimum Thrust Design
• Thrust requirement during the takeoff run:
  S_G = V_LOF²/(2a),  where  a = g[(T − qS·C_DTO)/W − μ(1 − qS·C_LTO/W)]
• Thrust requirement during climb:
  T = (V_V/V)·W + qS[C_Dmin + k(W/(qS))²]
• Thrust requirement at the service ceiling:
  T = (1.667/V)·W + qS[C_Dmin + k(W/(qS))²],  where  q = (W/S)·√(k/(3C_Dmin))
• 63. Example: Minimum Thrust Design
gwt=38875;      %gross weight [lb]
Tmin=.1799*gwt; %minimum thrust
roc=50;         %rate of climb [fps] @Vcl
Sg=5000;        %take-off run [ft] @Vlo/sqrt(2)
Vcr=533.4;      %cruise velocity [KTAS] @Acr
Vcl=171;        %climb velocity [KCAS]
Vlo=112;        %lift-off velocity [KCAS]
Vst=102;        %stall speed [KCAS]
Acr=43000;      %cruise altitude [ft]
Aceil=45000;    %service ceiling [ft]
mu=.04;         %ground friction
CDmin=.0225;    %minimum drag coeff
CLto=.8;        %lift coefficient @TO
CDto=.0325;     %drag coefficient @TO
Ps=0;           %energy state
dsl=.002378;    %air density @SL [slug/ft3]
kd=.68756e-5;   %air density variation constant
gr=32.174;      %gravity [ft/s2]
n=1;            %load factor
• 64. Example: Minimum Thrust Design
%model functions (x(1)=thrust, x(2)=wing span, x(3)=aspect ratio)
e=@(x) 1.78*(1-.045*x(3)^.68)-.64; %span efficiency
k=@(x) 1/(pi*x(3)*e(x));           %lift-induced drag coefficient
S=@(x) x(2)^2/x(3);                %wing area
WSR=@(x) gwt*x(3)/x(2)^2;          %wing loading W/S
%TO run
rho=dsl;                           %sea-level air density
qto=1/2*rho*(1.688*Vlo)^2/2;       %dynamic pressure [lb/ft2]
TWRto=@(x) (1.688*Vlo)^2/(2*gr*Sg)+qto*CDto/WSR(x)+mu*(1-qto*CLto/WSR(x)); %thrust-weight ratio
%climb
qcl=1/2*rho*(1.688*Vcl)^2;         %dynamic pressure [lb/ft2]
TWRcl=@(x) roc/(1.688*Vcl)+qcl*(CDmin/WSR(x)+k(x)*(n/qcl)^2*WSR(x))+Ps/Vcr; %thrust-weight ratio
%stall
qst=1/2*rho*(1.688*Vst)^2;
CLmax=@(x) gwt/(qst*S(x));         %max lift coefficient
• 65. Example: Minimum Thrust Design
%cruise
drho=(1-kd*Acr)^4.2561; rho=dsl*drho; %air density at cruise altitude
rho=5.09e-4;                          %(overrides the computed value)
q=1/2*rho*(1.688*Vcr)^2;              %dynamic pressure [lb/ft2]
CL=@(x) gwt/(q*S(x));
CD=@(x) CDmin+CL(x)^2/(pi*x(3)*e(x));
Tcr=@(x) q*S(x)*CD(x);                %cruise thrust required
TWRcr=@(x) q*(CDmin/WSR(x)+k(x)*(n/q)^2*WSR(x))+Ps/Vcr; %thrust-weight ratio
%constant-velocity turn
n=2;                                  %load factor
CDtn=@(x) CDmin+k(x)*(n*gwt/(q*S(x)))^2;
Ttn=@(x) q*S(x)*CDtn(x);              %thrust needed
%TWRtn=@(x) q*(CDmin/WSR(x)+k(x)*(n/q)^2*WSR(x))+Ps/Vcr; %thrust-weight ratio
• 66. Example: Minimum Thrust Design
%service ceiling
Vv=1.667;                               %[fps]
drho=(1-kd*Aceil)^4.2561; rho=dsl*drho; %air density at ceiling
qsc=@(x) gwt/S(x)*sqrt(k(x)/(3*CDmin));
CDsc=@(x) CDmin+k(x)*(gwt/(qsc(x)*S(x)))^2;
Tsc=@(x) Vv/sqrt(2*qsc(x)/rho)*gwt+qsc(x)*S(x)*CDsc(x);
TWRsc=@(x) Vv/sqrt(2/rho*WSR(x)*sqrt(k(x)/(3*CDmin)))+4*sqrt(k(x)*CDmin/3); %thrust-weight ratio
f=@(x) (10*S(x)+x(1))/gwt;              %objective
g=@(x) [TWRto(x)/x(1)*gwt-1; TWRcl(x)/x(1)*gwt-1; Tcr(x)/x(1)-1; Ttn(x)/x(1)-1; Tsc(x)/x(1)-1; CLmax(x)/2.5-1]; %constraints