2. Course Materials
• Arora, Introduction to Optimum Design, 3e, Elsevier
(https://www.researchgate.net/publication/273120102_Introduction_to_Optimum_design)
• Parkinson, Optimization Methods for Engineering Design, Brigham Young University
(http://apmonitor.com/me575/index.php/Main/BookChapters)
• Iqbal, Fundamental Engineering Optimization Methods, BookBoon
(https://bookboon.com/en/fundamental-engineering-optimization-methods-ebook)
3. Direct Search Methods
• Direct search methods are gradient-free methods that solve the optimization problem using only function evaluations.
– Nelder-Mead simplex algorithm. Originally derived for solving parameter estimation problems, the Nelder-Mead algorithm also solves unconstrained optimization problems.
– Stochastic methods. Simulated annealing is the most common.
– Evolutionary algorithms. These algorithms are modeled after biological evolution, e.g., the genetic algorithm (GA).
– Swarm intelligence. These methods model the flocking behavior of intelligent species, e.g., particle swarm optimization (PSO), ant colony optimization (ACO), etc.
– Metaheuristics. General population-based heuristic methods, e.g., harmony search.
4. Nelder-Mead Algorithm
• The Nelder-Mead (NM) algorithm finds the minimum by enclosing it in a simplex, i.e., a convex hull of 𝑛 + 1 non-degenerate vertices, and gradually shrinking the simplex.
– The algorithm is implemented in the MATLAB 'fminsearch' function (a minimal usage sketch follows this list).
• Let 𝑥0, 𝑥1, …, 𝑥𝑛 define the vertices of the simplex with associated function values 𝑓𝑗 = 𝑓(𝑥𝑗), 𝑗 = 0, …, 𝑛; the NM method evaluates one or two additional points in each iteration, followed by one of the following transformations of the simplex:
– Reflection away from the worst vertex, i.e., the one with the highest function value.
– Shrinkage towards the best vertex, i.e., the one with the lowest function value.
– Expansion if the function value improves.
– Contraction in the neighborhood of a minimum.
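• As a quick illustration of 'fminsearch' (this sketch is ours, not from the text), the following lines minimize the Rosenbrock test function:
f = @(x) 100*(x(2)-x(1)^2)^2 + (1-x(1))^2; %test function, minimum at [1 1]
x0 = [-1.2 1]; %arbitrary starting point
[xopt,fopt] = fminsearch(f,x0) %returns xopt near [1 1], fopt near 0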
6. Nelder-Mead Algorithm
1. Initialize: given a point 𝑥0, compute 𝑥𝑗 = 𝑥0 + ℎ𝑗𝑒𝑗, 𝑗 = 1, …, 𝑛 as the vertices of simplex 𝑆. Choose constants 𝛼, 𝛽, 𝛾, 𝛿 to satisfy 𝛾 > 1, 𝛾 > 𝛼 > 0, 0 < 𝛽 < 1, and 0 < 𝛿 < 1; for example, choose 𝛼 = 1, 𝛽 = 0.5, 𝛾 = 2, 𝛿 = 0.5.
2. Check termination. Exit if the marginal improvement in the function value is below tolerance, or if the simplex size falls below a specified minimum.
3. Ordering. Rank the vertices of 𝑆 in order of function value: 𝑓0 ≤ 𝑓1 ≤ ⋯ ≤ 𝑓𝑛.
4. Find centroid. Let 𝑓ℎ = max𝑗 𝑓𝑗, 𝑓𝑠 = max𝑗≠ℎ 𝑓𝑗, 𝑓𝑙 = min𝑗 𝑓𝑗; compute the centroid of the remaining vertices: 𝑐 = (1/𝑛) ∑𝑗≠ℎ 𝑥𝑗
7. Nelder-Mead Algorithm
5. Reflect. Compute the reflection point 𝑥𝑟 = 𝑐 + 𝛼(𝑐 − 𝑥ℎ) and evaluate 𝑓𝑟 = 𝑓(𝑥𝑟).
– Expand. If 𝑓𝑟 < 𝑓𝑙, compute the expansion point 𝑥𝑒 = 𝑐 + 𝛾(𝑥𝑟 − 𝑐); if 𝑓𝑒 < 𝑓𝑟, replace 𝑥ℎ by 𝑥𝑒, otherwise replace 𝑥ℎ by 𝑥𝑟.
– Replace. If 𝑓𝑙 ≤ 𝑓𝑟 < 𝑓𝑠, replace 𝑥ℎ by 𝑥𝑟.
– Contract outside. If 𝑓𝑠 ≤ 𝑓𝑟 < 𝑓ℎ, compute the contraction point 𝑥𝑐 = 𝑐 + 𝛽(𝑥𝑟 − 𝑐); if 𝑓𝑐 < 𝑓𝑟, replace 𝑥ℎ by 𝑥𝑐, otherwise go to 6.
– Contract inside. If 𝑓𝑟 ≥ 𝑓ℎ, compute the contraction point 𝑥𝑐 = 𝑐 + 𝛽(𝑥ℎ − 𝑐); if 𝑓𝑐 < 𝑓ℎ, replace 𝑥ℎ by 𝑥𝑐, otherwise go to 6.
6. Shrink. If no point was accepted in step 5, shrink the simplex towards the best vertex by computing 𝑛 new vertices: 𝑥𝑗 = 𝑥𝑙 + 𝛿(𝑥𝑗 − 𝑥𝑙).
7. Go to 2.
9. Nelder-Mead Algorithm
% Nelder-Mead iteration (pts: cell array of nvar+1 vertices, fpts: their
% function values, xsum: running sum of vertices; a,b,c,d = alpha,beta,gamma,delta)
while max(fpts)-min(fpts) > tol
[fsort,ix]=sort(fpts);
xh=pts{ix(end)}; %worst vertex
xc=(xsum-xh)/nvar; %centroid of remaining vertices
xr=xc+a*(xc-xh); fr=f(xr); %reflection point
if fr<fsort(1) %expand
xe=xc+c*(xr-xc); if f(xe)<fr, xr=xe; end
elseif fr<fsort(end-1) %accept reflection
elseif fr<fsort(end) %contract outside
xco=xc+b*(xr-xc); if f(xco)<fr, xr=xco; else, xr=[]; end
else %contract inside
xci=xc+b*(xh-xc); if f(xci)<fsort(end), xr=xci; else, xr=[]; end
end
if isempty(xr) %shrink towards best vertex
xb=pts{ix(1)}; xsum=0*xb;
for i=1:nvar+1, pts{i}=xb+d*(pts{i}-xb); fpts(i)=f(pts{i}); xsum=xsum+pts{i}; end
else %replace worst vertex
pts{ix(end)}=xr; fpts(ix(end))=f(xr);
xsum=xsum-xh+xr;
end
end
xopt=xsum/(nvar+1); disp([xopt(:)' f(xopt)]) %centroid of final simplex
10. Design Example: Insulated Spherical Tank
Problem: choose the insulation thickness 𝑡 to minimize the life-cycle costs of a spherical tank of radius 𝑅.
Life-cycle cost: 𝑐2𝐴𝑡 + 𝑐3𝐺 + 𝑐4𝐺 × 𝑝𝑤𝑓
Annual heat gain: 𝐺 = 365 × 24 × Δ𝑇 × 𝐴/(𝜌𝑡)
Surface area: 𝐴 = 4𝜋𝑅² [m²]
Thermal resistivity: 𝜌 [m·s·°C/J]
Insulation cost: 𝑐2 [$/m³]
Refrigeration equipment cost: 𝑐3 [$/Wh]
Annual operating cost: 𝑐4 [$/Wh]
Present worth factor: 𝑝𝑤𝑓 = (1/𝑖)[1 − 1/(1 + 𝑖)ⁿ], where 𝑖 is the interest rate and 𝑛 the life in years
Note: there are no constraints in this problem.
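• Since the problem is unconstrained, it can be handed directly to a direct search method. A minimal MATLAB sketch follows; all parameter values here (R, ΔT, ρ, c2, c3, c4, i, n) are illustrative assumptions, not data from the example:
R=3; dT=20; rho=30; %radius [m], temp. difference [C], resistivity (assumed)
c2=100; c3=.02; c4=.01; %cost coefficients (assumed)
i=.07; n=10; %interest rate, life in years (assumed)
A=4*pi*R^2; %surface area
pwf=(1-1/(1+i)^n)/i; %present worth factor
G=@(t) 365*24*dT*A/(rho*t); %annual heat gain
LCC=@(t) c2*A*t+(c3+c4*pwf)*G(t); %life-cycle cost
topt=fminsearch(@(t) LCC(abs(t)),0.1); topt=abs(topt) %abs() keeps t positive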
12. Hooke-Jeeves Pattern Search
• The pattern search works by locally evaluating a set of points along 𝑛 linearly independent search directions and polling the results.
• It uses a combination of exploratory moves and pattern moves to find the optimum.
– An exploratory move is performed in the vicinity of the current point along the search directions.
– The results of the exploratory moves are polled to find an improved objective value and the new design point.
– Two successive local moves along the same direction are used to make a pattern move, i.e., a jump to a new location.
13. Pattern Search Algorithm
• Initialize: choose an initial point 𝑥0, mesh sizes Δ𝑖, 𝑖 = 1, …, 𝑛, expansion factor 𝛼 > 1, and termination parameter 𝜖.
• For 𝑘 = 0, 1, …
– Check termination. If Δ < 𝜖, quit.
– Perform a set of exploratory moves: 𝑥^𝑘 ± Δ𝑖𝑒𝑖, 𝑖 = 1, …, 𝑛.
– Poll (check the objective at) the perturbed points and compare with the current point. If the poll is successful, i.e., an improved objective is found, move to that point and increase the mesh size by the factor 𝛼.
– If the poll is unsuccessful, set Δ𝑖 = Δ𝑖/𝛼 and repeat the exploratory moves.
– If two successful polls result in moves along the same direction, make a pattern move: 𝑥𝑝^(𝑘+1) = 𝑥^𝑘 + (𝑥^𝑘 − 𝑥^(𝑘−1)).
– Set 𝑘 = 𝑘 + 1.
14. Pattern Search
• For example, assume that the initial point is: x0 = [2.1 1.7]
• Using a mesh size of one, the mesh points are selected as:
[1 0] + x0 = [3.1 1.7]
[0 1] + x0 = [2.1 2.7]
[-1 0] + x0 = [1.1 1.7]
[0 -1] + x0 = [2.1 0.7]
• The next point is x1 = [1.1 1.7]
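• A minimal sketch of this poll step in MATLAB follows; the objective f is our assumption (the slide does not specify one), chosen so that the poll reproduces the selection above:
f=@(x) (x(1)-1)^2+(x(2)-2)^2; %assumed objective for illustration
x0=[2.1 1.7]; Del=1; %current point and mesh size
D=[eye(2); -eye(2)]; %coordinate search directions
X=repmat(x0,4,1)+Del*D; %the four mesh points
for i=1:4, fX(i)=f(X(i,:)); end %poll the objective
[fmin,imin]=min(fX);
if fmin<f(x0), x1=X(imin,:), end %successful poll: x1 = [1.1 1.7]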
15. Simulated Annealing
• Simulated annealing (SA) is modeled after the annealing of solids, i.e., heating a solid to its liquid state and slowly cooling it while maintaining thermal equilibrium.
• During annealing, the atoms undertake random displacements. A move with a negative change in energy state is accepted; a positive change is accepted with probability 𝑃 = 𝑒^(−Δ𝐸/𝑘𝑇), where 𝑘 is the Boltzmann constant and 𝑇 is the absolute temperature.
• When applied to engineering problems, the objective function is analogous to energy, and the Boltzmann constant is replaced by the average change in the objective function.
• The algorithm is started at some initial temperature parameter 𝑇0, which is gradually reduced to simulate the annealing process.
16. Simulated Annealing
• At each setting of the temperature variable, random design changes are introduced; a change with a lower objective value is accepted; a change with a higher objective value is accepted with probability 𝑃 = 𝑒^(−Δ𝐸/(Δ𝐸𝑎𝑣𝑔𝑇)).
• Once steady state is reached, or after a certain number of changes, the temperature is reduced and the process is repeated.
• Although simulated annealing can be used for continuous problems, it is especially effective when applied to combinatorial problems.
18. Simulated Annealing
• Let 𝑇(𝑘) describe the annealing schedule for the temperature 𝑇; then the probability of acceptance of a design change is given as:
ℎ(Δ𝐸) = 𝑒^(−𝐸𝑘+1/𝑇) / (𝑒^(−𝐸𝑘+1/𝑇) + 𝑒^(−𝐸𝑘/𝑇)) = 1/(1 + 𝑒^(Δ𝐸/𝑇)), where Δ𝐸 = 𝐸𝑘+1 − 𝐸𝑘
• The probability distribution of the design perturbations is assumed to be normal, i.e., 𝑔(Δ𝑥) = (2𝜋𝑇)^(−𝑛/2) 𝑒^(−‖Δ𝑥‖²/(2𝑇))
• Theoretically, the global minimum of the energy function 𝐸(𝑥) can be reached if 𝑇0 is selected large enough and 𝑇(𝑘) decreases no faster than 𝑇𝑘 = 𝑇0/ln 𝑘
• For faster quenching, the above schedule may be replaced by 𝑇𝑘 = 𝑇0/𝑘
19. Simulated Annealing
• A schedule for 𝑇(𝑘) can be based on the acceptance probability of the worst-case design: let 𝑃𝑠 and 𝑃𝑓 denote the desired probabilities at the beginning and at termination; then a schedule for 𝑇 is developed as:
𝑇𝑠 = −1/ln 𝑃𝑠; 𝑇𝑓 = −1/ln 𝑃𝑓; 𝐹 = (𝑇𝑓/𝑇𝑠)^(1/(𝑁−1)); 𝑇𝑛+1 = 𝐹𝑇𝑛
For example, let 𝑃𝑠 = 0.5, 𝑃𝑓 = 10⁻⁸, 𝑁 = 100; then 𝑇𝑠 = 1.4427, 𝑇𝑓 = 0.054287, 𝐹 = 0.9674.
• An exponential schedule using a factor 𝐹 < 1 can also be drawn, where 𝑇𝑘 = 𝑇0𝑒^((𝐹−1)𝑘)
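• The schedule is straightforward to compute; the following lines reproduce the example values:
Ps=0.5; Pf=1e-8; N=100;
Ts=-1/log(Ps) %Ts = 1.4427
Tf=-1/log(Pf) %Tf = 0.054287
F=(Tf/Ts)^(1/(N-1)) %F = 0.9674
T=Ts*F.^(0:N-1); %resulting schedule, T(1)=Ts, T(N)=Tf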
20. Simulated Annealing
1. Pick an initial design; start at a high value of the temperature variable 𝑇; pick 𝑁𝑆, the number of cycles before temperature reduction, and optionally 𝑁, the total number of perturbations.
2. Start a cycle. Perturb one variable at a time; accept the new point if the perturbation results in a lower value of the objective function. If the perturbation results in a higher objective, accept it with probability 𝑃 = 𝑒^(−Δ𝐸/(Δ𝐸𝑎𝑣𝑔𝑇)), where Δ𝐸𝑎𝑣𝑔 is the running average of accepted objective variations.
3. After completing 𝑁𝑆 cycles (or once steady state has been reached), lower the temperature as per the desired schedule, e.g., 𝑇𝑛+1 = 𝐹𝑇𝑛.
4. Go to 2.
23. Simulated Annealing
• Simulated annealing was developed for unconstrained problems. In the case of constrained problems, possible approaches are:
– Reject the infeasible solutions generated in the process
– Use a penalty function to add the constraints to the objective
• Simulated annealing is particularly suited to discrete problems. In the case of continuous problems, SA is more effective when the objective surface is highly irregular with multiple local minima.
• For general continuous problems, gradient-based methods (e.g., GRG) are much faster and hence the preferred choice.
24. Simulated Annealing Code
%initialize: specify nvar, xl, xu; the objective f and constraints g
%are assumed defined as function handles
d=xu-xl; x=(xu+xl)/2; %initial design
xopt=x; kx=zeros(1,nvar); %current optimum, acceptance count
T0=1; T=T0*ones(1,nvar); %set temperature
Pf=1e-6; %final acceptance probability
m=10; %perturbations per cycle
while any(T > -1/log(Pf))
x=xopt;
for j=1:m %start of cycle
dx=2*T.*(rand(1,nvar)-.5); %apply random variation
dX=diag(dx);
for k=1:nvar %perturb one variable at a time
dx=limits(x,dX(k,:),xl,xu); %adjust to variable limits
fx=f(x+dx); gx=g(x+dx); %objective & constraints
px=exp(-fx/T(k))/(exp(-fx/T(k))+exp(-f(x)/T(k)));
%acceptance probability
25. Simulated Annealing Code
if any(gx>0), continue %constraint violation: reject
elseif fx>f(x) && rand()>px, continue %probabilistic rejection
else, x=x+dx;
if f(x)<f(xopt), xopt=x; end %record current optimum
kx=kx+(dx~=0); %acceptance count
end, end, end
T=T0./log(1+kx); %adjust temperature (log(1+kx) avoids log(0) for kx=0)
if all(kx>100*nvar), break, end %acceptance count exceeded
end
function dx = limits(x,dx,xl,xu)
while any(x+dx<xl)
idl=find(x+dx<xl);
dx(idl)=(1-rand(size(idl))).*dx(idl); %shrink step at lower bound
end
while any(x+dx>xu)
idu=find(x+dx>xu);
dx(idu)=(1-rand(size(idu))).*dx(idu); %shrink step at upper bound
end, end
26. Design Example: Symmetric Two-Bar Truss
Problem: design a symmetric two-bar truss of minimum mass to support a fixed load 𝑃. The truss has height 𝐻 and span 𝐵.
Design variables: diameter 𝑑, height 𝐻
Member length and area: 𝑙 = √((𝐵/2)² + 𝐻²), 𝐴 = 𝜋𝑑𝑡
Total weight: 𝑊 = 2𝜌𝑙𝐴
Constraints:
Axial stress: 𝜎 = 𝑃𝑙/(2𝜋𝑑𝑡𝐻) ≤ 𝜎𝑎
Buckling stress: 𝜎 ≤ 𝜎𝑏 = 𝜋²(𝑑² + 𝑡²)𝐸/(8𝑙²)
Deflection: 𝜀 = 𝑃𝑙³/(2𝜋𝑑𝑡𝐻²𝐸) ≤ 𝜀𝑚𝑎𝑥
27. Design Example: Symmetric Two-Bar Truss
Let the design variables be: diameter 𝑑 and height 𝐻.
Then, the design optimization problem is defined as:
Objective: min(𝑑,𝐻) 2𝜋𝑑𝑡𝜌√((𝐵/2)² + 𝐻²)
Subject to: 𝜎/𝜎𝑏 − 1 ≤ 0, 𝜎/𝜎𝑎 − 1 ≤ 0, 𝜀/𝜀𝑚𝑎𝑥 − 1 ≤ 0
For a particular problem, let:
𝑃 = 66 kips; 𝐵 = 60 in; 𝑡 = 0.15 in; 𝜌 = 0.3 lb/in³; 𝐸 = 30 × 10⁶ lb/in²;
𝜎𝑎 = 1 × 10⁵ psi; 𝜀𝑚𝑎𝑥 = 0.25 in
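• A MATLAB sketch of this model, using the parameter values above, might read as follows (the function-handle names are our own):
P=66e3; B=60; t=0.15; rho=0.3; E=30e6; siga=1e5; epsmax=0.25;
l=@(x) sqrt((B/2)^2+x(2)^2); %member length, x=[d H]
W=@(x) 2*pi*x(1)*t*rho*l(x); %objective: total weight
sig=@(x) P*l(x)/(2*pi*x(1)*t*x(2)); %axial stress
sigb=@(x) pi^2*(x(1)^2+t^2)*E/(8*l(x)^2); %buckling stress
defl=@(x) P*l(x)^3/(2*pi*x(1)*t*x(2)^2*E); %deflection
g=@(x) [sig(x)/sigb(x)-1; sig(x)/siga-1; defl(x)/epsmax-1]; %constraints g<=0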
29. SA Example: Two-bar Truss
• Design with three variables: 𝐻, 𝑑, 𝑡
Objective: min(𝑑,𝐻,𝑡) 2𝜋𝑑𝑡𝜌√((𝐵/2)² + 𝐻²)
Subject to: 𝜎/𝜎𝑏 − 1 ≤ 0, 𝜎/𝜎𝑎 − 1 ≤ 0, 𝜀/𝜀𝑚𝑎𝑥 − 1 ≤ 0
• SA results:
𝐻 = 29.9010 in, 𝑑 = 1.8631 in, 𝑡 = 0.0799 in; 𝑓 = 11.8801 lbs
• Note: the problem has multiple optima.
30. Genetic Algorithm
• The genetic algorithm (GA) is inspired by the process of natural selection in biological evolution.
• GA is characterized by three basic operations that guide reproduction:
– Selection of the fittest for mating
– Crossover of genetic information during mating
– Mutation, i.e., introduction of random changes during reproduction
• When applied to optimization problems, design variables are termed genes, a chromosome represents a trial solution to the problem, and a population is a collection of chromosomes.
• Members of the population are chosen for mating based on their fitness. Application of crossover and mutation yields a new generation with higher average fitness than the previous generation.
• The process continues until the improvement becomes negligible.
31. Genetic Algorithm
The steps in the application of a GA are:
• Determine a coding scheme (genetic representation) of the variables; two possible choices are value representation and binary representation.
• Pick crossover and mutation rates; typical values for binary representation are 0.8 and 0.001-0.01, respectively.
• Develop an initial population of 20-100 design choices, represented by chromosomes evenly spread over the design space.
• Use a fitness function to evaluate and rank the chromosomes.
• Select a mating pool from the population using one of the following:
– Roulette selection. The probability of a chromosome being picked is in proportion to its fitness.
– Tournament selection. A subset of the population is randomly selected, and the fittest members of the subset are included in the mating pool.
32. Genetic Algorithm
• Use crossover among pairs of parents to generate two children for the next generation:
– Binary coding. Use a crossover point to divide the chromosome. Copy the first part and cross the second part between the children.
– Value coding. Each gene is separately considered for crossover. Single-point, uniform, or blend crossover can be used.
• Occasionally, perform mutation to randomly change the design:
– Change individual bits in binary coding.
– Change parameter values (genes) in value coding.
• Evaluate the new generation for fitness. Retain the individuals with higher fitness for reproduction.
• The parent generation also competes in the selection process (elitism).
• Continue for a specified number of generations, or until the improvement in average fitness falls below a specified tolerance.
33. Binary Coding
• Binary coding was originally used to represent design choices
– Precision (smallest change in a variable): (𝑈𝑖 − 𝐿𝑖)/(2ⁿ − 1)
– Base-10 integer value: 𝑥𝑖𝑛𝑡10 = (2ⁿ − 1)/(𝑈 − 𝐿) × (𝑥 − 𝐿)
– Real value: 𝑥 = (𝑈 − 𝐿)/(2ⁿ − 1) × 𝑥𝑖𝑛𝑡10 + 𝐿
– Example: let 𝑥 = 3.567 with a range of 0 to 10; then for an 8-bit representation, 𝑥𝑖𝑛𝑡10 = round(255/10 × 3.567) = 91
• A chromosome is created by combining the binary strings of the design variables.
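• The coding is easily verified in MATLAB; the following lines reproduce the example:
L=0; U=10; n=8; x=3.567;
xint=round((2^n-1)/(U-L)*(x-L)) %integer value: 91
bits=dec2bin(xint,n) %binary string: '01011011'
xrec=(U-L)/(2^n-1)*bin2dec(bits)+L %recovered value: 3.5686 (precision limit)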
34. Value Coding
• The design variables are assembled together in a chromosome as numbers. In MATLAB, this can be done using a cell or structure array:
𝑥 = {𝑔𝑒𝑛𝑒1, 𝑔𝑒𝑛𝑒2, …, 𝑔𝑒𝑛𝑒𝑛}
• Scaling. For best results, the objective function and constraints are scaled by their maximum values, i.e., the values attained when the design parameters are at their maximum.
35. Fitness
• If there are no constraints, fitness equals the value of the objective function 𝑓.
• If constraints are present, we may use a penalty parameter 𝑃 to write 𝑓𝑖𝑡𝑛𝑒𝑠𝑠 = 𝑓 + 𝑃𝑔, where 𝑔 is the maximum constraint violation, given as 𝑔 = max{0, 𝑔1, 𝑔2, …, 𝑔𝑚}; 𝑔 = 0 indicates a feasible design.
• Alternatively, fitness may be based on the maximum value of the objective among the feasible designs in the current population, i.e., 𝑓𝑖𝑡𝑛𝑒𝑠𝑠 = 𝑓𝑚𝑎𝑥^𝑓𝑒𝑎𝑠 + 𝑔.
36. Crossover
• Let the crossover probability = 0.8. Generate a random number to determine whether crossover is to be performed.
• Single-point crossover. Generate a random integer between 1 and 𝑛 to determine the crossover point at gene 𝑖.
• Uniform crossover. Generate a random number 𝑟 for each of the 𝑛 genes; perform crossover for the individual genes.
• Blend crossover. Generate a random number 𝑟 for each of the 𝑛 genes, then obtain the children's genes as:
𝑦1 = 𝑟𝑥1 + (1 − 𝑟)𝑥2, 𝑦2 = (1 − 𝑟)𝑥1 + 𝑟𝑥2
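• A minimal MATLAB sketch of blend crossover over all 𝑛 genes (the parent values are hypothetical):
x1=[.56 .06 5]; x2=[1.2 .1 8]; %hypothetical parent chromosomes
r=rand(size(x1)); %one random number per gene
y1=r.*x1+(1-r).*x2; y2=(1-r).*x1+r.*x2; %children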
37. Mutation
• Mutation: pick a mutation parameter 0 ≤ 𝛽 < 1 (e.g., 𝛽 = 0.5).
– For 𝛽 = 0, the mutation is uniform in successive generations
– For 𝛽 > 0, the mutation range gradually decreases
• Compute the uniformity parameter 𝛼 as: 𝛼 = (1 − (𝑗 − 1)/𝑀)^𝛽, where 𝑗 is the current generation number and 𝑀 is the total number of generations.
• Pick a random number 𝑟 between 𝑥𝑚𝑖𝑛 and 𝑥𝑚𝑎𝑥; then perform the mutation as:
If 𝑟 ≤ 𝑥, then 𝑦 = 𝑥𝑚𝑖𝑛 + (𝑟 − 𝑥𝑚𝑖𝑛)^𝛼 (𝑥 − 𝑥𝑚𝑖𝑛)^(1−𝛼)
If 𝑟 > 𝑥, then 𝑦 = 𝑥𝑚𝑎𝑥 − (𝑥𝑚𝑎𝑥 − 𝑟)^𝛼 (𝑥𝑚𝑎𝑥 − 𝑥)^(1−𝛼)
38. Dynamic Mutation
• Mutation parameter: 𝛽 = 0 keeps the mutation uniform in every generation; 𝛽 > 0 makes it increasingly focused near the current value as the generations progress.
• Uniformity parameter: 𝛼 = 1 means the mutated variable is picked uniformly over its range; 𝛼 < 1 favors values near the current value of the variable.
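• A sketch of the dynamic mutation of a single gene, implementing the formulas above (our own function, e.g., saved as dynmutate.m):
function y = dynmutate(x,xmin,xmax,j,M,beta)
alpha=(1-(j-1)/M)^beta; %uniformity parameter
r=xmin+(xmax-xmin)*rand(); %random point in the variable's range
if r<=x, y=xmin+(r-xmin)^alpha*(x-xmin)^(1-alpha);
else, y=xmax-(xmax-r)^alpha*(xmax-x)^(1-alpha);
end
end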
39. Elitism
• Combine the N children with N parents to obtain 2N designs
• Sort the designs by fitness values and pick the N most fit designs
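• A minimal sketch of this step, assuming N-by-nvar matrices parents and children and a fitness function fit (lower is better, per the fitness slide):
pool=[parents; children]; %2N candidate designs
for i=1:2*N, fvals(i)=fit(pool(i,:)); end
[~,ix]=sort(fvals); %ascending: most fit first
parents=pool(ix(1:N),:); %survivors for the next generation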
43. Design Example: Coil Spring
Problem: design a minimum-mass spring to carry a given axial load 𝑃 without material failure, while satisfying minimum deflection and minimum surge wave frequency requirements.
Design variables: mean coil diameter 𝐷, wire diameter 𝑑, number of active coils 𝑁
Design equations:
Spring mass: 𝑚 = (𝜋²/4)(𝑁 + 𝑄)𝐷𝑑²𝜌
Load deflection: 𝑃 = 𝐾𝛿, where 𝐾 = 𝑑⁴𝐺/(8𝐷³𝑁)
Shear stress: 𝜏 = 8𝑘𝑃𝐷/(𝜋𝑑³)
Stress concentration factor: 𝑘 = (4𝐷 − 𝑑)/(4(𝐷 − 𝑑)) + 0.615𝑑/𝐷
Frequency of surge waves: 𝜔 = 𝑑/(2𝜋𝑁𝐷²) √(𝐺/(2𝜌))
45. GA Example: Coil Spring
% Coil spring model (Arora, p. 43)
% Design variables: coil diameter (D), wire diameter (d), number of active coils (N)
xl=[.5 .01 1]; %lower limits
xu=[1.5 .15 11]; %upper limits
x=(xl+xu)/2; %trial design
%parameters
P=10; %load [lb]
Q=2; %inactive coils
Del=.5; %min deflection [in]
Dmax=1.5; %max diameter [in]
gam=.285; %weight density [lb/in3]
gr=386; %gravity [in/sec2]
oml=100; %min frequency [Hz]
G=1.15e7; %shear modulus [lb/in2]
taumax=80e3; %max shear stress [lb/in2]
rho=gam/gr; %mass density
46. GA Example: Coil Spring
m=@(x) pi^2/4*(x(3)+Q)*x(1)*x(2)^2*rho; %spring mass
K=@(x) x(2)^4*G/(8*x(1)^3*x(3)); %spring constant
k=@(x) (x(1)-x(2)/4)/(x(1)-x(2))+.615*x(2)/x(1);
%stress concentration factor
tau=@(x) 8*k(x)*P*x(1)/(pi*x(2)^3); %shear stress
om=@(x) x(2)/(2*pi*x(1)^2*x(3))*sqrt(G*gr/(2*gam));
%surge frequency
del=@(x) P/K(x); %deflection
f=@(x) x(1)*x(2)^2*(x(3)+Q); %objective (mass up to the constant pi^2*rho/4), x=[D,d,N]
g=@(x) [tau(x)/taumax-1; oml/om(x)-1; Del/del(x)-1;
(x(1)+x(2))/Dmax-1]; %constraints g(x)<=0
47. MATLAB Optimization Problem Structure
• Problem structure, specified as a structure with the following fields:
– objective — Objective function
– fitnessfcn — Fitness function
– nvars — number of variables
– x0 — Starting point
– Aineq — Matrix for linear inequality constraints
– bineq — Vector for linear inequality constraints
– Aeq — Matrix for linear equality constraints
– beq — Vector for linear equality constraints
– lb — Lower bound for x
– ub — Upper bound for x
– nonlcon — Nonlinear constraint function
– solver — 'ga'
– options — Options created with optimoptions or gaoptimset
– rngstate — Optional field to reset the state of the RNG
48. GA Example: Coil Spring
Opt.nvars=3; %number of variables
Opt.fitnessfcn=f; %fitness function
Opt.nonlcon=@(x) deal(g(x),[]); %nonlinear constraints
Opt.lb=xl; %lower bounds
Opt.ub=xu; %upper bounds
Opt.solver='ga'; %solver
Opt.IntCon=3; %integer variable (number of coils N)
Opt.x0=[1 .08 6]; %initial guess
Opt.options=gaoptimset(@ga); %GA options
>> [x,fval]=ga(Opt) %Solve the problem using GA
Optimization terminated: average change in the penalty fitness
value less than options.FunctionTolerance
and constraint violation is less than options.ConstraintTolerance.
x =
0.5601 0.0590 5.0000
fval =
0.0136
49. Swarm Intelligence
• Swarm intelligence models the collective behavior of species in the biological kingdom. Examples include ant and termite colonies, schools of fish, flocks of birds, herds of animals, etc.
• Swarm intelligence manifests in artificial systems composed of
intelligent agents that coordinate using decentralized control and
self-organization.
• A typical swarm intelligence system has the following
characteristics:
– It is composed of many individuals that are relatively
homogeneous;
– The interactions among the individuals are based on simple
behavioral rules that exploit only local information;
– The overall behavior of the group emerges from the interactions of
individuals with each other and with their environment.
50. Particle Swarm Optimization
• The design space is initialized with a random population of
solutions (particles) with associated fitness values. Particles move
around the search space with designated velocities.
• Each particle’s position is iteratively updated based on:
– its known best location (pbest), and
– the overall best location achieved by any particle (gbest).
• The update equations are:
v[] = v[] + c1*rand()*(pbest[] - present[]) + c2*rand()*(gbest[] - present[])
present[] = present[] + v[]
where v[] is the particle velocity, present[] is the current position, pbest[] and gbest[] are defined above, rand() is a random number in (0,1), and c1, c2 are learning factors in the range [0,4]; usually c1 = c2 = 2.
http://www.swarmintelligence.org/tutorials.php
51. Particle Swarm Optimization
• Parameters that need to be tuned in PSO include:
– The number of particles: the typical range is 20 - 40. More particles
may be included for difficult problems.
– Vmax: it determines the maximum change in particle position in an
iteration. The range of the particle may be used as Vmax. For
example, if a particle has a range [-10,10], then Vmax = 20.
– Learning factors: c1 and c2, usually equal to 2. Other values have been suggested, with c1 equal to c2 and in the range [0, 4].
– The stopping condition: the maximum number of iterations the PSO executes and/or the minimum error requirement.
52. PSO Algorithm
1. Initialize: choose the neighborhood size N, inertia W, stall counter c = 0, the self-adjustment weight y1, and the social adjustment weight y2.
2. Create an initial population of particles; set the initial velocities in the range [-r, r].
3. Compute the objective function value of each particle. Record the current best position p(i) of each particle, and the global best position g.
4. Iterate: for each particle, choose a random subset S of N particles; find the best local fitness value in S, and g(S), the position of the neighbor with the best fitness.
– Update the particle velocity: v = W*v + y1*u1.*(p-x) + y2*u2.*(g(S)-x), where u1, u2 are vectors of uniform random numbers
– Update the particle position: x = x + v
– Enforce the bounds: if any particle position is outside the bounds, set it equal to the bound.
53. PSO Algorithm
5. Evaluate the objective function f(x)
– If f(x) < f(p), set p = x
– If f(x) < f(g), then
• Set c = max(0, c-1)
• If c < 2, then set W = 2*W
• If c > 5, then set W = W/2
– Otherwise, set c = c + 1
6. Stop if the maximum number of iterations is exceeded, or if the relative change in the best objective function value f(g) over the last M iterations is less than a tolerance parameter.
7. Go to 4.
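• A minimal global-best PSO sketch in MATLAB (simplified: no neighborhoods or inertia adaptation; the objective and all parameter values are our assumptions):
f=@(x) sum(x.^2,2); %placeholder objective (rows are particles)
n=2; np=30; W=0.7; y1=2; y2=2; %dimensions, swarm size, weights (assumed)
lb=-5; ub=5; r=ub-lb;
x=lb+r*rand(np,n); v=r*(2*rand(np,n)-1)/10; %initial positions, small velocities
p=x; fp=f(x); %personal bests
[fg,ig]=min(fp); g=p(ig,:); %global best
for it=1:100
v=W*v+y1*rand(np,n).*(p-x)+y2*rand(np,n).*(g-x); %velocity update
x=min(max(x+v,lb),ub); %position update with bound clipping
fx=f(x);
im=fx<fp; p(im,:)=x(im,:); fp(im)=fx(im); %update personal bests
[fmin,imin]=min(fp);
if fmin<fg, fg=fmin; g=p(imin,:); end %update global best
end
disp([g fg])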
56. Three-bar Truss Design Using GA
MATLAB commands (the objective f and the linear constraint data cg, cg0 are assumed defined earlier)
Opt=struct('solver','ga','fitnessfcn',f,'nvars',2);
Opt.x0=[.01,.01];
Opt.Aineq=-cg;
Opt.bineq=-cg0;
Opt.lb=[0,0];
Opt.ub=[.5,.5];
Opt.options=[];
[x,fval]=ga(Opt); %Solve the problem using GA
Optimization terminated: average change in the fitness
value less than options.FunctionTolerance.
>> x,fval
x =
0.0006 0.2546
fval =
0.1463
57. Three-bar Truss Design Using PSO
MATLAB Commands
Opt.solver='particleswarm';
Opt.objective=f;
[x,fval]=particleswarm(Opt); %try PSO
Optimization ended: relative change in the objective
value over the last OPTIONS.MaxStallIterations
iterations is less than OPTIONS.FunctionTolerance.
>> x,fval
x =
0 0
fval =
0
• Note: particleswarm handles only bound constraints, so the linear constraints above are ignored and the search returns the unconstrained minimum.
58. Three-bar Truss Using Pattern Search
MATLAB Commands
Opt.solver='patternsearch';
[x,fval]=patternsearch(Opt); %try pattern search
Optimization terminated: mesh size less than
options.MeshTolerance.
>> x,fval
x =
0 0.2552
fval =
0.1459
59. Three-bar Truss Design Using SA
MATLAB Commands
Opt.solver='simulannealbnd';
[x,fval]=simulannealbnd(Opt); %try simulated annealing
Optimization terminated: change in best function value
less than options.FunctionTolerance.
>> x,fval
x =
1.0e-05 *
0.0159 0.2206
fval =
1.4882e-06
• However, using our own SA code,
x = 0.0000 0.2588
f = 0.1479
60. Example: Minimum Thrust Design
• Problem: select an engine for a business jet in view of its thrust requirements.
• Background: aircraft thrust requirements are dictated by the minimum thrust needed during:
– Climb
– Cruise
– Sustained turn
– Service ceiling