The authors devise a new convex approximation, called DQA, which utilizes information from two consecutive iterates. A filter method is also introduced to guarantee global convergence.
A New Optimization Algorithm for Topology Optimization
The Center of Innovative Design Optimization Technology, Applied Mechanics and Optimal Design Lab.
Outline
1. Introduction
2. Diagonal Quadratic Approximation (DQA)
3. Filtered Diagonal Quadratic Approximation (FDQA)
4. Numerical Examples
5. Conclusions
● Considered Problem

minimize f_0(x)
subject to f_j(x) ≤ 0, j = 1, …, m,
where x_{i,L} ≤ x_i ≤ x_{i,U}, i = 1, …, n.
Nonlinear and continuously differentiable functions are considered.
The number of design variables is much larger than the number of constraints (n >> m).
The computational cost of each analysis is very high.
To solve this class of problems, Sequential Approximate Optimization (SAO) with the dual method has been developed over the last two decades.
Flowchart of SAO with the dual method:
1. Set the optimization parameters.
2. Compute f_j(x^(0)) and ∇f_j(x^(0)), j = 0, …, m.
3. Construct a dual subproblem.
4. Solve the dual subproblem.
5. Move to the next point: x^(k+1) = x^(k)*.
6. Converged? If yes, END.
7. If no, compute f_j(x^(k+1)) and ∇f_j(x^(k+1)), j = 0, …, m.
8. Set k = k+1 and return to step 3.

What is the dual subproblem?
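The outer loop above can be sketched in code. This is a minimal illustration, not the authors' implementation: the helper names are hypothetical, and a damped gradient step stands in for the dual-subproblem solve.

```python
import numpy as np

def sao_loop(f, grad_f, x0, solve_subproblem, eps_x=1e-6, max_iter=100):
    """Generic SAO driver following the flowchart above."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        fx, gx = f(x), grad_f(x)               # steps 2/7: values and gradients
        x_new = solve_subproblem(x, fx, gx)    # steps 3-4: build/solve subproblem
        if np.linalg.norm(x_new - x) < eps_x:  # step 6: convergence test
            return x_new, k + 1
        x = x_new                              # steps 5 and 8: move on, k = k + 1
    return x, max_iter

# Toy usage: minimize f(x) = x.x with a damped gradient step standing in
# for the subproblem solve.
f = lambda x: float(x @ x)
g = lambda x: 2.0 * x
step = lambda x, fx, gx: x - 0.4 * gx
x_opt, iters = sao_loop(f, g, np.array([1.0, -2.0]), step)
```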
Primal Subproblem
At each kth iteration point x^(k):
minimize f̃_0^(k)(x)
subject to f̃_j^(k)(x) ≤ 0, j = 1, …, m,
where x_{i,L}^(k) ≤ x_i ≤ x_{i,U}^(k), i = 1, …, n.
(The tilde denotes an approximate function; x is the primal variable.)

Dual Subproblem
maximize L(x(λ), λ) = f̃_0(x(λ)) + Σ_{j=1}^{m} λ_j f̃_j(x(λ))
subject to λ_j ≥ 0, j = 1, …, m.
(λ is the Lagrange multiplier, also called the dual variable.)

If n >> m, the dual method is more efficient than the primal method.
The primal variable is an explicit function of the dual variable.
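Because the approximation is separable and quadratic, the statement "the primal variable is an explicit function of the dual variable" can be made concrete. The sketch below (assumed names, one constraint, grid search for the dual maximization) is illustrative only:

```python
import numpy as np

def x_of_lambda(lam, xk, g0, h0, g1, h1, lo, hi):
    """Minimizer of L = f0~ + lam*f1~ over the box: the model is separable
    quadratic, so each x_i is an explicit function of the dual variable."""
    g = g0 + lam * g1          # combined linear coefficients
    h = h0 + lam * h1          # combined positive diagonal curvatures
    return np.clip(xk - g / h, lo, hi)

def dual_value(lam, xk, f0k, g0, h0, f1k, g1, h1, lo, hi):
    """Evaluate the dual function L(x(lam), lam)."""
    x = x_of_lambda(lam, xk, g0, h0, g1, h1, lo, hi)
    d = x - xk
    f0 = f0k + g0 @ d + 0.5 * (h0 * d * d).sum()
    f1 = f1k + g1 @ d + 0.5 * (h1 * d * d).sum()
    return f0 + lam * f1

# Tiny usage with n = 2, m = 1: maximize the (concave) dual by grid search.
xk, lo, hi = np.zeros(2), -1.0, 1.0
g0, h0 = np.array([1.0, -1.0]), np.ones(2)
f1k, g1, h1 = -0.5, np.array([1.0, 1.0]), np.ones(2)
lams = np.linspace(0.0, 5.0, 501)
vals = [dual_value(l, xk, 0.0, g0, h0, f1k, g1, h1, lo, hi) for l in lams]
lam_star = float(lams[int(np.argmax(vals))])
```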
Diagonal Quadratic Approximation (DQA)

f̃_j^(k)(x) = f_j(x^(k)) + Σ_{i=1}^{n} (∂f_j^(k)/∂x_i)(x_i − x_i^(k)) + (1/2) Σ_{i=1}^{n} h_{i,j}^(k) (x_i − x_i^(k))², j = 0, …, m.

• The DQA consists of the first-order Taylor expansion plus a separable quadratic term.

What is a Convex Separable Approximation?
1. Separable: the off-diagonal Hessian terms of the approximate functions are all zero.
2. Convex: the diagonal Hessian terms of the approximate functions are non-negative.
To construct the dual subproblem effectively, a convex separable approximation should be used for the approximating functions.
• A good method for approximating the true diagonal Hessian terms is therefore very important.
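The DQA formula above can be written as a small function; the sample data below are made up for illustration (h ≥ 0 makes the model convex, and the model Hessian is diag(h) by construction, i.e. separable):

```python
import numpy as np

def dqa(x, xk, fk, gk, hk):
    """f~(x) = f(xk) + gk.(x - xk) + 0.5 * sum_i hk_i * (x_i - xk_i)^2."""
    d = np.asarray(x, dtype=float) - xk
    return fk + gk @ d + 0.5 * (hk * d * d).sum()

# Sample data at a current iterate xk (values are illustrative only).
xk = np.array([1.0, 2.0])
fk, gk = 3.0, np.array([0.5, -1.0])
hk = np.array([2.0, 4.0])   # nonnegative diagonal curvatures => convex
```

The model matches the function value and gradient at xk exactly, which is the defining property of the first-order Taylor part.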
Previous methods approximating the diagonal Hessian terms h_{i,j}^(k) ≈ ∂²f̃_j/∂x_i² at x^(k):

One-point approximations:
• Exponential approximation
• MMA approximation
• CONLIN approximation
Two-point approximation:
• TANA approximation

Groenwold, A. A., Etman, L. F. P., and Wood, D. W., "Approximated approximations for SAO," Structural and Multidisciplinary Optimization, Vol. 41, No. 1, 2010, pp. 39-56.
Groenwold, A. A., Etman, L. F. P., Snyman, J. A., and Rooda, J. E., "Incomplete series expansion for function approximation," Structural and Multidisciplinary Optimization, Vol. 34, No. 1, 2007, pp. 21-40.
Previous methods to enhance the convergence property of the SAO:
• NLP filter for the SQP [5]
• GCMMA [4]
• NLP filter for the DQA [7]

The filter tests whether x^(k)* is acceptable. If it is not acceptable, an inner iteration is conducted. In the inner iteration:
• Adjust the move limit.
• Enforce conservatism: the diagonal terms h_{i,j}^(k) are increased using a user-defined parameter.

4 Svanberg, K., "A class of globally convergent optimization methods based on conservative convex separable approximations," SIAM Journal on Optimization, Vol. 12, No. 2, 2001, pp. 555-573.
5 Fletcher, R., Leyffer, S., and Toint, P. L., "On the global convergence of a filter SQP algorithm," SIAM Journal on Optimization, Vol. 13, No. 1, 2002, pp. 44-59.
7 Groenwold, A. A., and Etman, L. F. P., "On the conditional acceptance of iterates in SAO algorithms based on convex separable approximations," Structural and Multidisciplinary Optimization, Vol. 42, No. 2, 2010, pp. 165-178.
Research objectives:
1. Propose a DQA with highly accurate diagonal Hessian terms employing a gradient-based two-point approximation method. As illustrated in the two-point approximation figure, the approximate function f̃(x) is constructed from the function values f(x^0), f(x^1) and the gradients ∇f(x^0), ∇f(x^1) at two design points.
2. Propose a Nonlinear Programming (NLP) filter appropriate to the proposed DQA to improve the convergence property.

Flowchart of the proposed algorithm (additional steps in the SAO loop):
1. Set the optimization parameters; initialize ρ = ρ^0.
2. Compute f_j(x^(0)) and ∇f_j(x^(0)), j = 0, …, m.
3. Construct a dual subproblem.
4. Solve the dual subproblem.
5. Compute f_j(x^(k)*), j = 0, …, m, and test x^(k)* with the NLP filter; if it is rejected, conduct an inner iteration (reduce the move limit, make a conservative approximation).
10. Move to the next point: x^(k+1) = x^(k)*.
11. Test convergence; if converged, END.
12. If not, compute f_j(x^(k+1)) and ∇f_j(x^(k+1)), j = 0, …, m.
13. Set k = k+1 and return to step 3.
For the diagonal Hessian terms h_{i,j}^(k) of the DQA, we use the second-order derivative of the enhanced Two-point Diagonal Quadratic Approximation (eTDQA)*:

f̃_II^eTDQA(x) = f(x^(k−1)) + Σ_{i=1}^{n} (∂f^(k−1)/∂y_i)(y_i − y_i^(k−1)) + (1/2) Σ_{i=1}^{n} G_i (y_i − y_i^(k−1))² + (ε/2) H_e Σ_{i=1}^{n} (y_i − y_i^(k−1))²,
where y_i = (x_i + c_i)^{p_i}.

The determination of the parameters (c_i, p_i, G_i, H_e, ε) is provided in the previous research.

*Kim, J. R., and Choi, D. H., "Enhanced two-point diagonal quadratic approximation methods for design optimization," Computer Methods in Applied Mechanics and Engineering, Vol. 197, 2008, pp. 846-856.
Analytically deriving the Hessian terms of the eTDQA gives diagonal entries of the form

h_{ij}^(k) = (∂f^(k)/∂x_i)(p_i − 1)/(x_i^(k) + c_i) + (G_i + εH_e) p_i² (x_i^(k) + c_i)^{2(p_i − 1)} if i = j,
h_{ij}^(k) = 0 if i ≠ j,

where y_i = (x_i + c_i)^{p_i}.

Separable! (All off-diagonal terms are zero.)
Not convex! (The diagonal terms can be negative.)
● Convexifying Operation of Part 1

Split each diagonal term into two parts:
Part 1 = (∂f^(k)/∂x_i)(p_i − 1)/(x_i^(k) + c_i),  Part 2 = (G_i + εH_e) p_i² (x_i^(k) + c_i)^{2(p_i − 1)}.

Since (x_i + c_i) > 0 by the rule of the eTDQA, the sign of Part 1 is controlled through p_i:
• if ∂f^(k)/∂x_i < 0, set 0 < p_i ≤ 1;
• if ∂f^(k)/∂x_i > 0, set p_i ≥ 1;
then Part 1 ≥ 0.

*S. Park, D. Choi (2011), "A new convex separable approximation based on two-point diagonal quadratic approximation for large-scale structural design optimization," WCSMO 9.
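The sign rule for Part 1 is simple enough to state as code. This is an illustrative sketch with hypothetical names, not the authors' implementation; the candidate exponent would come from the two-point formula on the next slide:

```python
def part1(dfdx, p, x_plus_c):
    """Part 1 of the diagonal term: (df/dx_i) * (p_i - 1) / (x_i + c_i)."""
    return dfdx * (p - 1.0) / x_plus_c

def pick_p(dfdx, p_candidate):
    """Clip a candidate exponent to the side of 1 that keeps Part 1 >= 0,
    per the sign rule above (x_i + c_i > 0 is guaranteed by the eTDQA)."""
    if dfdx < 0.0:
        return min(p_candidate, 1.0)   # p_i <= 1 for a negative gradient
    return max(p_candidate, 1.0)       # p_i >= 1 for a positive gradient
```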
● Determination of the Exponent Term p_i

The two-point matching condition gives (Eqn. †):
p_i = 1 + [ln(∂f^(k)/∂x_i) − ln(∂f^(k−1)/∂x_i)] / [ln(x_i^(k) + c_i) − ln(x_i^(k−1) + c_i)].

Eqn. † is used when it is well defined; otherwise p_i is set to a bound value (e.g., 1, −1, or 3) according to the sign of ∂f^(k)/∂x_i, whether x_i^(k) is larger or smaller than x_i^(k−1), and whether the gradient ratio (∂f^(k)/∂x_i)/(∂f^(k−1)/∂x_i) is ≥ 1, in (0, 1), or ≤ 0.

*S. Park, D. Choi (2011), "A new convex separable approximation based on two-point diagonal quadratic approximation for large-scale structural design optimization," WCSMO 9.
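A minimal sketch of the exponent formula, with an assumed single fallback value where the original uses several case-dependent bounds:

```python
import math

def exponent_p(df_k, df_km1, xc_k, xc_km1, fallback=1.0):
    """Two-point exponent: matches the gradient ratio between the current
    and previous iterates; xc_* denotes x_i + c_i at each point."""
    ratio = df_k / df_km1
    if ratio <= 0.0 or xc_k <= 0.0 or xc_km1 <= 0.0 or xc_k == xc_km1:
        return fallback            # formula undefined: use a bound value
    return 1.0 + math.log(ratio) / (math.log(xc_k) - math.log(xc_km1))
```

As a sanity check, for f(x) = x^2 (so f'(x) = 2x) the formula recovers p = 2 exactly from the points x = 1 and x = 2.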
● Convexifying Operation of Part 2

Part 2 is made positive by taking the maximum of a small positive constant ε̂ and the derived value:
Part 2 ← max(ε̂, (G_i + εH_e) p_i² (x_i^(k) + c_i)^{2(p_i − 1)}).

After the convexifying procedure, all diagonal terms of the Hessian matrix become positive.
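The Part 2 safeguard amounts to a floor on the second part of each diagonal term; a one-line sketch (eps_hat is an assumed small constant):

```python
def convexified_diag(part1_val, part2_val, eps_hat=1e-6):
    """Diagonal curvature after convexification: Part 2 floored at eps_hat,
    Part 1 assumed already nonnegative via the p_i sign rule."""
    return part1_val + max(eps_hat, part2_val)
```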
Flowchart of SAO with the dual method (k: outer iteration number, l: inner iteration number):
1. Set the optimization parameters.
2. Compute f_j(x^(0)) and ∇f_j(x^(0)), j = 0, …, m.
3. Construct a dual subproblem.
4. Solve the dual subproblem.
5. Move to the next point: x^(k+1) = x^(k)*.
6. Converged? If yes, END.
7. If no, compute f_j(x^(k+1)) and ∇f_j(x^(k+1)), j = 0, …, m.
8. Set k = k+1 and return to step 3.
Flowchart of the proposed FDQA (k: outer iteration number, l: inner iteration number). The NLP filter is adopted to improve the convergence property:
1. Set the optimization parameters; initialize ρ = ρ^0.
2. Compute f_j(x^(0)) and ∇f_j(x^(0)), j = 0, …, m.
3. Construct a dual subproblem.
4. Solve the dual subproblem.
5. Compute f_j(x^(k)*), j = 0, …, m, and test x^(k)* with the NLP filter; if it is rejected, conduct an inner iteration.
10. Move to the next point: x^(k+1) = x^(k)*.
11. Converged? If yes, END.
12. If no, compute f_j(x^(k+1)) and ∇f_j(x^(k+1)), j = 0, …, m.
13. Set k = k+1 and return to step 3.
Slanting envelope test using the NLP filter

For brevity, let f = f_0 and h = max(0, f_j), j = 1, …, m. At the kth iteration, the pair (h, f) is obtained at x^(k)*.

For the filter, a slanting envelope* is used to prove convergence. Each stored pair (h^(j), f^(j)) defines an envelope in the (h, f) plane:
• Reject (h, f) if f ≥ f^(j) and h ≥ h^(j) for some stored pair (the point lies inside an envelope).
• (h, f) is acceptable if f < f^(j) or h < h^(j) for every stored pair.
• (h, f) dominates (h^(j), f^(j)) if f ≤ f^(j) and h ≤ h^(j); dominated pairs are removed when the filter is updated.

Is x^(k)* acceptable to the current filter?
• yes → test the sufficient reduction criterion; if satisfied, update the filter and continue.
• no → inner iteration (l ← l + 1): reduce the move limit and make a conservative approximation.

*R. Fletcher, S. Leyffer, P. Toint (2002), "On the global convergence of a filter-SQP algorithm."
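A minimal sketch of the filter mechanics. The margin parameters beta and gamma (which produce the "slanting" of the envelope) are assumed illustrative values, not the ones used in the paper:

```python
def acceptable(h, f, filt, beta=0.99, gamma=0.01):
    """(h, f) is accepted only if it escapes the slanting envelope of
    EVERY stored filter pair (hj, fj)."""
    return all(h <= beta * hj or f + gamma * h <= fj for hj, fj in filt)

def dominates(h, f, hj, fj):
    """(h, f) dominates (hj, fj) if it is no worse in both measures."""
    return h <= hj and f <= fj

def update_filter(h, f, filt):
    """Remove pairs dominated by the new point, then store it."""
    kept = [(hj, fj) for hj, fj in filt if not dominates(h, f, hj, fj)]
    return kept + [(h, f)]
```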
Inner iteration (l ← l + 1): if x^(k)* is not acceptable to the current filter, reduce the move limit and make a conservative approximation, then re-solve the dual subproblem.

Move limit: the interval [x_{i,L}^(k,l), x_{i,U}^(k,l)] around x_i^(k,l) is decreased by one half in this study, giving [x_{i,L}^(k,l+1), x_{i,U}^(k,l+1)].

Conservatism: the approximate function is conservative if f̃_j^(k,l)(x^(k)*) ≥ f_j(x^(k)*). The approximate functions can easily be made conservative by increasing the Hessian terms:
h_{i,j}^(k) ← φ_j h_{i,j}^(k), j = 0, …, m,
where φ_j = max(1.1, φ̄_j) and φ̄_j is determined to match f̃_j^(k)(x^(k)*) = f_j(x^(k)*).
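The two inner-iteration adjustments can be sketched as follows. The function and argument names are hypothetical; `base` stands for the value of the approximation without its quadratic part, and `quad` for the quadratic part, so that scaling the curvatures by phi scales `quad`:

```python
def halve_move_limits(lo, hi, xk):
    """Shrink the move-limit interval around xk to half its width per side."""
    return xk - 0.5 * (xk - lo), xk + 0.5 * (hi - xk)

def conservative_phi(f_true, base, quad, margin=1.1):
    """Pick phi so that base + phi*quad >= f_true at the rejected iterate
    (base = f(xk) + linear term, quad = 0.5 * sum_i h_i d_i^2), with a
    fixed lower bound of `margin` as on the slide."""
    if quad <= 0.0:
        return margin
    return max(margin, (f_true - base) / quad)
```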
Sufficient reduction criterion: test whether the reduction of the real objective function is smaller than expected.

Let Δq = f̃_0(x^(k)) − f̃_0(x^(k)*) and Δf = f_0(x^(k)) − f_0(x^(k)*), with σ = 0.1.

If Δq > 0 and Δf < σΔq, the reduction of the objective function is not sufficient: go to an inner iteration (l ← l + 1; reduce the move limit, make a conservative approximation).
Otherwise, update the filter and go to the next iteration.
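The criterion above is a one-line predicate; sigma = 0.1 is the value stated on the slide, and the function name is illustrative:

```python
def needs_inner_iteration(f_k, f_star, ft_k, ft_star, sigma=0.1):
    """True when the actual decrease falls short of sigma times the
    decrease predicted by the approximate objective."""
    dq = ft_k - ft_star   # predicted reduction (delta q)
    df = f_k - f_star     # actual reduction (delta f)
    return dq > 0.0 and df < sigma * dq
```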
No. | Option | Problem                                                   | n    | m
1   | a      | Vanderplaats' cantilever beam problem [7]                 | 20   | 21
1   | b      | Vanderplaats' cantilever beam problem [7]                 | 200  | 201
2   | -      | Svanberg's 5-variate cantilever beam problem [9]          | 5    | 1
3   | a      | MBB beam topology optimization problem (p=1, 75 by 25) [9]| 1875 | 1
3   | b      | MBB beam topology optimization problem (p=3, 75 by 25)    | 1875 | 1

7 Groenwold, A. A., and Etman, L. F. P., "On the conditional acceptance of iterates in SAO algorithms based on convex separable approximations," Structural and Multidisciplinary Optimization, Vol. 42, No. 2, 2010, pp. 165-178.
9 Groenwold, A. A., Etman, L. F. P., and Wood, D. W., "Approximated approximations for SAO," Structural and Multidisciplinary Optimization, Vol. 41, No. 1, 2010, pp. 39-56.

The performance of the proposed algorithm is compared with those of the previous studies. [7,9]
The initial conditions and convergence criteria are the same as those of the previous studies. [7,9]
The convergence criterion is ||x^(k+1) − x^(k)|| ≤ ε_x.
For problem 3b, the performance of the proposed algorithm is compared with the MMA and GCMMA.
Vanderplaats' cantilever beam problem: minimize the beam weight f_0(b, h) = Σ_i b_i h_i l_i subject to stress constraints f_j(b, h) ≤ 0 (j = 1, …, p), geometric constraints h_i − 20 b_i ≤ 0 (i = 1, …, p), a tip-deflection constraint y/ȳ − 1 ≤ 0, and side constraints 1.0 ≤ b_i ≤ 100, 5.0 ≤ h_i ≤ 100.

No.        | Method | f_0*     | max f_j* | k*  | l*
1a (p=10)  | SAO-A  | 64244.83 | 1.68E-06 | 38  | -
1a (p=10)  | SAO-B  | 64244.83 | 1.11E-06 | 101 | 261
1a (p=10)  | SAO-C  | 64244.83 | 3.21E-06 | 40  | 94
1a (p=10)  | SAO-D  | 64244.83 | 2.41E-05 | 29  | 2
1a (p=10)  | DQA    | 64244.83 | 5.35E-06 | 10  | -
1a (p=10)  | FDQA   | 64244.83 | 5.35E-06 | 10  | 0
1b (p=100) | SAO-A  | 63678.1  | 1.78E-06 | 34  | -
1b (p=100) | SAO-B  | 63678.1  | 4.90E-06 | 457 | 2535
1b (p=100) | SAO-C  | 63678.1  | 2.98E-07 | 30  | 25
1b (p=100) | SAO-D  | 63678.1  | 4.06E-05 | 29  | 1
1b (p=100) | DQA    | 63678.1  | 8.66E-06 | 11  | -
1b (p=100) | FDQA   | 63678.1  | 1.66E-06 | 10  | 1

k*: the number of outer iterations; l*: the number of inner iterations.
DQA and FDQA obtain the appropriate optimum point, and show better efficiency than the other methods.
Svanberg's 5-variate cantilever beam:
minimize f_0(x) = c_1 (x_1 + x_2 + x_3 + x_4 + x_5)
subject to f_1(x) = 61/x_1³ + 37/x_2³ + 19/x_3³ + 7/x_4³ + 1/x_5³ − c_2 ≤ 0,
0.001 ≤ x_i ≤ 10, i = 1, 2, 3, 4, 5,
where c_1 = 0.0624, c_2 = 1.0, and x^(0) = (5.0, 5.0, 5.0, 5.0, 5.0)^T.

Method    | f_0*     | max f_j*  | k* | l*
T2:R      | 1.339956 | -         | 10 | 8
T2:E      | 1.339956 | -         | 13 | 7
T2:MMA    | 1.339956 | -         | 20 | 15
T2:TANA-3 | 1.339956 | -         | 10 | 4
GCMMA     | 1.339956 | -         | 19 | 20
DQA       | 1.339957 | -1.26E-06 | 8  | -
FDQA      | 1.339957 | -1.26E-06 | 8  | 0

DQA and FDQA obtain the appropriate optimum point and show good efficiency.
MBB beam topology optimization:
minimize f_0(x) = u^T K u = Σ_{e=1}^{n} (x_e)^p u_e^T k_e u_e
subject to f_1(x) = Σ_{e=1}^{n} v_e x_e / V_0 − f̄ ≤ 0,
0.001 ≤ x_e ≤ 1,
with the equilibrium equation Ku = f, where E = 1, ν = 0.3, and p is the penalization parameter.

No. | Method | f_0*     | max f_j*  | k* | l*
3a  | R      | 165.8839 | -1.11E-13 | 59 | -
3a  | T2:R   | 165.8839 | -1.11E-13 | 58 | -
3a  | E      | 165.8839 | -5.60E-12 | 33 | -
3a  | T2:E   | 165.8838 | -5.46E-16 | 35 | -
3a  | MMA    | 165.8839 | 5.84E-11  | 68 | -
3a  | T2:MMA | 165.8838 | -4.53E-11 | 51 | -
3a  | T2:R   | 165.8839 | -1.11E-13 | 58 | 0
3a  | T2:E   | 165.8838 | -5.46E-16 | 35 | 0
3a  | T2:MMA | 165.8838 | -6.06E-16 | 43 | 11
3a  | GCMMA  | 165.9624 | 1.90E-08  | 37 | 103
3a  | DQA    | 165.4939 | -1.02E-11 | 56 | -
3a  | FDQA   | 165.4936 | 3.18E-11  | 39 | 2

All of the methods obtain similar objective function values.
For problem 3a, the E method is better than the proposed FDQA.
FDQA is more efficient than DQA.
MBB beam topology optimization (problem 3b): the penalization parameter is set to 3.

ε_x   | Method | f_0*          | max f_j*  | k*  | l*
10^-3 | DQA    | 205.6923      | 6.87E-11  | 956 | -
10^-3 | MMA    | 205.784       | -5.22E-05 | 301 | -
10^-3 | FDQA   | 203.9638      | -1.78E-07 | 72  | 74
10^-3 | GCMMA  | 315.4972      | -1.29E-06 | 347 | 1444
10^-4 | DQA    | not converged | -         | -   | -
10^-4 | MMA    | not converged | -         | -   | -
10^-4 | FDQA   | 203.9638      | -1.78E-07 | 72  | 74
10^-4 | GCMMA  | not converged | -         | -   | -

For problem 3b, the proposed FDQA shows the best performance.
The optimization does not converge for any method except the proposed FDQA when ε_x = 10^-4.
Optimized layouts are shown for DQA, MMA, FDQA, and GCMMA.
• Proposed an SAO algorithm with highly accurate Hessian terms by using the eTDQA.
• Proposed a filtered SAO algorithm appropriate to the proposed DQA.
• Investigated the efficiency and accuracy of the proposed algorithm by solving the numerical examples.
• The proposed algorithm improves the convergence property without worsening efficiency.
Previous methods related to the accuracy of the approximation for the SAO:

Method                          | Year | Keyword                     | Author
Linear or reciprocal approx.    | 1986 | CONLIN                      | Fleury, C.
Linear or reciprocal approx.    | 1987 | MMA                         | Svanberg, K.
Exponential approximation       | 1990 | TPEA                        | Fadel, G. M., Riley, M. F., Barthelemy, J. M.
Diagonal quadratic approx.      | 1995 | Quasi-Newton update         | Duysinx, P., Zhang, W. H., Fleury, C.
Diagonal quadratic approx.      | 2002 | Dynamic-Q                   | Snyman, J. A., Hay, A. M.
Diagonal quadratic approx.      | 2007 | Incomplete series expansion | Groenwold, A. A., Etman, L. F. P., Snyman, J. A., Rooda, J. E.
Diagonal quadratic approx.      | 2010 | Approximated approximations [9] | Groenwold, A. A., Etman, L. F. P., Wood, D. W.

9 Groenwold, A. A., Etman, L. F. P., and Wood, D. W., "Approximated approximations for SAO," Structural and Multidisciplinary Optimization, Vol. 41, No. 1, 2010, pp. 39-56.
Several SAO algorithms with dual methods are compared in [9].
Previous methods related to the convergence property of the SAO:

Method                              | Year | Author
Trust-region-like framework         | 1998 | Alexandrov, N. M., Dennis, J. E., Lewis, R. M., Torczon, V.
Trust-region-like framework         | 2000 | Conn, A. R., Gould, N. I. M., Toint, P. L.
Nonlinear acceptance filter for SQP | 1998 | Fletcher, R., Leyffer, S., Toint, P. L.
Nonlinear acceptance filter for SQP | 2002 | Fletcher, R., Gould, N. I. M., Leyffer, S., Toint, P. L., Wächter, A.
Globally convergent version of MMA  | 2002 | Svanberg, K.
Filter for the dual SAO             | 2009 | Groenwold, A. A., Wood, D. W., Etman, L. F. P., Tosserams, S.
Filtered conservatism [7]           | 2010 | Groenwold, A. A., Etman, L. F. P.

7 Groenwold, A. A., and Etman, L. F. P., "On the conditional acceptance of iterates in SAO algorithms based on convex separable approximations," Structural and Multidisciplinary Optimization, Vol. 42, No. 2, 2010, pp. 165-178.
According to the filter option, SAO-A, SAO-B, SAO-C, and SAO-D are compared in [7].
Enhanced Two-point Diagonal Quadratic Approximation (eTDQA)*

The eTDQA is built on the intervening variables y_i = (x_i + c_i)^{p_i}, and each parameter (c_i, p_i, G_i, H_e, ε) is calculated sequentially.

1. Intervening variable with shifting constant c_i: the shift is introduced so that an intervening variable can be defined even when the design variable value is near zero or negative. The constant c_i is chosen from the lower bound x_i^L so that x_i + c_i stays positive; otherwise c_i = 0.

*Kim, J. R., and Choi, D. H., "Enhanced two-point diagonal quadratic approximation methods for design optimization," Computer Methods in Applied Mechanics and Engineering, Vol. 197, 2008, pp. 846-856.
Calculation of the parameters (where y_i = (x_i + c_i)^{p_i}):
2. p_i is determined to match ∇f̃(y^(k−1)) = ∇f(y^(k−1)).
3. G_i is determined to match ∇f̃(y^(k−1)) = ∇f(y^(k−1)) when the p_i calculation fails; otherwise, G_i is set to 0.
4. γ_i = −1 if (∂f^(k)/∂x_i)(∂f^(k−1)/∂x_i) ≤ 0, and γ_i = 1 otherwise.
5. ε is a correction factor determined to match f̃(y^(k−1)) = f(y^(k−1)).
Two-Point Approximation

A conventional SAO with the dual method usually utilizes only the current-point information, x^(k) = (x_1^(k), x_2^(k), …, x_n^(k))^T with f_j(x^(k)) and ∇f_j(x^(k)), to construct the approximate function f̃(x).
Two-point approximation methods use the function values and first-order derivative values at two design points: the current design point x^(k) and the previous design point x^(k−1) = (x_1^(k−1), x_2^(k−1), …, x_n^(k−1))^T, i.e., f_j(x^(k−1)) and ∇f_j(x^(k−1)) as well.
Conservatism

• The approximate function is said to be conservative if f̃_j^(k,l)(x^(k)*) ≥ f_j(x^(k)*).
• If the approximate functions are not conservative, the convergence property cannot be guaranteed.*
• If the Hessian terms are increased, it is possible to obtain an optimum point near the current point.
• In the proposed method, the approximate functions can easily be made conservative by increasing the Hessian terms.

Example) Let f(0) = 1 and f'(0) = 2. Then
f̃(x) = f(0) + f'(0) x + (1/2) h x² = 1 + 2x + (1/2) h x²,
and a larger curvature (h = 8 versus h = 2) makes the approximation more conservative.

*K. Svanberg (2002), A Class of Globally Convergent Optimization Methods Based on Conservative Convex Separable Approximations.
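The slide's example as code: the quadratic model through f(0) = 1, f'(0) = 2, with the curvature h as a free parameter. At x = 1 the h = 8 model gives 7 while the h = 2 model gives 4, so the larger curvature yields the larger (more conservative) value:

```python
def ftilde(x, h):
    """Quadratic model through f(0) = 1, f'(0) = 2 with curvature h:
    ftilde(x) = 1 + 2x + 0.5*h*x^2."""
    return 1.0 + 2.0 * x + 0.5 * h * x * x
```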
Editor's Notes
Good morning everyone. I'm Seung-Hyun Jeong from Hanyang University in Korea.
In this presentation, a new sequential approximate optimization using the dual subproblem will be presented.
I am the second author.
And there are four authors involved in this research.
This slide shows the outline of this presentation.
First, I will introduce research background and objectives.
Second, the proposed diagonal quadratic approximation will be explained.
Third, I will explain the proposed filtered diagonal quadratic approximation.
The numerical examples and their results will be given.
And I will end up with conclusions.
In this research, we consider this constrained optimization problem.
We assume that the considered objective and constraint functions are nonlinear and continuously differentiable.
The number of design variables is much larger than that of constraints.
Also, computational cost for the analysis is very high.
To solve this optimization problem, SAO with the dual method has been developed.
This slide shows general flowchart of SAO with the dual method.
First, we set the parameters for optimization.
We compute the function values and their gradients.
Using this information, a dual subproblem is constructed and solved.
After that we move to the next point and check whether the current point satisfies the convergence criteria.
If the criteria are satisfied, the optimization is finished.
Otherwise, we compute the function values and gradients again.
And we construct the dual subproblem and solve it again.
Then what is the dual subproblem?
To explain the dual subproblem, let us start with the primal subproblem.
To effectively solve the optimization problem, the approximate functions are constructed in the kth iteration.
This tilde denotes the approximate function.
In the primal subproblem, the design variable is also called the primal variable.
If the number of design variables is much larger than that of the constraints, the dual method is more efficient than the primal method.
The dual subproblem can be stated as follows.
This lambda is the Lagrange multiplier, also called the dual variable.
Because the primal variable is an explicit function of the dual variable, the number of variables is reduced compared with the primal subproblem.
To effectively construct the dual subproblem, convex separable approximation should be used for the approximating functions.
Then what is the convex separable approximation?
First, the approximate function is separable if the off-diagonal hessian terms are all zero.
Second, the approximate function is convex if the diagonal hessian terms of the approximate functions are non-negative.
To construct the convex separable approximation, the diagonal quadratic approximation can be used.
The DQA is expressed by the first order Taylor’s expansion and the quadratic term.
Because we assume that only the function values and their gradients are provided, we must approximate these Hessian terms.
Therefore, a good method for approximating the true diagonal Hessian terms is very important.
To determine the Hessian terms, several methods have been proposed.
In earlier research, Professor Groenwold proposed using an approximate function's second-order derivative as the Hessian terms.
For the approximate function, the exponential approximation, MMA, CONLIN, and TANA can be used.
The first three methods are one-point approximations; TANA is a two-point approximation.
From the previous research, we thought that the performance of the DQA could be improved by using a highly accurate approximate function.
Also, there are several previous studies that enhance the convergence property of the SAO.
The NLP filter is one of the methods used to enhance the convergence property of the SQP.
GCMMA is the globally convergent version of MMA.
The concept of the NLP filter was adopted for the DQA in 2010 by Professor Groenwold.
In this method, the filter tests whether the current optimum point is acceptable.
If the current point is not acceptable, an inner iteration is conducted.
In the inner iteration, we adjust the move limit and enforce conservatism by increasing the Hessian terms.
In the previous method, this multiplier is a user-defined parameter.
From the previous research, we thought that the determination of this parameter can affect the convergence of the filtered DQA.
We also thought that SAO with the dual method can be improved by using a more accurate approximation, and that the concept of the NLP filter can be improved for SAO with the dual method.
Therefore, we set the research objectives as follows.
First, we propose a DQA with highly accurate diagonal Hessian terms employing a gradient-based two-point approximation method.
As you can see in this figure, we construct the approximate function by using the function value and gradient information of two design points.
Second, we propose a nonlinear filter to improve the convergence property.
As you can see in this figure, some procedures are added to the SAO with the dual method to improve the convergence property.
In this part, I will explain the proposed DQA.
To approximate the Hessian terms in the DQA, we use the enhanced two-point diagonal quadratic approximation method proposed by Kim.
The approximate function of the eTDQA is defined as follows.
The parameters in the eTDQA are determined by mathematical rules; please refer to this paper for a detailed description.
If we analytically derive the Hessian terms of the eTDQA, we obtain this equation.
As you can see in this equation, the approximate function is already separable because the off-diagonal terms are all zero.
However, we cannot guarantee convexity of the approximate functions because this term can be negative.
To make the diagonal terms positive, we construct some rules.
First, we divide each diagonal term into two parts.
For the first part, (x_i + c_i) is positive because the c_i term makes it positive according to the rule of the eTDQA.
Therefore, we consider the signs of the remaining two factors.
If the gradient value is negative, p_i is set smaller than one to make Part 1 positive.
Otherwise, p_i is set larger than one to make Part 1 positive.
The value of p_i is determined by considering the function behavior; please refer to this paper for a more detailed description.
This slide shows the determination of the exponent term p_i.
In addition to the gradient at the previous point, we check the values at the two design points and the ratio of the current gradient to the previous gradient.
According to the function behavior, the value of p_i is determined.
For more detailed information, please refer to this paper.
The second part of the diagonal term becomes positive by selecting the maximum value between epsilon and the value from this equation.
After the convexifying procedure, all diagonal terms of the Hessian matrix become positive.
In this part, I will explain the proposed filtered DQA.
In the proposed FDQA, some procedures are added to improve the convergence property.
By using the NLP filter, we test whether the current optimum point is acceptable.
We also test whether the current optimum point satisfies the sufficient reduction criterion.
Through these two procedures, the algorithm determines whether an inner iteration is conducted.
First, I will explain the slanting envelope test using the NLP filter.
For brevity, let f be the objective function and h the maximum value of the violated constraints.
At the kth iteration, the pair is obtained at the kth optimum point.
Let us assume that there are four pairs in the current filter.
According to the rule proposed by Professor Fletcher, the slanting envelope is used to guarantee convergence.
In the slanting envelope test, there are two criteria.
Each criterion for one pair can be represented as an envelope as follows.
At the current iteration, a pair is obtained as follows.
Because the current point does not pass the slanting envelope test, this point is rejected.
If we obtain a point which satisfies only one criterion, this point is added to the current filter.
If we obtain a point which satisfies both criteria, this point replaces a pair in the current filter.
If the current optimum point does not pass the slanting envelope test, an inner iteration is conducted.
The detailed procedure of the inner iteration will be described soon.
This slide shows the procedure of the inner iteration.
In the inner iteration, we reduce the move limit of the subproblem.
In this research, we reduce the move limit to one half of that of the previous iteration.
In addition to reducing the move limit, we make a conservative approximation for the inner iteration.
According to the previous research, the convergence of SAO is not guaranteed if the approximate function is not conservative.
The approximate function is conservative if the approximate function value is larger than that of the real function.
In this research, the approximate function can easily be made conservative by increasing the Hessian terms like this.
We increase the Hessian terms by multiplying by the phi term like this.
After reducing the move limit and making a conservative approximation, we re-solve the dual subproblem.
After the slanting envelope test, we check whether the current point satisfies the sufficient reduction criterion.
This procedure tests whether the reduction of the real objective function is smaller than expected.
Let delta f be the difference between the real objective values at the current point and at the current optimum point.
And let delta q be the difference between the approximate objective values at the current point and at the current optimum point.
If delta q is larger than zero, the approximate objective function has decreased by solving the suboptimization problem.
And if delta f is smaller than sigma times delta q, the actual reduction is not as large as expected.
Therefore, in this situation an inner iteration is conducted.
Otherwise, we update the filter and proceed to the next iteration.
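The sufficient reduction criterion just described can be written as a small sketch (the value of `sigma` is an assumed typical choice, not given on this slide):

```python
def needs_inner_iteration(f_prev, f_new, q_prev, q_new, sigma=0.1):
    """Sufficient reduction test (a sketch).

    delta_f: actual reduction of the real objective function.
    delta_q: reduction predicted by the approximate objective.
    If the approximation predicts a decrease (delta_q > 0) but the
    actual reduction falls short of sigma * delta_q, the step is
    unsatisfactory and an inner iteration is conducted.
    """
    delta_f = f_prev - f_new
    delta_q = q_prev - q_new
    return delta_q > 0 and delta_f < sigma * delta_q
```

When the function returns False, the filter is updated and the algorithm proceeds to the next outer iteration.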
From now on, I will explain the numerical examples and their results.
In this research, we solve three kinds of optimization problems.
The performance of the proposed algorithm is compared with those of the previous studies.
The initial conditions and convergence criteria are the same as those of the previous studies.
The convergence criterion is the norm of the difference between two consecutive design points.
For the third problem, there are two options with different penalization parameters.
And for problem 3b, the performance of the proposed algorithm is compared with those of MMA and GCMMA.
For the first problem, we consider Vanderplaats’ cantilever beam problem.
The optimization problem is stated as follows.
The performances of SAO-A, B, C, and D are taken from the previous paper.
As you can see in this table, DQA and FDQA show better efficiency than the other methods.
For this example, the efficiency of FDQA is similar to that of DQA.
For the second problem, we consider Svanberg's five-variable cantilever beam.
The optimization problem is stated as follows.
The performances of the other algorithms in this table are taken from this paper.
As in the previous example, DQA and FDQA show better performance in terms of efficiency.
We attribute this to the good accuracy of the proposed DQA.
For the third problem, we consider MBB beam topology optimization problem.
The optimization problem is to minimize compliance subject to a volume constraint.
This table shows performance of each method.
For this problem, the E method is better than the proposed FDQA.
Also, as you can see in this table, efficiency is improved by adopting the filter.
This slide shows the second case of topology optimization of the MBB beam.
The figures on the left side represent the optimized layouts.
We obtained the optimized results by decreasing the convergence criterion.
As you can see in this table, the proposed FDQA is more efficient than the other algorithms.
And when the convergence criterion is decreased, the optimization does not converge except with the proposed FDQA.
In this part, I will finish with the conclusions.
Good morning everyone, my name is Seonho Park, and I am from Hanyang University in Korea. My advisor is Professor Donghoon Choi. My presentation title is “A new ~~ design optimization”. In this presentation, I will present a new convex separable approximation whose diagonal Hessian terms are approximated by eTDQA.
This slide shows previous studies of SAO with the dual method.
As you can see in this table, many methods have been proposed to accurately approximate the response functions.
As approximation methods, linear and reciprocal approximations can be used.
These methods were proposed by Professor Fleury and Professor Svanberg.
Also, an exponential approximation called TPEA was proposed by Professor Fadel.
After 1995, several diagonal quadratic approximation methods, such as the quasi-Newton update, Dynamic-Q, the incomplete series expansion, and approximated approximations, were proposed.
In the last paper in particular, several SAO methods with the dual method were compared.
In addition to accuracy of the approximate function, the convergence property of the SAO is important.
To improve the convergence property, a trust-region-like framework was proposed.
Also, a nonlinear acceptance filter for SQP was proposed in 1998 by Professor Fletcher.
In 2002, a globally convergent version of MMA was proposed by Professor Svanberg.
The concept of the filter was applied to the dual SAO in 2009 by Professor Groenwold.
And filtered conservatism was proposed in 2010.
In that paper, several SAO algorithms with various filter options were compared.
In this research, we utilize the enhanced two-point diagonal quadratic approximation, which is abbreviated as eTDQA.
In this section, I will describe the details of eTDQA.
First, eTDQA is a two-point approximation function; the nonlinear behavior is represented by the intervening variable y.
And y is composed of a c term and a p term.
The c term is a shifting constant: if a design variable is near zero or negative, the shifting constant shifts it above zero.
The exponent term of the intervening variable is determined to match the derivative value at the previous point.
If the argument of the logarithmic function drops below zero, or if other numerical difficulties arise here, the G term forces this value to be matched.
And the eta term matches the function value at the previous point.
The approximation function utilizes function values and derivative values at the current and previous points.
These values are called design information.
A two-point approximation function utilizes design information at two points.
Conventional SAO with the dual method usually utilizes only the current-point information.
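To make the separable diagonal quadratic form concrete, here is a minimal generic sketch. It only shows the shape of the approximation; in eTDQA the diagonal Hessian terms `hess_diag` would be constructed from the two-point design information described above, which is omitted here:

```python
def diag_quadratic_approx(x, x_k, f_k, grad_k, hess_diag):
    """Generic separable diagonal quadratic approximation (a sketch).

    f~(x) = f(x_k) + sum_i g_i (x_i - x_i^k)
                   + 0.5 * sum_i h_i (x_i - x_i^k)^2

    x, x_k:     candidate point and current expansion point
    f_k:        function value at x_k
    grad_k:     gradient components g_i at x_k
    hess_diag:  diagonal Hessian terms h_i (in eTDQA, built from
                two-point information; treated as given here)
    """
    return f_k + sum(g * (xi - xk) + 0.5 * h * (xi - xk) ** 2
                     for xi, xk, g, h in zip(x, x_k, grad_k, hess_diag))
```

Because the approximation is separable, each design variable contributes an independent quadratic term, which is what makes the dual subproblem cheap to solve.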