Self-Adaptive Simulated Binary Crossover
     for Real-Parameter Optimization



   Kalyanmoy Deb*, S. Karthik*, Tatsuya Okabe**
                      GECCO 2007

          *Indian Institute of Technology Kanpur, India
              **Honda Research Institute, Japan



                       Reviewed by

                   Paskorn Champrasert
                   paskorn@cs.umb.edu
                   http://dssg.cs.umb.edu
Outline
• Simulated Binary Crossover (SBX) (revisited)
• Problem in SBX
• Self-Adaptive SBX
• Simulation Results
   – Self-Adaptive SBX for Single-Objective Optimization
   – Self-Adaptive SBX for Multi-Objective Optimization (MOOP)
• Conclusion


SBX
            Simulated Binary Crossover
• Simulated Binary Crossover (SBX) was proposed in
  1995 by Deb and Agrawal.

• SBX was designed with respect to the one-point
  crossover properties in binary-coded GA.
    – Average Property:
      The average of the decoded parameter values is the same
      before and after the crossover operation.
    – Spread Factor Property:
      A spread factor of β ≈ 1 is more likely to occur than any other
      β value.


Important Property of One-Point Crossover
Spread factor:
  The spread factor β is defined as the ratio of the spread of the
  offspring points to that of the parent points:

      β = |c1 − c2| / |p1 − p2|

- Contracting Crossover (β < 1):
   The offspring points are enclosed by the parent points.

- Expanding Crossover (β > 1):
   The offspring points enclose the parent points.

- Stationary Crossover (β = 1):
   The offspring points coincide with the parent points.
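As a quick numeric illustration of the definition (a minimal sketch; the parent and offspring values below are made up):

```python
def spread_factor(p1: float, p2: float, c1: float, c2: float) -> float:
    """beta = |c1 - c2| / |p1 - p2|; assumes distinct parents."""
    return abs(c1 - c2) / abs(p1 - p2)

# Parents at 2 and 5; offspring at 1 and 6 enclose them.
beta = spread_factor(2.0, 5.0, 1.0, 6.0)
print(beta)  # 5/3 > 1, so this is an expanding crossover
```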
Spread Factor (β) in SBX
• The probability distribution of β in SBX should be similar in
  shape to the probability distribution of β in binary-coded
  one-point crossover.

• The probability distribution function is defined as:

      c(β) = 0.5(nc + 1) β^nc,          β ≤ 1
      c(β) = 0.5(nc + 1) / β^(nc + 2),  β > 1

[Figure: probability distribution of β for nc = 2; the density
peaks at β = 1 and decays on both sides.]
A large value of n gives a higher probability of creating
‘near-parent’ solutions.

   contracting:  c(β) = 0.5(n + 1) β^n,          β ≤ 1

   expanding:    c(β) = 0.5(n + 1) / β^(n + 2),  β > 1

Typically, n = 2 to 5 is used.
Spread Factor (β) as a random number
• β is a random number that follows the proposed probability
  distribution function.

  To get a β value:
  1) A uniform random number u is generated in [0, 1].
  2) Find the β value that makes the area under the curve up to β
     (i.e., the CDF at β) equal to u.

  Offspring:
      c1 = x̄ − ½ β̄ (p2 − p1)
      c2 = x̄ + ½ β̄ (p2 − p1)

  where x̄ = (p1 + p2)/2 is the parents’ mean and β̄ is the sampled
  spread factor.
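The two-step procedure above has a closed form: integrating c(β) gives a CDF that a uniform u inverts directly. A minimal Python sketch (the function names are mine, not from the paper):

```python
import random

def sample_beta(nc: float) -> float:
    """Invert the CDF of c(beta): CDF = 0.5 * beta**(nc + 1) for beta <= 1,
    and 1 - 0.5 / beta**(nc + 1) for beta > 1."""
    u = random.random()
    if u <= 0.5:
        return (2.0 * u) ** (1.0 / (nc + 1.0))                 # contracting
    return (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (nc + 1.0))     # expanding

def sbx_pair(p1: float, p2: float, nc: float = 2.0) -> tuple[float, float]:
    """Offspring placed symmetrically about the parents' mean."""
    beta = sample_beta(nc)
    mean = 0.5 * (p1 + p2)
    half = 0.5 * beta * (p2 - p1)
    return mean - half, mean + half
```

Note that (c1 + c2)/2 equals the parents' mean for every β, which is exactly the average property from slide 3.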
SBX Summary
• Simulated Binary Crossover (SBX) is a real-parameter
  recombination operator commonly used in evolutionary
  algorithms (EAs) for optimization problems.

    – The SBX operator takes two parents and creates two offspring.

    – The SBX operator uses a parameter called the distribution index
      (nc), which is kept fixed (e.g., at 2 or 5) throughout a
      simulation run.
          • If nc is large, the offspring solutions are close to the parent solutions.
          • If nc is small, the offspring solutions are far from the parent solutions.

    – The distribution index (nc) directly controls the spread of the
      offspring solutions.
Research Issues
•     Setting the distribution index (nc) to an appropriate value
      is an important task.
    –      The distribution index (nc) affects:
          1) Convergence speed
                If nc is very high, the offspring solutions are very close to the
                parent solutions, so convergence is very slow.
          2) Local/global optimum solutions
             Past studies found that a GA with SBX crossover cannot work well on
             complicated problems (e.g., Rastrigin’s function).




Self-Adaptive SBX
• This paper proposes a procedure for updating the
  distribution index (nc) adaptively while solving optimization
  problems.

    – The distribution index (nc) is updated every generation.

    – Whether the distribution index of the next generation (n̂c) is
      increased or decreased depends on whether the offspring
      outperforms (has a better fitness value than) its two parents.

• The next slides derive the equation used to update the
  distribution index.

Probability density per offspring
         Spread factor distribution

  (1)  c(β) = 0.5(nc + 1) β^nc,          β ≤ 1

  (2)  c(β) = 0.5(nc + 1) / β^(nc + 2),  β > 1

An example: p1 = 2, p2 = 5, with β̄ > 1.

[Figure: the density c(β) for nc = 2, split into its contracting
branch (1) and expanding branch (2), and the corresponding offspring
positions on the parameter axis for the example parents.]

Offspring:
    c1 = x̄ − ½ β̄ (p2 − p1)
    c2 = x̄ + ½ β̄ (p2 − p1)
Self-Adaptive SBX (1)
Consider the case that β > 1 (expanding: c1 < p1 < p2 < c2):

[Diagram: c1, p1, p2, c2 on the parameter axis.]

     β = |c2 − c1| / |p2 − p1|

     β = [(c2 − p2) + (p2 − p1) + (p1 − c1)] / (p2 − p1)

By the average property, the offspring are symmetric about the
parents’ midpoint, so (c2 − p2) = (p1 − c1). Therefore:

     β = 1 + 2(c2 − p2) / (p2 − p1)     --- (1)

Self-Adaptive SBX (2)
  (1)  c(β) = 0.5(nc + 1) β^nc,          β ≤ 1

  (2)  c(β) = 0.5(nc + 1) / β^(nc + 2),  β > 1

[Figure: the shaded area under branch (2) between 1 and β equals
u − 0.5.]

For the expanding branch (2):

     ∫₁^β 0.5(nc + 1) / t^(nc + 2) dt = u − 0.5     --- (2)
                   Self-Adaptive SBX (3)
Solving (2) for nc:

   ∫₁^β 0.5(nc + 1) t^(−(nc + 2)) dt = u − 0.5

   −0.5 β^(−nc − 1) − (−0.5) = u − 0.5

   −0.5 β^(−nc − 1) = u − 1

   β^(−nc − 1) = 2(1 − u)

   (−nc − 1) log β = log 2(1 − u)

   nc = −(1 + log 2(1 − u) / log β)     --- (3)
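Equation (3) is easy to sanity-check numerically: draw a β for a known nc in the expanding case and confirm that (3) recovers nc (a quick check of the algebra, not part of the paper):

```python
import math
import random

nc_true = 2.0
u = random.uniform(0.55, 0.99)                        # expanding case: u > 0.5
beta = (2.0 * (1.0 - u)) ** (-1.0 / (nc_true + 1.0))  # inverse CDF for beta > 1

nc_back = -(1.0 + math.log(2.0 * (1.0 - u)) / math.log(beta))  # eq. (3)
assert abs(nc_back - nc_true) < 1e-9
```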
Self-Adaptive SBX (4)

[Diagram: expanding case (β > 1); the child c2 is moved to ĉ2,
farther from the nearest parent p2.]

• If c2 is better than both parent solutions,
    – the authors assume that the region in which the child
      solution is created is better than the region in which
      the parent solutions lie.
    – the authors therefore extend the child solution farther
      away from the closest parent solution (p2) by a factor α
      (α > 1):

                 (ĉ2 − p2) = α(c2 − p2)     --- (4)


β Update

[Diagram: expanding case (β > 1), with c2 stretched to ĉ2 farther
from p2.]

   β = 1 + 2(c2 − p2) / (p2 − p1)   --- (1)        (ĉ2 − p2) = α(c2 − p2)   --- (4)

from (1):
   β̂ = 1 + 2(ĉ2 − p2) / (p2 − p1)
      = 1 + 2((α(c2 − p2) + p2) − p2) / (p2 − p1)
      = 1 + α · 2(c2 − p2) / (p2 − p1)             (the last factor is β − 1)

   β̂ = 1 + α(β − 1)     --- (5)
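A small numeric check of (5): stretching c2 by α as in (4) and recomputing β via (1) should give 1 + α(β − 1). The values below are made up:

```python
p1, p2 = 2.0, 5.0
c1, c2 = 1.0, 6.0            # expanding case, beta > 1
alpha = 1.5

beta = 1.0 + 2.0 * (c2 - p2) / (p2 - p1)         # eq. (1): beta = 5/3
c2_hat = p2 + alpha * (c2 - p2)                  # eq. (4): stretched child
beta_hat = 1.0 + 2.0 * (c2_hat - p2) / (p2 - p1)

assert abs(beta_hat - (1.0 + alpha * (beta - 1.0))) < 1e-12   # eq. (5)
```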
Distribution Index (nc) Update

   nc = −(1 + log 2(1 − u) / log β)   --- (3)        β̂ = 1 + α(β − 1)   --- (5)

Substituting β̂ from (5) into (3):

   n̂c = −(1 + log 2(1 − u) / log β̂)

   n̂c = −(1 + log 2(1 − u) / log(1 + α(β − 1)))

From (3), log 2(1 − u) = −(nc + 1) log β, so:

   n̂c = −1 + (nc + 1) log β / log(1 + α(β − 1))     --- (6)



Distribution Index (nc) Update

When c is better than both p1 and p2, the distribution index for the
next generation is calculated as (6). The distribution index is
decreased in order to create an offspring (c’) farther away from its
parent (p2) than the offspring (c) of the previous generation.

   n̂c = −1 + (nc + 1) log β / log(1 + α(β − 1))     --- (6)

n̂c is kept within [0, 50]; α is a constant (α > 1).

Distribution Index (nc) Update

When c is worse than both p1 and p2, the distribution index for the
next generation is calculated as (7). The distribution index is
increased in order to create an offspring (c’) closer to its
parent (p2) than the offspring (c) of the previous generation.

1/α is used instead of α:

      n̂c = −1 + (nc + 1) log β / log(1 + α(β − 1))     --- (6)

      n̂c = −1 + (nc + 1) log β / log(1 + (β − 1)/α)    --- (7)

                                        α is a constant (α > 1)
Distribution Index (nc) Update

So far we have handled the expanding case, branch (2). What about
the contracting case, branch (1)? The same process is applied to
branch (1):

  (1)  c(β) = 0.5(nc + 1) β^nc,          β ≤ 1

  (2)  c(β) = 0.5(nc + 1) / β^(nc + 2),  β > 1

[Figure: for the contracting branch (1), the area under the curve
from 0 to β equals u.]

     ∫₀^β 0.5(nc + 1) t^nc dt = u     --- (x)
Distribution Index (nc) Update
                   for the contracting case
When c is better than both p1 and p2, the distribution index for the
next generation is calculated as (8). The distribution index is
decreased in order to create an offspring (c’) farther away from its
parent (p2) than the offspring (c) of the previous generation.

       n̂c = (1 + nc)/α − 1     --- (8)

 When c is worse than both p1 and p2, 1/α is used instead of α:

       n̂c = α(1 + nc) − 1     --- (9)

                                        α is a constant (α > 1)
Distribution Index Update Summary

Expanding case (u > 0.5):
  • c better than parents:  n̂c = −1 + (nc + 1) log β / log(1 + α(β − 1))
  • c worse than parents:   n̂c = −1 + (nc + 1) log β / log(1 + (β − 1)/α)

Contracting case (u < 0.5):
  • c better than parents:  n̂c = (1 + nc)/α − 1
  • c worse than parents:   n̂c = α(1 + nc) − 1

If u = 0.5, n̂c = nc.
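The whole summary table fits in one function. A sketch under my own naming: choosing α vs. 1/α encodes the better/worse cases, and the clamp to [0, 50] follows the bound noted on slide 18.

```python
import math

def update_nc(nc: float, beta: float, u: float,
              child_better: bool, alpha: float = 1.5) -> float:
    """Self-adaptive update of the distribution index (eqs. (6)-(9))."""
    if u == 0.5:                              # beta = 1: leave nc unchanged
        return nc
    a = alpha if child_better else 1.0 / alpha
    if u > 0.5:                               # expanding case (beta > 1)
        nc_new = -1.0 + (nc + 1.0) * math.log(beta) / math.log(1.0 + a * (beta - 1.0))
    else:                                     # contracting case (beta < 1)
        nc_new = (1.0 + nc) / a - 1.0
    return min(max(nc_new, 0.0), 50.0)        # keep nc within [0, 50]
```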
Self-SBX Main Loop

[Flowchart, summarized; parameters shown on the slide: N = 150,
initial nc = 2, α = 1.5]

  1. Start from the current population Pt (initially a random
     population).
  2. Selection picks Parent 1 and Parent 2.
  3. If rand < pc: crossover creates Offspring C1 and C2, and nc is
     updated.
  4. If rand > pc: mutation is applied instead.
  5. The resulting individuals form the population Pt+1.
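A highly simplified generational sketch of the flowchart for one-variable individuals, reusing sbx_pair() and update_nc() from the earlier sketches; the selection and mutation steps here are crude stand-ins for the operators the slide leaves abstract:

```python
import random

def self_sbx_generation(pop, fitness, nc, alpha=1.5, pc=0.7):
    """One generation of the Self-SBX loop (minimization)."""
    next_pop = []
    while len(next_pop) < len(pop):
        p1, p2 = random.sample(pop, 2)                    # stand-in selection
        if random.random() < pc and p1 != p2:
            c1, c2 = sbx_pair(p1, p2, nc)
            beta = abs(c1 - c2) / abs(p1 - p2)
            # recover the u that produced this beta from the CDF of c(beta)
            u = 0.5 * beta ** (nc + 1) if beta <= 1 else 1 - 0.5 / beta ** (nc + 1)
            better = min(fitness(c1), fitness(c2)) < min(fitness(p1), fitness(p2))
            nc = update_nc(nc, beta, u, better, alpha)    # adapt the index
            next_pop += [c1, c2]
        else:
            next_pop.append(p1 + random.gauss(0.0, 0.1))  # stand-in mutation
    return next_pop[:len(pop)], nc
```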
Simulation Results
Sphere Problem:

 SBX
 • Initial nc is 2.
 • Population size = 150
 • Pc = 0.9
 • The number of variables (n) = 30
 • Pm = 1/30
 • A run is terminated when a solution with a function value ≤ 0.001 is found.
 • 11 runs are performed.

 Self-SBX
 • Pc = 0.7
 • α = 1.5



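For reference, the sphere objective and the stated termination test (reading "function value = 0.001" as "value ≤ 0.001"):

```python
def sphere(x):
    """f(x) = sum of x_i^2; global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def terminated(best_value: float, threshold: float = 0.001) -> bool:
    return best_value <= threshold
```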
Fitness Value

[Figure: fitness value comparison between SBX and Self-SBX on the
sphere problem.]

• Self-SBX can find better solutions.




α Study
• α is varied in [1.05, 2].
• The effect of α is not significant: the average number of
  function evaluations is similar across α values.




Simulation Results
Rastrigin’s Function:

 Many local optima, one global minimum (0)
 The number of variables = 20
 SBX
 • Initial nc is 2.
 • Population size = 100
 • Pc = 0.9
 • Pm = 1/30
 • A run is terminated when a solution with a function value ≤ 0.001 is found
   or the maximum of 40,000 generations is reached.
 • 11 runs are performed.

 Self-SBX
 • Pc = 0.7
 • α = 1.5


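Rastrigin's function in its common form (the paper's exact variant may differ in scaling):

```python
import math

def rastrigin(x):
    """f(x) = 10n + sum(x_i^2 - 10 cos(2 pi x_i)); many local optima,
    global minimum 0 at the origin."""
    return 10.0 * len(x) + sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi)
                               for xi in x)
```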
Fitness Value

[Table: best, median, and worst fitness values for SBX and Self-SBX
on Rastrigin’s function.]

• Self-SBX can find solutions very close to the global optimum.




α Study
• α is varied in [1.05, 10].
• The effect of α is not significant: the average number of
  function evaluations is similar across α values.
• α = 3 performs best.

Self-SBX in MOOP
• Self-SBX is applied in NSGA-II.
• A larger hyper-volume is better.




Conclusion
• Self-SBX is proposed in this paper.
    – The distribution index is adjusted adaptively at run
      time.
• The simulation results show that Self-SBX can
  find better solutions in both single-objective and
  multi-objective optimization problems.

• However, a constant α is still needed to tune the
  distribution index.

