Newtonian Law Inspired Optimization Techniques Based on Gravitational Search Algorithm
1. A Thesis Seminar on
Newtonian Law Inspired Optimization
Techniques Based on Gravitational Search
Algorithm
Presented by-
Rajdeep Chatterjee
M.Tech, 2009-11
School of Computer Engineering
KIIT University
Under the guidance of-
Prof. (Dr.) Madhabananda Das
Dean, School of Computer Engineering
KIIT University
3. Gravitational Search Algorithm
This algorithm is based on Newtonian gravity: "Every particle in
the universe attracts every other particle with a force that is directly
proportional to the product of their masses and inversely proportional to
the square of the distance between them."
The position of an agent corresponds to a solution of the problem,
and its mass is determined using a fitness function.
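As a rough illustrative sketch only (ours, not the thesis code), one GSA iteration for minimization could look as follows; numpy, the parameter names, and the update details are assumptions drawn from the GSA paper cited in the references:

import numpy as np

def gsa_step(X, V, fitness, G, eps=1e-10):
    """One GSA iteration (minimization). X, V: (N, D) positions/velocities."""
    N, D = X.shape
    fit = np.array([fitness(x) for x in X])
    best, worst = fit.min(), fit.max()
    denom = best - worst
    # Mass from fitness: better (smaller) fitness gives a larger mass
    m = np.ones(N) if abs(denom) < eps else (fit - worst) / denom
    M = m / (m.sum() + eps)                    # normalized masses
    F = np.zeros_like(X)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            R = np.linalg.norm(X[i] - X[j])
            # GSA uses R (not R^2) in the denominator, per Rashedi et al. 2009
            F[i] += np.random.rand() * G * M[i] * M[j] * (X[j] - X[i]) / (R + eps)
    a = F / (M[:, None] + eps)                 # acceleration a = F / M
    V = np.random.rand(N, D) * V + a           # velocity update
    return X + V, V                            # position update

In the full algorithm G decays over time, e.g. G = G0 * exp(-alpha * t / T), and only the Kbest heaviest agents exert force; the sketch omits Kbest for brevity.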
9. GSA and PID Controller
Simulation Result
         GSA        PSO        BFO
Kp       0.85       0.56       0.80
Ki       -          -          -
Kd       0.57       0.62       0.53
Cost     15.3026    19.1416    15.7906
Table 3
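The thesis's plant model and cost function are not reproduced in this transcript, so the following is only a hypothetical stand-in showing the shape of such a fitness function: a candidate (Kp, Kd) pair is scored by simulating the closed-loop step response and integrating the squared error. The plant G(s) = 1/(s^2 + s) and the ISE cost are our assumptions:

import numpy as np
from scipy import signal

def pid_cost(Kp, Kd, t_end=10.0):
    """Hypothetical PD-tuning fitness: ISE of the closed-loop step response."""
    # Assumed plant G(s) = 1/(s^2 + s) with controller C(s) = Kd*s + Kp gives
    # the closed loop (Kd*s + Kp) / (s^2 + (1 + Kd)*s + Kp)
    sys_cl = signal.TransferFunction([Kd, Kp], [1.0, 1.0 + Kd, Kp])
    t = np.linspace(0.0, t_end, 1000)
    t, y = signal.step(sys_cl, T=t)
    e = 1.0 - y                              # error w.r.t. a unit-step reference
    return np.trapz(e**2, t)                 # integral of squared error (ISE)

GSA then searches over the gain vector to minimize such a cost; the values in Table 3 come from the thesis's own plant and cost function, not from this stand-in.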
10. GSA and PID Controller
Fig. 6 Closed Loop Response
11. Mutation
Optimization algorithms often get trapped in local
optima.
In that case we cannot obtain the global optimum and
instead end up with a locally optimal value.
A mutation operator is used to lift the population
out of local optima into not-yet-explored regions of the
search space, as in the sketch below.
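A minimal sketch of such an operator, assuming a real-coded population and a Gaussian perturbation (GSA-m, introduced later, uses DE as its mutation operator instead):

import numpy as np

def mutate(X, rate=0.1, sigma=0.5, bounds=(-10.0, 10.0)):
    """Randomly perturb a fraction of genes so agents can escape local optima."""
    X = X.copy()
    mask = np.random.rand(*X.shape) < rate               # genes selected for mutation
    X[mask] += sigma * np.random.randn(int(mask.sum()))  # Gaussian jump
    return np.clip(X, *bounds)                           # stay inside the search space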
13. Differential Evolution
It is a heuristic approach for minimizing possibly
nonlinear and non-differentiable continuous-space functions.
It was introduced by Rainer Storn and Kenneth Price in
1995.
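A compact sketch of one classical DE generation in the canonical DE/rand/1/bin form; numpy, F, CR, and the function names are illustrative assumptions:

import numpy as np

def de_generation(X, fitness, F=0.8, CR=0.9):
    """One generation of classical DE (DE/rand/1/bin) minimizing `fitness`."""
    N, D = X.shape
    fit = np.array([fitness(x) for x in X])
    for i in range(N):
        r1, r2, r3 = np.random.choice([j for j in range(N) if j != i], 3, replace=False)
        v = X[r1] + F * (X[r2] - X[r3])          # mutation: donor (difference) vector
        cross = np.random.rand(D) < CR           # binomial crossover mask
        cross[np.random.randint(D)] = True       # ensure at least one donor gene
        u = np.where(cross, v, X[i])             # trial vector
        fu = fitness(u)
        if fu < fit[i]:                          # greedy one-to-one selection
            X[i], fit[i] = u, fu
    return X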
16. Proposed Algorithms
GSA-m :: Gravitational Search Algorithm with mutation
GDE :: Gravitational Differential Evolution
DE is used as a mutation operator to improve
convergence.
DE-1 :: DE/best/1/exp
DE-2 :: DE/best/2/exp
DE-3 :: DE/best/2/bin
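In the DE/x/y/z notation, x names the base vector, y the number of difference pairs, and z the crossover scheme: 'exp' copies a contiguous run of donor genes until a random draw exceeds CR, while 'bin' decides per gene as in de_generation above. A sketch of the donor vectors for these strategies (the function name and F are our illustration):

import numpy as np

def donor(X, best, F=0.8, pairs=1):
    """Donor vector for DE/best/1 (pairs=1, DE-1) and DE/best/2 (pairs=2, DE-2/DE-3)."""
    r = np.random.choice(len(X), 2 * pairs, replace=False)
    v = best + F * (X[r[0]] - X[r[1]])           # first difference pair
    if pairs == 2:
        v = v + F * (X[r[2]] - X[r[3]])          # second difference pair
    return v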
17. Proposed GSA-m Algorithm
1. Generate initial population
2. For I = 0 to max-iteration, or until the stopping criterion is reached, do
3. Evaluate the fitness for each agent
4. Update the G, best and worst of the population
5. Calculate M , F and a for each agent
6. Update velocity and position, i.e., the updated-agent
7. If the last r iterations give the same result or (I mod k) == 0
8. Create Difference-Offspring from updated-agent
9. Evaluate fitness;
10. If an offspring is better than updated-agent
11. Then replace the updated-agent by offspring in the next generation;
12. End If;
13. End If;
14. End For
15. Return approximate global optima
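Tying steps 1-15 to the earlier sketches, an illustrative GSA-m loop could look as follows; gsa_step and de_generation are the sketches defined above, and the stagnation window r, period k, and G-decay schedule are assumed values:

import numpy as np

def gsa_m(fitness, D, N=30, iters=500, G0=100.0, alpha=20.0, r=10, k=25,
          bounds=(-10.0, 10.0)):
    """Sketch of GSA-m: GSA with DE applied as a conditional mutation operator."""
    lo, hi = bounds
    X = lo + (hi - lo) * np.random.rand(N, D)    # step 1: initial population
    V = np.zeros((N, D))
    best_history = []
    for t in range(iters):                       # step 2: main loop
        G = G0 * np.exp(-alpha * t / iters)      # step 4: update G
        X, V = gsa_step(X, V, fitness, G)        # steps 3-6: M, F, a, velocity, position
        fit = np.array([fitness(x) for x in X])
        best_history.append(fit.min())
        stagnant = len(best_history) > r and len(set(best_history[-r:])) == 1
        if stagnant or t % k == 0:               # step 7: stagnation or periodic trigger
            X = de_generation(X, fitness)        # steps 8-11: DE offspring, greedy replace
    return X[np.argmin([fitness(x) for x in X])] # step 15: approximate global optimum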
18. Proposed GDE Algorithm
1. Generate initial population
2. For I = 0 to max-iteration, or until the stopping criterion is reached, do
3. Evaluate the fitness for each agent
4. Update the G, best and worst of the population
5. Calculate M , F and a for each agent
6. Update velocity and position, i.e., the updated-agent
7. Create Difference-Offspring from updated-agent
8. Evaluate fitness;
9. If an offspring is better than updated-agent
10. Then replace the updated-agent by the offspring in the next generation;
11. End If;
12. End For
13. Return approximate global optima
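Under the assumptions of the GSA-m sketch above, a GDE sketch is obtained simply by removing the stagnation/modulo guard of step 7 in GSA-m, so the de_generation refinement step runs in every iteration of the loop.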
23. Pareto Optimality
In a well-formed multi-objective problem, there is no
single solution that simultaneously minimizes every
objective to its fullest.
Instead, we look for a solution for which
each objective has been optimized to the extent that
optimizing it any further would make the other
objective(s) suffer as a result.
Finding such a solution, and quantifying how much
better it is than other such solutions (there will
generally be many), is the goal when setting up and
solving a multi-objective optimization problem.
24. Types of Domination
Given two decision (solution) vectors x and y,
we say that x weakly dominates (or simply dominates) y
(denoted x ⪯ y) if and only if fi(x) ≤ fi(y) for all
i = 1, ..., M (i.e., x is no worse than y in all objectives)
and fi(x) < fi(y) for at least one i ∈ {1, 2, ..., M}
(i.e., x is strictly better than y in at least one
objective).
A solution x strongly dominates a solution y (denoted
x ≺ y) if x is strictly better than y in all M objectives.
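For a minimization problem, the domination tests above translate directly into code; this is a small illustrative helper (the function names are ours):

import numpy as np

def dominates(fx, fy):
    """fx weakly dominates fy: no worse in every objective (fx <= fy
    element-wise) and strictly better in at least one (minimization)."""
    fx, fy = np.asarray(fx), np.asarray(fy)
    return bool(np.all(fx <= fy) and np.any(fx < fy))

def strongly_dominates(fx, fy):
    """fx is strictly better than fy in all M objectives."""
    fx, fy = np.asarray(fx), np.asarray(fy)
    return bool(np.all(fx < fy))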
27. Multi-objective Gravitational Optimization
(MOGO)
Equation (8), the GSA mass update

    m_i(t) = (fit_i(t) - worst(t)) / (best(t) - worst(t))    …(8)

is modified to

    m_i(t) = Σ_{k=1..m} (fit_i^k(t) - worst_k(t)) / (best_k(t) - worst_k(t))    …(14)

where m is the number of objectives, and best_k and worst_k are the
best and worst fitness values among the solutions for the kth
objective. The mass of an agent is the summation of its masses in all
dimensions of the objective space.
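Reading (14) this way, the mass computation extends from a fitness vector to a fitness matrix with one column per objective; a small illustrative sketch (the names and the eps guard are ours):

import numpy as np

def mogo_masses(fit, eps=1e-10):
    """Masses per eq. (14): sum over objectives of per-objective GSA masses.
    fit: (N, m) array, fit[i, k] = kth objective value of agent i (minimization)."""
    best_k = fit.min(axis=0)                  # best (minimum) value per objective
    worst_k = fit.max(axis=0)                 # worst (maximum) value per objective
    denom = best_k - worst_k
    denom = np.where(np.abs(denom) < eps, -eps, denom)  # guard degenerate objectives
    m = (fit - worst_k) / denom               # eq. (8) applied objective-wise
    M = m.sum(axis=1)                         # eq. (14): summation across objectives
    return M / (M.sum() + eps)                # normalized agent masses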
28. MOGO
1. Generate initial population and set the parameters
2. Evaluate fitness and add solutions to the archive
3. For I = 0 to max-iteration, or until the stopping criterion is reached, do
4. Update the G, best and worst of the population
5. Calculate M , F and a for each agent
6. Update velocity and position
7. Evaluate fitness
8. Domination check for the new set of solutions against the solutions in the archive
9. End For
10. Return the set of non-dominated solutions
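Step 8, the domination check against the archive, can be sketched with the dominates helper above; truncation and diversity maintenance are deliberately left out:

def update_archive(archive, x, fx):
    """Keep only non-dominated solutions. archive: list of (position, objectives)."""
    if any(dominates(fa, fx) for _, fa in archive):
        return archive                        # new solution is dominated: discard it
    archive = [(a, fa) for (a, fa) in archive if not dominates(fx, fa)]
    archive.append((x, fx))                   # insert the new non-dominated solution
    return archive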
31. Observations
GSA is implemented to optimize the gain parameters of a
PID controller.
GSA provides better gain values than other popular
algorithms, PSO and BFO.
The proposed GSA-m algorithm produces better results
than classical GSA on all test functions except F6.
The proposed GDE algorithm, in turn, generates better
results than both classical GSA and GSA-m on all the test
functions.
32. Observations
Results obtained from GSA and the new algorithms GSA-m and
GDE have been compared with the existing popular
optimization techniques GA and PSO.
On F3, GA outperforms all three physics-inspired
algorithms, and on F5, PSO does.
But GDE outclasses GA and PSO on all other test
functions, and even on F3 and F5 its results are not far
from those of these popular algorithms.
Hence, our new hybrid algorithm GDE is very
competitive with GA and PSO.
33. Observations
The distribution of non-dominated points is not entirely
uniform in all cases.
As far as the spreads of the Pareto fronts for the
benchmark test functions are concerned, our results are
satisfactory except for the Deb benchmark function.
Unlike MOPSO approaches, our proposed MOGO has no
leader-selection strategy. This in turn greatly reduces
the computational complexity compared to MOPSO
approaches.
Hence, the proposed MOGO is a novel algorithm that
serves its purpose quite well, and it could open up a
completely new arena of possibilities.
34. Publications
R. Chatterjee and M. N. Das, "Physics Inspired Optimization
Algorithms: Introducing New Hybrid Gravitational Differential
Evolution & Gravitational Search Algorithm with mutation",
International Symposium on Devices MEMS Intelligence System
Communication 2011, SMU, Sikkim, India, April 2011.
https://www.researchgate.net/publication/259193474_Physics_Inspired_Optimization_Algorithms_Introducing_New_Hybrid_Gravitational_Differential_Evolution_and_Gravitational_Search_Algorithm_with_mutation?ev=prf_pub
35. References
Darrell Whitley. A Genetic Algorithm Tutorial. Computer Science
Department, Colorado State University,
www.cs.colostate.edu/genitor/MiscPubs/tutorial.pdf.
David E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine
Learning. Addison-Wesley Longman Publishing Co., Inc., 1st edition, 1989.
E. Rashedi. Gravitational Search Algorithm. M.Sc. Thesis, Shahid Bahonar
University of Kerman, 2007.
Esmat Rashedi, Hossein Nezamabadi-pour and Saeid Saryazdi. GSA: A
Gravitational Search Algorithm. Information Sciences, 179:2232–2248, 2009.
H. Nezamabadi-pour, S. Saryazdi and E. Rashedi. Edge detection using ant
algorithms. Soft Computing, 10:623–628, 2006.
Hai Shen, Yunlong Zhu, Xiaoming Zhou, Haifeng Guo and Chuanguang
Chang. Bacterial Foraging Optimization Algorithm with Particle Swarm
Optimization Strategy for Global Numerical Optimization. In Proceedings
of GEC '09, the first ACM/SIGEVO Summit on Genetic and Evolutionary
Computation, 2009.
36. References
Hui Liu, Zixing Cai and Yong Wang. Hybridizing particle swarm optimization
with differential evolution for constrained numerical and engineering
optimization. Applied Soft Computing, 10.
James Kennedy and Russell Eberhart. Particle Swarm Optimization. In
Proceedings of the IEEE International Conference on Neural Networks,
pages 1942–1948, 1995.
Wael Mansour Korani. Bacterial foraging oriented by particle swarm
optimization strategy for PID tuning. In GECCO '08: Conference on Genetic
and Evolutionary Computation, ACM, 2008.
Xiaohui Hu, Russell C. Eberhart and Yuhui Shi. Particle Swarm with
Extended Memory for Multiobjective Optimization. In IEEE Swarm
Intelligence Symposium, pages 193–197, 2003.
Yuhui Shi and Russell C. Eberhart. Empirical study of Particle Swarm
Optimization. In Proceedings of the 1999 Congress on Evolutionary
Computation (CEC 99), 1999.
37. References
Liping Xie, Jianchao Zeng and Zhihua Cui. General framework of Artificial
Physics Optimization Algorithm. In Nature & Biologically Inspired Computing
(NaBIC 2009), pages 1321–1326, 2009.
N. C. Jagan. Control Systems. BS Publications, 2nd edition, 2008.
R. A. Formato. Central Force Optimization: A New Metaheuristic with
Applications in Applied Electromagnetics. Progress In Electromagnetics
Research, PIER 77:425–491, 2007.
Rainer Storn and Kenneth Price. Differential Evolution - A Simple and
Efficient Heuristic for Global Optimization over Continuous Spaces. Journal
of Global Optimization, 11(4), 1997.