The document summarizes three algorithms for multi-robot path planning: Bacterial Foraging Optimization (BFO), Ant Colony Optimization (ACO), and Particle Swarm Optimization (PSO). BFO is inspired by how bacteria such as E. coli search for food by swimming and tumbling. ACO is based on how ants deposit and follow pheromone trails to find food sources. PSO mimics the movement of bird flocks and fish schools. The document details the mechanisms and equations each algorithm uses to find optimal paths for multiple robots.
Spectroscopic sensing of soil nutrients (Bhawana Singh)
The 16 most essential soil nutrients can be classified as:
1. The primary macronutrients: nitrogen (N), phosphorus (P), potassium (K)
2. The secondary macronutrients: calcium (Ca), sulfur (S), magnesium (Mg)
3. The micronutrients: boron (B), chlorine (Cl), manganese (Mn), iron (Fe), zinc (Zn), copper (Cu), molybdenum (Mo), nickel (Ni)
These nutrients are supplied by the soil and by additions such as manure, compost, and fertilizer salts.
A soil testing program can be divided into six main components:
Soil sample preparation
Measurement
Reference analysis
Calibration
Validation
Predictions
https://www.youtube.com/watch?v=Lm1UKqqUhQ8&t=42s
METAHEURISTIC OPTIMIZATION ALGORITHM FOR THE SYNCHRONIZATION OF CHAOTIC MOBIL... (ijsc)
We provide a scheme for the synchronization of two chaotic mobile robots when there is a mismatch between the parameter values of the systems to be synchronized. We show how meta-heuristic optimization can be used to adapt the parameters in two coupled systems such that the two systems are synchronized, although their behavior is chaotic and they have started with different initial conditions and parameter settings. The controlled system synchronizes its dynamics with the control signal in the periodic as well as the chaotic regime. The method can also be seen as another way of controlling the chaotic behavior of a coupled system: in the case of coupled chaotic systems, under the interaction between them, their chaotic dynamics can be cooperatively self-organized. A synergistic meta-heuristic optimization search algorithm is developed. To avoid being trapped in a local optimum and to enrich the searching behavior, chaotic dynamics is incorporated into the proposed search algorithm. First, a chaotic Levy flight is incorporated for efficiently generating new solutions; second, a chaotic sequence and a psychological factor of emotion are introduced for move acceptance in the search algorithm. We illustrate the application of the algorithm by estimating the complete parameter vector of a chaotic mobile robot.
Load balancing using ant colony in cloud computing (ijitcs)
Ants are very small insects. They are capable of finding food even though they are completely blind. Ants live in their nest, and their job is to search for food when they get hungry. We are not interested in their living style, such as how they live or how they sleep, but in how they search for food and how they find the shortest path. This shortest-path technique is now being applied in cloud computing, where the ant colony approach gives better performance.
AGU Maosi Chen H31G-1590: Retrieval of surface ozone from UV-MFRSR irradiances... (Maosi Chen)
High concentrations of surface ozone are harmful to humans and plants. The USDA UV-B Monitoring and Research Program (UVMRP) uses the Ultraviolet (UV) version of the Multi-Filter Rotating Shadowband Radiometer (UV-MFRSR) to measure direct, diffuse, and total irradiances every 3 minutes at 7 UV channels (i.e., the 300, 305, 311, 317, 325, 332, and 368 nm channels with 2 nm full width at half maximum). A substantial literature explores retrieval methods for total column ozone from UV-MFRSR measurements, but few studies have explored the retrieval of surface ozone. Under clear-sky conditions, the absorption of UV irradiance by ozone is significant and varies with height and wavelength. Therefore, multi-channel UV irradiances at the ground have the potential to resolve ozone concentrations at multiple vertical layers (including surface ozone). In this study, we used a deep learning algorithm (i.e., the Self-Normalizing Neural Network, SNN) to retrieve surface ozone from 3-minute UV-MFRSR direct and diffuse irradiances (and the airmass) under clear-sky conditions at the UVMRP station located at Billings, Oklahoma. The 3-minute surface ozone data for training and validation are accumulated from 1-second surface ozone measured at the collocated Southern Great Plains (SGP) station by the US Department of Energy Atmospheric Radiation Measurement Climate Research Facility (ARM). To cover cloudy conditions, we also explored several spatial interpolation techniques [i.e., triangulation-based linear interpolation, Graph Convolutional Neural Network (GCNN or ChebNet), mixture model network (MoNet), and Recurrent Neural Network (RNN)] to estimate hourly surface ozone at the same UVMRP station from the adjacent (i.e., within a 3-degree box) US Environmental Protection Agency (EPA) hourly surface ozone observations.
A REVIEW OF PARTICLE SWARM OPTIMIZATION (PSO) ALGORITHM (IAEME Publication)
Particle swarm optimization (PSO) is a population-based stochastic optimization technique that is inspired by the intelligent collective behaviour of certain animals, such as flocks of birds or schools of fish. It has undergone numerous improvements since its debut in 1995. As academics became more familiar with the technique, they produced additional versions aimed at different demands, created new applications in a variety of fields, published theoretical analyses of the impacts of various factors, and offered other variants of the algorithm. This paper discusses the PSO's origins and background, as well as its theory analysis. Then, we examine the current state of research and application in algorithm structure, parameter selection, topological structure, discrete and parallel PSO algorithms, multi-objective optimization PSO, and engineering applications. Finally, existing difficulties are discussed, and new study directions are proposed.
This paper proposes a new methodology to optimize the trajectory of the path for multiple robots using an Improved Particle Swarm Optimization algorithm (IPSO) in a cluttered environment. The IPSO technique is incorporated into the multi-robot system in a dynamic framework, which provides robust performance, self-deterministic cooperation, and the ability to cope with an inhospitable environment.
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer reviewed journal. For more details or to submit your article, please visit www.ijera.com
ANALYSING THE MIGRATION PERIOD PARAMETER IN PARALLEL MULTI-SWARM PARTICLE SW... (ijcsit)
In recent years, there has been an increasing interest in parallel computing. In parallel computing, multiple
computing resources are used simultaneously in solving a problem. There are multiple processors that will
work concurrently and the program is divided into different tasks to be simultaneously solved. Recently, a
considerable literature has grown up around the theme of metaheuristic algorithms. Particle swarm
optimization (PSO) algorithm is a popular metaheuristic algorithm. The parallel comprehensive learning
particle swarm optimization (PCLPSO) algorithm based on PSO has multiple swarms based on the master-slave
paradigm and works cooperatively and concurrently. The migration period is an important parameter
in PCLPSO and affects the efficiency of the algorithm. We used the well-known benchmark functions in the
experiments and analysed the performance of PCLPSO using different migration periods.
Optimized Robot Path Planning Using Parallel Genetic Algorithm Based on Visib... (IJERA Editor)
An analysis is made of optimized path planning for a mobile robot using a parallel genetic algorithm. The parallel genetic algorithm (PGA) is applied with the visible midpoint approach to find the shortest path for a mobile robot. The hybrid of these two algorithms provides a better optimized solution for a smooth, shortest path. In this problem, the visible midpoint approach is used to avoid local minima effectively; it gives optimum paths that always consist of collision-free trajectories. The proposed hybrid parallel genetic algorithm converges very fast to the shortest route from source to destination due to the sharing of the population. The total population is partitioned into a number of subgroups to perform the parallel GA. The master thread is the center of information exchange and performs selection with fitness evaluation. The cell-to-cell crossover makes the algorithm significantly better, and the problem converges quickly within a small number of iterations.
A Randomized Load Balancing Algorithm In Grid using Max Min PSO Algorithm (IJORCS)
Grid computing is a new paradigm for next-generation computing; it enables the sharing and selection of geographically distributed heterogeneous resources for solving large-scale problems in science and engineering. Grid computing requires special software that is unique to the computing project for which the grid is being used. In this paper the proposed algorithm, namely a dynamic load balancing algorithm, is created for job scheduling in grid computing. Particle swarm optimization (PSO) is one of the latest evolutionary optimization techniques in swarm intelligence. It has better global searching performance and has been successfully applied to many areas. The performance measure used for scheduling is quality of service (QoS), such as makespan, cost, and deadline. The Max PSO and Min PSO algorithms have been partially integrated with PSO, and finally the load on the resources has been balanced.
Comparative Study of Ant Colony Optimization And Gang Scheduling (IJTET Journal)
Abstract: Ant Colony Optimization (ACO) is a well-known and rapidly evolving meta-heuristic technique. Many optimization problems have already taken advantage of the ACO technique, while countless others are on their way. ACO has been used as an effective algorithm for solving the scheduling problem in grid computing. Gang scheduling, by contrast, is a scheduling algorithm used on parallel systems that schedules related threads or processes to run simultaneously on different processors. The threads that are scheduled usually belong to the same process, but in some cases they come from different processes, for example when the processes have a producer-consumer relationship or when all processes come from the same MPI program.
IJRET (International Journal of Research in Engineering and Technology) is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
SWARM INTELLIGENCE FROM NATURAL TO ARTIFICIAL SYSTEMS: ANT COLONY OPTIMIZATION (Fransiskeran)
Successful applications have come from biologically inspired algorithms like Ant Colony Optimization (ACO), which is based on artificial swarm intelligence inspired by the collective behavior of social insects. ACO draws on the natural ant system: behavior, team coordination, and synchronization in the search for an optimal solution, while maintaining information about each ant. At present, ACO has emerged as a leading metaheuristic technique for solving combinatorial optimization problems and can be used to find the shortest path through a construction graph. This paper describes various behaviors of ants, successfully used ACO algorithms, applications, and current trends. In recent years, some researchers have also focused on applying ACO algorithms to the design of wireless communication networks, bioinformatics problems, dynamic problems, and multi-objective problems.
Impact of initialization of a modified particle swarm optimization on coopera... (IJECEIAES)
Swarm robotics is well known for its flexibility, scalability, and robustness, which make it suitable for solving many real-world problems. Source searching, which is characterized by complex operation due to the spatial characteristics of the source intensity distribution, uncertain searching environments, and rigid searching constraints, is an example of an application where swarm robotics can be applied. Particle swarm optimization (PSO) is one of the best-known algorithms used for source searching, and its effectiveness depends on several factors. Improper parameter selection may lead to premature convergence, so robots fail (i.e., low success rate) to locate the source within the given searching constraints. Additionally, target overshooting and improper initialization strategies may lead to non-optimal (i.e., slower-converging) target searching. In this study, a modified PSO and three different initialization strategies (i.e., random, equidistant, and centralized) were proposed. The findings show that the proposed PSO model successfully reduces target overshooting by choosing optimal PSO parameters and has a better convergence rate and success rate than the benchmark algorithms. Additionally, the findings indicate that random initialization gives better searching success than equidistant and centralized initialization.
EFFECTS OF THE DIFFERENT MIGRATION PERIODS ON PARALLEL MULTI-SWARM PSO (cscpconf)
In recent years, there has been an increasing interest in parallel computing. In parallel computing, multiple computing resources are used simultaneously in solving a problem. There are multiple processors that will work concurrently and the program is divided into different tasks to be simultaneously solved. Recently, a considerable literature has grown up around the theme of metaheuristic algorithms. Particle swarm optimization (PSO) algorithm is a popular metaheuristic algorithm. The parallel comprehensive learning particle swarm optimization (PCLPSO) algorithm based on PSO has multiple swarms based on the master-slave paradigm and works cooperatively and concurrently. The migration period is an important parameter in PCLPSO and affects the efficiency of the algorithm. We used the well-known benchmark functions in the experiments and analysed the performance of PCLPSO using different migration periods.
Effects of The Different Migration Periods on Parallel Multi-Swarm PSO (csandit)
In recent years, there has been an increasing interest in parallel computing. In parallel computing, multiple computing resources are used simultaneously in solving a problem. There are multiple processors that will work concurrently and the program is divided into different tasks to be simultaneously solved. Recently, a considerable literature has grown up around the theme of metaheuristic algorithms. Particle swarm optimization (PSO) algorithm is a popular metaheuristic algorithm. The parallel comprehensive learning particle swarm optimization (PCLPSO) algorithm based on PSO has multiple swarms based on the master-slave paradigm and works cooperatively and concurrently. The migration period is an important parameter in PCLPSO and affects the efficiency of the algorithm. We used the well-known benchmark functions in the experiments and analysed the performance of PCLPSO using different migration periods.
International Journal of Computer Science and Engineering Research and Development (IJCSERD), ISSN 2248-9363 (Print), ISSN 2248-9371 (Online), Volume 1, Number 1, April-June (2011)
research has led to the development of better path optimization solutions with comparatively less computational time and path cost.
There are basically two main components of global or calculated path planning: 1) the robot's representation of the world in configuration space (C-space), and 2) the algorithm implementation. These two components are interconnected and greatly dependent on one another in determining the minimal path for the robot to traverse.
The main focus of the paper is to present a distributed PSO approach, a BFO approach, and finally the ACS algorithm for RPP. The paper is organized as follows: Section 2 reviews the PSO algorithm, Section 3 reviews the BFO algorithm, Section 4 reviews ACS, and Section 5 concludes the survey.
II. CLASSIC PARTICLE SWARM OPTIMIZATION (PSO) ALGORITHM
Particle swarm optimization is a bio-mimetic optimization technique developed by Eberhart and Kennedy in 1995, inspired by the social behaviour of bird flocking and fish schooling. The central concept of PSO is that, at each time step, the velocity of each particle is changed towards its best location. Over the past several years PSO has been successfully applied in many application and research areas [3].
In PSO the entire population of particles is maintained throughout the exploration, as there are no crossover or mutation operators. Each particle moves around a multidimensional search space with a velocity and position that are updated based on the particle's own experience and that of the swarm (the collection of particles). Information sharing may or may not occur between particles or the entire swarm during each iteration of the algorithm; when it does, it effectively influences the other particles' search of an area.
The basic pseudo code for the classic PSO algorithm taken from [4][5] is:
Step 1: Initialize population with random positions and velocities.
Step 2: Evaluate – compute fitness of each particle in the swarm.
Step 3: Do for each particle in swarm {
Step 3.1: Find particle best (pbest) – compute fitness of the particle.
If current fitness < pbest
pbest = current fitness
pbest location = current location
Endif
Step 3.2: Find global best (gbest) – best fitness of all pbest.
gbest location = location of min(all pbest)
Step 3.3: Update velocity of particle per equation 2.
Step 3.4: Update position of particle per equation 3. }
Step 4: Repeat steps 3.1 through 3.4 until the termination condition is reached.
A population of particles is formed and then evaluated to compute fitness and to find the particle best and global best of the population/swarm. The particle best, pbest, is the best fitness value a particle has attained up to that moment in time, whereas the global best, gbest, represents the best particle in the entire swarm. A global best, gbest, or a local best, lbest, is used for a single neighborhood or multiple local neighborhoods, respectively; when multiple local neighborhoods are used, each particle stays within its local neighborhood for all iterations of the program. A PSO algorithm may use a fitness function based on the Euclidean distance from a particle to a target. The coordinates of the particle are (px, py) and those of the target are (tx, ty), as shown in equation 1.
Fitness = sqrt((tx − px)^2 + (ty − py)^2)   (1)
The velocity of a particle is computed as follows [6]:

v(n+1) = w × v(n) + c1 × r1 × (pbest − p(n)) + c2 × r2 × (gbest − p(n))   (2)

where the parameters used are:
w = inertia coefficient
v(n) = current particle velocity
r1 = uniformly distributed random number in [0, 1]
r2 = uniformly distributed random number in [0, 1]
c1, c2 = acceleration constants

The position of the particle is based on its previous position, p(n), and its velocity over a unit of time:

p(n+1) = p(n) + v(n+1)   (3)
Each particle is accelerated towards its pbest and gbest locations using the acceleration constants c1 and c2, respectively, at each time step. Typical parameter values are w = 0.9 and c1 = c2 = 1 [7][8]. Maximum velocities for some small robots noted in the literature are 20 cm/sec [9], 100 cm/sec [10], and 1 m/sec [11]. A large w favors global search while a small w favors local search. The search process terminates when a predetermined number of iterations is reached, or when the minimum error requirement is met for all particles.
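As an illustration, the fitness of equation (1) and the update rules of equations (2) and (3) can be sketched in Python. The parameter values follow the typical settings quoted above (w = 0.9, c1 = c2 = 1); the search bounds and iteration count are arbitrary assumptions, not prescribed by the survey:

```python
import math
import random

def pso(target, n_particles=20, iters=200, w=0.9, c1=1.0, c2=1.0, seed=0):
    """Minimal 2-D PSO sketch using the distance fitness of eq. (1)
    and the velocity/position updates of eqs. (2)-(3)."""
    rng = random.Random(seed)
    tx, ty = target

    def fitness(px, py):                      # eq. (1): Euclidean distance to target
        return math.sqrt((tx - px) ** 2 + (ty - py) ** 2)

    pos = [[rng.uniform(-10, 10), rng.uniform(-10, 10)] for _ in range(n_particles)]
    vel = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(*p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                # eq. (2): inertia + cognitive + social terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]        # eq. (3)
            f = fitness(*pos[i])
            if f < pbest_fit[i]:              # step 3.1: update particle best
                pbest_fit[i], pbest[i] = f, pos[i][:]
                if f < gbest_fit:             # step 3.2: update global best
                    gbest_fit, gbest = f, pos[i][:]
    return gbest, gbest_fit
```

The pbest/gbest bookkeeping guarantees the reported fitness never worsens across iterations, mirroring steps 3.1 and 3.2 of the pseudo code.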
III. THE BACTERIA FORAGING OPTIMIZATION ALGORITHM
The bacteria foraging optimization algorithm (BFOA), proposed by Passino [12], is a newcomer to the family of nature-inspired optimization algorithms. BFOA is inspired by the social foraging behaviour of Escherichia coli. A bacterium makes foraging decisions based on two factors: first, communicating with others by sending signals, and second, searching for nutrients so as to maximize the energy obtained per unit time.
The process by which a bacterium moves in small steps while searching for nutrients is called chemotaxis, and the key idea of BFOA is mimicking the chemotactic movement of virtual bacteria in the problem search space. On obtaining sufficient food, bacteria increase in length and, at a suitable temperature, form exact replicas of themselves by splitting in the middle. Chemotactic progress may be disrupted by the occurrence of sudden environmental changes.
Thus a group of bacteria may shift to some other place, or other bacteria may join the swarm of concern. This leads to the event of elimination-dispersal in the bacterial population, where all the bacteria in a region are killed or a group is dispersed into a new part of the environment.
Now suppose that we want to find the minimum of J(θ), where θ ∈ R^p (i.e., θ is a p-dimensional vector of real numbers), and we have neither measurements nor an analytical description of the gradient ∇J(θ). BFOA mimics the four principal mechanisms observed in a real bacterial system — chemotaxis, swarming, reproduction, and elimination-dispersal — to solve this non-gradient optimization problem. A virtual bacterium is actually one trial solution (also called a search agent) that moves on the functional surface (see Figure 1 [13]) to locate the global optimum.
Figure 1: A bacterial swarm on a multi-modal objective function surface [13]
Let us define a chemotactic step to be a tumble followed by a tumble or a tumble followed by a run. Let j be the index for the chemotactic step, k the index for the reproduction step, and l the index of the elimination-dispersal event. Also let
p: dimension of the search space,
S: total number of bacteria in the population,
Nc: the number of chemotactic steps,
Ns: the swimming length,
Nre: the number of reproduction steps,
Ned: the number of elimination-dispersal events,
Ped: elimination-dispersal probability,
C(i): the size of the step taken in the random direction specified by the tumble.
Let P(j, k, l) = {θ^i(j, k, l) | i = 1, 2, …, S} represent the positions of the members of the population of S bacteria at the j-th chemotactic step, k-th reproduction step, and l-th elimination-dispersal event. Here, let J(i, j, k, l) denote the cost at the location θ^i(j, k, l) of the i-th bacterium (sometimes we drop the indices and refer to the i-th bacterium position simply as θ^i). Note that we will interchangeably refer to J as a "cost" (using terminology from optimization theory) and as a nutrient surface (in reference to the biological connections). For actual bacterial populations, S can be very large (e.g., S = 10^9), but p = 3. Below we briefly describe the four prime steps in BFOA.
i) Chemotaxis: This process simulates the movement of an E. coli cell through swimming and tumbling via flagella. Biologically, an E. coli bacterium can move in two different ways. It can
swim for a period of time in the same direction, or it may tumble, and it alternates between these two modes of operation for its entire lifetime. Suppose θ^i(j, k, l) represents the i-th bacterium at the j-th chemotactic, k-th reproduction and l-th elimination-dispersal step, and C(i) is the size of the step taken in the random direction specified by the tumble (the run length unit). Then in computational chemotaxis the movement of the bacterium may be represented by:

θ^i(j+1, k, l) = θ^i(j, k, l) + C(i) × Δ(i) / sqrt(Δᵀ(i)Δ(i))   (1)

where Δ(i) indicates a vector in the random direction whose elements lie in [−1, 1].
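A single tumble of equation (1) can be sketched in a few lines of Python (an illustrative helper, not code from the paper):

```python
import math
import random

def chemotactic_step(theta, C_i, rng=random):
    """One tumble per eq. (1): move theta by step size C(i) along the
    random unit direction Delta / sqrt(Delta^T Delta)."""
    delta = [rng.uniform(-1.0, 1.0) for _ in theta]        # elements in [-1, 1]
    norm = math.sqrt(sum(d * d for d in delta)) or 1.0     # guard against a zero vector
    return [t + C_i * d / norm for t, d in zip(theta, delta)]
```

Because Δ(i) is normalized, the displacement always has length exactly C(i), regardless of the dimension p.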
ii) Swarming: An interesting group behaviour has been observed for several motile species of bacteria, including E. coli and S. typhimurium, in which intricate and stable spatio-temporal patterns are formed in a semisolid nutrient medium. A group of E. coli cells arranges itself in a travelling ring by moving up the nutrient gradient when placed amidst a semisolid matrix with a single nutrient chemo-effector. The cell-to-cell signalling in an E. coli swarm may be represented by the following function:

Jcc(θ, P(j, k, l)) = Σ_{i=1}^{S} Jcc(θ, θ^i(j, k, l))
  = Σ_{i=1}^{S} [ −d_attract × exp( −w_attract × Σ_{m=1}^{p} (θ_m − θ_m^i)^2 ) ]
    + Σ_{i=1}^{S} [ h_repellant × exp( −w_repellant × Σ_{m=1}^{p} (θ_m − θ_m^i)^2 ) ]   (2)

where Jcc(θ, P(j, k, l)) is the objective function value to be added to the actual objective function (to be minimized) to present a time-varying objective function, S is the total number of bacteria, p is the number of variables to be optimized (present in each bacterium), and θ = [θ1, θ2, …, θp]ᵀ is a point in the p-dimensional search domain. d_attract, w_attract, h_repellant and w_repellant are coefficients to be chosen properly.
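The cell-to-cell term of equation (2) can be sketched as follows; the default coefficient values are illustrative assumptions only, since the survey leaves them to be "chosen properly":

```python
import math

def j_cc(theta, population, d_attract=0.1, w_attract=0.2,
         h_repellant=0.1, w_repellant=10.0):
    """Cell-to-cell attractant/repellant profile Jcc of eq. (2).
    `population` is the list of all bacteria positions theta^i."""
    total = 0.0
    for theta_i in population:
        sq = sum((tm - tim) ** 2 for tm, tim in zip(theta, theta_i))
        total += -d_attract * math.exp(-w_attract * sq)      # attraction well
        total += h_repellant * math.exp(-w_repellant * sq)   # repulsion bump
    return total
```

With d_attract = h_repellant the two terms cancel exactly at zero separation, while at larger distances the slowly decaying attraction term dominates and Jcc becomes negative, pulling bacteria together.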
iii) Reproduction: The least healthy bacteria eventually die, whereas each of the healthier bacteria (those yielding lower values of the objective function) asexually splits into two bacteria, which are then placed in the same location, keeping the swarm size constant.
iv) Elimination and Dispersal: Gradual or sudden changes in the local environment where a bacterium population lives may occur for various reasons; e.g., a significant local rise in temperature may kill a group of bacteria currently in a region with a high concentration of nutrient gradients. All the bacteria in a region may be killed, or a group may disperse to a new location. To simulate this phenomenon in BFOA, some bacteria are eliminated at random with a very small probability while their replacements are randomly initialized over the search space.
The pseudo-code of the complete algorithm is presented below:
The BFOA Algorithm
Parameters:
Step 1: Initialize parameters p, S, Nc, Ns, Nre, Ned, Ped, C(i) (i = 1, 2, …, S), θ^i.
Algorithm:
Step 2: Elimination-dispersal loop: l = l + 1
Step 3: Reproduction loop: k = k + 1
Step 4: Chemotaxis loop: j = j + 1
[a] For i = 1, 2, …, S, take a chemotactic step for bacterium i as follows.
[b] Compute the fitness function J(i, j, k, l). Let
J(i, j, k, l) = J(i, j, k, l) + Jcc(θ^i(j, k, l), P(j, k, l))
(i.e., add on the cell-to-cell attractant–repellant profile to simulate the swarming behaviour), where Jcc is defined in (2).
[c] Let Jlast = J(i, j, k, l), to save this value since we may find a better cost via a run.
[d] Tumble: generate a random vector Δ(i) ∈ R^p with each element Δm(i), m = 1, 2, …, p, a random number in [−1, 1].
[e] Move: let

θ^i(j+1, k, l) = θ^i(j, k, l) + C(i) × Δ(i) / sqrt(Δᵀ(i)Δ(i))

This results in a step of size C(i) in the direction of the tumble for bacterium i.
[f] Compute J(i, j+1, k, l) and let
J(i, j+1, k, l) = J(i, j+1, k, l) + Jcc(θ^i(j+1, k, l), P(j+1, k, l))
[g] Swim:
i) Let m = 0 (counter for swim length).
ii) While m < Ns (i.e., the bacterium has not climbed down too long):
• Let m = m + 1.
• If J(i, j+1, k, l) < Jlast (if doing better), let Jlast = J(i, j+1, k, l) and let

θ^i(j+1, k, l) = θ^i(j+1, k, l) + C(i) × Δ(i) / sqrt(Δᵀ(i)Δ(i))

and use this θ^i(j+1, k, l) to compute the new J(i, j+1, k, l) as in [f].
• Else, let m = Ns. This is the end of the while statement.
[h] Go to the next bacterium (i + 1) if i ≠ S (i.e., go to [b] to process the next bacterium).
Step 5: If j < Nc, go to Step 4. In this case continue chemotaxis, since the life of the bacteria is not over.
Step 6: Reproduction:
[a] For the given k and l, and for each i = 1, 2, …, S, let

J_health(i) = Σ_{j=1}^{Nc+1} J(i, j, k, l)   (3)

be the health of bacterium i (a measure of how many nutrients it got over its lifetime and how successful it was at avoiding noxious substances). Sort the bacteria and the chemotactic parameters C(i) in order of ascending cost J_health (higher cost means lower health).
[b] The Sr = S/2 bacteria with the highest J_health values die, and the remaining Sr bacteria with the best values each split in two (the copies that are made are placed at the same location as their parent).
Step 7: If k < Nre, go to Step 3. In this case we have not reached the specified number of reproduction steps, so we start the next generation of the chemotactic loop.
Step 8: Elimination-dispersal: For i = 1, 2, …, S, with probability Ped, eliminate and disperse each bacterium (this keeps the number of bacteria in the population constant). To do this, if a bacterium is eliminated, simply disperse another one to a random location in the optimization domain. If l < Ned, then go to Step 2; otherwise end.
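Steps 6 and 8 can be sketched in Python as follows; the halving Sr = S/2 and the uniform re-initialization bounds are the standard choices assumed here:

```python
import random

def reproduce(population, health):
    """Reproduction (Step 6): sort by ascending accumulated cost (eq. (3)),
    kill the worse half, and split each survivor in place (Sr = S/2)."""
    order = sorted(range(len(population)), key=lambda i: health[i])
    survivors = [population[i] for i in order[: len(population) // 2]]
    return [theta[:] for theta in survivors for _ in (0, 1)]   # each splits into two

def eliminate_disperse(population, p_ed, low, high, rng=random):
    """Elimination-dispersal (Step 8): with probability Ped, replace a
    bacterium with a fresh random position, keeping the size constant."""
    return [[rng.uniform(low, high) for _ in theta] if rng.random() < p_ed else theta
            for theta in population]
```

Note that reproduction copies survivors onto the same locations as their parents, so the swarm size S is preserved through both steps.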
IV. ANT COLONY SYSTEM ALGORITHM
Ant algorithms are multi-agent systems that exploit artificial stigmergy as a means for
coordinating artificial ants for the solution of computational problems. The flow chart and the
algorithm on the basis of the Figure 2 [14] is given below:
Step 1: Starting from the start node located at x-y coordinate (1, 1), the ant will start to move from one node to other feasible adjacent nodes during map construction, as shown in Figure 3. As depicted, there are three possible movements available for the ant from the start node (1, 1). The possible nodes are: node 1 at x-y coordinate (1, 3), node 2 at (2, 3) and node 3 at (3, 2).
Step 2: The ant will then take its next step, moving randomly based on the probability given by equation (1) below:
Probability_ij(t) = Heuristic_ij(t) × Pheromone_ij(t)
  = [1 / (distance between next point and intersect point at reference line)]^β × [trail / Σ trail]^α   (1)

where Heuristic_ij(t) weights every possible adjacent node to be traversed by the ant from its grid position at each time t, and Pheromone_ij(t) is the pheromone accumulated between nodes as the ants traverse them over time. The probability therefore depends on both values, which guide the ants in choosing among the possible nodes at every time t.
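The random move of Step 2 is typically realized as a roulette-wheel draw over the Heuristic × Pheromone products of equation (1). The sketch below assumes candidate nodes are given as a mapping to (heuristic, pheromone) pairs; the node names are illustrative:

```python
import random

def choose_next_node(candidates, rng=random):
    """Roulette-wheel choice over Heuristic_ij x Pheromone_ij (eq. (1)).
    `candidates` maps node -> (heuristic, pheromone)."""
    weights = {n: h * p for n, (h, p) in candidates.items()}
    total = sum(weights.values())
    r = rng.uniform(0.0, total)          # draw a point on the cumulative weight line
    acc = 0.0
    for node, w in weights.items():
        acc += w
        if r <= acc:
            return node
    return node                          # fallback for floating-point round-off
```

Nodes with larger products are chosen proportionally more often, which is exactly how the pheromone trail biases the ants towards shorter paths.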
Figure 2: Outline of ACO for RPP of a mobile robot [14]
Figure 3: Ant current position (red nodes) with 3 possible next positions (green nodes) [14]
A. Derivation of the heuristic (visibility) equation
The heuristic equation was derived from the idea of local and global search solutions for RPP purposes. It represents the distance between a selected adjacent node and the intersect point on the reference line, where the intersect point must be perpendicular to the next node (Distance A), as shown in Figure 4. A node pair with a shorter distance should have a higher probability of being chosen than one with a longer distance. To ensure this, the proposed heuristic is the inverse of the distance found, as shown in equation (2) below:

Heuristic = [1 / (distance between next point and intersect point at reference line)]^β   (2)

where β = heuristic coefficient.
Figure 4 depicts the proposed solution of using the Pythagorean theorem to calculate Distance A. Here θ refers to the angle between the vector from the intersect point to the next visited node and the line from the intersect point to the goal position, while Distance B refers to the distance from the intersect point on the reference line to the next visited node. The intersect point is obtained where the line through the next node parallel to the x-axis intersects the reference line. Thus, the equation for Distance A is:

Distance A = sin θ × sqrt((y2 − y1)^2 + (x2 − x1)^2)   (3)

Figure 4: Derivation of Distance A [14]
As depicted in Figure 4, the heuristic can be calculated as shown below:
Distance A = sin 45° × sqrt((2 − 2)^2 + (3 − 2)^2)
  = 0.7071 × 1 = 0.7071
Visibility = [1 / 0.7071]^1 = 1.4142
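This worked example is easy to reproduce in code; the angle 45°, the node coordinates, and β = 1 are the values taken from the calculation above:

```python
import math

# Worked example: theta = 45 degrees, nodes (x1, y1) = (2, 2) -> (x2, y2) = (3, 2), beta = 1
distance_b = math.sqrt((2 - 2) ** 2 + (3 - 2) ** 2)    # Euclidean distance between nodes
distance_a = math.sin(math.radians(45)) * distance_b   # eq. (3)
visibility = (1 / distance_a) ** 1                     # eq. (2) with beta = 1
print(round(distance_a, 4), round(visibility, 4))      # 0.7071 1.4142
```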
B. Derivation of the trail equation
The artificial ants behave similarly to real ants: the deposited pheromone attracts subsequent ants to follow the shortest path, until the process converges when all ants follow the same path to the target. Equation (4) is the trail equation used in this research:

Trail = [trail / Σ trail]^α   (4)

where α = trail coefficient.
The concept of the amount of pheromone used is similar to the original ACS concept: the higher the amount of pheromone, the higher the probability value that attracts more ants to follow the shortest path, which simultaneously causes the ACS to converge faster. Conversely, if the amount of pheromone is smaller, the probability is lower and the path cost will be higher, as the ants avoid traversing a path with a lower pheromone amount.
Step 3: Each time an ant moves from one node to another, the pheromone amount is reduced locally at the given evaporation rate, using the local update rule shown below:

Tij(trail_new) ← (1 − ρ) × Tij(trail_old)   (5)

where ρ = evaporation rate.
This equation shows that each time an ant moves from one node to another, the local pheromone amount is updated in parallel. This process prevents unlimited accumulation of pheromone on the map and enables the algorithm to forget a bad decision made previously.
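Equation (5) is a one-line rule; the sketch below uses an assumed evaporation rate of ρ = 0.1:

```python
def local_update(trail_old, rho=0.1):
    """Local pheromone update of eq. (5): evaporate a fraction rho of the
    trail each time an ant traverses the edge (rho = 0.1 is assumed)."""
    return (1.0 - rho) * trail_old
```

Repeated application decays the trail geometrically, which is what keeps the pheromone on rarely reinforced edges from persisting indefinitely.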
Step 4: Once an ant finds its path to the goal, its fitness is calculated. This covers the calculation of the distance, or path cost, each ant incurs in traversing from start point to goal point, using the objective function for RPP below:

Distance = sqrt((y2 − y1)^2 + (x2 − x1)^2)   (6)
Step 5: The fitness value is then used in the global update process. When all ants reach the destination, they update the pheromone value globally based on the fitness found
by each ant, using Equation (7) below. This process is repeated, as the path traversed by the ants in each generation is determined using this global value.
Generally, this process attracts the ants to follow the optimal path. During the process, a path with a shorter distance will tend to be chosen, since its probability of being chosen is higher than that of a longer path. The global updating equation is derived in (7) and (8) below:
τij ← τij + Δτij^m   (7)

where Δτij^m is the amount of pheromone ant m deposits on the path it has visited. It is defined as:

Δτij^m = Q / Ck,  if arc (i, j) belongs to path Pk
Δτij^m = 0,       otherwise   (8)

where Q is the number of nodes and Ck is the length of the path Pk built by ant k.
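Equations (7) and (8) can be sketched as follows, assuming each ant's path is supplied as a list of arcs together with its length Ck:

```python
def global_update(tau, ant_paths, Q):
    """Global update of eqs. (7)-(8): each ant deposits Q / Ck on every
    arc of the path it built, where Ck is that path's length.
    `tau` maps arc (i, j) -> pheromone; `ant_paths` is [(arcs, Ck), ...]."""
    for path, length in ant_paths:
        deposit = Q / length                         # eq. (8): Q / Ck on visited arcs
        for arc in path:
            tau[arc] = tau.get(arc, 0.0) + deposit   # eq. (7): accumulate on the map
    return tau
```

Arcs not visited by any ant receive no deposit, matching the "0 otherwise" branch of equation (8).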
Step 6: The process is repeated from Step 1 to Step 5 until it converges. The process stops when all ants traverse the same path, showing that the shortest path to the goal has been found. The detailed flow of the algorithm is given in simplified pseudo code below.
C. Pseudo Code of ACS Algorithm for RPP
For iteration t = 1, 2, 3, …, tmax
  For ant m = 1, 2, 3, …, n
    For node step n = 1, 2, 3, …, n
      Compute the probability of the mth ant's next nodes
      Move to the next node by computed probability
      Store history of past node locations in an array
      If current node location equals the destination
        Break the node (n) loop
      End
    End
    Evaluate fitness and store path distance of mth ant
    Compute pheromone amount generated by mth ant
  End
  Update pheromone amount of the entire map
End
V. CONCLUSION
This survey of robot path planning algorithms suggests that in PSO the path generated in each step is feasible and the robot can reach its target without colliding with obstacles. For BFO we have given a detailed study of the algorithm, finding that its major process is the chemotactic behaviour. ACO is robust and effective in finding the optimal path, as it satisfies the optimization criteria of reducing path cost and computation time for RPP.
REFERENCES
[1] O. Khatib, "Real-time obstacle avoidance for manipulators and mobile robots", International Journal of Robotics Research, Vol. 5(1), 1986, pp. 90-98.
[2] G. Nagib, W. Gharieb, "Path planning for a mobile robot using genetic algorithms", IEEE Proceedings of Robotics, 2004, pp. 185-189.
[3] R. C. Eberhart and Y. H. Shi, "Particle swarm optimization: Developments, applications and resources", in Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2001), Seoul, South Korea, May 2001, pp. 81-86.
[4] M. Tasgetiren, Y. Liang, "A binary particle swarm optimization algorithm for lot sizing problem", Journal of Economic and Social Research, 5(2), pp. 1-20.
[5] W. Jatmiko, K. Sekiyama, T. Fukuda, "A PSO-based mobile sensor network for odor source localization in dynamic environment: Theory, simulation and measurement", 2006 IEEE Congress on Evolutionary Computation, Vancouver, BC, July 2006, pp. 3781-3788.
[6] L. Smith, G. Venayagamoorthy, P. Holloway, "Obstacle avoidance in collective robotic search using particle swarm optimization".
[7] S. Doctor, G. Venayagamoorthy, V. Gudise, "Optimal PSO for collective robotic search applications", IEEE Congress on Evolutionary Computation, Portland, OR, June 2004, pp. 1390-1395.
[8] S. Doctor, G. Venayagamoorthy, "Unmanned vehicle navigation using swarm intelligence", Intelligent Sensing and Information Processing, ICISIP 2005.
[9] R. Grabowski, L. Navarro-Serment, P. Khosla, "An army of small robots", Scientific American Reports, Vol. 18, No. 1, May 2008, pp. 34-39.
[10] "Wireless meshed network for building automation", Industrial Wireless Book, Issue 10:2, http://wireless.industrial-networking.com/articles/articledisplay.asp?id=1264.
[11] J. Pugh, A. Martinoli, "Inspiring and modeling multi-robot search with particle swarm optimization".
[12] K. M. Passino, "Biomimicry of bacterial foraging for distributed optimization and control", IEEE Control Systems Magazine, 22, 2002, pp. 52-67.
[13] S. Das, A. Biswas, S. Dasgupta, A. Abraham, "Bacterial foraging optimization algorithm: Theoretical foundations, analysis, and applications".
[14] N. B. Sariff, N. Buniyamin, "Ant colony system for robot path planning in global static environment".