An Improved Simulated Annealing Algorithm (SAA)
EEPE40- MODERN OPTIMIZATION TECHNIQUES FOR
ELECTRICAL POWER SYSTEMS
DEPARTMENT OF
ELECTRICAL AND ELECTRONICS ENGINEERING
NATIONAL INSTITUTE OF TECHNOLOGY
TIRUCHIRAPPALLI – 620015
Submitted By,
Anmol Dwivedi (107115009)
ACKNOWLEDGEMENTS
My sincere thanks to Mr. Mukesh Muthu, Department of Electrical and Electronics
Engineering, National Institute of Technology, Tiruchirappalli, for his consent and support.
ABSTRACT
Simulated annealing is a general-purpose stochastic optimization technique that
has proven to be an effective tool for approximating globally optimal solutions to
many types of hard combinatorial optimization problems. It is based on an analogy
with the physical annealing process, a technique in condensed matter physics for
obtaining the minimum-energy state of a solid. There are many optimization
algorithms, including hill climbing, genetic algorithms and gradient descent, but
simulated annealing's strength is that it avoids getting caught at local maxima or
minima, solutions that are better than any nearby alternatives but not necessarily
globally optimal.
The paradigm has also been proven to be effective in the field of power systems.
However, the major drawback of the paradigm is its typically high and sometimes
prohibitive computational cost. The optimal power flow problem has been widely
studied in order to improve power system operation and planning. For real power
systems, the problem is formulated as a non-linear, large-scale combinatorial
problem. The first approaches used to solve it were based on mathematical
methods that required huge computational effort. Lately, artificial
intelligence techniques, such as metaheuristics based on biological processes, were
adopted. Metaheuristics require lower computational resources, which is a clear
advantage for addressing the problem in large power systems.
In this report, the algorithm is first tested on standard functions: the Himmelblau,
Easom and Ackley functions. The algorithm is then used to solve an Optimal Power
Flow (OPF) problem with the objective of minimizing transmission line losses for the
IEEE 14-bus and 30-bus test systems, and the results section presents the
conclusions of the work.
TABLE OF CONTENTS
ABSTRACT
ACKNOWLEDGEMENTS
TABLE OF CONTENTS
REFERENCES
1. Introduction
2. Algorithm & Flowchart
3. Performance for standard functions
4. Optimal power flow (Objective to minimize loss) using SAA
5. Formulation of OPF
6. Algorithm for OPF
7. Performance of Algorithm & Results
IEEE 30 bus system
IEEE 14 bus system
8. Conclusion
Introduction:
Meta-heuristic optimization algorithms have received significant attention and remarkable
growth over the past few decades. The most common way to classify meta-heuristic
algorithms is based on solo-searchers (simulated annealing (SA), tabu search (TS), hill
climbing (HC)) versus the population-based searchers, such as genetic algorithm (GA),
particle swarm optimization (PSO), ant colony optimization (ACO) and others. The
former employ a single solution during the search process, while the latter evolve a
population of solutions over a given number of iterations.
Population-based algorithms have been found to perform well on many real world
problems. This has led to an effort by researchers to understand and explain this
behaviour. Simulated annealing (SA) is a popular generic probabilistic solo-algorithm used
for global optimization problems. The name and inspiration stem from annealing in
metallurgy, a process involving heating and controlled cooling of a material to increase the
size of its crystals and reduce its defects. Heating lets the atoms break free of their
initial positions and wander randomly through states of higher energy, while slow
cooling increases the chance of finding configurations with lower internal energy
than the initial one. By analogy with this physical process, in SA each feasible
solution is analogous to a
state of a physical system, and the fitness function which needs to be minimized is similar
to the internal energy of the system in that state. The ultimate goal is to bring the system,
from an arbitrary (random) initial state, to a state in which the energy of the system is
minimal.
In each stage, SA replaces the current solution by a random nearby solution with a
probability depending both on the difference between the corresponding fitness values and
also on a parameter named temperature. Its ease of implementation makes SA an
extremely popular method for solving large practical problems such as the travelling
salesman problem, communication system design and continuous optimization,
among others. However, SA
suffers from two main drawbacks, being trapped in local minima and taking long
computational time to find a reasonable solution. Because SA is a solo-searcher, its
success depends strongly on the selection of the starting point and the decisions it
makes along the way. Any bad luck can therefore affect the results, and a local
minimum may be reached instead of the global one, especially when the problem
dimension is high and there are many local minima. Moreover, exploring the search
space with a single solution takes a long time to reach a reasonable solution. In order to
improve the SA performance, various researchers have developed different strategies like
faster annealing schedules [1], simulated annealing with an adaptive non-uniform mutation
[2], adaptive simulated annealing (ASA) [3], and hybridization of SA with other heuristics,
such as genetic algorithm [4].
Formulation:
The laws of thermodynamics state that, at temperature T, the probability of an
increase in energy of magnitude E is given by

P(E) = e^(-E / kT)        (1)
Where k is a constant known as Boltzmann’s constant. The simulation in the Metropolis
algorithm calculates the new energy of the system. If the energy has decreased then the
system moves to this state. If the energy has increased then the new state is accepted
using the probability returned by the above formula. A certain number of iterations are
carried out at each temperature and then the temperature is decreased. This is repeated
until the system freezes into a steady state.
This equation is directly used in simulated annealing, although it is usual to drop the
Boltzmann constant as this was only introduced into the equation to cope with different
materials. Therefore, the probability of accepting a worse state is given by the equation
c
T
P e r
(2)
Where
c = the change in the evaluation function
T = the current temperature
r = a random number between 0 and 1
The probability of accepting a worse move is a function of both the temperature of the
system and of the change in the cost function. It can be appreciated that as the
temperature of the system decreases the probability of accepting a worse move is
decreased. This is the same as gradually moving to a frozen state in physical annealing.
Also note, for small temperature, only better moves will be accepted which effectively
makes simulated annealing act like hill climbing.
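The acceptance rule above can be sketched in a few lines of Python (a minimal illustration; the function and variable names are mine, not from the report):

```python
import math
import random

def accept_worse(delta_cost, temperature):
    """Metropolis criterion: a worsening move of size delta_cost
    (delta_cost > 0 means the candidate is worse) is accepted with
    probability exp(-delta_cost / temperature)."""
    if delta_cost <= 0:
        return True  # improving (or equal) moves are always accepted
    return math.exp(-delta_cost / temperature) > random.random()

# At high temperature almost any worse move is accepted;
# at low temperature almost none are.
print(math.exp(-1.0 / 10.0))   # ~0.905: worse move very likely accepted
print(math.exp(-1.0 / 0.1))    # ~4.5e-5: worse move almost never accepted
```

This captures the behaviour described above: as T falls, the acceptance probability for the same cost increase collapses towards zero, and the search degenerates into hill climbing.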
Algorithm & Flow Chart:
Function SIMULATED-ANNEALING (Problem, Schedule) returns a solution state:
    Inputs: Problem, a problem (objective)
            Schedule, a mapping from time to temperature (cooling process)
    Local variables: Present node
                     Next node (neighbour)
                     T, a "temperature" controlling the probability of downward steps
    Present = MAKE-NODE (INITIAL-STATE [Problem])
    For t = 1 to ∞ do
        1. T = Schedule [t]
        2. If T = 0 then return Present
        3. Next = a randomly selected successor of Present
        4. ΔE = VALUE [Next] - VALUE [Present]
           If ΔE > 0 then Present = Next
           Else Present = Next only with probability e^(ΔE / T), i.e. if e^(ΔE / T)
           is greater than U(0, 1); otherwise go to Step (2)
        5. If the stopping criterion is satisfied then stop; else decrease the
           temperature T and go to Step (3).
    End
Fig.1
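The pseudocode above can be turned into a compact runnable sketch (my own minimal version, minimizing a simple one-dimensional quadratic; the geometric cooling schedule, step size and seed are illustrative assumptions, not parameters from the report):

```python
import math
import random

def simulated_annealing(cost, x0, t0=10.0, alpha=0.95, iters=2000, step=0.5):
    """Generic SA for minimization: geometric cooling T <- alpha*T,
    Gaussian neighbourhood moves, Metropolis acceptance."""
    random.seed(0)                    # fixed seed, reproducible illustration
    present, best = x0, x0
    t = t0
    for _ in range(iters):
        nxt = present + random.gauss(0.0, step)   # random nearby solution
        delta = cost(nxt) - cost(present)         # change in "energy"
        if delta < 0 or math.exp(-delta / t) > random.random():
            present = nxt                         # accept the move
            if cost(present) < cost(best):
                best = present                    # track best-so-far
        t *= alpha                                # cool down
    return best

best = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=-5.0)
print(best)   # close to 3.0, the global minimum of the quadratic
```

Note that, because we minimize a cost rather than maximize a value, the sign convention is flipped relative to the pseudocode: improving moves here have ΔE < 0.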
Performance for Standard functions:
1. Easom Function:
z = -cos( x(1) ) * cos( x(2) ) * exp( -( (x(1)-pi).^2 + (x(2)-pi).^2) ) ;
-100 < x(1) & x(2)< 100
Iteration 100: Best Cost = -0.99995
Position Vector of particle: [3.147189828330370,3.141363964704188]
Best Cost: -0.999952930030512
Fig.2
Fig.3
2. Ackley Function:
z= ( -20*exp( -0.2*sqrt(0.5*( x(1).^2 + x(2).^2) )) -
exp(0.5*(cos(2*pi*x(1))+cos(2*pi*x(2)))) + exp(1) + 20 );
-5 < x(1) & x(2)< 5
Iteration 100: Best Cost = 0.013159
Position Vector of particle: [0.00393793979981002, -0.00210400838651485]
Best Cost: 0.013159033438857
Fig.4
Fig.5
3. Himmelblau Function:
z= ( x(1).^2 + x(2) - 11).^2 + ( x(1) + x(2).^2 - 7 ).^2;
-5 < x(1) & x(2)< 5
Iteration 100: Best Cost = 0.0082141
Fig.6
Fig.7
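For reference, the three benchmark functions above can be written as plain Python functions (a direct translation of the MATLAB expressions). Their known global minima (Easom: -1 at (π, π); Ackley: 0 at (0, 0); Himmelblau: 0 at four points, e.g. (3, 2)) provide a check on the best costs reported above:

```python
import math

def easom(x1, x2):
    # global minimum -1 at (pi, pi)
    return -math.cos(x1) * math.cos(x2) * math.exp(
        -((x1 - math.pi) ** 2 + (x2 - math.pi) ** 2))

def ackley(x1, x2):
    # global minimum 0 at (0, 0)
    return (-20 * math.exp(-0.2 * math.sqrt(0.5 * (x1 ** 2 + x2 ** 2)))
            - math.exp(0.5 * (math.cos(2 * math.pi * x1)
                              + math.cos(2 * math.pi * x2)))
            + math.e + 20)

def himmelblau(x1, x2):
    # four global minima with value 0, one of them at (3, 2)
    return (x1 ** 2 + x2 - 11) ** 2 + (x1 + x2 ** 2 - 7) ** 2

print(easom(math.pi, math.pi))   # -1.0
print(himmelblau(3.0, 2.0))      # 0.0
```

The SA results above (Easom ≈ -0.99995 near (3.147, 3.141), Ackley ≈ 0.013 near the origin) are consistent with these analytical optima.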
Optimal power flow (Objective to minimize loss) using SAA:
Optimal Power Flow (OPF) has the goal of determining the active and reactive power
generation in order to obtain the optimal operation of the power system. It requires
running a power flow algorithm several times with different power generation values
and choosing the best scenario for a certain objective. Typical OPF objectives include
minimizing generation costs or minimizing active power losses, and each objective
implies a different set of state variables.
The objective function in this report minimizes the transmission line losses by varying
the bus voltage magnitudes and the power generated at all generator buses except
the slack bus; these are the control variables. This implies that a distinct
mathematical formulation of the OPF problem is needed for each goal.
OPF has been studied since the 1960s, when electricity companies needed to reduce
their costs while maintaining system security and power quality. It is run offline well in
advance and is mainly used for power system planning. Several methods have been applied to
OPF, such as Linear Programming (LP); Non-Linear Programming (NLP); Mixed-Integer
Programming (MIP); Newton method; interior point method; and Artificial Intelligence (AI)
techniques. The most modern techniques to solve OPF come from the artificial intelligence
field, being mainly metaheuristics based on local search and population evolution. These
techniques have advantages over classical optimization techniques (such as LP, NLP
and MIP) in required computational resources, such as execution time and memory
allocation, so AI techniques can be applied to large combinatorial problems.
Examples include the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO).
Formulation of OPF:
Objective Function (losses):
F(x, u) = Σi (Pgi - Pdi)
          + p1 [ (Pgslack - Pgslack^max)^2 + (Pgslack - Pgslack^min)^2 ] / Sbase^2
          + p2 Σi [ (Vi - Vi^max)^2 + (Vi - Vi^min)^2 ]
          + p3 [ (Qgslack - Qgslack^max)^2 + (Qgslack - Qgslack^min)^2 ] / Sbase^2        (3)

subject to the power balance equations

Pgi - Pdi = Σ(j≠i) Pij,    i = 1, ..., N        (4)

Qgi - Qdi = Σ(j≠i) Qij,    i = 1, ..., N        (5)

and the operating limits

Pgi^min ≤ Pgi ≤ Pgi^max,    i ∈ NG        (6)

Qgi^min ≤ Qgi ≤ Qgi^max,    i ∈ NG        (7)

Vi^min ≤ Vi ≤ Vi^max,    i ∈ NB        (8)

δi^min ≤ δi ≤ δi^max,    i ∈ NB        (9)
Here p1, p2 and p3 are penalty functions that ensure the voltages at all buses and
the power at the generator buses remain within limits.
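The penalty idea can be sketched as a small helper (a common variant that is zero inside the limits and quadratic outside them; the weight and the example voltage limits below are illustrative assumptions, not values from the test systems):

```python
def limit_penalty(value, vmin, vmax, weight=1000.0):
    """Quadratic penalty: zero inside [vmin, vmax], grows with the
    square of the violation outside it."""
    if value > vmax:
        return weight * (value - vmax) ** 2
    if value < vmin:
        return weight * (value - vmin) ** 2
    return 0.0

print(limit_penalty(1.02, 0.95, 1.05))   # 0.0: voltage within limits
print(limit_penalty(1.10, 0.95, 1.05))   # ~2.5: penalises the 0.05 p.u. violation
```

Adding such terms to the loss objective makes any limit-violating solution expensive, so the SA search is steered back towards the feasible region without hard-rejecting candidate moves.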
Algorithm for OPF:
Initialize SA parameters (T0, M0, α)
Set initial solution randomly (s0)
Previous solution (s1) ← initial solution (s0)
While stopping criteria are not satisfied do
    For 1 until M0 temperature iterations do
        s2 ← generate neighbourhood solution (s1)
        Run a Newton-Raphson power flow
        If F(s2) < F(s1) (eq. 3)
            s1 ← s2
            If F(s2) < F(best) (eq. 3)
                best ← s2
            End
        Else
            If random probability < Boltzmann probability (eq. 2)
                s1 ← s2
            End
        End
    End for
    Reduce temperature by factor α
    Iteration ← Iteration + 1
    Evaluate best solution evolution
End while
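The loop above can be sketched in Python. The Newton-Raphson power flow step is replaced here by a stand-in loss function, since this is only an illustration of the SA skeleton; every name, limit and parameter below is an assumption for the sketch, not the report's actual MATLAB/MATPOWER code:

```python
import math
import random

def sa_opf(loss_fn, x0, lower, upper, t0=1.0, alpha=0.9, m0=20, t_min=1e-3):
    """SA skeleton for OPF: x is the control vector (generator P set-points
    and voltage magnitudes). loss_fn(x) stands in for 'run a power flow and
    return losses plus penalties' (eq. 3)."""
    random.seed(1)                            # reproducible illustration
    s1 = list(x0)
    best, best_cost = list(x0), loss_fn(x0)
    t = t0
    while t > t_min:                          # stopping criterion
        for _ in range(m0):                   # M0 iterations per temperature
            # neighbour: perturb one control variable, clamped to its limits
            i = random.randrange(len(s1))
            s2 = list(s1)
            s2[i] = min(upper[i], max(lower[i],
                        s2[i] + random.gauss(0, 0.1 * (upper[i] - lower[i]))))
            delta = loss_fn(s2) - loss_fn(s1)
            if delta < 0 or random.random() < math.exp(-delta / t):
                s1 = s2                       # accept move
                if loss_fn(s1) < best_cost:
                    best, best_cost = list(s1), loss_fn(s1)
        t *= alpha                            # reduce temperature by alpha
    return best, best_cost

# Stand-in "losses": a convex bowl with its minimum at the midpoint of
# the limits (two generator outputs in MW, one voltage in p.u.).
lower, upper = [0.0, 0.0, 0.95], [80.0, 50.0, 1.10]
target = [(a + b) / 2 for a, b in zip(lower, upper)]
loss = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target))
best, cost = sa_opf(loss, [10.0, 40.0, 1.0], lower, upper)
print(cost)   # small: SA drives the stand-in losses towards their minimum
```

In the actual implementation, `loss_fn` would call the Newton-Raphson power flow (e.g. via MATPOWER) and return the total I²R losses plus the penalty terms of eq. (3).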
Results: 1. IEEE 30 Bus System:
MATPOWER Version 6.0, 16-Dec-2016 -- AC Power Flow (Newton)
Newton's method power flow converged in 3 iterations.
Converged in 0.00 seconds
================================================================================
| System Summary |
================================================================================
How many? How much? P (MW) Q (MVAr)
--------------------- ------------------- ------------- -----------------
Buses 30 Total Gen Capacity 335.0 -95.0 to 405.9
Generators 6 On-line Capacity 335.0 -95.0 to 405.9
Committed Gens 6 Generation (actual) 191.0 99.8
Loads 20 Load 189.2 107.2
Fixed 20 Fixed 189.2 107.2
Dispatchable 0 Dispatchable -0.0 of -0.0 -0.0
Shunts 2 Shunt (inj) -0.0 0.2
Branches 41 Losses (I^2 * Z) 1.75 8.69
Transformers 0 Branch Charging (inj) - 15.9
Inter-ties 7 Total Inter-tie Flow 43.8 39.2
Areas 3
Minimum Maximum
------------------------- --------------------------------
Voltage Magnitude 0.979 p.u. @ bus 8 1.060 p.u. @ bus 13
Voltage Angle -1.47 deg @ bus 7 3.10 deg @ bus 13
P Losses (I^2*R) - 0.16 MW @ line 27-30
Q Losses (I^2*X) - 2.74 MVAr @ line 12-13
================================================================================
| Bus Data |
================================================================================
Bus Voltage Generation Load
# Mag(pu) Ang(deg) P (MW) Q (MVAr) P (MW) Q (MVAr)
----- ------- -------- -------- -------- -------- --------
1 1.000 0.000* 0.57 -4.23 - -
2 1.002 0.160 57.96 16.70 21.70 12.70
3 0.996 -0.640 - - 2.40 1.20
4 0.996 -0.721 - - 7.60 1.60
5 0.992 -0.899 - - - -
6 0.991 -0.957 - - - -
7 0.982 -1.468 - - 22.80 10.90
8 0.979 -1.382 - - 30.00 30.00
9 1.001 -0.364 - - - -
10 1.007 -0.058 - - 5.80 2.00
11 1.001 -0.364 - - - -
12 1.022 0.382 - - 11.20 7.50
13 1.060 3.098 36.66 29.15 - -
14 1.012 -0.194 - - 6.20 1.60
15 1.014 -0.017 - - 8.20 2.50
16 1.009 -0.101 - - 3.50 1.80
17 1.002 -0.285 - - 9.00 5.80
18 0.999 -0.769 - - 3.20 0.90
-------- --------
Total: 1.753 8.69
****************************************************************
Minimum loss = 1.75
Iteration 500: Best Cost = 1.7514
Optimized Values of Power Generated and Voltage Magnitudes at buses:
57.8669783832663 MW
47.0437161375111 MW
31.5441607694867 MW
17.4098016695630 MW
36.7817327215485 MW
1.00263461153200 PU
1.02564085717530 PU
1.04167046811973 PU
1.03029604471814 PU
1.05941127920561 PU
Fig.8
References:
1. X. Yao, “A new simulated annealing algorithm,” International Journal of Computer Mathematics,
vol. 56, pp. 161-168, 1995.
2. Z. Xinchao, “Simulated annealing algorithm with adaptive neighborhood,” Applied Soft Computing,
vol. 11, pp. 1827-1836, 2010.
3. L. Ingber, “Adaptive simulated annealing (ASA): Lessons learned,” Control and Cybernetics,
vol. 25, pp. 33-54, 1996.
4. O. Cordon, F. Moya, and C. Zarco, “A new evolutionary algorithm combining simulated annealing
and genetic programming for relevance feedback in fuzzy information retrieval systems,” Soft
Computing, vol. 6, pp. 308-319, 2002.
5. http://katrinaeg.com/simulated-annealing.html