Adaptive brain emotional decayed learning for online prediction of
geomagnetic activity indices
Ehsan Lotfi a,*, M.-R. Akbarzadeh-T. b

a Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
b Departments of Electrical Engineering and Computer Engineering, Center of Excellence on Soft Computing and Intelligent Information Processing, Ferdowsi University of Mashhad, Iran
Article info
Article history:
Received 18 December 2011
Received in revised form
7 February 2013
Accepted 28 February 2013
Available online 31 May 2013
Keywords:
Amygdala
Adaptive BEL
BELBIC
Long-term forgetting
Online learning
Solar winds
Abstract
In this paper we propose adaptive brain-inspired emotional decayed learning to predict the Kp, AE and Dst indices that characterize the chaotic activity of the earth's magnetosphere by their extreme lows and highs. In the mammalian brain, the limbic system processes emotional stimuli and consists of two main components: the Amygdala and the Orbitofrontal Cortex (OFC). Here, we propose a learning algorithm for the neural-basis computational model of the Amygdala–OFC in a supervised manner and consider a decay rate in the Amygdala learning rule. This added decay rate has a neurobiological basis and yields better learning and adaptive decision making, as illustrated here. In the experimental studies, various comparisons are made between the proposed method, named ADBEL, the Multilayer Perceptron (MLP), the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Locally Linear Neuro-Fuzzy (LLNF) model. The main features of the presented predictor are higher accuracy at all points, especially at critical points, lower computational complexity and adaptive training. Hence, the presented model can be utilized in adaptive online prediction problems.
© 2013 Elsevier B.V. All rights reserved.
1. Introduction
The solar wind and geomagnetic storms resulting from the solar
activity are amongst the most important physical phenomena that
can considerably disturb communication systems and damage
satellites. They also have significant effects on space missions.
Therefore, predicting the occurrence of solar wind disturbances and geomagnetic storms is very important for space missions, planning and satellite alarm systems. These events can be reasonably characterized by the following three geomagnetic activity indices: the Kp (planetarische Kennziffer) index, the AE (auroral electrojet) index and the Dst (storm time) index [71,7,65,72,53], where each index can be considered a chaotic time series. These indices are good monitors for the
warning and alert systems of satellites. For example, the high values
of Kp and AE and the large variation at low values of Dst often
correspond to geomagnetic storms or substorms [4,21,67,13].
Various models and learning algorithms have been developed
to predict these chaotic time series, such as the real time WINDMI
model which is based on six nonlinear differential equations [49],
neurofuzzy models such as Adaptive Neuro-Fuzzy Inference
Systems (ANFIS), Artificial Neural Networks (ANN [66,15,48]) as
well as Locally Linear Neuro-Fuzzy systems (LLNF [54]) that divide
the input space into small linear subspaces with fuzzy validity
functions. Among these methods, ANNs are inspired by the physiological workings of the brain: they resemble actual networks of neural cells. MLP is a feedforward ANN that is widely
used to predict Kp, AE and Dst indices [48,6]. The learning
algorithms of MLP and ANFIS impose high computational complexity that is not suited to online learning in fast-varying environments. This problem is also seen in many other learning algorithms such as the Locally Linear Model Tree (LoLiMoT [53,54]).
LoLiMoT and Recursive LoLiMoT (RLoLiMoT) are popular incremental learning algorithms for the LLNF model. In contrast to LoLiMoT,
RLoLiMoT can be used for online applications but still suffers from
high computational complexity [53] and has been used only in
problems with time increments that are sufficiently long.
Recently, the computational models of Brain Emotional Learning (BEL) have been successfully utilized for solving the prediction problem of geomagnetic indices [25,3]. The main feature of BEL
based predictors is low computational complexity. These methods
are based on reinforcement learning and, as discussed in Section
2.1, they show high accuracy in predicting peak points but do not
show acceptable accuracy at all points [3] especially at low values.
Specifically, they do not adequately predict time series such as Dst
index where the low values are most important.
* Corresponding author. Tel.: +98 935 570 0102. E-mail addresses: esilotf@gmail.com (E. Lotfi), Akbarzadeh@ieee.org (M.-R. Akbarzadeh-T.).
Neurocomputing 126 (2014) 188–196. http://dx.doi.org/10.1016/j.neucom.2013.02.040

Our understanding of emotion is minimal and the current computational models are oversimplified. Their only justification
is their great utility in solving difficult problems. Here, adaptive brain emotional supervised learning with a decayed rule, simulating the forgetting role of the Amygdala, is proposed to predict the Dst index besides the Kp and AE indices in real time. This adaptive/forgetting view of the Amygdala, in contrast to the more common long term memory perspective, also has a biological basis, as reported in several recent works [24,37]. Specifically, Kim et al. [37] examined the long-term forgetting effect of the Amygdala, and Hardt et al. [28] showed that a brain-wide decay mechanism can systematically remove some memories and thereby increase the life expectancy of others.
The proposed approach applies decayed learning in an adaptive, online manner in order to enhance the prediction results in the face of the non-stationary behavior of the time series. The proposed
approach is general and can be applied in various emotion based
application domains such as in emotion recognition [74], facial
expression recognition [52], affective computing [70,23], human–
computer interaction [12], autonomous robot and agent design
[69,16,75], improved modern artificial intelligence tools [34–36] as
well as understanding the brain's emotional process [47].
1.1. Motivations towards emotional modeling
What motivates employing emotional modeling in engineering
applications is the high speed of emotional processing resulting
from its effects on inhibitory synapses and existence of short paths
between Thalamus and Amygdala in the emotional brain
[39,40,27]. Although some present neural models indicate that sensory structures, especially hierarchical processing structures, play a key role in fast processing [5,20], there are also models which shed light on the effects of emotional learning on inhibitory synapses and the role of inhibitory synapses in fast processing. For
example, in his study, Scelfo [63] elaborates on the effects of
emotional learning on inhibitory synapses and Bazhenov et al. [9]
show that inhibitory synapses can play a pivotal role in fast
learning.
The quickness of emotional processing can also be seen from the perspective of psychology. Emotional processing creates emotional intelligence in the human brain and, according to Goleman [27], emotional intelligence can facilitate learning, especially in children; it also accounts for the ability to react quickly in emergencies. Goleman believes humans possess two minds, a rational mind and an emotional mind: the emotional mind is far quicker than the rational mind, and emotional stimuli such as fear can bring about quick reactions, usually when there is no chance for the rational mind to process the danger. The parts of the brain responsible for processing emotions can produce the required reaction extremely quickly; consequently the inhibitory connections in the cerebral cortex, which are affected by the emotional system, can improve learning speed.
Considering that the limbic system is responsible for processing emotional stimuli, it is likely that the most important characteristic of practical models based on this system, especially models including the Amygdala–Thalamus short path and the inhibitory connections, is fast learning and quick reaction. This can explain their ability to predict non-stationary time series. The main motivation behind the existing tendency towards models based on human emotions is the very fact that emotional stimuli can speed up processing in humans, and quick learning is expected to be the distinctive feature of artificial models of emotional learning. Here we propose a novel brain-inspired emotional model that, because of its fast learning, can be used in real-time applications.
The organization of the paper is as follows: Neuropsychological
motivation and works related to modeling emotional learning are
presented in Section 2. The proposed method is then presented in
Section 3. Experimental results on online prediction are evaluated
through several simulations in Section 4. Finally, conclusions are
made in Section 5.
2. Neuropsychological aspect of emotion and related works
Most human behavior is dictated by emotion. Emotions are
cognitive processes [64] that are studied under various disciplines
such as psychology, neuroscience and artificial intelligence. Psychological and neural studies of emotion have a long history. From
a psychological point of view, emotions can be derived through
reward and punishment in various real-life situations [56]. Studies
of the neural basis of emotion culminated in the limbic system (LS)
theory of emotion. As shown in Fig. 1, LS which is located in the
cerebral cortex consists mainly of the following components [41]:
Amygdala, Orbitofrontal Cortex (OFC), Thalamus, Sensory Cortex,
Hypothalamus and Hippocampus. The Amygdala, which is located in the sub-cortical area, acts as an emotional computer; attention and permanent memory are among its other cognitive functions [60]. The Amygdala has extensive interconnections with many other areas: it receives connections from the sensory cortical areas and reward signals during the learning process, and it also interacts with the OFC. The OFC receives connections from the sensory cortical area, and the Amygdala responds to the emotional stimulus. The OFC then evaluates the Amygdala's response and tries to prevent inappropriate answers based on the context provided by the Hippocampus [8].
For BEL modeling, researchers focus on the internal representation of the emotional brain system and formalize the brain states. The Amygdala–OFC system was first proposed by Morén and Balkenius
in 2000 [55,8,56]. The Amygdala–OFC model learns to react to new stimuli based on the history of input reward and punishment signals. In the model, the Amygdala learns to associate emotionally charged and neutral stimuli, while the OFC prevents inappropriate experiences from forming learned connections. The Amygdala–OFC model consists of two subsystems which attempt to respond correctly to emotional stimuli. Each subsystem consists of a number of nodes related to the dimension of each stimulus. First, the stimulus enters the Thalamus part of the model, which calculates the maximum input and submits it to the Amygdala as one of its inputs. The OFC does not receive any input from the Thalamus; instead, it receives the Amygdala's output in order to update the weights [55].
Fig. 1. The limbic system in the brain.
Although the structure of this model is very simple, the reward signal is not clearly defined, even though this signal is vital for updating the weights of the subsystems. There are various modified versions [44,57,3,1] of the Amygdala–OFC model. All of them are based on the four main components presented in Fig. 2 and include the information pathways shown in the figure. These models must
learn by using an external reward signal. Lucas et al. [44] explicitly
determined the reward signal and proposed the BEL base controller
(BELBIC) which has been successfully utilized in various control
applications [43,14,45,50,51,59,61,32,10,17,46,18,62]. Babaie et al.
[3] formulated the input reward for multi-agent optimization
problems and presented a BEL based predictor to forecast AE index
in alarm systems for satellites. In the Amygdala–OFC model and its modified versions, the weights of the Amygdala cannot decrease, i.e. learning is monotonic. So once an emotional reaction is learned, it is permanent and cannot be unlearned. The predictor presented by Babaie et al. [3] is based on monotonic reinforcement learning in the Amygdala and, as discussed in the following section, shows high accuracy in predicting peak points but not at all points [3], particularly when the signal level is low.
2.1. The drawbacks of the current models and the essential reforms
The nature of the relationship between the four main components (Amygdala, Thalamus, Sensory Cortex and OFC) is common among all the presented models, as shown in Fig. 2. What differs from one model to another is how the reward signal is formulated in the learning process. For example, the model presented by Morén [56] expresses the need for a reward signal but does not clarify how
values are assigned. In the modified models of Babaie [3] and Abdi [1], the reward signal (R) is defined as follows, and the formulation of the other equations follows accordingly:

R = ∑j wj rj,   (1)
where rj denotes the factors of the reinforcement agent and wj represents the related weights, which are selected per application. For information on how to select the weights in Eq. (1) in a specific application, see the studies of Dehkordi et al. [19] and Khalilian et al. [33]. Eq. (1) is particularly useful in multi-agent problems, but not in the case of supervised learning of time series. Since the weights in Eq. (1) are problem specific, they can be arranged to produce better results at peak points, as done by Babaie [3]. But this approach is model sensitive and leads to low adaptability of the model to changes in signal behavior. It also renders such models ineffective in learning different signals with opposite behaviors. For example, Babaie's model [3] is ineffective in learning signals such as the Dst index, whose significant points are in valleys.
The model presented here aims to address these weaknesses. Instead of the R signal in the learning phase, our model employs the target value of the input pattern. Using the target instead of Eq. (1) holds a major advantage: the model can be adjusted by pattern–target samples. But this reduces the precision of the process; in effect, the model becomes forgetful, giving precise answers only for recent and current patterns while forgetting more distant examples. To correct this problem, we use a decay rate in the learning rules which controls the effect of using targets. So the novelty of our method compared to the models of Morén [56], Lucas [44], Babaie [3], Parsapoor [57] and Abdi [1] is the use of the target value instead of Eq. (1) in training the model, together with a decay rate in the learning rules. Additionally, based on these adjustments, we can propose an adaptive version of brain emotional learning, discussed in the following section.
3. The proposed adaptive brain emotional decayed learning
In contrast to previous BEL based predictors, the proposed Adaptive Decayed BEL (ADBEL) focuses on the need for online adaptation. Additionally, ADBEL is based on supervised learning rules and on a decay mechanism for the Amygdala's monotonic learning. It is observed that by controlling this feature, the performance of the model can be extended; this control is performed using a decay rate. Moreover, due to their lower performance at low points, common BEL based predictors cannot adequately predict the Dst index in alarm systems, while the proposed method can predict the Dst index along with other indices such as Kp and AE. Fig. 3 shows the proposed supervised model, where solid lines present the data flow and dashed lines present the learning paths.
Consider the following time series:
Kpt−4, Kpt−3, Kpt−2, Kpt−1
ADBEL can predict the Kp value at time t. The model is divided into two parts, corresponding to the Amygdala and the OFC. The Amygdala receives the input pattern (…, Kpt−4, Kpt−3, Kpt−2, Kpt−1) from the Thalamus and from the sensory cortex, while the OFC receives the input pattern only from the sensory cortex unit. The Amygdala has two internal outputs: Ea, used for adjusting its own weights (see Eq. (8)), and E′a, used for adjusting the OFC weights (see Eqs. (9) and (10)). As shown in Fig. 3, the system's input is the vector Kp = (…, Kpt−4, Kpt−3, Kpt−2, Kpt−1). There is one node for each attribute of the input pattern in the network models of the Amygdala and the OFC. The output of each node is calculated by multiplying the learning weight vj by Kpt−j for the Amygdala, and wj by Kpt−j for the OFC. After learning and adjusting the weights, the predicted Kp value at time t, K̂pt, is calculated as follows:

K̂pt = Ea − Eo   (2)

where

Ea = E′a + vth · m   (3)

E′a = ∑j (vj · Kpt−j)   (4)

Eo = ∑j (wj · Kpt−j)   (5)

and

m = maxj(Kpt−j), j = 1…n   (6)

where n is the number of attributes in the input pattern, m is the output of the Thalamus and vth is the related weight. In Eq. (2), the subtraction implements the inhibitory task of the OFC. Overall, the model's inputs and outputs satisfy the following equation:
Fig. 2. The routes of sensory information for modeling, modified from Morén [56]
and Babaie et al. [3].
K̂pt = f(Kpt−n, …, Kpt−3, Kpt−2, Kpt−1)   (7)
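To make the data flow concrete, the forward pass of Eqs. (2)–(6) can be sketched in a few lines. This is an illustrative Python sketch (the authors' own implementation is in MATLAB); the function and variable names here are ours, not from the paper's code:

```python
import numpy as np

def adbel_forward(x, v, v_th, w):
    """One ADBEL forward pass for input pattern x = (Kp_{t-n}, ..., Kp_{t-1}).
    v: Amygdala weights, v_th: Thalamus-path weight, w: OFC weights."""
    x = np.asarray(x, dtype=float)
    m = x.max()                  # Thalamus output, Eq. (6)
    E_a_prime = v @ x            # Amygdala sensory term, Eq. (4)
    E_a = E_a_prime + v_th * m   # Amygdala output, Eq. (3)
    E_o = w @ x                  # OFC output, Eq. (5)
    return E_a - E_o             # Eq. (2): the OFC term inhibits the Amygdala

# With the OFC weights at zero, the prediction is the Amygdala part alone.
pred = adbel_forward([0.2, 0.4, 0.6, 0.8],
                     v=np.full(4, 0.5), v_th=0.5, w=np.zeros(4))
```

Note that the subtraction of E_o in the last line is exactly the inhibitory role of the OFC described after Eq. (6).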
In the learning phase, after observing the target value of Kp at time t (Kpt), the following supervised decay learning rules are used to adjust the model's weights:

vj = (1 − γ) · vj + α · max(Kpt − Ea, 0) · Kpt−j,  j = 1…n   (8)

wj = wj + β · R0 · Kpt−j,  j = 1…n   (9)

where α and β are learning rates, γ is the proposed decay rate and R0 is the internal reward calculated by the following formula:

R0 = max(E′a − Kpt, 0) − Eo   if Kpt ≠ 0
R0 = max(E′a − Eo, 0)          otherwise   (10)

where Kpt is the target value associated with the input pattern (Kpt−4, Kpt−3, Kpt−2, Kpt−1). The proposed adaptive time series prediction algorithm is as follows:
Adaptive supervised BEL based predictor:

Constants: α and β; optimized γ
Inputs: previous values of Kp: Kpt−n, …, Kpt−3, Kpt−2, Kpt−1
Output: predicted Kp at time t (K̂pt)
Adjustable weights: w1, w2, …, wn, v1, v2, …, vn, vth

Step 1: Prediction
− Use the following equations to predict Kp at t (K̂pt):
  m = maxj(Kpt−j), j = 1…n
  E′a = ∑j=1…n (vj · Kpt−j)
  Ea = E′a + vth · m
  Eo = ∑j=1…n (wj · Kpt−j)
  K̂pt = Ea − Eo

Step 2: Learning
− Wait for the observed target value of Kp at time t (Kpt)
  R0 = max(E′a − Kpt, 0) − Eo   if Kpt ≠ 0
  R0 = max(E′a − Eo, 0)          otherwise
− Update OFC input weight j, for j = 1…n:
  wj = wj + β · R0 · Kpt−j
− Update Amygdala weight j, for j = 1…n:
  vj = (1 − γ) · vj + α · max(Kpt − Ea, 0) · Kpt−j
  vth = (1 − γ) · vth + α · max(Kpt − Ea, 0) · m
− Set t = t + 1 and proceed to Step 1
In the algorithm, (Kpt−4, Kpt−3, Kpt−2, Kpt−1) is the training pattern
and Kpt is the related target extracted from the Kp time series. The
proposed algorithm in this form can be utilized in time series
forecasting problems. For AE prediction, the input pattern is (AEt−4,
AEt−3, AEt−2, AEt−1) and the target value is AEt. Also for Dst
prediction, the input pattern is (Dstt−4, Dstt−3, Dstt−2, Dstt−1) and
the target value is Dstt.
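The full predict-then-learn loop above can be sketched as follows. Again this is a hedged Python sketch rather than the authors' MATLAB code; the class and variable names are ours, hyperparameter defaults are taken from Section 4 (α = 0.8, β = 0.2, γ ≈ 0.1), and the driving series is a toy stand-in for a scaled index:

```python
import numpy as np

class ADBEL:
    """Sketch of the Adaptive Decayed BEL predictor: one Amygdala weight per
    input attribute (v), one Thalamus-path weight (v_th), one OFC weight per
    attribute (w)."""

    def __init__(self, n, alpha=0.8, beta=0.2, gamma=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.v = rng.random(n)      # Amygdala weights (random initialization)
        self.w = rng.random(n)      # OFC weights
        self.v_th = rng.random()    # Thalamus-path weight
        self.alpha, self.beta, self.gamma = alpha, beta, gamma

    def predict(self, x):
        """Step 1: forward pass for pattern x = (Kp_{t-n}, ..., Kp_{t-1})."""
        x = np.asarray(x, dtype=float)
        self.m = x.max()                                  # Eq. (6)
        self.E_a_prime = self.v @ x                       # Eq. (4)
        self.E_a = self.E_a_prime + self.v_th * self.m    # Eq. (3)
        self.E_o = self.w @ x                             # Eq. (5)
        return self.E_a - self.E_o                        # Eq. (2)

    def update(self, x, target):
        """Step 2: decayed supervised learning once the target is observed."""
        x = np.asarray(x, dtype=float)
        if target != 0:                                   # Eq. (10)
            R0 = max(self.E_a_prime - target, 0.0) - self.E_o
        else:
            R0 = max(self.E_a_prime - self.E_o, 0.0)
        self.w += self.beta * R0 * x                      # Eq. (9)
        a = self.alpha * max(target - self.E_a, 0.0)
        self.v = (1 - self.gamma) * self.v + a * x        # Eq. (8)
        self.v_th = (1 - self.gamma) * self.v_th + a * self.m

# Online use: predict one step ahead, then learn from the observed value.
model = ADBEL(n=4)
series = np.sin(np.linspace(0, 20, 300)) * 0.5 + 0.5   # toy scaled series
preds = []
for t in range(4, len(series)):
    pattern = series[t - 4:t]
    preds.append(model.predict(pattern))
    model.update(pattern, series[t])
```

The decay factor (1 − γ) in `update` is what allows the Amygdala weights to shrink, in contrast to the monotonic learning of the earlier BEL models discussed in Section 2.1.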
4. Experimental studies
The proposed ADBEL was implemented and tested in Matlab R2010b. The source code is accessible from http://www.bitools.ir/projects.html and is evaluated on predicting the Kp, AE and Dst indices, which are used to characterize the geomagnetic activity of the earth's magnetosphere. These time series have chaotic behavior [29–31,3] with low dimensional chaos [68,58]. A data set of 78,912 hourly samples from 2000 to 2008 has been used for online prediction. The data set, named OMNI2, is accessible from the National Space Science Data Center (NSSDC). We consider each sequence of 4 samples as a pattern and the 5th sample as its target. So 78,908 pattern–target pairs of the Kp index, 78,908 pairs of the AE index and 78,908 pairs of the Dst index are used in the evaluations. The maximum and minimum of each index are determined, and data scaled between 0 and 1 are used to adjust the weights.
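The scaling step described above is a standard min-max normalization; it can be sketched as follows (an illustrative snippet, with helper names of our choosing):

```python
import numpy as np

def minmax_scale(x):
    """Scale a 1-D series to [0, 1] using its own min and max."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo), lo, hi

def unscale(y, lo, hi):
    """Map scaled predictions back to the original index units."""
    return y * (hi - lo) + lo

# Dst is negative during storms, so its minimum maps to 0 and maximum to 1.
dst = np.array([-10., -250., 30., -80.])
scaled, lo, hi = minmax_scale(dst)
```

Predictions produced in the scaled space are mapped back with `unscale` before computing RMSE in physical units.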
In the experimental studies, the weights are initialized randomly, and the values of α and β are set at 0.8 and 0.2, respectively. Various values of the decay rate were tested in search of an optimum γ, namely γ = 0, 0.05, 0.1, 0.15, …, 1. For each setting, the system learns a small set of training pattern–target pairs of the Kp index; the training is repeated 10 times and the average error is recorded. The maximum error is observed at γ = 0, and the minimum error is achieved at γ = 0.1. Similarly, the values 0.1 and 0.01 (with step size 0.01) are
Fig. 3. The proposed learning lines in the limbic model are presented by dashed lines.
achieved using the AE and Dst indices, respectively. Across various applications, it was observed that the optimum decay rate generally lies between 0.01 and 0.2.
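The γ-selection protocol (a grid over γ, 10 repetitions each, pick the lowest average error) can be sketched as follows. This sketch substitutes a minimal decayed learner for the full model and a toy series for Kp, so the selected γ is illustrative only, not the paper's value:

```python
import numpy as np

def run_once(series, gamma, alpha=0.8, n=4, seed=0):
    """Average absolute one-step error of a minimal decayed linear learner
    (Amygdala-style rule only; a stand-in for the full ADBEL model)."""
    rng = np.random.default_rng(seed)
    v = rng.random(n)
    errs = []
    for t in range(n, len(series)):
        x = series[t - n:t]
        pred = v @ x
        errs.append(abs(series[t] - pred))
        # Decayed update, mirroring Eq. (8)
        v = (1 - gamma) * v + alpha * max(series[t] - pred, 0.0) * x
    return float(np.mean(errs))

series = np.sin(np.linspace(0, 12, 200)) * 0.5 + 0.5    # toy scaled series
grid = np.round(np.arange(0.0, 1.01, 0.05), 2)          # gamma = 0, 0.05, ..., 1
scores = {float(g): np.mean([run_once(series, g, seed=s) for s in range(10)])
          for g in grid}
best_gamma = min(scores, key=scores.get)
```

The same loop, run with the real model and index data, reproduces the protocol in the paragraph above: 21 candidate decay rates, each averaged over 10 random initializations.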
The decay rate in the proposed model has a clear neuropsychological interpretation. As noted by many neurobiologists, the Amygdala is required for long term memory [38,11,73,22,26]. Yet this type of memory also involves a forgetting process, as mentioned in several recent works [37,28]. Hence, the proposed decay rate actually simulates the forgetting role of the Amygdala. When γ = 1 and the Amygdala faces a new pattern, it forgets all information stored in the past and quickly learns the current pattern; when γ = 0, it tries to keep the past information as well as learn the new pattern. The findings show that the weakest generalization capacity is obtained when the forgetting role is disregarded, and that the best generalization capacity is in fact a trade-off between forgetting and permanent storage. The optimum value of γ suggests that the forgetting role of the Amygdala may improve the ability to learn chaotic behavior.
4.1. Comparative studies on online prediction

Figs. 4–9 present the online prediction results of the proposed method, and Tables 1 and 2 present comparative results between the proposed method and the common methods MLP and ANFIS. Fig. 4 shows the target and online predicted Kp values, and the related error, obtained from the proposed adaptive method during the first 500 h of the year 2000, starting from the initial point. As illustrated in Fig. 4, after 50 h the system shows stable results. Hence, the predicted curve in Fig. 4 is divided into two segments: the transient region, between hours 0 and 50, and the steady state region, where the prediction results are validated. In fact, the weights converge rapidly during the first 50 h and thereafter change only slightly, adaptively. Fig. 5 plots the online predicted values of Kp versus the target values from 2000 to 2008. As illustrated in Fig. 5, a correlation COR = 0.92952 is obtained for Kp prediction by the proposed method in steady state.
The AE online prediction results are shown in Figs. 6 and 7. Fig. 6 shows the target and predicted AE values and the related error between hours 0 and 500 of the year 2000. Fig. 7 illustrates the COR value of the AE results obtained using the proposed ADBEL in steady state. The system reaches steady state after 150 h: during the first 150 h, the system learns the AE behavior, and the weights converge rapidly for one step ahead prediction of the hourly AE index. The curve illustrated in Fig. 8 is the predicted Dst. The transient time in online Dst prediction is 25 h, meaning that the system learns the Dst activity during the first 25 epochs; after the first 25 h, the results are in steady state. Fig. 9 illustrates the predicted versus desired output of the Dst index. As illustrated in Fig. 9, the
Fig. 4. Online predicted Kp values (top) and related error (bottom) from start point
in year 2000 obtained using proposed adaptive method. Results are validated after
the initial 50 h.
Fig. 5. Actual versus desired output of the Kp between 2000 and 2008 obtained
from proposed ADBEL in steady state.
Fig. 6. Online predicted AE values (top) and related error (bottom) from the start point in year 2000 by the proposed ADBEL. The results are validated after the initial 150 h.
correlation COR = 0.95322 is obtained from the proposed ADBEL in steady state.
Table 1 shows the RMSE and COR comparisons between MLP, ANFIS and the proposed ADBEL in steady state for online prediction of the hourly Kp, AE and Dst indices. As illustrated in Table 1, the best agreement between predicted and target values, both for Kp and for AE, is obtained by the proposed method, which shows an improvement with respect to MLP and ANFIS in the Kp and AE prediction problems. The higher correlation and lower root mean square error obtained by the proposed method mean that it performs better than MLP and ANFIS in Kp prediction.
According to Table 1, the best overall agreement between the predicted Dst values and the target values is obtained by ANFIS. However, as illustrated in Table 2, the proposed method is more accurate and more sensitive than the ANFIS based predictor for low values of Dst, which cover the critical areas. The prediction results for Dst below the two thresholds −50 and −100 are presented in Table 2. These thresholds are evaluated here because of their important role in studies of the solar wind and geomagnetic storms; their importance is discussed by Alves et al. [2], to which the reader is referred for more information. According to Table 2, the proposed method makes more accurate predictions for low values of Dst: the recall ratio obtained from the proposed method is higher than that of the ANFIS based predictor. Tables 1 and 2 show the best result of each method over 10 runs.
Fig. 7. Actual versus desired output of the AE between 2000 and 2008 obtained
from proposed adaptive prediction method in steady state.
Fig. 8. Predicted Dst values (top) and related error (bottom) from start point in year
2000 by proposed ADBEL. The results are validated after the first 25 h.
Fig. 9. Actual versus desired output of the Dst between 2000 and 2008 obtained
from proposed ADBEL in steady state.
Table 1
Comparisons between MLP, ANFIS and the proposed ADBEL based on RMSE and correlation in online prediction of hourly Kp, AE and Dst indices between 2000 and 2008.

Model           | Kp RMSE | Kp COR  | AE RMSE  | AE COR  | Dst RMSE | Dst COR
MLP with EBP    | 2.7126  | 0.87935 | 292.9055 | 0.84027 | 23.3845  | 0.73020
ANFIS with EBP  | 0.5830  | 0.91719 | 125.3251 | 0.81410 | 7.6862   | 0.95748
Proposed ADBEL  | 0.5376  | 0.92952 | 125.5962 | 0.83178 | 10.5941  | 0.95322
Table 2
The prediction results of the hourly Dst index at low points (%).

Threshold |      Dst < −50       |      Dst < −100
Model     | ANFIS | Proposed     | ANFIS | Proposed
Recall    | 81.34 | 91.73        | 83.18 | 89.21
False     | 0.00  | 0.00         | 16.45 | 9.51
Missed    | 18.66 | 8.27         | 0.37  | 1.28
Author's personal copy
4.2. Comparative study by offline prediction
The online prediction results presented in Section 4.1 illustrate
the ability of the proposed ADBEL in steady state. In this subsection,
we show that these results are achieved by the proposed method
with lower computational complexity as compared with the other
methods. Before entering the comparative numerical studies, let us analyze the computational complexity. In the learning step (Step 2), the proposed predictor adjusts O(2n) weights for each pattern–target sample, where n is the number of input attributes (n = 4 in our examples). In contrast, the computational time is O(cn) for a single output MLP, where c is the number of hidden neurons (generally c = 10), and it is exponential for ANFIS and LLNF. Additionally, EBP is based on derivative computations, which impose high complexity, while the proposed method is derivative free. So the proposed method has lower computational complexity and higher efficiency than the other methods. This improved computing efficiency can be important for online prediction, especially when the time interval between observations is small. Because of the high computational complexity of the above methods, the mean daily observations of the indices have usually been used in the literature. In contrast to those models, our method can learn from hourly, minute-by-minute, or even second-by-second observations without growth in computation for large samples. This fast convergence is the key point of our method and makes it suitable for online applications.
To compare convergence in practice, we ran the methods in an offline manner to learn the 78,908 pattern–target pairs of the hourly indices, with reaching a certain correlation as the stopping criterion of the learning process. The results are reported in Table 3. According to Table 3, the numbers of learning epochs of MLP and ANFIS are much higher than that of the proposed method, while each of their epochs is also of significantly higher computational order, as discussed above. The results in Table 3 are based on 10 runs and are statistically significant according to Student's t-test with 95% confidence. The interested reader may find more detailed statistical analysis and offline prediction results in [42]; specifically, further statistical analysis, i.e. separate training/testing/validation stages, and a study of sensitivity to different threshold levels are discussed there.
Recently, Mirmomeni et al. [53] applied the LLNF to one-step-ahead prediction of the daily Kp and Dst indices. As they concluded, LLNF shows an improvement over the MLP and RBF. Here, the 24-h average of the one-day-ahead predictions is used for an appropriate comparison of the proposed approach with LLNF. Tables 4 and 5 show the comparisons with LLNF based on the normalized mean square error (NMSE). According to Table 4, the best performance in predicting daily Kp values is obtained by the proposed method. As illustrated in Table 5, in daily Dst prediction the proposed emotional algorithm yields a lower NMSE than the LoLiMoT. Although the RLoLiMoT yields a lower NMSE, according to Table 2 the proposed method does more than learn pattern–target samples; it reinforces a behavior in learning such indices as Dst. Hence, it can detect the critical area of the Dst index where the solar winds and geomagnetic storms may have occurred.
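For reference, the NMSE reported in Tables 4 and 5 can be computed as the mean squared error normalized by the variance of the observed series. This is one common convention (NMSE = 0 for a perfect predictor, NMSE = 1 for a constant mean predictor); the exact normalization used in [53] may differ.

```python
import numpy as np

def nmse(y_true, y_pred):
    """Normalized mean square error: MSE divided by the variance of the
    observed series. Under this convention, a predictor no better than the
    mean of the observations scores NMSE = 1."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)
```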
5. Conclusions
ADBEL, derived from a neurophysiological view of the brain, is a novel adaptive learning algorithm and an appropriate online predictor for geomagnetic activity indices, which characterize a complex dynamical system. In contrast to the previous BEL-based predictors, the proposed supervisory approach learns pattern–target samples and applies well to adaptive and online prediction problems. Furthermore, ADBEL is based on a decay mechanism for the Amygdala's monotonic learning rule. The proposed model is utilized here to predict the Kp and AE indices, whose high values have greater importance, and the Dst geomagnetic index, whose low values mark critical areas. According to the experimental studies, comparisons between the proposed method, the Multilayer Perceptron (MLP), the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Locally Linear Neuro-Fuzzy (LLNF) model support the following conclusions. Firstly, the performance of the proposed method is higher than that of MLP and ANFIS in one-hour-ahead predictions of the Kp and AE indices. Secondly, the proposed method is more accurate than the ANFIS-based predictor at low values of Dst. Thirdly, the proposed method shows better or comparable performance with respect to the recently proposed adaptive LLNF in predicting the daily Kp and Dst indices. Fourthly, in contrast to the high computational order of the competing approaches, ADBEL has lower computational order and faster training and is hence suitable for online prediction of geomagnetic indices.
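To make the decay mechanism concrete, the sketch below gives one plausible form of a decayed BEL update. It is our illustrative reading, not the paper's exact equations: `v` and `w` stand for Amygdala and OFC weights, `s` for the input pattern, and `gamma` for the decay rate that relaxes the otherwise monotonic Amygdala rule; all variable names and default rates are assumptions.

```python
import numpy as np

def adbel_update(v, w, s, target, alpha=0.2, beta=0.1, gamma=0.01):
    """One hedged sketch of a decayed brain emotional learning step.
    v: Amygdala weights, w: OFC weights, s: input pattern,
    alpha/beta: learning rates, gamma: Amygdala decay rate."""
    e_a = v @ s                      # Amygdala (excitatory) response
    e_o = w @ s                      # OFC (inhibitory) response
    e = e_a - e_o                    # model output
    # The classic Amygdala rule only grows (max(0, .) is never negative);
    # the -gamma * v term lets weights shrink again, i.e. long-term forgetting.
    v = v + alpha * (s * max(0.0, target - e_a) - gamma * v)
    # OFC learns to correct the over/under-shoot of the overall output.
    w = w + beta * s * (e - target)
    return v, w, e
```

Without the `gamma` term this reduces to the standard monotonic Amygdala rule, whose weights can only increase; the decay is what allows the adaptive behavior discussed above.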
For future improvements, we believe this adaptation can be further improved by adjusting the learning rates α and β during the learning and predicting processes. We aim to address the optimization/adaptation of α and β, along with the decay rate, in adaptive prediction problems. Finally, we hope the proposed model presents an important perspective for future neuropsychological research on the role of Amygdala long-term forgetting in its learning and generalization ability.
Acknowledgments
The authors thank the reviewers for their excellent feedback on the paper and thank the "National Space Science Data Center" for providing the data sets.
Table 3
The number of offline learning epochs (mean ± standard deviation over 10 runs).

Model            Kp            AE            Dst
MLP with EBP     187.3 ± 7.5   49.4 ± 4.8    30.9 ± 2.2
ANFIS with EBP   50.1 ± 0.41   50.1 ± 0.41   59.8 ± 0.74
Proposed ADBEL   2.7 ± 0.35    2.5 ± 0.51    4.6 ± 0.38
Table 4
The NMSE comparison between LLNF and the proposed ADBEL in prediction of the daily Kp index between 2000 and October 2008.

Model            Learning             NMSE
LLNF             LoLiMoT              0.5918 (a)
Adaptive LLNF    RLoLiMoT             0.0888 (a)
Proposed ADBEL   Emotional decaying   0.0130

(a) From Mirmomeni et al. [53].
Table 5
The NMSE comparison between LLNF and the proposed ADBEL in prediction of the daily Dst index between 2000 and 2006.

Method           Learning             NMSE
LLNF             LoLiMoT              0.5348 (a)
Adaptive LLNF    RLoLiMoT             0.0968 (a)
Proposed ADBEL   Emotional decaying   0.1123

(a) From Mirmomeni et al. [53].
E. Lotfi, M.-R. Akbarzadeh-T. / Neurocomputing 126 (2014) 188–196
References
[1] J. Abdi, B. Moshiri, B. Abdulhai, A.K. Sedigh, Forecasting of short-term traffic
flow based on improved neuro-fuzzy models via emotional temporal differ-
ence learning algorithm, Eng. Appl. Artif. Intell. (2011), http://dx.doi.org/
10.1016/j.engappai.2011.09.011.
[2] M.V. Alves, E. Echer, W.D. Gonzalez, Geoeffectiveness of solar wind inter-
planetary magnetic structures, J. Atmos. Sol.–Terr. Phys. 73 (2011) 1380–1384,
http://dx.doi.org/10.1016/j.jastp.2010.07.024.
[3] T. Babaie, R. Karimizandi, C. Lucas, Learning based brain emotional intelligence as a new aspect for development of an alarm system, Soft Comput. 12 (2008) 857–873, http://dx.doi.org/10.1007/s00500-007-0258-8.
[4] R. Bala, P.H. Reiff, J.E. Landivar, Real‐time prediction of magnetospheric activity
using the Boyle Index, Space Weather 7 (2009) S04003, http://dx.doi.org/
10.1029/2008SW000407.
[5] E. Balaguer-Ballester, N.R. Clark, M. Coath, K. Krumbholz, S.L. Denham, Under-
standing pitch perception as a hierarchical process with top-down modula-
tion, PLoS Comput. Biol. 5 (3) (2009) e1000301.
[6] R. Balasubramanian, Forecasting geomagnetic activity indices using the Boyle index through artificial neural networks, Department of Electrical & Computer Engineering, Rice University, Houston, TX, 2010 (Ph.D. Thesis).
[7] G. Balasis, I.A. Daglis, C. Papadimitriou, M. Kalimeri, A. Anastasiadis, K. Eftaxias,
Investigating dynamical complexity in the magnetosphere using various
entropy measures, J. Geophys. Res. 114 (2009) A00D06, http://dx.doi.org/
10.1029/2008JA014035.
[8] C. Balkenius, J. Morén, Emotional learning: a computational model of amyg-
dala, Cybern. Syst. 32 (6) (2001) 611–636.
[9] M. Bazhenov, M. Stopfer, T.J. Sejnowski, G. Laurent, Fast odor learning
improves reliability of odor responses in the locust antennal lobe, Neuron
46 (3) (2005) 483–492, http://dx.doi.org/10.1016/j.neuron.2005.03.022.
[10] Z. Beheshti, S.Z.M. Hashim, A review of emotional learning and it's utilization in control engineering, Int. J. Adv. Soft Comput. Appl. 2 (2010) 191–208.
[11] M. Bianchin, T. Mello e Souza, J.H. Medina, I. Izquierdo, The amygdala is
involved in the modulation of long-term memory, but not in working or short-
term memory, Neurobiol. Learn. Mem. 71 (2) (1999) 127–131.
[12] G. Caridakis, K. Karpouzis, S. Kollias, User and context adaptive neural networks
for emotion recognition, Neurocomputing 71 (13) (2008) 2553–2562.
[13] Y. Cerrato, E. Saiz, C. Cid, W.D. Gonzalez, J. Palacios, Solar and interplanetary
triggers of the largest Dst variations of the solar cycle 23, J. Atmos. Sol.–Terr.
Phys. (2011), http://dx.doi.org/10.1016/j.jastp.2011.09.001.
[14] M. Chandra, Analytical study of a control algorithm based on emotional
processing (M.S. Dissertation), Indian Institute of Technology Kanpur, 2005.
[15] A.J. Conway, K.P. Macpherson, J.C. Brown, Delayed time series predictions with
neural networks, Neurocomputing 18 (1) (1998) 81–89.
[16] E. Daglarli, H. Temeltas, M. Yesiloglu, Behavioral task processing for
cognitive robots using artificial emotions, Neurocomputing 72 (13) (2009)
2835–2844.
[17] E. Daryabeigi, G.R.A. Markadeh, C. Lucas, Emotional controller (BELBIC) for electric drives—a review, in: IECON 2010, Glendale, AZ, 7–10 November 2010, pp. 2901–2907, http://dx.doi.org/10.1109/IECON.2010.5674934.
[18] B.M. Dehkordi, A. Parsapoor, M. Moallem, C. Lucas, Sensorless speed control of
switched reluctance motor using brain emotional learning based intelligent
controller, Energy Convers. Manage. 52 (1) (2011) 85–96, http://dx.doi.org/
10.1016/j.enconman.2010.06.046.
[19] B.M. Dehkordi, A. Kiyoumarsi, P. Hamedani, C. Lucas, A comparative study of
various intelligent based controllers for speed control of IPMSM drives in the
field-weakening region, Expert Syst. Appl. 38 (10) (2011) 12643–12653, http:
//dx.doi.org/10.1016/j.eswa.2011.04.052.
[20] S. Denham, Auditory scene analysis: a competition between auditory proto-objects? J. Acoust. Soc. Am. 131 (4) (2012) 3267.
[21] B.A. Emery, et al., Solar wind structure sources and periodicities of auroral
electron power over three solar cycles, J. Atmos. Sol.–Terr. Phys. 71 (2009)
1157–1175, http://dx.doi.org/10.1016/j.jastp.2008.08.005.
[22] J.P. Fadok, M. Darvas, T.M. Dickerson, R.D. Palmiter, Long-term memory for
pavlovian fear conditioning requires dopamine in the nucleus accumbens and
basolateral amygdala, PloS One 5 (9) (2010) e12751.
[23] N. Fragopanagos, J.G. Taylor, Modelling the interaction of attention and
emotion, Neurocomputing 69 (16) (2006) 1977–1983.
[24] R. Gallassi, L. Sambati, R. Poda, M.S. Maserati, F. Oppi, M. Giulioni, P. Tinuper,
Accelerated long-term forgetting in temporal lobe epilepsy: evidence of
improvement after left temporal pole lobectomy, Epilepsy Behav. 22 (4)
(2011) 793–795.
[25] A. Gholipour, C. Lucas, D. Shahmirzadi, Predicting geomagnetic activity index
by brain emotional learning, WSEAS Trans. Syst. 3m (2004) 296–299.
[26] E.M. Griggs, E.J. Young, G. Rumbaugh, C.A. Miller, MicroRNA-182 regulates
amygdala-dependent memory formation, J. Neurosci. 33 (4) (2013) 1734–1740.
[27] D. Goleman, Emotional Intelligence; Why it can Matter More than IQ, Bantam,
New York, Bantam Books, 2006.
[28] O. Hardt, K. Nader, L. Nadel, Decay happens: the role of active forgetting in
memory, Trends Cogn. Sci. (2013).
[29] W. Horton, Y.H. Ichikawa, Chaos and Structures in Nonlinear Plasmas, Allied
Publishers, World Scientific, Singapore, 1996.
[30] W. Horton, Chaos and structures in the magnetosphere, Phys. Rep. 283 (1)
(1997) 265–302.
[31] W. Horton, J.P. Smith, R. Weigel, C. Crabtree, I. Doxas, B. Goode, J. Cary,
The solar-wind driven magnetosphere–ionosphere as a complex dynamical
system, Phys. Plasmas 6 (1999) 4178.
[32] S. Jafarzadeh, Designing PID and BELBIC controllers in path tracking problem,
Int. J. Comput. Commun. Control III (2008), ISSN 1841-9836, E-ISSN 1841-9844
(Suppl. issue: Proceedings of ICCCC 2008, pp. 343–348).
[33] M. Khalilian, A. Abedi, A. Deris-Z, Position control of hybrid stepper motor
using brain emotional controller, Energy Proc. 14 (2012) 1998–2004, http://dx.
doi.org/10.1016/j.egypro.2011.12.1200.
[34] A. Khashman, A modified back propagation learning algorithm with added
emotional coefficients, IEEE Trans. Neural Netw. 19 (11) (2008) 1896–1909.
[35] A. Khashman, Application of an emotional neural network to facial recogni-
tion, Neural Comput. Appl. 18 (4) (2009) 309–320.
[36] A. Khashman, Modeling cognitive and emotional processes: a novel neural
network architecture, Neural Netw. 23 (2010) 1155–1163, http://dx.doi.org/
10.1016/j.neunet.2010.07.004.
[37] J.H. Kim, S. Li, A.S. Hamlin, G.P. McNally, R. Richardson, Phosphorylation of
mitogen-activated protein kinase in the medial prefrontal cortex and the
amygdala following memory retrieval or forgetting in developing rats,
Neurobiol. Learn. Mem. 97 (1) (2011) 59–68.
[38] R. Lamprecht, S. Hazvi, Y. Dudai, cAMP response element-binding protein in
the amygdala is required for long—but not short-term conditioned taste
aversion memory, J. Neurosci. 17 (21) (1997) 8443–8450.
[39] Joseph E. LeDoux, Emotion and the limbic system concept, Concepts Neurosci.
2 (1991) 169–199.
[40] J. LeDoux, The Emotional Brain, Simon and Schuster, New York, 1996.
[41] J.E. LeDoux, Emotion circuits in the brain, Annu. Rev. Neurosci. 23 (1) (2000)
155–184.
[42] E. Lotfi, M.R. Akbarzadeh-T, Supervised brain emotional learning, in: IEEE
International Joint Conference on Neural Networks (IJCNN), 2012, pp. 1–6,
http://dx.doi.org/10.1109/IJCNN.2012.6252391.
[43] C. Lucas, A. Abbaspour, A. Gholipour, B.N. Araabi, M. Fatourechi, Enhancing the
performance of neurofuzzy predictors by emotional learning algorithm, Int. J.
Inf. 27 (2) (2003) 137–145.
[44] C. Lucas, D. Shahmirzadi, N. Sheikholeslami, Introducing BELBIC: brain emo-
tional learning based intelligent controller, Int. J. Intell. Autom. Soft Comput.
10 (2004) 11–21.
[45] C. Lucas, R.M. Milasi, B.N. Araabi, Intelligent modeling and control of washing
machine using Locally Linear Neuro-Fuzzy (LLNF), Asian J. Control 8 (2006)
393–400, http://dx.doi.org/10.1111/j.1934-6093.2006.tb00290.x.
[46] C. Lucas, BELBIC and its industrial applications: towards embedded neuroemo-
tional control codesign, integrated systems, Des. Technol. 3 (2010) 203–214,
http://dx.doi.org/10.1007/978-3-642-17384-4_17.
[47] Stacy Marsella, Jonathan Gratch, Paolo Petta, Computational models of
emotion, in: K.R. Scherer, T. Bänziger, E. Roesch (Eds.), A Blueprint for Affective
Computing, 2010, pp. 21–45.
[48] M. Mattinen, Modeling and forecasting of local geomagnetic activity, AALTO
University, School of Science and Technology, Helsinki, 2010. (Master′s Thesis).
[49] M.L. Mays, W. Horton, Real-time predictions of geomagnetic storms and
substorms: use of the Solar Wind Magnetosphere–Ionosphere System model,
Space Weather 7 (2009) S07001, http://dx.doi.org/10.1029/2008SW000459.
[50] A.R. Mehrabian, C. Lucas, Emotional learning based intelligent robust adaptive
controller for stable uncertain nonlinear systems, Int. J. Eng. Math. Sci. 2 (4)
(2005) 246–252.
[51] A.R. Mehrabian, C. Lucas, Jafar Roshanian, Aerospace launch vehicle control: an
intelligent adaptive approach, Aerosp. Sci. Technol. 10 (2006) 149–155, http:
//dx.doi.org/10.1016/j.ast.2005.11.002.
[52] M. Mermillod, P. Bonin, L. Mondillon, D. Alleysson, N. Vermeulen, Coarse scales
are sufficient for efficient categorization of emotional facial expressions:
evidence from neural computation, Neurocomputing 73 (13) (2010) 2522–2531.
[53] M. Mirmomeni, C. Lucas, B. Moshiri, B.N. Araabi, Introducing adaptive neurofuzzy modeling with online learning method for prediction of time-varying solar and geomagnetic activity indices, Expert Syst. Appl. 37 (12) (2010) 8267–8277, http://dx.doi.org/10.1016/j.eswa.2010.05.059.
[54] M. Mirmomeni, E. Kamaliha, S. Parsapoor, C. Lucas, Variation of embedding
dimension as one of the chaotic characteristics of solar and geomagnetic
activity indices, Natl. Acad. Sci. Repub. Arm. (2010) 338–349.
[55] J. Morén, C. Balkenius, A computational model of emotional learning in the amygdala, in: J.A. Meyer, A. Berthoz, D. Floreano, H.L. Roitblat, S.W. Wilson (Eds.), From Animals to Animats, vol. 6: Proceedings of the 6th International Conference on the Simulation of Adaptive Behaviour, MIT Press, Cambridge, MA, USA, 2000, pp. 115–124.
[56] J. Morén, Emotion and learning—a computational model of the Amygdala,
Department of Cognitive Science, Lund University, Lund, Sweden, 2002
(Ph.D. Thesis).
[57] M. Parsapoor, C. Lucas, S. Setayeshi, Reinforcement_recurrent fuzzy rule based
system based on brain emotional learning structure to predict the complexity
dynamic system, in: Proceedings of the 3rd International Conference on Digital
Information Management, London, November 13–16, 2008, pp. 25–32, doi:
10.1109/ICDIM.2008.4746712.
[58] G.P. Pavlos, A.C. Iliopoulos, V.G. Tsoutsouras, D.V. Sarafopoulos, D.S. Sfiris, L.
P. Karakatsanis, E.G. Pavlos, First and second order non-equilibrium phase
transition and evidence for non-extensive Tsallis statistics in Earth′s magneto-
sphere, Physica A Stat. Mech. Appl. 390 (15) (2011) 2819–2839.
[59] H. Rouhani, M. Jalili, B.N. Araabi, W. Eppler, C. Lucas, Brain emotional learning
based intelligent controller applied to neurofuzzy model of micro-heat exchanger, Expert Syst. Appl. 32 (2007) 911–918, http://dx.doi.org/10.1016/j.eswa.2006.01.047.
[60] E.T. Rolls, Neurophysiology and functions of the primate amygdala, in: The
Amygdala: Neurobiological Aspects of Emotion, Memory and Mental Dysfunc-
tion, 1992.
[61] M. Samadi, A. Afzali-Kusha, C. Lucas. Power management by brain emotional
learning algorithm, in: Proceedings of the 7th International Conference on
ASIC, 2007, ASICON'07, IEEE, Guilin, China, 2007, pp. 78–81.
[62] A. Sadeghieh, H. Sazgar, K. Goodarzi, C. Lucas, Identification and real-time
position control of a servo-hydraulic rotary actuator by means of a neurobio-
logically motivated algorithm, ISA Trans. (2011), http://dx.doi.org/10.1016/j.
isatra.2011.09.006.
[63] B. Scelfo, B. Sacchetti, P. Strata, Learning-related long-term potentiation of
inhibitory synapses in the cerebellar cortex, Proc. Natl. Acad. Sci. 105 (2)
(2008) 769–774.
[64] N.A. Stillings, S.E. Weisler, C.H. Chase, M.H. Feinstein, J.L. Garfield, E. L.
Rissland, Cognitive Science: An Introduction, MIT Press, Cambridge, Massa-
chusetts, London, England, 1995.
[65] E. Spencer, A. Rao, W. Horton, M.L. Mays, Evaluation of solar wind–magneto-
sphere coupling functions during geomagnetic storms with the WINDMI
model, J. Geophys. Res. 114 (2009) A02206, http://dx.doi.org/10.1029/
2008JA013530.
[66] J. Takalo, J. Timonen, Neural network prediction of the AE index from the PC
index, Phys. Chem. Earth Part C: Sol.–Terr. Planet. Sci. 24 (1) (1999) 89–92.
[67] O. Troshichev, D. Sormakov, A. Janzhura, Relation of PC index to the
geomagnetic storm Dst variation, J. Atmos. Sol.–Terr. Phys. (2010), http://dx.
doi.org/10.1016/j.jastp.2010.12.015.
[68] D.V. Vassiliadis, A.S. Sharma, T.E. Eastman, K. Papadopoulos, Low‐dimensional
chaos in magnetospheric activity from AE time series, Geophys. Res. Lett.
17 (11) (1990) 1841–1844.
[69] R. Ventura, C. Pinto-Ferreira, Responding efficiently to relevant stimuli using
an emotion-based agent architecture, Neurocomputing 72 (13) (2009)
2923–2930.
[70] H. Wang, Kongqiao Wang, Affective interaction based on person-independent
facial expression space, Neurocomputing 71 (10) (2008) 1889–1901.
[71] H.L. Wei, D.Q. Zhu, S.A. Billings, M.A. Balikhin, Forecasting the geomagnetic
activity of the Dst index using multiscale radial basis function networks,
Adv. Space Res. 40 (12) (2007) 1863–1870, http://dx.doi.org/10.1016/j.
asr.2007.02.080.
[72] M. Wiltberger, R.S. Weigel, W. Lotko, J.A. Fedder, Modeling seasonal variations
of auroral particle precipitation in a global-scale magnetosphere–ionosphere
simulation, J. Geophys. Res. 114 (2009) A01204, http://dx.doi.org/10.1029/
2008JA013108.
[73] S.H. Yeh, C.H. Lin, P.W. Gean, Acetylation of nuclear factor-κB in rat amygdala
improves long-term but not short-term retention of fear memory, Mol.
Pharmacol. 65 (5) (2004) 1286–1292.
[74] Q. Zhang, M. Lee, A hierarchical positive and negative emotion understanding
system based on integrated analysis of visual and brain signals, Neurocomputing
73 (16) (2010) 3264–3272.
[75] Q. Zhang, S. Jeong, M. Lee, Autonomous emotion development using incre-
mental modified adaptive neuro-fuzzy inference system, Neurocomputing
86 (1) (2012) 33–44.
Ehsan Lotfi (Student Member, IEEE) received the B.Sc. degree in Computer Engineering (2006) from Ferdowsi University of Mashhad, Iran, and the M.Sc. degree in Artificial Intelligence (2009) from Azad University, Mashhad Branch, Iran. He is currently a doctoral student of Prof. Akbarzadeh in Artificial Intelligence at the Science and Research Campus, Azad University, Tehran, Iran. He is a member of the Young Researchers Club at Azad University, Mashhad, Iran. His research interests include cognitive sciences, computational intelligence, soft computing and their applications.
Mohammad-R. Akbarzadeh-T. (Senior Member, IEEE)
received his Ph.D. on Evolutionary Optimization and Fuzzy
Control of Complex Systems from the Department of
Electrical and Computer Engineering at the University
of New Mexico in 1998.
He currently holds a dual appointment as professor in
the Departments of Electrical Engineering and Computer
Engineering at Ferdowsi University of Mashhad. In
2006–2007, he completed a one-year visiting scholar
position at Berkeley Initiative on Soft Computing (BISC),
UC Berkeley. From 1996 to 2002, he was affiliated with
the NASA Center for Autonomous Control Engineering at
University of New Mexico (UNM). In 2011, he chaired the
first National Workshop on Soft Computing and Intelligent Systems in Mashhad. In
2007, he served as the technical chair for the First Joint Congress on Fuzzy &
Intelligent Systems that was held in Mashhad, Iran. Also, in 2003, he chaired the Fifth
Conference on Intelligent Systems as well as co-chaired two mini-symposiums on
“Satisficing Multi-agent and Cyber-learning Systems” in Spain and “Intelligent and
Biomedical Systems” in Iran, during 2004 and 2005 respectively.
Dr. Akbarzadeh is the founding president of the Intelligent Systems Scientific
Society of Iran, the founding councilor representing the Iranian Coalition on Soft
Computing in IFSA, and a council member of the Iranian Fuzzy Systems Society. He is
also a life member of Eta Kappa Nu (The Electrical Engineering Honor Society), Kappa
Mu Epsilon (The Mathematics Honor Society), and the Golden Key National Honor
Society. From 2000 to 2008, he served as the faculty advisor for the IEEE student
branch at Ferdowsi University of Mashhad. He has also been on board of several IEEE
conference/congress technical committees such as the IEEE-SMC, IEEE-WCCI, Genetic
and Evolutionary Computation Conference (GECCO), and Automatic Control Con-
ference (ACC). He has received several awards including: the IDB Excellent Leader-
ship Award in 2010, The IDB Excellent Performance Award in 2009, the Outstanding
Faculty Award in 2008 and 2002, the IDB Merit Scholarship for High Technology in
2006, the Outstanding Faculty Award in Support of Student Scientific Activities in
2004, Outstanding Graduate Student Award in 1998, and Service Award from the
Mathematics Honor Society in 1989. His research interests are in the areas of
evolutionary algorithms, fuzzy logic and control, soft computing, multi-agent
systems, complex systems, robotics, and biomedical engineering systems. He has
published over 250 peer-reviewed articles in these and related research fields.

Mind reading computers
Mind reading computersMind reading computers
Mind reading computers
 
19 3 sep17 21may 6657 t269 revised (edit ndit)
19 3 sep17 21may 6657 t269 revised (edit ndit)19 3 sep17 21may 6657 t269 revised (edit ndit)
19 3 sep17 21may 6657 t269 revised (edit ndit)
 
Model of Differential Equation for Genetic Algorithm with Neural Network (GAN...
Model of Differential Equation for Genetic Algorithm with Neural Network (GAN...Model of Differential Equation for Genetic Algorithm with Neural Network (GAN...
Model of Differential Equation for Genetic Algorithm with Neural Network (GAN...
 
BCI Paper
BCI PaperBCI Paper
BCI Paper
 
ANALYSIS ON MACHINE CELL RECOGNITION AND DETACHING FROM NEURAL SYSTEMS
ANALYSIS ON MACHINE CELL RECOGNITION AND DETACHING FROM NEURAL SYSTEMSANALYSIS ON MACHINE CELL RECOGNITION AND DETACHING FROM NEURAL SYSTEMS
ANALYSIS ON MACHINE CELL RECOGNITION AND DETACHING FROM NEURAL SYSTEMS
 
Dissertation character recognition - Report
Dissertation character recognition - ReportDissertation character recognition - Report
Dissertation character recognition - Report
 
Summary Of Thesis
Summary Of ThesisSummary Of Thesis
Summary Of Thesis
 
A Time Series ANN Approach for Weather Forecasting
A Time Series ANN Approach for Weather ForecastingA Time Series ANN Approach for Weather Forecasting
A Time Series ANN Approach for Weather Forecasting
 
NEURAL MODEL-APPLYING NETWORK (NEUMAN): A NEW BASIS FOR COMPUTATIONAL COGNITION
NEURAL MODEL-APPLYING NETWORK (NEUMAN): A NEW BASIS FOR COMPUTATIONAL COGNITIONNEURAL MODEL-APPLYING NETWORK (NEUMAN): A NEW BASIS FOR COMPUTATIONAL COGNITION
NEURAL MODEL-APPLYING NETWORK (NEUMAN): A NEW BASIS FOR COMPUTATIONAL COGNITION
 
An Evaluation Of Motor Models Of Handwriting
An Evaluation Of Motor Models Of HandwritingAn Evaluation Of Motor Models Of Handwriting
An Evaluation Of Motor Models Of Handwriting
 
Bx36449453
Bx36449453Bx36449453
Bx36449453
 
8421ijbes01
8421ijbes018421ijbes01
8421ijbes01
 
Study on Different Human Emotions Using Back Propagation Method
Study on Different Human Emotions Using Back Propagation MethodStudy on Different Human Emotions Using Back Propagation Method
Study on Different Human Emotions Using Back Propagation Method
 
Neural Model-Applying Network (Neuman): A New Basis for Computational Cognition
Neural Model-Applying Network (Neuman): A New Basis for Computational CognitionNeural Model-Applying Network (Neuman): A New Basis for Computational Cognition
Neural Model-Applying Network (Neuman): A New Basis for Computational Cognition
 
Neural Network
Neural NetworkNeural Network
Neural Network
 
A Parallel Framework For Multilayer Perceptron For Human Face Recognition
A Parallel Framework For Multilayer Perceptron For Human Face RecognitionA Parallel Framework For Multilayer Perceptron For Human Face Recognition
A Parallel Framework For Multilayer Perceptron For Human Face Recognition
 
Minimizing Musculoskeletal Disorders in Lathe Machine Workers
Minimizing Musculoskeletal Disorders in Lathe Machine WorkersMinimizing Musculoskeletal Disorders in Lathe Machine Workers
Minimizing Musculoskeletal Disorders in Lathe Machine Workers
 
Artificial Neural Network Abstract
Artificial Neural Network AbstractArtificial Neural Network Abstract
Artificial Neural Network Abstract
 
Pattern Recognition using Artificial Neural Network
Pattern Recognition using Artificial Neural NetworkPattern Recognition using Artificial Neural Network
Pattern Recognition using Artificial Neural Network
 

learning and adaptive decision making as illustrated here. In the experimental studies, various comparisons are made between the proposed method, named ADBEL, and Multilayer Perceptron (MLP), Adaptive Neuro-Fuzzy Inference System (ANFIS) and Locally Linear Neuro-Fuzzy (LLNF) models. The main features of the presented predictor are higher accuracy at all points, especially at critical points, lower computational complexity and adaptive training. Hence, the presented model can be utilized in adaptive online prediction problems. © 2013 Elsevier B.V. All rights reserved.

1.
Introduction

The solar wind and geomagnetic storms resulting from solar activity are amongst the most important physical phenomena that can considerably disturb communication systems and damage satellites. They also have significant effects on space missions. Predicting the occurrence of solar wind events and geomagnetic storms is therefore very important for space missions, planning and satellite alarm systems. These events can be reasonably characterized by the following three geomagnetic activity indices: the Kp (Kennziffer planetarisch) index, the AE (auroral electrojet) index and the Dst (storm time) index [71,7,65,72,53], where each index can be considered a chaotic time series. These indices are good monitors for satellite warning and alert systems. For example, high values of Kp and AE, and large variations at low values of Dst, often correspond to geomagnetic storms or substorms [4,21,67,13].

Various models and learning algorithms have been developed to predict these chaotic time series, such as the real-time WINDMI model, which is based on six nonlinear differential equations [49]; neuro-fuzzy models such as the Adaptive Neuro-Fuzzy Inference System (ANFIS); Artificial Neural Networks (ANNs) [66,15,48]; and Locally Linear Neuro-Fuzzy systems (LLNF) [54], which divide the input space into small linear subspaces with fuzzy validity functions. Among these methods, ANNs are inspired by the physiological workings of the brain and resemble the actual networks of neural cells in the brain. The MLP is a feedforward ANN that is widely used to predict the Kp, AE and Dst indices [48,6]. However, the learning algorithms of the MLP and ANFIS impose high computational complexity that is not suited to online learning in fast-varying environments. The same problem appears in many other learning algorithms, such as the Locally Linear Model Tree (LoLiMoT) [53,54]. LoLiMoT and Recursive LoLiMoT (RLoLiMoT) are popular incremental learning algorithms for the LLNF model.
In contrast to LoLiMoT, RLoLiMoT can be used for online applications, but it still suffers from high computational complexity [53] and has been used only in problems whose time increments are sufficiently long.

Recently, computational models of Brain Emotional Learning (BEL) have been successfully utilized for the prediction of geomagnetic indices [25,3]. The main feature of BEL-based predictors is their low computational complexity. These methods are based on reinforcement learning and, as discussed in Section 2.1, they show high accuracy in predicting peak points but do not show acceptable accuracy at all points [3], especially at low values. Specifically, they do not adequately predict time series such as the Dst index, where the low values are the most important.

[Neurocomputing 126 (2014) 188–196. http://dx.doi.org/10.1016/j.neucom.2013.02.040. Corresponding author. Tel.: +98 935 570 0102. E-mail addresses: esilotf@gmail.com (E. Lotfi), Akbarzadeh@ieee.org (M.-R. Akbarzadeh-T.).]

Our understanding of emotion is minimal and the current computational models are oversimplified. Their only justification
is their great utility in solving difficult problems. Here, adaptive brain emotional supervised learning with a decayed rule, simulating the forgetting role of the Amygdala, is proposed to predict the Dst index, besides the Kp and AE indices, in real time. This adaptive/forgetting view of the Amygdala, in contrast to the more long-term memory perspective, also has a biological basis, as reported in several recent works [24,37]. Specifically, Kim et al. [37] examined the long-term forgetting effect of the Amygdala, and Hardt et al. [28] showed that a brain-wide decay mechanism can systematically remove some memories while increasing the life expectancy of others. The proposed approach applies decayed learning in an adaptive, online manner in order to improve prediction under the non-stationary behavior of the time series.

The proposed approach is general and can be applied in various emotion-based application domains, such as emotion recognition [74], facial expression recognition [52], affective computing [70,23], human–computer interaction [12], autonomous robot and agent design [69,16,75], improved modern artificial intelligence tools [34–36], as well as understanding the brain's emotional process [47].

1.1. Motivations towards emotional modeling

What motivates employing emotional modeling in engineering applications is the high speed of emotional processing, resulting from its effects on inhibitory synapses and the existence of short paths between the Thalamus and the Amygdala in the emotional brain [39,40,27]. Although some present neural models indicate that sensory structures, especially hierarchical processing structures, play a key role in fast processing [5,20], there are also models which shed light on the effects of emotional learning on inhibitory synapses and the role of inhibitory synapses in fast processing.
For example, Scelfo [63] elaborates on the effects of emotional learning on inhibitory synapses, and Bazhenov et al. [9] show that inhibitory synapses can play a pivotal role in fast learning. The quickness of emotional processing can also be seen from the perspective of psychology. Emotional processing creates emotional intelligence in the human brain and, according to Goleman [27], emotional intelligence can facilitate learning, especially in children; it also accounts for the ability to react quickly in emergencies. Goleman holds that humans possess two minds, a rational mind and an emotional mind: the emotional mind is far quicker than the rational mind, and emotional stimuli such as fear can bring about quick reactions, usually when there is no chance for the rational mind to process the danger. The parts of the brain responsible for processing emotions can produce the required reaction extremely quickly; consequently, the inhibitory connections in the cerebral cortex, which are affected by the emotional system, can improve learning speed.

Considering that the limbic system is responsible for processing emotional stimuli, it is not unlikely that the most important characteristic of practical models based on this system, especially models including the Amygdala–Thalamus short path and the inhibitory connections, is fast learning and quick reaction. This can reveal their ability in predicting non-stationary time series. The main motivation behind the existing tendency towards models based on human emotions is the very fact that emotional stimuli can speed up processing in humans, and quick learning is expected to be the distinctive feature of artificial models of emotional learning. Here we propose a novel brain-inspired emotional model that, because of its fast learning, can be used in real-time applications.
The organization of the paper is as follows: neuropsychological motivation and work related to modeling emotional learning are presented in Section 2. The proposed method is presented in Section 3. Experimental results on online prediction are evaluated through several simulations in Section 4. Finally, conclusions are drawn in Section 5.

2. Neuropsychological aspect of emotion and related works

Most human behavior is dictated by emotion. Emotions are cognitive processes [64] that are studied in various disciplines such as psychology, neuroscience and artificial intelligence. Psychological and neural studies of emotion have a long history. From a psychological point of view, emotions can be derived through reward and punishment in various real-life situations [56]. Studies of the neural basis of emotion culminated in the limbic system (LS) theory of emotion. As shown in Fig. 1, the LS, which is located in the cerebral cortex, consists mainly of the following components [41]: Amygdala, Orbitofrontal Cortex (OFC), Thalamus, Sensory Cortex, Hypothalamus and Hippocampus. The Amygdala, located in the sub-cortical area, is an emotional computer; attention and permanent memory are among its other cognitive functions [60]. The Amygdala has extensive interconnections with many other areas. It receives connections from the sensory cortical areas and reward signals in the learning process. The Amygdala also interacts with the OFC. The OFC receives connections from the sensory cortical area and from the Amygdala, which responds to the emotional stimulus. The OFC then evaluates the Amygdala's response and tries to prevent inappropriate answers based on the context provided by the Hippocampus [8].

For BEL modeling, researchers focus on an internal representation of the emotional brain system and formalize the brain states. The Amygdala–OFC system was first proposed by Morén and Balkenius in 2000 [55,8,56].
The Amygdala–OFC model learns to react to a new stimulus based on the history of input reward and punishment signals. In the model, the Amygdala learns to associate emotionally charged and neutral stimuli, while the OFC prevents inappropriate experience and learning connections. The Amygdala–OFC model consists of two subsystems which attempt to respond correctly to emotional stimuli. Each subsystem consists of a number of nodes corresponding to the dimensions of each stimulus. First, the stimulus enters the Thalamus part of the model, which calculates the maximum of the inputs and submits it to the Amygdala as one of its inputs. The OFC does not receive any input from the Thalamus; instead, it receives the Amygdala's output in order to update its weights [55].

Fig. 1. The limbic system in the brain.
Although the structure of this model is very simple, the reward signal is not clearly defined, even though this signal is vital for updating the weights of the subsystems. There are various modified versions [44,57,3,1] of the Amygdala–OFC model. All of them are based on the four main components presented in Fig. 2 and include the information pathway shown in the figure. These models learn by using an external reward signal. Lucas et al. [44] explicitly determined the reward signal and proposed the BEL-based controller (BELBIC), which has been successfully utilized in various control applications [43,14,45,50,51,59,61,32,10,17,46,18,62]. Babaie et al. [3] formulated the input reward for multi-agent optimization problems and presented a BEL-based predictor to forecast the AE index in alarm systems for satellites. In the Amygdala–OFC model and its modified versions, the weights of the Amygdala cannot decrease, i.e. learning is a monotonic process: once an emotional reaction is learned, it is permanent and cannot be unlearnt. The predictor presented by Babaie et al. [3] is based on monotonic, reinforcement learning in the Amygdala and, as discussed in the following section, shows high accuracy in predicting peak points but not all points [3], particularly when the signal level is low.

2.1. The drawbacks of the current models and the essential reforms

The nature of the relationship between the four main components, consisting of the Amygdala, Thalamus, Sensory Cortex and OFC, is common to all the presented models, as shown in Fig. 2. What differs from one model to another is how they formulate the reward signal in the learning process. For example, in the model presented by Morén [56], the need for a reward signal is expressed, but it is not clarified how its values are assigned.
In the modified models of Babaie [3] and Abdi [1], the reward signal R is defined as follows, and the formulation of the other equations is formed accordingly:

R = \sum_j w_j r_j    (1)

where r_j stands for the factors of the reinforcement agent and w_j represents the related weights, which are selected per application. For information on how to select the weights of Eq. (1) in a specific application, see the studies of Dehkordi et al. [19] and Khalilian et al. [33]. Eq. (1) is particularly useful in multi-agent problems, but not in the supervised learning of time series. Since the weights in Eq. (1) are problem specific, they can be arranged to produce better results at peak points, as done by Babaie [3]. But this approach is model sensitive and leads to low adaptability to changes in signal behavior. It also renders such models ineffective in learning signals with opposite behaviors; for example, Babaie's model [3] is ineffective in learning signals such as Dst, whose significant points are in valleys.

The model presented here aims to cover these weaknesses. Instead of the R signal, our model employs the target value of the input pattern in the learning phase. Using the target instead of Eq. (1) holds a major advantage: the model can be adjusted by pattern–target samples. But this reduces the precision of the process; in fact, the model becomes forgetful, giving precise answers only to recent and current patterns while forgetting more distant examples. To correct this problem, we use a decay rate in the learning rules which controls the effect of using targets. So the novelty of our method, compared to the models of Morén [56], Lucas [44], Babaie [3], Parsapoor [57] and Abdi [1], is the use of the target value instead of Eq. (1) in training the model and the employment of a decay rate in the learning rules. Additionally, based on these adjustments, we can propose an adaptive version of brain emotional learning, as discussed in the following section.

3.
The proposed adaptive brain emotional decayed learning

In contrast to previous BEL-based predictors, the proposed Adaptive Decayed BEL (ADBEL) focuses on the need for online adaptation. Additionally, ADBEL is based on supervised learning rules and on a decay mechanism for the Amygdala's monotonic learning. It is observed that by controlling this feature, the performance of the model can be extended; this control is performed through a decay rate. Moreover, due to their lower performance at low points, the common BEL-based predictors cannot adequately predict the Dst index in alarm systems, while the proposed method can predict the Dst index along with the other indices such as Kp and AE.

Fig. 3 shows the proposed supervised model, where solid lines present the data flow and dashed lines present the learning paths. Consider the time series …, Kp_{t-4}, Kp_{t-3}, Kp_{t-2}, Kp_{t-1}; ADBEL can predict the Kp value at time t. The model is divided into two parts, corresponding to the Amygdala and the OFC. The Amygdala receives the input pattern (…, Kp_{t-4}, Kp_{t-3}, Kp_{t-2}, Kp_{t-1}) from the Thalamus and from the Sensory Cortex, while the OFC only receives the input pattern from the Sensory Cortex unit. The Amygdala has two internal outputs: E_a, which is used for adjusting its own weights (see Eq. (8)), and E'_a, which is used for adjusting the OFC weights (see Eqs. (9) and (10)). As shown in Fig. 3, the system's input is described by the vector Kp = (…, Kp_{t-4}, Kp_{t-3}, Kp_{t-2}, Kp_{t-1}). There is one node for each attribute of the input pattern in the network models of the Amygdala and the OFC. The output of each node is calculated by multiplying the learning weight v_j by Kp_{t-j} for the Amygdala, and w_j by Kp_{t-j} for the OFC.
After learning and adjusting the weights, \hat{Kp}_t, the predicted Kp value at time t, is calculated as follows:

\hat{Kp}_t = E_a - E_o    (2)

where

E_a = E'_a + v_{th} \cdot m    (3)

E'_a = \sum_j (v_j \cdot Kp_{t-j})    (4)

E_o = \sum_j (w_j \cdot Kp_{t-j})    (5)

and

m = \max_j (Kp_{t-j}),  j = 1…n    (6)

where n is the number of attributes in the input pattern, m is the output of the Thalamus and v_{th} is the related weight. In Eq. (2), the subtraction implements the inhibitory task of the OFC.

Fig. 2. The routes of sensory information for modeling, modified from Morén [56] and Babaie et al. [3].

The model's inputs and outputs thus satisfy the following equation:
\hat{Kp}_t = f(Kp_{t-n}, …, Kp_{t-3}, Kp_{t-2}, Kp_{t-1})    (7)

In the learning phase, after observing the target value of Kp at time t (Kp_t), the following supervised decayed learning rules are used to adjust the model's weights:

v_j = (1 - \gamma) v_j + \alpha \max(Kp_t - E_a, 0) Kp_{t-j},  j = 1…n    (8)

w_j = w_j + \beta R_0 Kp_{t-j},  j = 1…n    (9)

where \alpha and \beta are learning rates, \gamma is the proposed decay rate and R_0 is the internal reward calculated by the following formula:

R_0 = \max(E'_a - Kp_t, 0) - E_o   if Kp_t ≠ 0
      \max(E'_a - E_o, 0)          otherwise    (10)

where Kp_t is the target value associated with the input pattern (Kp_{t-4}, Kp_{t-3}, Kp_{t-2}, Kp_{t-1}). The proposed adaptive time series prediction algorithm is then as follows:

Adaptive supervised BEL-based predictor:
  Constants: \alpha and \beta
  Inputs: previous values of Kp: Kp_{t-n}, …, Kp_{t-3}, Kp_{t-2}, Kp_{t-1}; optimized \gamma
  Output: predicted Kp at time t (\hat{Kp}_t)
  Adjustable weights: w_1, w_2, …, w_n, v_1, v_2, …, v_n, v_{th}

  Step 1: Prediction
    - Use the following equations to predict Kp at time t:
        m = \max_j (Kp_{t-j}),  j = 1…n
        E'_a = \sum_{j=1…n} (v_j \cdot Kp_{t-j})
        E_a = E'_a + v_{th} \cdot m
        E_o = \sum_{j=1…n} (w_j \cdot Kp_{t-j})
        \hat{Kp}_t = E_a - E_o
  Step 2: Learning
    - Wait to observe the target value of Kp at time t (Kp_t):
        R_0 = \max(E'_a - Kp_t, 0) - E_o   if Kp_t ≠ 0
              \max(E'_a - E_o, 0)          otherwise
    - Update OFC input weight j, for j = 1…n:
        w_j = w_j + \beta R_0 Kp_{t-j}
    - Update Amygdala weight j, for j = 1…n:
        v_j = (1 - \gamma) v_j + \alpha \max(Kp_t - E_a, 0) Kp_{t-j}
        v_{th} = (1 - \gamma) v_{th} + \alpha \max(Kp_t - E_a, 0) m
    - Set t = t + 1 and proceed to Step 1.

In the algorithm, (Kp_{t-4}, Kp_{t-3}, Kp_{t-2}, Kp_{t-1}) is the training pattern and Kp_t is the related target extracted from the Kp time series. The proposed algorithm in this form can be utilized in time series forecasting problems. For AE prediction, the input pattern is (AE_{t-4}, AE_{t-3}, AE_{t-2}, AE_{t-1}) and the target value is AE_t; for Dst prediction, the input pattern is (Dst_{t-4}, Dst_{t-3}, Dst_{t-2}, Dst_{t-1}) and the target value is Dst_t.

4. Experimental studies

The proposed ADBEL has been written and tested in Matlab R2010b.
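Steps 1 and 2 of the algorithm above can be sketched in plain Python as follows. This is a minimal illustration, not the authors' released MATLAB implementation; the class name, zero-weight initialization and variable names are our own assumptions, while the update rules follow Eqs. (2)-(10):

```python
class ADBEL:
    """Minimal sketch of the adaptive decayed BEL predictor (Eqs. (2)-(10))."""

    def __init__(self, n, alpha=0.8, beta=0.2, gamma=0.1):
        self.n = n                              # number of input attributes
        self.alpha, self.beta, self.gamma = alpha, beta, gamma
        self.v = [0.0] * n                      # Amygdala weights v_j
        self.w = [0.0] * n                      # OFC weights w_j
        self.v_th = 0.0                         # Thalamic weight v_th

    def _parts(self, x):
        m = max(x)                                           # Eq. (6)
        e_a_prime = sum(vj * xj for vj, xj in zip(self.v, x))  # Eq. (4)
        e_a = e_a_prime + self.v_th * m                      # Eq. (3)
        e_o = sum(wj * xj for wj, xj in zip(self.w, x))      # Eq. (5)
        return m, e_a_prime, e_a, e_o

    def predict(self, x):
        # Step 1: prediction, Eq. (2)
        _, _, e_a, e_o = self._parts(x)
        return e_a - e_o

    def update(self, x, target):
        # Step 2: learning, after the target is observed
        m, e_a_prime, e_a, e_o = self._parts(x)
        # Internal reward, Eq. (10)
        if target != 0:
            r0 = max(e_a_prime - target, 0.0) - e_o
        else:
            r0 = max(e_a_prime - e_o, 0.0)
        # OFC rule, Eq. (9): no decay
        self.w = [wj + self.beta * r0 * xj for wj, xj in zip(self.w, x)]
        # Amygdala rule, Eq. (8): decayed by gamma, so learning is no longer monotonic
        err = max(target - e_a, 0.0)
        self.v = [(1 - self.gamma) * vj + self.alpha * err * xj
                  for vj, xj in zip(self.v, x)]
        self.v_th = (1 - self.gamma) * self.v_th + self.alpha * err * m
```

In an online loop, `predict` is called on each new pattern and `update` is called once the next observation arrives; note that with γ > 0 the Amygdala weights settle at a trade-off between retention and decay rather than growing without bound.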
The source code is accessible from http://www.bitools.ir/projects.html. The method is evaluated on the prediction of the Kp, AE and Dst indices, which are used to characterize the geomagnetic activity of the earth's magnetosphere. These time series have chaotic behavior [29–31,3] with low-dimensional chaos [68,58]. A data set of 78,912 hourly samples from 2000 to 2008 has been used for online prediction. The data set, named OMNI2, is accessible from the National Space Science Data Center (NSSDC). We consider each 4 consecutive samples as a pattern and the 5th as its target, so 78,908 pattern–target pairs of the Kp index, 78,908 pairs of the AE index and 78,908 pairs of the Dst index are used for the evaluations. The maximum and minimum of each index are determined, and the data, scaled to between 0 and 1, are used to adjust the weights.

In the experimental studies, the weights are initialized randomly, and the values of α and β are set at 0.8 and 0.2, respectively. Various values of the decay rate were tested in search of an optimum γ, namely γ = 0, 0.05, 0.1, 0.15, …, 1. For each setting, the system learns a small set of training pattern–target pairs of the Kp index; the training is repeated 10 times and the average error is recorded. In this scenario, the maximum error is observed at γ = 0 and the minimum error is achieved at γ = 0.1.

Fig. 3. The proposed learning lines in the limbic model are presented by dashed lines.

Also, the values 0.1 and 0.01 (with step size 0.01) are
achieved using the AE and Dst indices, respectively. Across various applications, it was observed that the optimum decay rate generally lies between 0.01 and 0.2.

The decay rate in the proposed model has a definite neuropsychological interpretation. As noted by many neurobiologists, the Amygdala is required for long-term memory [38,11,73,22,26]. Yet this type of memory also involves a forgetting process, as reported in several recent works [37,28]. Hence, the proposed decay rate actually simulates the forgetting role of the Amygdala. When γ = 1 and the Amygdala faces a new pattern, it forgets all information stored in the past and quickly learns the current pattern; when γ = 0, it tries to keep the past information while also learning the new pattern. The findings show that the weakest capacity of the Amygdala is obtained when the forgetting role is disregarded, and that the best generalization capacity is in fact a trade-off between forgetting and permanent storage. The optimum value of γ suggests that the forgetting role of the Amygdala may improve the ability to learn chaotic behavior.

4.1. Comparative studies on online prediction

Figs. 4–9 present the online prediction results of the proposed method, and Tables 1 and 2 give comparative results between the proposed method and the common methods, MLP and ANFIS. Fig. 4 shows the target and online-predicted Kp, and the related error, obtained from the proposed adaptive method during the first 500 h of the year 2000, starting from the initial point. As illustrated in Fig. 4, after 50 h the system shows a stable result. Hence, the predicted curve in Fig. 4 is divided into two segments: the transient region, between hours 0 and 50, and the steady-state region, where the prediction results are validated. In fact, the weights converge rapidly during the first 50 h.
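The data preparation described at the start of Section 4 — min–max scaling to [0, 1] and sliding 4-sample patterns with the 5th sample as the target — can be sketched as follows (the function names are our own, not taken from the released code):

```python
def min_max_scale(series):
    """Scale a series to [0, 1] using its own minimum and maximum."""
    lo, hi = min(series), max(series)
    return [(s - lo) / (hi - lo) for s in series]

def make_patterns(series, n=4):
    """Slide a window of n samples over the series; the next sample is the target.

    A series of length L yields L - n pattern-target pairs
    (e.g. 78,912 hourly samples -> 78,908 pairs for n = 4).
    """
    return [(series[t - n:t], series[t]) for t in range(n, len(series))]
```

For example, a 6-sample series yields two pattern–target pairs, matching the 78,912 → 78,908 count reported for the hourly indices.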
After that, they show only slight adaptive changes. Fig. 5 plots the online-predicted values of Kp versus the target values from 2000 to 2008. As illustrated in Fig. 5, a correlation COR = 0.92952 is obtained from the proposed method for Kp prediction in steady state.

The AE online prediction results are shown in Figs. 6 and 7. Fig. 6 shows the target and predicted AE values, and the related error, between hours 0 and 500 of the year 2000. Fig. 7 illustrates the COR value of the AE results obtained using the proposed ADBEL in steady state. The system reaches steady state after 150 h; during the first 150 h, the system learns the AE behavior and the weights rapidly converge to a one-step-ahead prediction of the hourly AE index. The curve in Fig. 8 is the predicted Dst. The transition time in online Dst prediction is 25 h, meaning that the system learns the Dst activity during the first 25 hours; after that, the results are in steady state.

Fig. 4. Online predicted Kp values (top) and related error (bottom) from the start point in year 2000 obtained using the proposed adaptive method. Results are validated after the initial 50 h.

Fig. 5. Actual versus desired output of Kp between 2000 and 2008 obtained from the proposed ADBEL in steady state.

Fig. 6. Online predicted AE values (top) and related error (bottom) from the start point in year 2000 by the proposed ADBEL. The results are validated after the initial 150 h.

Fig. 9 illustrates the predicted versus desired output of the Dst index. As illustrated in Fig. 9, the
COR = 0.95322 is obtained from the proposed ADBEL in steady state.

Table 1 shows the RMSE and COR comparisons between MLP, ANFIS and the proposed ADBEL in steady state for the online prediction of the hourly Kp, AE and Dst indices. As illustrated in Table 1, the best agreement between the predicted Kp values and their targets, and between the predicted AE values and their targets, is obtained by the proposed method; it improves on the MLP and ANFIS in the Kp and AE prediction problems. The higher correlation and lower root mean square error obtained by the proposed method mean that it outperforms MLP and ANFIS in Kp prediction. According to Table 1, the best overall agreement between the predicted and target Dst values is obtained by ANFIS. However, as illustrated in Table 2, the proposed method is more accurate and more sensitive than the ANFIS-based predictor for low values of Dst, which constitute the critical areas. The Dst prediction results at the two thresholds, less than −50 and less than −100, are presented in Table 2. These thresholds are evaluated here because of their important role in studies of solar wind and geomagnetic storms; their importance is discussed by Alves et al. [2], to which the reader is referred for more information. According to Table 2, the proposed method makes more accurate predictions at low values of Dst: the recall ratio obtained by the proposed method is higher than that of the ANFIS-based predictor. Tables 1 and 2 show the best result of each method over 10 runs.

Fig. 7. Actual versus desired output of the AE between 2000 and 2008 obtained from the proposed adaptive prediction method in steady state.

Fig. 8. Predicted Dst values (top) and related error (bottom) from the start point in year 2000 by the proposed ADBEL. The results are validated after the first 25 h.

Fig. 9.
Actual versus desired output of the Dst between 2000 and 2008 obtained from proposed ADBEL in steady state. Table 1 Comparisons between MLP, ANFIS and proposed ADBEL based on the RMSE and correlation in online prediction of hourly Kp, AE and Dst indices between 2000 and 2008. Index Kp AE Dst Model RMSE COR RMSE COR RMSE COR MLP with EBP 2.7126 0.87935 292.9055 0.84027 23.3845 0.73020 ANFIS with EBP 0.5830 0.91719 125.3251 0.81410 7.6862 0.95748 Proposed ADBEL 0.5376 0.92952 125.5962 0.83178 10.5941 0.95322 Table 2 The prediction results of hourly Dst index at low points. Threshold Dsto−50 (%) Dsto−100 (%) Model ANFIS Proposed method ANFIS Proposed method Recall 81.34 91.73 83.18 89.21 false 0.00 0.00 16.45 9.51 Missed 18.66 8.27 0.37 1.28 E. Lotfi, M.-R. Akbarzadeh-T. / Neurocomputing 126 (2014) 188–196 193
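The steady-state figures above (COR, RMSE, and the recall of Table 2) can be reproduced from a pair of target/predicted series. The sketch below is ours, not the paper's code; it assumes COR is the linear correlation coefficient and that a storm hour counts as recalled when both the target and the prediction fall below the threshold.

```python
import numpy as np

def rmse(target, predicted):
    """Root mean square error between target and predicted series."""
    target, predicted = np.asarray(target, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((target - predicted) ** 2)))

def cor(target, predicted):
    """Linear correlation coefficient (COR) between the two series."""
    return float(np.corrcoef(target, predicted)[0, 1])

def storm_recall(target, predicted, threshold=-50.0):
    """Fraction of hours with target Dst below `threshold` that the
    predictor also places below the threshold (the 'Recall' row of Table 2)."""
    target, predicted = np.asarray(target, float), np.asarray(predicted, float)
    storms = target < threshold          # hours that actually count as storms
    if not storms.any():
        return float("nan")              # no storm hours: recall undefined
    return float(np.mean(predicted[storms] < threshold))
```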
4.2. Comparative study by offline prediction

The online prediction results presented in Section 4.1 illustrate the ability of the proposed ADBEL in the steady state. In this subsection, we show that these results are achieved by the proposed method with lower computational complexity than the other methods. Before entering the comparative numerical studies, let us analyze the computational complexity. Regarding the learning step (step 2) of the proposed predictor, the algorithm adjusts O(2n) weights for each pattern–target sample, where n is the number of input attributes (for our examples n = 4). In contrast, the computational time is O(cn) for a single-output MLP, where c is the number of hidden neurons (generally c = 10), and it is exponential for ANFIS and LLNF. Additionally, EBP is based on derivative computations, which impose high complexity, while the proposed method is derivative-free. So the proposed method has lower computational complexity and higher efficiency than the other methods. This improved computing efficiency can be important for online prediction, especially when the time interval between observations is small. Because of the high computational complexity of the above methods, mean daily observations of the indices have usually been used in the literature. In contrast to these models, our method can learn hourly, per-minute, or per-second observations without growth in computation for large samples. This is the key point of our method, its fast convergence, which makes it suitable for online applications. To compare convergence in practice, we ran the methods in an offline manner to learn 78,908 pattern–target pairs of the hourly indices. The stopping criterion in the learning process is reaching a certain correlation. The results are reported in Table 3.
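To make the O(2n) claim concrete, the following sketch shows one learning step over n Amygdala weights and n OFC weights, i.e. two O(n) passes per pattern–target sample and no derivative computations. The update forms follow the Morén–Balkenius BEL model with an added decay term `gamma` on the Amygdala weights; the function name, rate values, and exact rule forms here are our assumptions, since the paper's ADBEL rules are given earlier in the article and may differ in detail.

```python
import numpy as np

def adbel_step(s, target, v, w, alpha=0.2, beta=0.2, gamma=0.01):
    """One pattern-target learning step over n Amygdala weights `v` and
    n OFC weights `w` -- O(2n) work per sample, derivative-free.
    Sketch of a Moren-Balkenius-style BEL update with an added decay
    term `gamma`; not the paper's exact rule."""
    a = v @ s                 # Amygdala output (excitatory)
    o = w @ s                 # OFC output (inhibitory)
    e = a - o                 # model output
    # Monotonic Amygdala rule: weights grow toward the target but are
    # pulled back by the decay term (the paper's key addition).
    v = (1.0 - gamma) * v + alpha * s * max(target - a, 0.0)
    # OFC rule corrects the residual output error.
    w = w + beta * s * (e - target)
    return v, w, e
```

Each call touches every weight exactly once, so a stream of T hourly samples costs O(2nT) regardless of how many samples have been seen, which is what makes second-by-second online learning feasible.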
According to Table 3, the numbers of learning epochs of MLP and ANFIS are much higher than that of the proposed method; moreover, each of their epochs is also of significantly higher computational order, as discussed above. The results in Table 3 are based on 10 runs and are statistically significant according to Student's t-test with 95% confidence. Interested readers may find more detailed statistical analysis and offline prediction results in [42]. Specifically, further statistical analysis, i.e. separate training/testing/validation stages, and a study of sensitivity to different threshold levels are discussed there. Recently, Mirmomeni et al. [53] applied the LLNF to one-step-ahead prediction of the daily Kp and Dst indices and concluded that LLNF shows an improvement over MLP and RBF. Here, the 24-h average of the one-day-ahead predictions is used for an appropriate comparison of the proposed approach with LLNF. Tables 4 and 5 show the comparisons with LLNF based on the normalized mean square error (NMSE). According to Table 4, the best performance in predicting the daily Kp values is obtained by the proposed method. And as illustrated in Table 5, in daily Dst prediction the proposed emotional algorithm yields lower NMSE than the LoLiMoT. Although the RLoLiMoT yields lower NMSE, according to Table 2 the proposed method does more than learn pattern–target samples; it reinforces a behavior while learning indices such as Dst. Hence, it can detect the critical area of the Dst index where solar winds and geomagnetic storms may have occurred.

5. Conclusions

ADBEL, derived from a neurophysiological view of the brain, proves to be a novel adaptive learning algorithm and an appropriate online predictor for the geomagnetic activity indices, which form a complex dynamical system.
In contrast to the previous BEL-based predictors, the proposed supervisory approach learns pattern–target samples and applies well to adaptive and online prediction problems. Furthermore, ADBEL adds a decay mechanism to the monotonic Amygdala learning rule. The proposed model is used here to predict the Kp and AE indices, whose high values have greater importance, and the Dst geomagnetic index, whose low values involve the critical areas. According to the experimental studies, comparisons between the proposed method, the Multilayer Perceptron (MLP), the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Locally Linear Neuro-Fuzzy (LLNF) model lead to the following conclusions. Firstly, the performance of the proposed method is higher than that of MLP and ANFIS in one-hour-ahead prediction of the Kp and AE indices. Secondly, the proposed method is more accurate than the ANFIS-based predictor at low values of Dst. Thirdly, the proposed method shows better or comparable performance with respect to the recently proposed adaptive LLNF, which predicts the daily Kp and Dst indices. Fourthly, in contrast to the high computational order of the comparative approaches, ADBEL has lower computational order and faster training and is hence suitable for online prediction of geomagnetic indices. For future improvements, we believe this adaptation can be further improved by adjusting the learning rates α and β during the learning and prediction processes. We aim to address the optimization/adaptation of α and β, along with the decay rate, in adaptive prediction problems. Finally, we hope the proposed model presents an important perspective for future neuropsychological research on the role of Amygdala long-term forgetting in its learning and generalization ability.

Acknowledgments

The authors thank the reviewers for their excellent feedback on the paper and thank the "National Space Science Data Center" for the data sets.

Table 3
The number of offline learning epochs.
  Model            Kp            AE            Dst
  MLP with EBP     187.3 ± 7.5   49.4 ± 4.8    30.9 ± 2.2
  ANFIS with EBP   50.1 ± 0.41   50.1 ± 0.41   59.8 ± 0.74
  Proposed ADBEL   2.7 ± 0.35    2.5 ± 0.51    4.6 ± 0.38

Table 4
The NMSE comparison between LLNF and the proposed ADBEL in prediction of the daily Kp index between 2000 and October 2008.

  Model            Learning             NMSE
  LLNF             LoLiMoT              0.5918 a
  Adaptive LLNF    RLoLiMoT             0.0888 a
  Proposed ADBEL   Emotional decaying   0.0130

  a From Mirmomeni et al. [53].

Table 5
The NMSE comparison between LLNF and the proposed ADBEL in prediction of the daily Dst index between 2000 and 2006.

  Method           Learning             NMSE
  LLNF             LoLiMoT              0.5348 a
  Adaptive LLNF    RLoLiMoT             0.0968 a
  Proposed ADBEL   Emotional decaying   0.1123

  a From Mirmomeni et al. [53].
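The NMSE used in Tables 4 and 5 is commonly defined as the mean square error normalized by the variance of the target series, so that NMSE = 1 corresponds to always predicting the target mean; the comparison papers may use a slightly different normalization, so the sketch below is an assumption.

```python
import numpy as np

def nmse(target, predicted):
    """Normalized mean square error: MSE divided by the variance of the
    target series. NMSE = 0 for a perfect predictor; NMSE = 1 for a
    predictor that always outputs the target mean."""
    target = np.asarray(target, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean((target - predicted) ** 2) / np.var(target))
```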
References

[1] J. Abdi, B. Moshiri, B. Abdulhai, A.K. Sedigh, Forecasting of short-term traffic flow based on improved neuro-fuzzy models via emotional temporal difference learning algorithm, Eng. Appl. Artif. Intell. (2011), http://dx.doi.org/10.1016/j.engappai.2011.09.011.
[2] M.V. Alves, E. Echer, W.D. Gonzalez, Geoeffectiveness of solar wind interplanetary magnetic structures, J. Atmos. Sol.–Terr. Phys. 73 (2011) 1380–1384, http://dx.doi.org/10.1016/j.jastp.2010.07.024.
[3] T. Babaie, Lucas Karimizandi, Learning based brain emotional intelligence as a new aspect for development of an alarm system, Soft Comput. 12 (2008) 857–873, http://dx.doi.org/10.1007/s00500-007-0258-8.
[4] R. Bala, P.H. Reiff, J.E. Landivar, Real-time prediction of magnetospheric activity using the Boyle Index, Space Weather 7 (2009) S04003, http://dx.doi.org/10.1029/2008SW000407.
[5] E. Balaguer-Ballester, N.R. Clark, M. Coath, K. Krumbholz, S.L. Denham, Understanding pitch perception as a hierarchical process with top-down modulation, PLoS Comput. Biol. 5 (3) (2009) e1000301.
[6] R. Balasubramanian, Forecasting geomagnetic activity indices using the Boyle index through artificial neural networks (Ph.D. Thesis), Department of Electrical Computer Engineering, RICE University, Houston, TX, 2010.
[7] G. Balasis, I.A. Daglis, C. Papadimitriou, M. Kalimeri, A. Anastasiadis, K. Eftaxias, Investigating dynamical complexity in the magnetosphere using various entropy measures, J. Geophys. Res. 114 (2009) A00D06, http://dx.doi.org/10.1029/2008JA014035.
[8] C. Balkenius, J. Morén, Emotional learning: a computational model of amygdala, Cybern. Syst. 32 (6) (2001) 611–636.
[9] M. Bazhenov, M. Stopfer, T.J. Sejnowski, G.
Laurent, Fast odor learning improves reliability of odor responses in the locust antennal lobe, Neuron 46 (3) (2005) 483–492, http://dx.doi.org/10.1016/j.neuron.2005.03.022.
[10] Z. Beheshti, S.Z.M. Hashim, A review of emotional learning and its utilization in control engineering, Int. J. Adv. Soft Comput. Appl. 2 (2010) 191–208.
[11] M. Bianchin, T. Mello e Souza, J.H. Medina, I. Izquierdo, The amygdala is involved in the modulation of long-term memory, but not in working or short-term memory, Neurobiol. Learn. Mem. 71 (2) (1999) 127–131.
[12] G. Caridakis, K. Karpouzis, S. Kollias, User and context adaptive neural networks for emotion recognition, Neurocomputing 71 (13) (2008) 2553–2562.
[13] Y. Cerrato, E. Saiz, C. Cid, W.D. Gonzalez, J. Palacios, Solar and interplanetary triggers of the largest Dst variations of the solar cycle 23, J. Atmos. Sol.–Terr. Phys. (2011), http://dx.doi.org/10.1016/j.jastp.2011.09.001.
[14] M. Chandra, Analytical study of a control algorithm based on emotional processing (M.S. Dissertation), Indian Institute of Technology Kanpur, 2005.
[15] A.J. Conway, K.P. Macpherson, J.C. Brown, Delayed time series predictions with neural networks, Neurocomputing 18 (1) (1998) 81–89.
[16] E. Daglarli, H. Temeltas, M. Yesiloglu, Behavioral task processing for cognitive robots using artificial emotions, Neurocomputing 72 (13) (2009) 2835–2844.
[17] E. Daryabeigi, G.R.A. Markadeh, C. Lucas, Emotional controller (BELBIC) for electric drives—a review, in: IECON 2010, Glendale, AZ, 7–10 November 2010, pp. 2901–2907, http://dx.doi.org/10.1109/IECON.2010.5674934.
[18] B.M. Dehkordi, A. Parsapoor, M. Moallem, C. Lucas, Sensorless speed control of switched reluctance motor using brain emotional learning based intelligent controller, Energy Convers. Manage. 52 (1) (2011) 85–96, http://dx.doi.org/10.1016/j.enconman.2010.06.046.
[19] B.M. Dehkordi, A. Kiyoumarsi, P. Hamedani, C.
Lucas, A comparative study of various intelligent based controllers for speed control of IPMSM drives in the field-weakening region, Expert Syst. Appl. 38 (10) (2011) 12643–12653, http://dx.doi.org/10.1016/j.eswa.2011.04.052.
[20] S. Denham, Auditory scene analysis: a competition between auditory proto-objects? J. Acoust. Soc. Am. 131 (4) (2012) 3267.
[21] B.A. Emery, et al., Solar wind structure sources and periodicities of auroral electron power over three solar cycles, J. Atmos. Sol.–Terr. Phys. 71 (2009) 1157–1175, http://dx.doi.org/10.1016/j.jastp.2008.08.005.
[22] J.P. Fadok, M. Darvas, T.M. Dickerson, R.D. Palmiter, Long-term memory for pavlovian fear conditioning requires dopamine in the nucleus accumbens and basolateral amygdala, PLoS One 5 (9) (2010) e12751.
[23] N. Fragopanagos, J.G. Taylor, Modelling the interaction of attention and emotion, Neurocomputing 69 (16) (2006) 1977–1983.
[24] R. Gallassi, L. Sambati, R. Poda, M.S. Maserati, F. Oppi, M. Giulioni, P. Tinuper, Accelerated long-term forgetting in temporal lobe epilepsy: evidence of improvement after left temporal pole lobectomy, Epilepsy Behav. 22 (4) (2011) 793–795.
[25] A. Gholipour, C. Lucas, D. Shahmirzadi, Predicting geomagnetic activity index by brain emotional learning, WSEAS Trans. Syst. 3 (2004) 296–299.
[26] E.M. Griggs, E.J. Young, G. Rumbaugh, C.A. Miller, MicroRNA-182 regulates amygdala-dependent memory formation, J. Neurosci. 33 (4) (2013) 1734–1740.
[27] D. Goleman, Emotional Intelligence: Why it can Matter More than IQ, Bantam Books, New York, 2006.
[28] O. Hardt, K. Nader, L. Nadel, Decay happens: the role of active forgetting in memory, Trends Cogn. Sci. (2013).
[29] W. Horton, Y.H. Ichikawa, Chaos and Structures in Nonlinear Plasmas, World Scientific, Singapore, 1996.
[30] W. Horton, Chaos and structures in the magnetosphere, Phys. Rep. 283 (1) (1997) 265–302.
[31] W. Horton, J.P. Smith, R. Weigel, C. Crabtree, I. Doxas, B. Goode, J.
Cary, The solar-wind driven magnetosphere–ionosphere as a complex dynamical system, Phys. Plasmas 6 (1999) 4178.
[32] S. Jafarzadeh, Designing PID and BELBIC controllers in path tracking problem, Int. J. Comput. Commun. Control III (2008), Suppl. issue: Proceedings of ICCCC 2008, pp. 343–348.
[33] M. Khalilian, A. Abedi, A. Deris-Z, Position control of hybrid stepper motor using brain emotional controller, Energy Proc. 14 (2012) 1998–2004, http://dx.doi.org/10.1016/j.egypro.2011.12.1200.
[34] A. Khashman, A modified back propagation learning algorithm with added emotional coefficients, IEEE Trans. Neural Netw. 19 (11) (2008) 1896–1909.
[35] A. Khashman, Application of an emotional neural network to facial recognition, Neural Comput. Appl. 18 (4) (2009) 309–320.
[36] A. Khashman, Modeling cognitive and emotional processes: a novel neural network architecture, Neural Netw. 23 (2010) 1155–1163, http://dx.doi.org/10.1016/j.neunet.2010.07.004.
[37] J.H. Kim, S. Li, A.S. Hamlin, G.P. McNally, R. Richardson, Phosphorylation of mitogen-activated protein kinase in the medial prefrontal cortex and the amygdala following memory retrieval or forgetting in developing rats, Neurobiol. Learn. Mem. 97 (1) (2011) 59–68.
[38] R. Lamprecht, S. Hazvi, Y. Dudai, cAMP response element-binding protein in the amygdala is required for long- but not short-term conditioned taste aversion memory, J. Neurosci. 17 (21) (1997) 8443–8450.
[39] J.E. LeDoux, Emotion and the limbic system concept, Concepts Neurosci. 2 (1991) 169–199.
[40] J. LeDoux, The Emotional Brain, Simon and Schuster, New York, 1996.
[41] J.E. LeDoux, Emotion circuits in the brain, Annu. Rev. Neurosci. 23 (1) (2000) 155–184.
[42] E. Lotfi, M.R. Akbarzadeh-T, Supervised brain emotional learning, in: IEEE International Joint Conference on Neural Networks (IJCNN), 2012, pp. 1–6, http://dx.doi.org/10.1109/IJCNN.2012.6252391.
[43] C. Lucas, A. Abbaspour, A. Gholipour, B.N. Araabi, M.
Fatourechi, Enhancing the performance of neurofuzzy predictors by emotional learning algorithm, Int. J. Inf. 27 (2) (2003) 137–145.
[44] C. Lucas, D. Shahmirzadi, N. Sheikholeslami, Introducing BELBIC: brain emotional learning based intelligent controller, Int. J. Intell. Autom. Soft Comput. 10 (2004) 11–21.
[45] C. Lucas, R.M. Milasi, B.N. Araabi, Intelligent modeling and control of washing machine using Locally Linear Neuro-Fuzzy (LLNF), Asian J. Control 8 (2006) 393–400, http://dx.doi.org/10.1111/j.1934-6093.2006.tb00290.x.
[46] C. Lucas, BELBIC and its industrial applications: towards embedded neuroemotional control codesign, Integrated Systems, Des. Technol. 3 (2010) 203–214, http://dx.doi.org/10.1007/978-3-642-17384-4_17.
[47] S. Marsella, J. Gratch, P. Petta, Computational models of emotion, in: K.R. Scherer, T. Bänziger, E. Roesch (Eds.), A Blueprint for Affective Computing, 2010, pp. 21–45.
[48] M. Mattinen, Modeling and forecasting of local geomagnetic activity (Master's Thesis), AALTO University, School of Science and Technology, Helsinki, 2010.
[49] M.L. Mays, W. Horton, Real-time predictions of geomagnetic storms and substorms: use of the Solar Wind Magnetosphere–Ionosphere System model, Space Weather 7 (2009) S07001, http://dx.doi.org/10.1029/2008SW000459.
[50] A.R. Mehrabian, C. Lucas, Emotional learning based intelligent robust adaptive controller for stable uncertain nonlinear systems, Int. J. Eng. Math. Sci. 2 (4) (2005) 246–252.
[51] A.R. Mehrabian, C. Lucas, J. Roshanian, Aerospace launch vehicle control: an intelligent adaptive approach, Aerosp. Sci. Technol. 10 (2006) 149–155, http://dx.doi.org/10.1016/j.ast.2005.11.002.
[52] M. Mermillod, P. Bonin, L. Mondillon, D. Alleysson, N. Vermeulen, Coarse scales are sufficient for efficient categorization of emotional facial expressions: evidence from neural computation, Neurocomputing 73 (13) (2010) 2522–2531.
[53] M. Mirmomeni, C. Lucas, B. Moshiri, B.N.
Arabbi, Introducing adaptive neurofuzzy modeling with online learning method for prediction of time-varying solar and geomagnetic activity indices, Expert Syst. Appl. 37 (12) (2010) 8267–8277, http://dx.doi.org/10.1016/j.eswa.2010.05.059.
[54] M. Mirmomeni, E. Kamaliha, S. Parsapoor, C. Lucas, Variation of embedding dimension as one of the chaotic characteristics of solar and geomagnetic activity indices, Natl. Acad. Sci. Repub. Arm. (2010) 338–349.
[55] J. Morén, C. Balkenius, A computational model of emotional learning in the amygdala, in: J.A. Meyer, A. Berthoz, D. Floreano, H.L. Roitblat, S.W. Wilson (Eds.), From Animals to Animats 6: Proceedings of the 6th International Conference on the Simulation of Adaptive Behaviour, MIT Press, Cambridge, MA, USA, 2000, pp. 115–124.
[56] J. Morén, Emotion and learning—a computational model of the Amygdala (Ph.D. Thesis), Department of Cognitive Science, Lund University, Lund, Sweden, 2002.
[57] M. Parsapoor, C. Lucas, S. Setayeshi, Reinforcement-recurrent fuzzy rule based system based on brain emotional learning structure to predict the complexity dynamic system, in: Proceedings of the 3rd International Conference on Digital Information Management, London, November 13–16, 2008, pp. 25–32, http://dx.doi.org/10.1109/ICDIM.2008.4746712.
[58] G.P. Pavlos, A.C. Iliopoulos, V.G. Tsoutsouras, D.V. Sarafopoulos, D.S. Sfiris, L.P. Karakatsanis, E.G. Pavlos, First and second order non-equilibrium phase transition and evidence for non-extensive Tsallis statistics in Earth's magnetosphere, Physica A Stat. Mech. Appl. 390 (15) (2011) 2819–2839.
[59] H. Rouhani, M. Jalili, B.N. Araabi, W. Eppler, C. Lucas, Brain emotional learning based intelligent controller applied to neurofuzzy model of micro-heat
exchanger, Expert Syst. Appl. 32 (2007) 911–918, http://dx.doi.org/10.1016/j.eswa.2006.01.047.
[60] E.T. Rolls, Neurophysiology and functions of the primate amygdala, in: The Amygdala: Neurobiological Aspects of Emotion, Memory and Mental Dysfunction, 1992.
[61] M. Samadi, A. Afzali-Kusha, C. Lucas, Power management by brain emotional learning algorithm, in: Proceedings of the 7th International Conference on ASIC (ASICON'07), IEEE, Guilin, China, 2007, pp. 78–81.
[62] A. Sadeghieh, H. Sazgar, K. Goodarzi, C. Lucas, Identification and real-time position control of a servo-hydraulic rotary actuator by means of a neurobiologically motivated algorithm, ISA Trans. (2011), http://dx.doi.org/10.1016/j.isatra.2011.09.006.
[63] B. Scelfo, B. Sacchetti, P. Strata, Learning-related long-term potentiation of inhibitory synapses in the cerebellar cortex, Proc. Natl. Acad. Sci. 105 (2) (2008) 769–774.
[64] N.A. Stillings, S.E. Weisler, C.H. Chase, M.H. Feinstein, J.L. Garfield, E.L. Rissland, Cognitive Science: An Introduction, MIT Press, Cambridge, MA, 1995.
[65] E. Spencer, A. Rao, W. Horton, M.L. Mays, Evaluation of solar wind–magnetosphere coupling functions during geomagnetic storms with the WINDMI model, J. Geophys. Res. 114 (2009) A02206, http://dx.doi.org/10.1029/2008JA013530.
[66] J. Takalo, J. Timonen, Neural network prediction of the AE index from the PC index, Phys. Chem. Earth Part C: Sol.–Terr. Planet. Sci. 24 (1) (1999) 89–92.
[67] O. Troshichev, D. Sormakov, A. Janzhura, Relation of PC index to the geomagnetic storm Dst variation, J. Atmos. Sol.–Terr. Phys. (2010), http://dx.doi.org/10.1016/j.jastp.2010.12.015.
[68] D.V. Vassiliadis, A.S. Sharma, T.E. Eastman, K. Papadopoulos, Low-dimensional chaos in magnetospheric activity from AE time series, Geophys. Res. Lett. 17 (11) (1990) 1841–1844.
[69] R. Ventura, C.
Pinto-Ferreira, Responding efficiently to relevant stimuli using an emotion-based agent architecture, Neurocomputing 72 (13) (2009) 2923–2930.
[70] H. Wang, K. Wang, Affective interaction based on person-independent facial expression space, Neurocomputing 71 (10) (2008) 1889–1901.
[71] H.L. Wei, D.Q. Zhu, S.A. Billings, M.A. Balikhin, Forecasting the geomagnetic activity of the Dst index using multiscale radial basis function networks, Adv. Space Res. 40 (12) (2007) 1863–1870, http://dx.doi.org/10.1016/j.asr.2007.02.080.
[72] M. Wiltberger, R.S. Weigel, W. Lotko, J.A. Fedder, Modeling seasonal variations of auroral particle precipitation in a global-scale magnetosphere–ionosphere simulation, J. Geophys. Res. 114 (2009) A01204, http://dx.doi.org/10.1029/2008JA013108.
[73] S.H. Yeh, C.H. Lin, P.W. Gean, Acetylation of nuclear factor-κB in rat amygdala improves long-term but not short-term retention of fear memory, Mol. Pharmacol. 65 (5) (2004) 1286–1292.
[74] Q. Zhang, M. Lee, A hierarchical positive and negative emotion understanding system based on integrated analysis of visual and brain signals, Neurocomputing 73 (16) (2010) 3264–3272.
[75] Q. Zhang, S. Jeong, M. Lee, Autonomous emotion development using incremental modified adaptive neuro-fuzzy inference system, Neurocomputing 86 (1) (2012) 33–44.

Ehsan Lotfi (Student Member, IEEE) received the B.Sc. degree in Computer Engineering (2006) from Ferdowsi University of Mashhad, Iran, and the M.Sc. in Artificial Intelligence (2009) from Azad University, Mashhad Branch, Iran. He is currently a doctoral student of Prof. Akbarzadeh in Artificial Intelligence at the Science and Research Campus, Azad University, Tehran, Iran. He is a member of the Young Researchers Club at Azad University, Mashhad, Iran. His research interests include cognitive sciences, computational intelligence, soft computing and their applications.

Mohammad-R. Akbarzadeh-T. (Senior Member, IEEE) received his Ph.D.
in Evolutionary Optimization and Fuzzy Control of Complex Systems from the Department of Electrical and Computer Engineering at the University of New Mexico in 1998. He currently holds a dual appointment as professor in the Departments of Electrical Engineering and Computer Engineering at Ferdowsi University of Mashhad. In 2006–2007, he completed a one-year visiting scholar position at the Berkeley Initiative on Soft Computing (BISC), UC Berkeley. From 1996 to 2002, he was affiliated with the NASA Center for Autonomous Control Engineering at the University of New Mexico (UNM). In 2011, he chaired the first National Workshop on Soft Computing and Intelligent Systems in Mashhad. In 2007, he served as the technical chair for the First Joint Congress on Fuzzy and Intelligent Systems, held in Mashhad, Iran. In 2003, he chaired the Fifth Conference on Intelligent Systems, and he also co-chaired two mini-symposia, on "Satisficing Multi-agent and Cyber-learning Systems" in Spain and "Intelligent and Biomedical Systems" in Iran, during 2004 and 2005 respectively. Dr. Akbarzadeh is the founding president of the Intelligent Systems Scientific Society of Iran, the founding councilor representing the Iranian Coalition on Soft Computing in IFSA, and a council member of the Iranian Fuzzy Systems Society. He is also a life member of Eta Kappa Nu (the Electrical Engineering Honor Society), Kappa Mu Epsilon (the Mathematics Honor Society), and the Golden Key National Honor Society. From 2000 to 2008, he served as the faculty advisor for the IEEE student branch at Ferdowsi University of Mashhad. He has also served on the technical committees of several IEEE conferences and congresses, including IEEE-SMC, IEEE-WCCI, the Genetic and Evolutionary Computation Conference (GECCO), and the American Control Conference (ACC).
He has received several awards, including the IDB Excellent Leadership Award in 2010, the IDB Excellent Performance Award in 2009, the Outstanding Faculty Award in 2008 and 2002, the IDB Merit Scholarship for High Technology in 2006, the Outstanding Faculty Award in Support of Student Scientific Activities in 2004, the Outstanding Graduate Student Award in 1998, and a Service Award from the Mathematics Honor Society in 1989. His research interests are in the areas of evolutionary algorithms, fuzzy logic and control, soft computing, multi-agent systems, complex systems, robotics, and biomedical engineering systems. He has published over 250 peer-reviewed articles in these and related research fields.