Investigate the Stability of Homeostatic Plasticity Controller
Information Technology Course Module
Software Engineering
by Damir Dobric / Andreas Pech
Frankfurt University of Applied Sciences 2019
Joseph Itopa Abubakar (talktoitopa@gmail.com)
Paween Pongsomboon (paween.pongsomboon@gmail.com)
Abstract
The Homeostatic Plasticity Controller (HPC) is an algorithm inspired by the homeostatic plasticity mechanism in biological neural networks. It controls the boosting mechanism in the Hierarchical Temporal Memory (HTM) Spatial Pooler (SP) learning algorithm. SP learning generates a Sparse Distributed Representation (SDR) of the inputs, and these SDRs are expected to remain unchanged in the long run. This experiment investigated the effect of the boosting factor on the minimum number of unchanged-SDR SP-learning iterations required by the HPC.
Keywords—Homeostatic Plasticity Controller,
Spatial Pooler Learning, Boosting, Sparse
Distributed Representation, Hierarchical Temporal
Memory
I. INTRODUCTION
The idea of the Hierarchical Temporal Memory Cortical Learning Algorithm (HTM CLA), a learning algorithm, is inspired by the temporal lobe of the brain. The neocortex, a part of the brain, aids the integration of temporal and spatial patterns, which helps in learning to identify and recall temporal sequences (Hawkins, Ahmad, and Cui, 2017). The neurons are structured in layers of column-like cells built from a multitude of neurons, such that the unit areas are made up of interconnected networks among the cells. Regions, columns, and mini-columns are arranged in a hierarchical order (Mountcastle, 1997), which in larger and more dynamic networks enables higher cognitive mechanisms such as invariant pattern and sequence recognition. In general, the HTM CLA is made up of two main components: the Spatial Pooler and the Temporal Memory.
Both the Spatial Pooler and the Temporal Memory are algorithms, and the Spatial Pooler operates on the mini-columns. The Spatial Pooler controls the learning of spatial patterns by encoding sensory input patterns in a sparse format referred to as a Sparse Distributed Representation (SDR) (Cui, Ahmad, and Hawkins, 2017).
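As an illustration only (not NeocortexAPI code), an SDR can be viewed as a small set of active mini-column indices out of a large column space; the indices below are hypothetical:

```python
# Illustrative sketch: an SDR as a sparse set of active column indices.
num_columns = 2048                          # matches COLUMNS in the experiments
active_columns = {12, 87, 301, 777, 1504}   # hypothetical "on" bits
sparsity = len(active_columns) / num_columns
print(f"sparsity = {sparsity:.2%}")         # only a small fraction is active
```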
The TM is in charge of memorizing sequences of SDRs.
Earlier experiments [2] showed that the original Spatial Pooler is unstable: during the learning process, previously learned patterns are forgotten and then learned all over again. The findings reveal that the Spatial Pooler alternates between being stable and being unstable. Furthermore, the studies show that the instabilities relate to individual patterns rather than to the whole set of patterns. This study investigates the behavior of the HPC.
II. METHODS
The HPC algorithm developed by Dobric et al. [2] introduced a method to disable the boosting, which is the cause of SDRs changing in the long run. The experiment is designed to show the effect of two parameters that are assumed to affect SDR changes in the HTM SP learning algorithm. In this experiment, the source code [4] of the NeocortexAPI Spatial Pooler was modified, for the sake of simplicity, to accept many parameter combinations, while its mechanism remains unchanged. The other parameters are fixed to the values shown in Table 1.
Table 1 Spatial Pooler fixed parameters in the experiments.

Parameter                 Value
MIN_PCT_OVERLAP_CYCLES    1
INPUT_BITS                200
COLUMNS                   2048
SYN_PERM_CONNECTED        0.1
DUTY_CYCLE_PERIOD         100
GLOBAL_INHIBITION         true
LOCAL_AREA_DENSITY        -1
ACTIVATION_THRESHOLD      10
MAX_INPUT                 100
MAX_BOOST                 varied per experiment
MIN_CYCLES                varied per experiment
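Although DUTY_CYCLE_PERIOD is fixed here, the duty cycles that drive boosting are typically maintained as moving averages over the last DUTY_CYCLE_PERIOD cycles. A minimal sketch under that standard assumption (the exact NeocortexAPI update may differ):

```python
def update_duty_cycle(duty_cycle: float, is_active: bool, period: int = 100) -> float:
    """Moving-average duty-cycle update (sketch): the current cycle contributes
    with weight 1/period, the accumulated history with weight (period - 1)/period."""
    return (duty_cycle * (period - 1) + (1.0 if is_active else 0.0)) / period
```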
While the other parameters are held constant in the experiment, MAX_BOOST and MIN_CYCLES are investigated, since both parameters are presumed to affect whether the active columns change across SP-learning iterations.
A. MAX_BOOST
The boosting algorithm gives inactive columns the opportunity to become active more often; this is known as the homeostatic plasticity mechanism. The boosting algorithm can be found in the Spatial Pooler implementation [4]. The experiment shows the effect of boosting on the outcomes. Table 2 shows the MAX_BOOST values used in this experiment.
Table 2 Max Boost value used in this experiment
MAX_BOOST 1 5 10 15 20 25 30
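The classic linear boosting rule (a sketch; the exact formula in the NeocortexAPI Spatial Pooler [4] may differ) interpolates the boost factor between 1.0 for sufficiently active columns and MAX_BOOST for completely inactive ones:

```python
def boost_factor(active_duty_cycle: float, min_duty_cycle: float,
                 max_boost: float) -> float:
    """Linear boosting rule (sketch): columns active at least as often as
    min_duty_cycle get no boost (factor 1.0); a completely inactive column
    (duty cycle 0) gets the full max_boost."""
    if min_duty_cycle == 0:
        return 1.0
    if active_duty_cycle > min_duty_cycle:
        return 1.0
    # Linear interpolation between max_boost (at duty cycle 0)
    # and 1.0 (at min_duty_cycle).
    return ((1.0 - max_boost) / min_duty_cycle) * active_duty_cycle + max_boost
```

With MAX_BOOST = 1 this rule always returns 1.0, which effectively disables boosting; this is why the MAX_BOOST = 1 runs serve as the no-boosting baseline.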
B. MIN_CYCLES
This parameter defines the minimum number of SP-learning iterations with unchanged active columns required before stability is declared. This experiment investigates the effect of the boost factor on the number of iterations required for unchanged active columns in the long run. Table 3 shows the minimum-cycle values used in this experiment.
Table 3 Minimum cycles required for unchanged SDR (MIN_CYCLES) used in this experiment
MIN_CYCLES 5 10 15 20 25 30
The experiment source code and results can be found at [1].
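The stability criterion can be sketched as follows (a hypothetical helper, not the code from [1]): an SDR is recorded per learning cycle, and stability is declared once the most recent MIN_CYCLES recorded SDRs are identical:

```python
def is_stable(sdr_history: list, min_cycles: int) -> bool:
    """Return True once the most recent `min_cycles` SDRs are identical.
    sdr_history: list of SDRs, each given as a set of active column indices."""
    if len(sdr_history) < min_cycles:
        return False
    last = sdr_history[-1]
    return all(sdr == last for sdr in sdr_history[-min_cycles:])
```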
III. RESULTS
The experiment was conducted with NeocortexAPI version 1.1.0 [6] and 100 inputs. The overall stability, i.e., the SDR remaining unchanged over more than 1000 iterations, is shown in Table 4. The symbol 〇 indicates that no change between active columns was found after the stable state had been reached, while the symbol × indicates that a change between active columns was found, hence an unstable state.
Table 4 Stable and Unstable states found for combinations of the MAX_BOOST and MIN_CYCLES parameters

                MAX_BOOST
MIN_CYCLES   1    5    10   15   20   25   30
5            〇
10           〇   〇   〇
15           〇   〇
20           〇   〇   〇   〇   〇
25           〇   〇
30           〇   〇

〇: Stable   ×: Unstable
The results of the experiments can also be classified into the following categories.
A. Stable
The active columns remain unchanged after the SP-learning algorithm has reached the stable state.
B. Stable at first, then unstable
In the long run, the SP-learning algorithm changes from stable to unstable: there is a change in active columns between iterations. Runs that toggle between the stable and unstable states also fall into this category.
C. Never stable
The number of iterations with unchanged active columns never reaches the minimum-cycle requirement for stability.
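These three categories can be sketched as a simple classification over the two events recorded per run (a hypothetical helper; a cycle number of None means the event never occurred):

```python
def classify(first_stable_cycle, first_unstable_cycle):
    """Map a run's recorded events to categories A-C (None = never occurred)."""
    if first_stable_cycle is None:
        return "C: never stable"
    if first_unstable_cycle is None:
        return "A: stable"
    return "B: stable at first, then unstable"
```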
Table 5 shows the overall results together with the cycle number of each event.
Table 5 Results with the cycle number of each event

MAX_BOOST  MIN_CYCLES  First stable cycle  First unstable cycle  Stable for all iterations
1          5
           10          408                 -                     STABLE
           15          348                 -                     STABLE
           20          352                 -                     STABLE
           25
           30          451                 -                     STABLE
5          5
           10          270                 292                   -
           15          351                 379                   -
           20          394                 -                     STABLE
           25
           30
10         5
           10          424                 441                   -
           15          408                 450                   -
           20          395                 -                     STABLE
           25
           30          394                 -                     STABLE
15         5
           10          385                 -                     STABLE
           15          411                 424                   -
           20          346                 -                     STABLE
           25          357                 -                     STABLE
           30
20         5
           10          351                 358                   -
           15          409                 425                   -
           20          392                 401                   -
           25
           30
25         5
           10          329                 360                   -
           15          371                 -                     STABLE
           20          392                 401                   -
           25
           30
30         5
           10          359                 -                     STABLE
           15          356                 414                   -
           20          365                 -                     STABLE
           25
           30

-: no unstable state found
Some of the results are plotted to observe the changes between active-column values in more detail. Figure 1 shows a stable result of the active columns (MAX_BOOST = 5 and MIN_CYCLES = 20); the active columns can be seen to remain unchanged after around cycle 400.
Figure 1 Stable results
Figure 2 shows the state of the active columns changing from stable to unstable at cycle 424 (MAX_BOOST = 15 and MIN_CYCLES = 15).
Figure 2 Unstable results (left: overall, right: zoomed between cycles 390 and 470)
The zoomed image in Figure 2 (right) shows the slight change of active columns between cycles 390 and 470 that causes the instability.
In addition, experiments with different numbers of inputs (MAX_INPUT) and different numbers of SP-learning iterations have been conducted as well. So far it can be observed that the greater the number of inputs, the more easily the SP-learning algorithm becomes unstable. Table 6 shows the results of the SP-learning algorithm over 10000 learning iterations, with the other parameters as given in the Methods section.
Table 6 Experiment with 10000 learning iterations

MAX_BOOST  MIN_CYCLES  First stable found  Unstable found  Stable overall
1          15          〇                  ×               STABLE
5          15          〇                  ×               STABLE
10         15          〇                  ×               STABLE
15         15          〇                  ×               STABLE
20         15          〇                  〇              ×
5          5           〇                  〇              ×
5          15          〇                  ×               STABLE
5          25          〇                  ×               STABLE
5          35          〇                  〇              ×
5          45          ×                   ×               ×

〇: Stable   ×: Unstable
IV. CONCLUSION
The boost factor (MAX_BOOST) clearly has an effect on the change of active columns in the Hierarchical Temporal Memory Spatial Pooler learning algorithm. Some experiment runs show active columns that remain unchanged over the full 1000 iterations and are therefore stable. Increasing the max boost factor requires more iterations before the active columns become unchanged. The stability of the HPC needs further investigation, since two more parameters are involved in the boosting algorithm, namely MIN_PCT_OVERLAP_CYCLES and MIN_ACTIVE_DUTY_CYCLES in the Spatial Pooler implementation [4]. So far, the experimental results have shown changes in some cases, so more experiments need to be done before a solution is found.
REFERENCES
[1] Source code and results of the experiment, branch:
https://github.com/UniversityOfAppliedSciencesFrankf
urt/se-cloud-2020-2021/tree/ML20/21-5.4.-Investigate-
stability-of-Homeostatic-Plasticity-Controller-Elite-
group/Source/MyProject
[2] Dobric, Pech, Ghita, Wennekers. Improved Spatial
Pooler with Homeostatic Plasticity Controller:
https://github.com/ddobric/neocortexapi/blob/master/Ne
oCortexApi/Documentation/experiments.md#improved-
spatial-pooler-with-homeostatic-plasticity-controller-
best-industrial-paper-award