The field of identification is rather complex. This complexity is due to the variety of applications, goals, conditions, etc. Exact specification of the final aim to be attained, analysis of the available precision, etc. are some of the basic requirements that guide the work. In this paper we use a CMAC lookup table for the identification of both linear and non-linear systems. The paper deals with the basic CMAC architecture and its capability, and shows why a CMAC net can be used for system identification. A CMAC network has hundreds of thousands of adjustable weights that can be trained to approximate non-linearities which are not explicitly written out or even known. Its associative memory has built-in generalization. We show that, due to its extremely fast mapping and training operations, it is suitable for real-time applications.
activated (na = 5 in the example). The layer 1 neurons
effectively quantize the inputs.
Layer 2 contains nv association neurons aij which are connected to one neuron from each layer 1 input array (z1i, z2j). Each layer 2 neuron outputs 1.0 when all its inputs are nonzero, otherwise it outputs zero; these neurons compute the logical AND of their inputs. They are arranged so that exactly na are activated by any input (5 in the example).
Layer 3 contains the nx output neurons, each of which computes a weighted sum of all layer 2 outputs, i.e.:
xi = Σjk wijk ajk
The parameters wijk are the weights which
parameterize the CMAC mapping (wijk connects ajk
to output i). There are nx weights for every layer 2
association neuron, which makes nv nx weights in total.
Only a fraction of all the possible association neurons
are used. They are distributed in a pattern which
conserves weight parameters without degrading the
local generalization properties too much (this will be
explained later). Each layer 2 neuron has a receptive field that is na × na units in size, i.e. this is the size of the input space region that activates the neuron.
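To make the layer 3 computation concrete, the following minimal Python sketch (all names hypothetical) exploits the fact that only the na active association neurons contribute to the weighted sum, since every other ajk is zero:

# Toy sketch of the layer 3 output: the sum formally runs over all nv
# association neurons, but only the na active ones (a_jk = 1) contribute.
def layer3_output(i, active, w):
    # i: output index; active: (j, k) pairs of the na neurons that fired
    # (the logical AND of their inputs); w: dict mapping (i, j, k) -> weight.
    return sum(w[(i, j, k)] for (j, k) in active)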
The CMAC was intended by Albus to be a simple model
of the cerebellum. Its three layers correspond to sensory
feature-detecting neurons, granule cells and Purkinje
cells respectively, the last two cell types being
dominant in the cortex of the cerebellum. This
similarity is rather superficial (as many biological
properties of the cerebellum are not modeled) therefore
it is not the main reason for using the CMAC in a
biologically based control approach. The CMAC's implementation speed is a far more useful property.
3. THE CMAC AS A LOOKUP TABLE
There is an alternative table-based form of the CMAC which is exactly equivalent to the above neural network version. The inputs y1 and y2 are first scaled and quantized to integers q1 and q2. In this example:
q1 = ⌊15 y1⌋ … (1.1)
q2 = ⌊15 y2⌋ … (1.2)
as there are 15 input quantization steps between 0
and 1. The indexes (q1, q2) are used to look up
weights in na two-dimensional lookup tables (since
na = 5 is the number of association neurons activated
for any input). Each lookup table is called an
"association unit" (AU); they may be labeled AU1...AU5. The AU tables store one weight value in each
cell. Each AU has cells which are na times larger than
the input quantization cell size, and are also displaced
along each axis by some constant. The displacement for AUi along input yj is dij. If pij is the table index for AUi along axis yj then, in the example:
pij = ⌊(qj + dij) / na⌋ = ⌊(qj + dij) / 5⌋ … (1.3)
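As a worked check of equation (1.3), a short Python snippet (hypothetical helper name) computing the table index for the example, where floor division implements the floor brackets:

# Worked check of equation (1.3) with na = 5: table index for AU i along
# axis y_j, from quantized input q_j and displacement d_ij.
def au_index(qj, dij, na=5):
    return (qj + dij) // na

assert au_index(7, 3) == 2   # floor((7 + 3) / 5) = 2
assert au_index(7, 0) == 1   # floor((7 + 0) / 5) = 1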
The CMAC mapping algorithm (based on the table form)
follows in the next section. Software CMAC
implementations never use the neural network form.
The CMAC HASH function used by the CMAC
QUANTIZE AND ASSOCIATE procedure takes the AU
number and the table indexes and generates an index
into a weight array.
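The paper does not spell out the hash itself; the following is a minimal Python sketch of how such a function might fold the AU number and the per-axis table indexes into one index modulo the physical weight array size nw (the multiplier is an arbitrary mixing constant, an assumption rather than the paper's choice):

# Hypothetical CMAC HASH-style function: combine the AU number and the
# per-axis table indexes p into a single index into a weight array of size nw.
def cmac_hash(au, p, nw):
    h = au
    for pj in p:                        # one table index per input axis
        h = (h * 2654435761 + pj) % nw  # multiplicative mixing
    return h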
The CMAC is faster than an "equivalent" MLP
network because only na neurons (i.e. na table
entries) need to be considered to compute each output,
whereas the MLP must perform calculations for all of
its neurons.
4. THE CMAC ALGORITHM
Parameters
ny    Number of inputs (integer ≥ 1).
nx    Number of outputs (integer ≥ 1).
na    Number of association units (integer ≥ 1).
dij    The displacement for AU i along input yj (integer, 0 ≤ dij < na).
mini / maxi    Input ranges; the minimum and maximum values for yi (real).
resi    Input resolution; the number of quantization steps for input yi (integer > 1).
nw    The total number of physical CMAC weights per output.
Internal state variables
µi    An index into the weight tables for association unit i (i = 1 ... na).
Wj[i]    Weight i in the weight table for output xj, i = 0 ... (nw − 1), j = 1 ... nx.
∆    The weight increment used during training (real).
Quantization
Inputs: y1 ... yny
Outputs: q1 ... qny
for i ← 1 ... ny    (for all inputs)
    if yi < mini then yi ← mini    (limit yi)
    if yi > maxi then yi ← maxi    (limit yi)
    qi ← ⌊resi (yi − mini) / (maxi − mini)⌋ + 1    (quantize input)
    if qi ≥ resi then qi ← resi
Output Calculation
for i ← 1 ... nx    (for all outputs)
    xi ← 0
    for j ← 1 ... na    (for all AUs)
        xi ← xi + Wi[µj]    (add table entry to xi)
Training
Inputs: t1 ... tnx, α (scalar)
for i ← 1 ... nx    (for all outputs)
    ∆ ← α (ti − xi) / na    (compute the weight increment)
    for j ← 1 ... na    (for all AUs)
        Wi[µj] ← Wi[µj] + ∆    (add increment to table entry)
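Putting these pieces together, here is a minimal self-contained Python sketch of the table-form algorithm above. The class layout, the random choice of displacements and the mixing hash (as in the cmac_hash sketch earlier) are illustrative assumptions, not the authors' MATLAB implementation:

import random

class CMAC:
    # Minimal table-form CMAC sketch: quantize, associate (hash), sum, train.
    def __init__(self, ny, nx, na, res, mins, maxs, nw, seed=0):
        self.ny, self.nx, self.na, self.nw = ny, nx, na, nw
        self.res, self.mins, self.maxs = res, mins, maxs
        rng = random.Random(seed)
        # Displacements d_ij for AU i along input y_j (integer, 0 <= d < na).
        self.d = [[rng.randrange(na) for _ in range(ny)] for _ in range(na)]
        # One weight table per output, nw physical weights each.
        self.W = [[0.0] * nw for _ in range(nx)]
        self.mu = [0] * na               # indexes of the na referenced weights

    def _quantize(self, y):
        q = []
        for i in range(self.ny):
            yi = min(max(y[i], self.mins[i]), self.maxs[i])      # limit y_i
            qi = int(self.res[i] * (yi - self.mins[i])
                     / (self.maxs[i] - self.mins[i])) + 1        # quantize
            q.append(min(qi, self.res[i]))
        return q

    def _associate(self, q):
        # p_ij = floor((q_j + d_ij) / na); hash the indexes per AU.
        for i in range(self.na):
            h = i
            for j in range(self.ny):
                p = (q[j] + self.d[i][j]) // self.na
                h = (h * 2654435761 + p) % self.nw
            self.mu[i] = h

    def output(self, y):
        self._associate(self._quantize(y))
        # x_i is the sum of the na referenced entries of table W_i.
        return [sum(self.W[i][m] for m in self.mu) for i in range(self.nx)]

    def target_train(self, y, t, alpha):
        x = self.output(y)
        for i in range(self.nx):
            inc = alpha * (t[i] - x[i]) / self.na   # per-weight increment
            for m in self.mu:
                self.W[i][m] += inc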
Training Procedure: The training process takes a set
of desired input to output mappings (training points)
and adjusts the weights so that the global mapping fits
the set. It will be useful to distinguish between two
different CMAC training modes:
Target Training: First an input is presented to the
CMAC and the output is computed. Then each of the
na referenced weights (one per lookup table) is
increased so that the outputs xi come closer to the
desired target values ti, i.e.:
wijk ← wijk + α (ti − xi) / na … (1.5)
where α is the learning rate constant (0 ≤ α ≤ 1). If α = 0 then the outputs will not change. If α = 1 then the outputs will be set equal to the targets. This is implemented in the CMAC TARGET-TRAIN procedure.
Error Training: First an input is presented to the
CMAC and the output is computed. Then each of the
na referenced weights is incremented so that the output
vector [x1 . . . xnx ] increases in the direction of the error
vector [e1 ... enx], i.e.:
wijk ← wijk + α ei / na … (1.6)
This results in a trivial change to the CMAC TARGET-TRAIN procedure.
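To illustrate how trivial that change is, an error-train variant for the sketch class above (a hypothetical helper reusing its internals):

# Error-training variant of the sketch CMAC: move the outputs in the
# direction of a supplied error vector e instead of toward targets t.
def error_train(net, y, e, alpha):
    net.output(y)                       # sets net.mu for this input
    for i in range(net.nx):
        inc = alpha * e[i] / net.na     # per-weight increment along e_i
        for m in net.mu:
            net.W[i][m] += inc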
Target training causes the CMAC output to seek a given target; error training causes it to grow in a given direction. The input/desired-output pairs (training
points) are normally presented to the CMAC in one of
two ways:
Random Order: If the CMAC is to be trained to
represent some function that is known beforehand, then
a selection of training points from the function can be presented in random order to minimize learning interference (a usage sketch follows below).
Trajectory Order: If the CMAC is being used in an
online controller then the training point inputs will
probably vary gradually, as they are tied to the system
sensors. In this case each training point will be close to
the previous one in the input space.
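As a usage illustration of random-order presentation, assuming the sketch CMAC class from the algorithm section and an arbitrary known target function (both assumptions for illustration):

import math, random

# Train the sketch CMAC on f(y) = sin(pi * y); shuffling the training
# points each pass minimizes learning interference between nearby inputs.
net = CMAC(ny=1, nx=1, na=5, res=[64], mins=[0.0], maxs=[1.0], nw=2048)
points = [([k / 100.0], [math.sin(math.pi * k / 100.0)]) for k in range(101)]
for epoch in range(20):
    random.shuffle(points)              # random-order presentation
    for y, t in points:
        net.target_train(y, t, alpha=0.3)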
Comparison with the MLP: To represent a given set of
training data the CMAC normally requires more
parameters than an MLP. The CMAC is often over-
parameterized for the training data, so the MLP has a
smoother representation which ignores noise and
corresponds better to the data's underlying structure. The
number of CMAC weights required to represent a given
training set is proportional to the volume spanned by that
data in the input space. The CMAC can be more easily
trained on-line than the MLP, for two reasons. First, the
training iterations are fast. Second, the rate of
convergence to the training data can be made as high as required, up to the instant convergence of α = 1 (although such high learning rates are never used because of training interference and noise problems). The MLP always requires many iterations to converge, regardless of its learning rate.
5. EXPERIMENTAL RESULTS
The CMAC network programs were developed in MATLAB. These programs were employed to test a single pendulum system. This system has a driving force and a viscous damping force, and is defined by the following equation:
θ'' + 2ζθ' + sin θ = f(t)
where
θ(t) is the angular displacement of the pendulum,
f(t) is the driving force of the system,
ζ is the damping coefficient.
The neural network identification block takes the present value of the driving force and the present values of θ and θ' (angular displacement and angular velocity), the state variables of the system, as inputs, and gives an estimate of the present value of the damping coefficient ζ.
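A minimal sketch of how such training data could be generated by integrating the pendulum equation (forward Euler, with an assumed force profile and step size; these choices are illustrative, not taken from the paper):

import math

# Simulate theta'' + 2*zeta*theta' + sin(theta) = f(t) and record
# (f, theta, theta') -> zeta samples for the identification network.
def simulate(zeta, f, dt=0.01, steps=1000):
    theta, omega = 0.0, 0.0             # angular displacement and velocity
    samples = []
    for k in range(steps):
        ft = f(k * dt)
        acc = ft - 2.0 * zeta * omega - math.sin(theta)   # theta''
        samples.append(((ft, theta, omega), zeta))
        omega += acc * dt               # forward Euler step
        theta += omega * dt
    return samples

data = simulate(zeta=0.1, f=lambda t: 0.5 * math.sin(t))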
The network was trained with learning rates of 0.2, 0.3 and 0.4, and was tested with different numbers of data points. The results for the network with a learning rate of 0.2 and 126 data points are listed in Table 1.1.
The results for 65 data points chosen randomly from the above data, with a learning rate of 0.2, are listed in Table 1.2.
Fig. 1.1: An example two-input two-output CMAC, in Neural Network Form
(ny = 2, na = 5, nv = 72, nx = 2)
Fig. 1.2: Variance Plot of Omega
Fig. 1.3: Variance Plot of Actuation Signal
Fig. 1.4: Time Variance Plot of Output Variable
6. CONCLUSION
The implementation of the CMAC in system identification has been compared with conventional methods and with other neural network implementations. The self-tuning regulator method calls for large system inputs and yet cannot track the system when the plant is non-linear. The MLP/BPN approaches are capable of tracking non-linear systems too, but take a long time to learn the system. The mapping and training operations of the CMAC are extremely fast, since estimates are generated virtually from a table lookup procedure. The CMAC performs well with both linear and non-linear systems, has been found to be easier to implement, and seems to be the most suitable for real-time applications.
7. FUTURE SCOPE
Hardware Implementation: Hardware implementation of the CMAC requires a large amount of memory. Once this is achieved, the CMAC can be used in missile delivery/defence systems, giving a clear and efficient edge over rivals because of its extremely fast mapping.
The CMAC could also be investigated for efficient use in higher-order and unstable systems.
8. REFERENCES
[1] L. Ljung, System Identification - Theory for the User, Prentice-Hall, NJ, 2nd edition, 1999.
[2] J. Schoukens and R. Pintelon, System Identification. A
Frequency Domain Approach, IEEE Press, New York,
2001.
[3] T. Söderström and P. Stoica, System Identification, Prentice Hall, Englewood Cliffs, NJ, 1989.
[4] P. Eykhoff, System Identification, Parameter and State
Estimation, Wiley, New York, 1974.
[5] A. P. Sage and J. L. Melsa, Estimation Theory with
Application to Communications and Control, McGraw-
Hill, New York, 1971.
[6] H. L. Van Trees, Detection Estimation and Modulation
Theory, Part I. Wiley, New York, 1968.
[7] G. C. Goodwin and R. L. Payne, Dynamic System
Identification, Academic Press, New York, 1977.
[8] K. Hornik, M. Stinchcombe and H. White, Multilayer Feed-forward Networks are Universal Approximators, Neural Networks, Vol. 2, pp. 359-366, 1989.
[9] G. Cybenko, Approximation by Superposition of Sigmoidal Functions, Mathematics of Control, Signals and Systems, Vol. 2, pp. 303-314, 1989.
[10] K. I. Funahashi, On the Approximate Realization of Continuous Mappings by Neural Networks, Neural Networks, Vol. 2, No. 3, pp. 183-192, 1989.
[11] M. Leshno, V. Y. Lin, A. Pinkus and S. Schocken, Multilayer Feed-forward Networks With a Nonpolynomial Activation Function Can Approximate Any Function, Neural Networks, Vol. 6, pp. 861-867, 1993.
[12] J. S. Albus, A New Approach to Manipulator Control: The Cerebellar Model Articulation Controller (CMAC), Transactions of the ASME, pp. 220-227, Sep. 1975.
[13] Y. H. Pao, Adaptive Pattern Recognition and Neural
Networks, Addison-Wesley, Reading, Mass., 1989, pp.
197-222.
[14] D. F. Specht, Polynomial Neural Networks, Neural Networks, Vol. 3, No. 1, pp. 109-118, 1990.
[15] J. Park and I. W. Sandberg, Approximation and Radial-Basis-Function Networks, Neural Computation, Vol. 5, No. 2, 1993.