VSRD International Journal of Computer Science &Information Technology, Vol. IV Issue X October 2014 / 173
e-ISSN : 2231-2471, p-ISSN : 2319-2224 © VSRD International Journals : www.vsrdjournals.com
RESEARCH PAPER
SYSTEM IDENTIFICATION
USING CEREBELLAR MODEL ARITHMETIC COMPUTER
1Chandradeo Prasad* and 2Tarun Kumar Gupta
1Research Scholar, Department of Computer Science & Engineering,
UP Technical University, Lucknow, Uttar Pradesh, INDIA.
2Head of Department, Department of Computer Science & Engineering,
RGEC, Meerut, Uttar Pradesh, INDIA.
*Corresponding Author's Email ID: cpsindri@gmail.com
ABSTRACT
The field of system identification is rather complex, owing to the variety of applications, goals and operating conditions involved. An exact
specification of the aim to be attained and an analysis of the available precision are among the basic requirements that guide the work. In
this paper we use a CMAC lookup table for the identification of both linear and non-linear systems. The paper describes the basic CMAC
architecture and its capabilities, and explains why a CMAC network is well suited to system identification. A CMAC network can have
hundreds of thousands of adjustable weights that are trained to approximate non-linearities which are not explicitly written out or
even known, and its associative memory provides built-in generalization. We show that, owing to its extremely fast mapping and training
operations, the CMAC is suitable for real-time applications.
Keywords: CMAC: Cerebellar Model Arithmetic Computer, NN: Neural Networks, MLP: Multilayer Perceptron, BPN: Back
Propagation Network.
1. INTRODUCTION
A system is a defined part of the real world; its interactions with
the environment are described by input signals, output
signals and disturbances. A dynamical system is a system
with memory, i.e. the input value at time t
influences the output signal in the future.
System identification is defined as the determination, on
the basis of input and output data, of a system within a
specified class of systems to which the system under test is
equivalent.
With very little prior knowledge about the system beyond its
input and output values over a certain period, system identification
determines the following:
• Structure, expressed in terms of mathematical identities, block diagrams, flow diagrams or graphs, and connection matrices.
• Parameter values.
• Values of dependent variables (state) at a certain instant or as a function of time.
If a certain amount of a priori knowledge is available
about the system, the identification problem
reduces to finding numerical values of the
parameters; this is termed parameter estimation.
Parameter estimation is the experimental
determination of the values of the parameters that govern
the dynamic and non-linear behaviour, assuming that the
structure of the process model is known.
Fig. 1.1: Control Block Diagram
Example: Fig. 1.1 illustrates the estimation of the damping
coefficient (ζ) and natural frequency (ωn) in a second-order
process.
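As a brief illustration of parameter estimation (not part of the original paper), the following Python sketch fits ζ and ωn of a second-order process x′′ + 2ζωn x′ + ωn² x = u(t) by ordinary least squares, assuming noise-free samples of the states and the input; all names and data are hypothetical.

```python
# Hypothetical sketch: estimate zeta and omega_n of a second-order process
#   x'' + 2*zeta*omega_n*x' + omega_n**2 * x = u(t)
# by ordinary least squares, assuming the states x, x', x'' and the input u
# have been sampled without noise.
import numpy as np

zeta_true, omega_true = 0.3, 2.0

# Fake sampled trajectory of the true system (forward Euler, small step).
dt, n = 0.001, 5000
x, xd = 1.0, 0.0
X, XD, XDD, U = [], [], [], []
for k in range(n):
    u = np.sin(0.5 * k * dt)                     # known driving input
    xdd = u - 2 * zeta_true * omega_true * xd - omega_true**2 * x
    X.append(x); XD.append(xd); XDD.append(xdd); U.append(u)
    x, xd = x + dt * xd, xd + dt * xdd

# Regression  u - x'' = (2*zeta*omega_n) x' + (omega_n^2) x  in least squares.
A = np.column_stack([XD, X])
b = np.asarray(U) - np.asarray(XDD)
theta, *_ = np.linalg.lstsq(A, b, rcond=None)
omega_est = np.sqrt(theta[1])
zeta_est = theta[0] / (2 * omega_est)
print(f"omega_n ~ {omega_est:.3f}, zeta ~ {zeta_est:.3f}")  # -> 2.000, 0.300
```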
2. THE CMAC AS A NEURAL NETWORK
The basic operation of a two-input two-output CMAC
network is illustrated in figure 1.1a. It has three layers,
labeled L1, L2, L3 in the figure. The inputs are the
values y1 and y2. Layer 1 contains an array of "feature
detecting" neurons zij for each input yi. Each of these
neurons outputs 1.0 for inputs in a limited range and zero
otherwise (figure 1.1b). For any input yi, a fixed
number of neurons (na) in each layer 1 array will be
activated (na = 5 in the example). The layer 1 neurons
effectively quantize the inputs.
Layer 2 contains nv association neurons aij which
are connected to one neuron from each layer 1 input
array (z1i, z2j ). Each layer 2 neuron outputs 1.0 when
all its inputs are nonzero, and zero otherwise;
these neurons compute the logical AND of their inputs.
They are arranged so that exactly na are activated by any
input (5 in the example).
Layer 3 contains the nx output neurons, each of
which computes a weighted sum of all layer 2
outputs, i.e.:
xi = Σj,k wijk ajk
The parameters wijk are the weights which
parameterize the CMAC mapping (wijk connects ajk
to output i). There are nx weights for every layer 2
association neuron, which makes nv·nx weights in total.
Only a fraction of all the possible association neurons
are used. They are distributed in a pattern which
conserves weight parameters without degrading the
local generalization properties too much (this will be
explained later). Each layer 2 neuron has a receptive
field that is na × na units in size, i.e. this is the size of
the input space region that activates the neuron.
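Before turning to the table form, the following Python sketch renders the three-layer computation just described for a two-input CMAC with a single output. It is a minimal illustration, assuming inputs in [0, 1], na = 5 and Albus's diagonal displacement scheme; the paper itself gives no code.

```python
# Illustrative sketch (not from the paper) of the CMAC "neural network form".
import numpy as np

na, res = 5, 15                      # active cells per input, quantizer steps

def layer1(y):
    """Feature-detecting neurons: quantize y in [0, 1] to an integer cell."""
    return min(int(y * res), res - 1)

def forward(y1, y2, weights, displacements):
    """weights: dict mapping (AU, p1, p2) -> weight; one table per AU."""
    q1, q2 = layer1(y1), layer1(y2)
    x = 0.0
    # Layer 2: exactly na association neurons (one per AU) fire for any input;
    # each computes the AND of one cell from each layer 1 array.
    for au, (d1, d2) in enumerate(displacements):
        p1, p2 = (q1 + d1) // na, (q2 + d2) // na   # receptive field index
        # Layer 3: weighted sum over the active association neurons.
        x += weights.get((au, p1, p2), 0.0)
    return x

# Diagonal displacements (Albus's scheme): AU i shifted by i along each axis.
disp = [(i, i) for i in range(na)]
W = {}                                # starts as the zero mapping
print(forward(0.3, 0.7, W, disp))     # -> 0.0 before any training
```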
The CMAC was intended by Albus to be a simple model
of the cerebellum. Its three layers correspond to sensory
feature-detecting neurons, granule cells and Purkinje
cells respectively, the last two cell types being
dominant in the cortex of the cerebellum. This
similarity is rather superficial (as many biological
properties of the cerebellum are not modeled), so it is
not the main reason for using the CMAC in a
biologically based control approach; the CMAC's
implementation speed is a far more useful property.
3. THE CMAC AS A LOOKUP TABLE
An alternative, table-based CMAC is exactly
equivalent to the above neural network version. The
inputs y1 and y2 are first scaled and quantized to
integers q1 and q2. In this example:
q1 = ⌊15 y1⌋ … (1.1)
q2 = ⌊15 y2⌋ … (1.2)
as there are 15 input quantization steps between 0
and 1. The indexes (q1, q2) are used to look up
weights in na two-dimensional lookup tables (since
na = 5 is the number of association neurons activated
for any input). Each lookup table is called an
"association unit" (AU); they may be labeled AU1 … AU5.
The AU tables store one weight value in each
cell. Each AU has cells which are na times larger than
the input quantization cell size, and are also displaced
along each axis by some constant. The displacement
for AUi along input yj is d_i^j. If p_i^j is the table index for
AUi along axis yj then, in the example:

p_i^j = ⌊(qj + d_i^j) / na⌋ = ⌊(qj + d_i^j) / 5⌋ … (1.3)
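As a quick numeric check of Eqs. (1.1) and (1.3) (illustrative only; the diagonal displacement values are an assumption, following Albus's scheme):

```python
# Worked check of Eqs. (1.1) and (1.3); the input value is chosen arbitrarily.
import math

na, res = 5, 15
y1 = 0.5
q1 = math.floor(res * y1)                   # Eq. (1.1): floor(15 * 0.5) = 7
for au, d in enumerate([0, 1, 2, 3, 4]):    # assumed diagonal displacements
    p = (q1 + d) // na                      # Eq. (1.3): table index for this AU
    print(f"AU{au + 1}: p = {p}")           # -> 1, 1, 1, 2, 2
```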
The CMAC mapping algorithm (based on the table form)
follows in the next section. Software CMAC
implementations never use the neural network form.
The CMAC HASH function used by the CMAC
QUANTIZE AND ASSOCIATE procedure takes the AU
number and the table indexes and generates an index
into a weight array.
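The paper does not reproduce CMAC HASH itself; the following sketch shows one plausible shape for such a function, folding the AU number and table indexes into an index into a fixed-size weight array. The multiplicative constant is an arbitrary, common hashing choice, not taken from the paper.

```python
# Hypothetical sketch of a CMAC hash step: (AU number, table indexes) -> index
# into a physical weight array of size nw.
def cmac_hash(au: int, indexes: tuple[int, ...], nw: int) -> int:
    h = au
    for p in indexes:
        h = (h * 2654435761 + p) % nw   # Knuth-style multiplicative mixing
    return h

nw = 4096                               # physical weights per output
print(cmac_hash(0, (1, 2), nw), cmac_hash(1, (1, 2), nw))
```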
The CMAC is faster than an "equivalent" MLP
network because only na neurons (i.e. na table
entries) need to be considered to compute each output,
whereas the MLP must perform calculations for all of
its neurons.
4. THE CMAC ALGORITHM
Parameters
ny : number of inputs (integer ≥ 1)
nx : number of outputs (integer ≥ 1)
na : number of association units (integer ≥ 1)
d_i^j : displacement for AU i along input yj (integer, 0 ≤ d_i^j < na)
mini, maxi : input range; the minimum and maximum values for yi (real)
resi : input resolution; the number of quantization steps for input yi (integer > 1)
nw : total number of physical CMAC weights per output
Internal State Variables
µi : index into the weight tables for association unit i (i = 1 … na)
Wj[i] : weight i in the weight table for output xj (i = 0 … nw − 1, j = 1 … nx)
α : learning rate constant (scalar, 0 ≤ α ≤ 1)
Quantization
Inputs: y1 … yny
Outputs: q1 … qny
for i ← 1 … ny                              { for all inputs }
    if yi < mini then yi ← mini             { limit yi from below }
    if yi > maxi then yi ← maxi             { limit yi from above }
    qi ← ⌊resi (yi - mini)/(maxi - mini)⌋ + 1   { quantize input }
    if qi ≥ resi then qi ← resi             { clamp to the top step }
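A direct Python rendering of the quantization step above (a sketch; the pseudocode remains the authority, and the per-input parameter arrays are assumptions):

```python
# Sketch of the CMAC quantization step: map each real input y[i] onto an
# integer step q[i] in 1..res[i], clipping to the allowed input range.
import math

def quantize(y, mins, maxs, res):
    q = []
    for yi, lo, hi, r in zip(y, mins, maxs, res):
        yi = min(max(yi, lo), hi)                 # limit yi to [lo, hi]
        qi = math.floor(r * (yi - lo) / (hi - lo)) + 1
        q.append(min(qi, r))                      # clamp the top edge to res
    return q

print(quantize([0.5, 1.2], mins=[0, 0], maxs=[1, 1], res=[15, 15]))  # [8, 15]
```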
Output Calculation
for i ← 1 … nx                              { for all outputs }
    xi ← 0
    for j ← 1 … na                          { for all AUs }
        xi ← xi + Wi[µj]                    { add table entry to xi }
Training
Inputs: t1 … tnx, α (scalar)
for i ← 1 … nx                              { for all outputs }
    increment ← α (ti - xi) / na
    for j ← 1 … na                          { for all AUs }
        Wi[µj] ← Wi[µj] + increment         { move xi toward ti }
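Putting the three procedures together, the following is an end-to-end sketch of the tabular CMAC described above. It is illustrative, not the authors' MATLAB code: the hash step is replaced by a Python dict (so no collisions occur), all inputs share one assumed range, and displacements follow Albus's diagonal scheme.

```python
# End-to-end sketch of the tabular CMAC (hypothetical implementation).
import math

class CMAC:
    def __init__(self, na=5, res=15, lo=0.0, hi=1.0):
        self.na, self.res, self.lo, self.hi = na, res, lo, hi
        self.w = {}                              # (AU, p1, ..., pny) -> weight

    def _assoc(self, y):
        """Quantize the inputs and return the na active table cells."""
        q = [min(math.floor(self.res * (min(max(v, self.lo), self.hi) - self.lo)
                            / (self.hi - self.lo)), self.res - 1) for v in y]
        return [(i,) + tuple((qj + i) // self.na for qj in q)
                for i in range(self.na)]         # AU i displaced by i per axis

    def output(self, y):
        """Sum the na referenced weights (the layer 3 weighted sum)."""
        return sum(self.w.get(c, 0.0) for c in self._assoc(y))

    def target_train(self, y, t, alpha=0.2):
        """Move the output toward target t (Eq. 1.5, spread over na cells)."""
        delta = alpha * (t - self.output(y)) / self.na
        for c in self._assoc(y):
            self.w[c] = self.w.get(c, 0.0) + delta

net = CMAC()
for _ in range(50):
    net.target_train([0.3, 0.7], t=1.5)
print(round(net.output([0.3, 0.7]), 3))          # -> converges to 1.5
```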
Training Procedure: The training process takes a set
of desired input to output mappings (training points)
and adjusts the weights so that the global mapping fits
the set. It will be useful to distinguish between two
different CMAC training modes:
Target Training: First an input is presented to the
CMAC and the output is computed. Then each of the
na referenced weights (one per lookup table) is
increased so that the outputs xi come closer to the
desired target values ti, i.e.:

wijk ← wijk + α (ti - xi)/na … (1.5)

where α is the learning rate constant (0 ≤ α ≤ 1). If
α = 0 the outputs will not change; if α = 1 the
outputs will be set equal to the targets. This is
implemented in the CMAC TARGET-TRAIN
procedure.
Error Training: First an input is presented to the
CMAC and the output is computed. Then each of the
na referenced weights is incremented so that the output
vector [x1 … xnx] moves in the direction of the error
vector [e1 … enx], i.e.:

wijk ← wijk + α ei/na ... (1.6)

This results in a trivial change to the CMAC
TARGET-TRAIN procedure.
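A sketch of this variant, reusing the hypothetical CMAC class from the earlier example (the α/na scaling follows Eq. (1.6) as reconstructed above):

```python
# Sketch of the error-training variant (Eq. 1.6): instead of chasing a
# target, push the output along a supplied error e.
def error_train(net, y, e, alpha=0.2):
    delta = alpha * e / net.na
    for c in net._assoc(y):
        net.w[c] = net.w.get(c, 0.0) + delta
```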
Target training causes the CMAC output to seek a given
target; error training causes it to grow in a given
direction. The input/desired-output pairs (training
points) are normally presented to the CMAC in one of
two ways:
Random Order: If the CMAC is to be trained to
represent some function that is known beforehand, then
a selection of training points from the function can be
presented in random order to minimize learning
interference.
Trajectory Order: If the CMAC is being used in an
online controller then the training point inputs will
probably vary gradually, as they are tied to the system
sensors. In this case each training point will be close to
the previous one in the input space.
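The two presentation orders can be contrasted with a short sketch, again reusing the hypothetical CMAC class from above; the target function and grid are arbitrary choices for illustration:

```python
# Sketch: training a CMAC on a known function in random order, versus the
# trajectory order an online controller would produce.
import random, math as m

net = CMAC()
pts = [([y1, y2], m.sin(3 * y1) * y2)            # arbitrary known function
       for y1 in [i / 14 for i in range(15)]
       for y2 in [j / 14 for j in range(15)]]

# Random order: shuffle each epoch to minimize learning interference.
for _ in range(10):
    random.shuffle(pts)
    for y, t in pts:
        net.target_train(y, t, alpha=0.2)

# Trajectory order (as from an online controller) would instead present
# successive, nearby inputs, e.g. y = [k * 0.01, 0.5] for k = 0, 1, 2, ...
```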
Comparison with the MLP: To represent a given set of
training data the CMAC normally requires more
parameters than an MLP. The CMAC is often
over-parameterized for the training data, so the MLP has a
smoother representation which ignores noise and
corresponds better to the dataā€™s underlying structure. The
number of CMAC weights required to represent a given
training set is proportional to the volume spanned by that
data in the input space. The CMAC can be more easily
trained on-line than the MLP, for two reasons. First, the
training iterations are fast. Second, the rate of
convergence to the training data can be made as high as
required, up to the instant convergence of α = 1 (although
such high learning rates are never used because of
training interference and noise problems). The MLP
always requires many iterations to converge, regardless of
its learning rate.
5. EXPERIMENTAL RESULTS
The CMAC network programs were developed in MATLAB
and employed to test a single-pendulum system with a
driving force and a viscous damping force. The system is
defined by the following equation:

θ′′ + 2ζθ′ + sin θ = f(t)
Where
θ(t) is the angular displacement of the pendulum
f(t) is the driving force of the system
ζ is the damping coefficient
The neural network identification block takes as inputs the
present driving force value and the present values of θ and
θ′ (the angular displacement and angular velocity, i.e. the
state variables of the system), and gives an estimate of the
present value of the damping coefficient.
The network was trained with learning rates of 0.2, 0.3
and 0.4, and was also tested with different numbers of
data points. The results of the network with a learning
rate of 0.2 and 126 data points are listed in Table 1.1.
The results for 65 data points chosen randomly from the
above data, with a learning rate of 0.2, are listed in
Table 1.2.
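For readers who want to reproduce the setup, the following sketch generates training points by simulating the pendulum equation above and trains the hypothetical CMAC class from the Section 4 example at the paper's learning rate of 0.2. The fixed ζ, forcing and step size are assumptions; the paper's actual data (Table 1.1) varies ζ and switches f(t).

```python
# Sketch: simulate  theta'' + 2*zeta*theta' + sin(theta) = f(t)  to generate
# (theta, theta', f) -> zeta training points, then identify zeta with a CMAC.
import math

def simulate(zeta, f, dt=0.05, steps=126):
    """Integrate the pendulum by forward Euler from the Table 1.1 start."""
    th, thd, data = 0.0, 1.0, []
    for _ in range(steps):
        data.append((th, thd, f, zeta))      # one training point
        thdd = f - 2 * zeta * thd - math.sin(th)
        th, thd = th + dt * thd, thd + dt * thdd
    return data

net = CMAC()                                 # hypothetical class from above
for th, thd, f, zeta in simulate(zeta=0.25, f=1.5):
    net.target_train([th, thd], t=zeta, alpha=0.2)
print(round(net.output([0.0, 1.0]), 3))      # estimate of zeta at the start
```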
Fig. 1.1: An example two-input two-output CMAC, in Neural Network Form
(ny = 2, na = 5, nv = 72, nx = 2)
Table 1.1: Results of Testing with 126 Data Points (Learning Rate 0.2)
θ θ′ f(t) ζ
0 1.0000 1.5000 0
0.0500 0.9988 1.5000 0.0500
0.0998 0.9950 1.5000 0.1000
0.1494 0.9888 1.5000 0.1500
0.1987 0.9801 1.5000 0.2000
0.2474 0.9689 1.5000 0.2500
0.2955 0.9553 1.5000 0.3000
0.3429 0.9394 1.5000 0.3500
0.3894 0.9211 1.5000 0.4000
0.4350 0.9004 1.5000 0.4500
0.4794 0.8776 1.5000 0.5000
0.5227 0.8525 1.5000 0.5500
0.5646 0.8253 1.5000 0.6000
0.6052 0.7961 1.5000 0.6500
0.6442 0.7648 1.5000 0.7000
0.6816 0.7317 1.5000 0.7500
0.7174 0.6967 1.5000 0.8000
0.7513 0.6600 1.5000 0.8500
0.7833 0.6216 1.5000 0.9000
0.8134 0.5817 1.5000 0.9500
0.8415 0.5403 1.5000 1.0000
0.8674 0.4976 1.5000 1.0500
0.8912 0.4536 1.5000 1.1000
0.9128 0.4085 1.5000 1.1500
0.9320 0.3624 1.5000 1.2000
0.9490 0.3153 1.5000 1.2500
0.9636 0.2675 1.5000 1.3000
0.9757 0.2190 1.5000 1.3500
0.9854 0.1700 1.5000 1.4000
0.9927 0.1205 1.5000 1.4500
0.9975 0.0707 1.5000 1.5000
0.9998 0.0208 1.5000 1.5500
0.9996 -0.0292 1.5000 1.6000
0.9969 -0.0791 1.5000 1.6500
0.9917 -0.1288 1.5000 1.7000
0.9840 -0.1782 1.5000 1.7500
0.9738 -0.2272 1.5000 1.8000
0.9613 -0.2756 1.5000 1.8500
0.9463 -0.3233 1.5000 1.9000
0.9290 -0.3702 1.5000 1.9500
0.9093 -0.4161 1.5000 2.0000
0.9093 -0.4161 -2.0000 2.0000
0.8874 -0.4611 -2.0000 1.9000
0.8632 -0.5048 -2.0000 1.8000
0.8369 -0.5474 -2.0000 1.7000
0.8085 -0.5885 -2.0000 1.6000
0.7781 -0.6282 -2.0000 1.5000
0.7457 -0.6663 -2.0000 1.4000
0.7115 -0.7027 -2.0000 1.3000
0.6755 -0.7374 -2.0000 1.2000
0.6378 -0.7702 -2.0000 1.1000
0.5985 -0.8011 -2.0000 1.0000
0.5577 -0.8301 -2.0000 0.9000
0.5155 -0.8569 -2.0000 0.8000
Table 1.2: Test Results for Randomly Chosen Data Points (Learning Rate 0.2)
Target Values Calculated Outputs
0.0000 0.0000
0.1000 0.1000
0.2000 0.2000
0.3000 0.3000
0.4000 0.4000
0.5000 0.5000
0.6000 0.6000
0.7000 0.7000
0.8000 0.8000
0.9000 0.9000
1.0000 1.0000
1.1000 1.1000
1.2000 1.2000
1.3000 1.3000
1.4000 1.4000
1.5000 1.5000
1.6000 1.6000
1.7000 1.7000
1.8000 1.8000
1.9000 1.9000
2.0000 2.0000
2.1000 2.1000
2.2000 2.2000
2.3000 2.3000
0.4000 0.4000
1.5000 1.5000
0.6000 0.6000
1.7000 1.7000
1.8000 1.8000
0.9000 0.9000
1.0000 1.0000
0.8000 0.8000
0 0
-0.2000 -0.2000
-0.5000 -0.5000
-0.6000 -0.6000
-0.8000 -0.8000
-1.2000 -1.2000
-1.4000 -1.4000
-1.6000 -1.6000
-1.8000 -1.8000
-2.4000 -2.4000
-2.2000 -2.2000
-0.0653 -0.0653
-1.0059 -1.0059
-2.6000 -2.6000
-2.0623 -2.0623
-1.0123 -1.0123
-1.0256 -1.0256
Fig. 1.2: Variance Plot of Omega
Fig. 1.3: Variance Plot of Actuation Signal
Fig. 1.4: Time Variance Plot of Output Variable
6. CONCLUSION
The implementation of the CMAC for system identification
has been compared with a conventional method and with
other neural network implementations. The self-tuning
regulator method calls for large system inputs and yet
cannot track the system when the plant is non-linear. The
MLP and BPN approaches are capable of tracking non-linear
systems too, but take a long time to learn the system. The
mapping and training operations of the CMAC are extremely
fast, since estimates are generated by what is essentially a
table lookup procedure. The CMAC performs well with both
linear and non-linear systems, has been found easier to
implement, and seems most suitable for real-time
applications.
7. FUTURE SCOPE
Hardware Implementation: A hardware implementation
of the CMAC requires a large amount of memory. Once this is
achieved, the CMAC can be used in missile delivery/defence
systems, where its extremely fast mapping gives a clear and
efficient edge.
The CMAC can also be explored for efficient use in
higher-order and unstable systems.
8. REFERENCES
[1] L. Ljung, System Identification: Theory for the User, 2nd ed., Prentice-Hall, NJ, 1999.
[2] J. Schoukens and R. Pintelon, System Identification: A Frequency Domain Approach, IEEE Press, New York, 2001.
[3] T. Söderström and P. Stoica, System Identification, Prentice Hall, Englewood Cliffs, NJ, 1989.
[4] P. Eykhoff, System Identification: Parameter and State Estimation, Wiley, New York, 1974.
[5] A. P. Sage and J. L. Melsa, Estimation Theory with Application to Communications and Control, McGraw-Hill, New York, 1971.
[6] H. L. Van Trees, Detection, Estimation and Modulation Theory, Part I, Wiley, New York, 1968.
[7] G. C. Goodwin and R. L. Payne, Dynamic System Identification, Academic Press, New York, 1977.
[8] K. Hornik, M. Stinchcombe and H. White, "Multilayer Feed-forward Networks are Universal Approximators", Neural Networks, Vol. 2, pp. 359-366, 1989.
[9] G. Cybenko, "Approximation by Superposition of Sigmoidal Functions", Mathematics of Control, Signals and Systems, Vol. 2, pp. 303-314, 1989.
[10] K. I. Funahashi, "On the Approximate Realization of Continuous Mappings by Neural Networks", Neural Networks, Vol. 2, No. 3, pp. 183-192, 1989.
[11] M. Leshno, V. Y. Lin, A. Pinkus and S. Schocken, "Multilayer Feed-forward Networks With a Nonpolynomial Activation Function Can Approximate Any Function", Neural Networks, Vol. 6, pp. 861-867, 1993.
[12] J. S. Albus, "A New Approach to Manipulator Control: The Cerebellar Model Articulation Controller (CMAC)", Transactions of the ASME, Sep. 1975, pp. 220-227.
[13] Y. H. Pao, Adaptive Pattern Recognition and Neural Networks, Addison-Wesley, Reading, Mass., 1989, pp. 197-222.
[14] D. F. Specht, "Polynomial Neural Networks", Neural Networks, Vol. 3, No. 1, pp. 109-118, 1990.
[15] J. Park and I. W. Sandberg, "Approximation and Radial-Basis-Function Networks", Neural Computation, Vol. 5, No. 2.
ļ¬ļ¬ļ¬ļ€ 
ļ€ 
Chandradeo Prasad and Tarun Kumar Gupta VSRDIJCSIT, Vol. IV (X) October 2014 / 180
ļ€ 

More Related Content

What's hot

PCA (Principal component analysis) Theory and Toolkits
PCA (Principal component analysis) Theory and ToolkitsPCA (Principal component analysis) Theory and Toolkits
PCA (Principal component analysis) Theory and ToolkitsHopeBay Technologies, Inc.
Ā 
Discrete state space model 9th &10th lecture
Discrete  state space model   9th  &10th  lectureDiscrete  state space model   9th  &10th  lecture
Discrete state space model 9th &10th lectureKhalaf Gaeid Alshammery
Ā 
Ch.02 modeling in frequency domain
Ch.02 modeling in frequency domainCh.02 modeling in frequency domain
Ch.02 modeling in frequency domainNguyen_Tan_Tien
Ā 
Article 1
Article 1Article 1
Article 1badreisai
Ā 
Multiclass Logistic Regression: Derivation and Apache Spark Examples
Multiclass Logistic Regression: Derivation and Apache Spark ExamplesMulticlass Logistic Regression: Derivation and Apache Spark Examples
Multiclass Logistic Regression: Derivation and Apache Spark ExamplesMarjan Sterjev
Ā 
Modern control system
Modern control systemModern control system
Modern control systemPourya Parsa
Ā 
Parallel Algorithms: Sort & Merge, Image Processing, Fault Tolerance
Parallel Algorithms: Sort & Merge, Image Processing, Fault ToleranceParallel Algorithms: Sort & Merge, Image Processing, Fault Tolerance
Parallel Algorithms: Sort & Merge, Image Processing, Fault ToleranceUniversity of Technology - Iraq
Ā 
FEEDBACK LINEARIZATION AND BACKSTEPPING CONTROLLERS FOR COUPLED TANKS
FEEDBACK LINEARIZATION AND BACKSTEPPING CONTROLLERS FOR COUPLED TANKSFEEDBACK LINEARIZATION AND BACKSTEPPING CONTROLLERS FOR COUPLED TANKS
FEEDBACK LINEARIZATION AND BACKSTEPPING CONTROLLERS FOR COUPLED TANKSieijjournal
Ā 
Parallel sorting algorithm
Parallel sorting algorithmParallel sorting algorithm
Parallel sorting algorithmRicha Kumari
Ā 
Modeling of mechanical_systems
Modeling of mechanical_systemsModeling of mechanical_systems
Modeling of mechanical_systemsJulian De Marcos
Ā 
Backstepping Controller Synthesis for Piecewise Polynomial Systems: A Sum of ...
Backstepping Controller Synthesis for Piecewise Polynomial Systems: A Sum of ...Backstepping Controller Synthesis for Piecewise Polynomial Systems: A Sum of ...
Backstepping Controller Synthesis for Piecewise Polynomial Systems: A Sum of ...Behzad Samadi
Ā 
Discrete Nonlinear Optimal Control of S/C Formations Near The L1 and L2 poi...
  Discrete Nonlinear Optimal Control of S/C Formations Near The L1 and L2 poi...  Discrete Nonlinear Optimal Control of S/C Formations Near The L1 and L2 poi...
Discrete Nonlinear Optimal Control of S/C Formations Near The L1 and L2 poi...Belinda Marchand
Ā 
OpenSees: modeling and performing static analysis
OpenSees: modeling and  performing static analysisOpenSees: modeling and  performing static analysis
OpenSees: modeling and performing static analysisDhanaji Chavan
Ā 
CS8451 - Design and Analysis of Algorithms
CS8451 - Design and Analysis of AlgorithmsCS8451 - Design and Analysis of Algorithms
CS8451 - Design and Analysis of AlgorithmsKrishnan MuthuManickam
Ā 

What's hot (20)

PCA (Principal component analysis) Theory and Toolkits
PCA (Principal component analysis) Theory and ToolkitsPCA (Principal component analysis) Theory and Toolkits
PCA (Principal component analysis) Theory and Toolkits
Ā 
Ydstie
YdstieYdstie
Ydstie
Ā 
Chap1see4113
Chap1see4113Chap1see4113
Chap1see4113
Ā 
Discrete state space model 9th &10th lecture
Discrete  state space model   9th  &10th  lectureDiscrete  state space model   9th  &10th  lecture
Discrete state space model 9th &10th lecture
Ā 
Ch.02 modeling in frequency domain
Ch.02 modeling in frequency domainCh.02 modeling in frequency domain
Ch.02 modeling in frequency domain
Ā 
Article 1
Article 1Article 1
Article 1
Ā 
Multiclass Logistic Regression: Derivation and Apache Spark Examples
Multiclass Logistic Regression: Derivation and Apache Spark ExamplesMulticlass Logistic Regression: Derivation and Apache Spark Examples
Multiclass Logistic Regression: Derivation and Apache Spark Examples
Ā 
Modern control system
Modern control systemModern control system
Modern control system
Ā 
Parallel Algorithms: Sort & Merge, Image Processing, Fault Tolerance
Parallel Algorithms: Sort & Merge, Image Processing, Fault ToleranceParallel Algorithms: Sort & Merge, Image Processing, Fault Tolerance
Parallel Algorithms: Sort & Merge, Image Processing, Fault Tolerance
Ā 
FEEDBACK LINEARIZATION AND BACKSTEPPING CONTROLLERS FOR COUPLED TANKS
FEEDBACK LINEARIZATION AND BACKSTEPPING CONTROLLERS FOR COUPLED TANKSFEEDBACK LINEARIZATION AND BACKSTEPPING CONTROLLERS FOR COUPLED TANKS
FEEDBACK LINEARIZATION AND BACKSTEPPING CONTROLLERS FOR COUPLED TANKS
Ā 
Control systems
Control systemsControl systems
Control systems
Ā 
Parallel sorting algorithm
Parallel sorting algorithmParallel sorting algorithm
Parallel sorting algorithm
Ā 
Modeling of mechanical_systems
Modeling of mechanical_systemsModeling of mechanical_systems
Modeling of mechanical_systems
Ā 
Backstepping Controller Synthesis for Piecewise Polynomial Systems: A Sum of ...
Backstepping Controller Synthesis for Piecewise Polynomial Systems: A Sum of ...Backstepping Controller Synthesis for Piecewise Polynomial Systems: A Sum of ...
Backstepping Controller Synthesis for Piecewise Polynomial Systems: A Sum of ...
Ā 
Chapter26
Chapter26Chapter26
Chapter26
Ā 
Introduction
IntroductionIntroduction
Introduction
Ā 
Parallel algorithm in linear algebra
Parallel algorithm in linear algebraParallel algorithm in linear algebra
Parallel algorithm in linear algebra
Ā 
Discrete Nonlinear Optimal Control of S/C Formations Near The L1 and L2 poi...
  Discrete Nonlinear Optimal Control of S/C Formations Near The L1 and L2 poi...  Discrete Nonlinear Optimal Control of S/C Formations Near The L1 and L2 poi...
Discrete Nonlinear Optimal Control of S/C Formations Near The L1 and L2 poi...
Ā 
OpenSees: modeling and performing static analysis
OpenSees: modeling and  performing static analysisOpenSees: modeling and  performing static analysis
OpenSees: modeling and performing static analysis
Ā 
CS8451 - Design and Analysis of Algorithms
CS8451 - Design and Analysis of AlgorithmsCS8451 - Design and Analysis of Algorithms
CS8451 - Design and Analysis of Algorithms
Ā 

Similar to SYSTEM IDENTIFICATION USING CEREBELLAR MODEL ARITHMETIC COMPUTER

Fixed-Point Code Synthesis for Neural Networks
Fixed-Point Code Synthesis for Neural NetworksFixed-Point Code Synthesis for Neural Networks
Fixed-Point Code Synthesis for Neural NetworksIJITE
Ā 
Fixed-Point Code Synthesis for Neural Networks
Fixed-Point Code Synthesis for Neural NetworksFixed-Point Code Synthesis for Neural Networks
Fixed-Point Code Synthesis for Neural Networksgerogepatton
Ā 
Dynamic Kohonen Network for Representing Changes in Inputs
Dynamic Kohonen Network for Representing Changes in InputsDynamic Kohonen Network for Representing Changes in Inputs
Dynamic Kohonen Network for Representing Changes in InputsJean Fecteau
Ā 
Digital Signal Processing[ECEG-3171]-Ch1_L03
Digital Signal Processing[ECEG-3171]-Ch1_L03Digital Signal Processing[ECEG-3171]-Ch1_L03
Digital Signal Processing[ECEG-3171]-Ch1_L03Rediet Moges
Ā 
5 DimensionalityReduction.pdf
5 DimensionalityReduction.pdf5 DimensionalityReduction.pdf
5 DimensionalityReduction.pdfRahul926331
Ā 
4tocontest
4tocontest4tocontest
4tocontestberthin
Ā 
Cs6402 design and analysis of algorithms may june 2016 answer key
Cs6402 design and analysis of algorithms may june 2016 answer keyCs6402 design and analysis of algorithms may june 2016 answer key
Cs6402 design and analysis of algorithms may june 2016 answer keyappasami
Ā 
Maneuvering target track prediction model
Maneuvering target track prediction modelManeuvering target track prediction model
Maneuvering target track prediction modelIJCI JOURNAL
Ā 
A New Approach to Design a Reduced Order Observer
A New Approach to Design a Reduced Order ObserverA New Approach to Design a Reduced Order Observer
A New Approach to Design a Reduced Order ObserverIJERD Editor
Ā 
DSP_FOEHU - MATLAB 01 - Discrete Time Signals and Systems
DSP_FOEHU - MATLAB 01 - Discrete Time Signals and SystemsDSP_FOEHU - MATLAB 01 - Discrete Time Signals and Systems
DSP_FOEHU - MATLAB 01 - Discrete Time Signals and SystemsAmr E. Mohamed
Ā 
lecture4signals-181130200508.pptx
lecture4signals-181130200508.pptxlecture4signals-181130200508.pptx
lecture4signals-181130200508.pptxRockFellerSinghRusse
Ā 
Wiener Filter Hardware Realization
Wiener Filter Hardware RealizationWiener Filter Hardware Realization
Wiener Filter Hardware RealizationSayan Chaudhuri
Ā 
Artificial Neural Networks Lect7: Neural networks based on competition
Artificial Neural Networks Lect7: Neural networks based on competitionArtificial Neural Networks Lect7: Neural networks based on competition
Artificial Neural Networks Lect7: Neural networks based on competitionMohammed Bennamoun
Ā 
Performance Analysis of Input Vector Monitoring Concurrent Built In Self Repa...
Performance Analysis of Input Vector Monitoring Concurrent Built In Self Repa...Performance Analysis of Input Vector Monitoring Concurrent Built In Self Repa...
Performance Analysis of Input Vector Monitoring Concurrent Built In Self Repa...IJTET Journal
Ā 

Similar to SYSTEM IDENTIFICATION USING CEREBELLAR MODEL ARITHMETIC COMPUTER (20)

Fixed-Point Code Synthesis for Neural Networks
Fixed-Point Code Synthesis for Neural NetworksFixed-Point Code Synthesis for Neural Networks
Fixed-Point Code Synthesis for Neural Networks
Ā 
Fixed-Point Code Synthesis for Neural Networks
Fixed-Point Code Synthesis for Neural NetworksFixed-Point Code Synthesis for Neural Networks
Fixed-Point Code Synthesis for Neural Networks
Ā 
Dynamic Kohonen Network for Representing Changes in Inputs
Dynamic Kohonen Network for Representing Changes in InputsDynamic Kohonen Network for Representing Changes in Inputs
Dynamic Kohonen Network for Representing Changes in Inputs
Ā 
Digital Signal Processing[ECEG-3171]-Ch1_L03
Digital Signal Processing[ECEG-3171]-Ch1_L03Digital Signal Processing[ECEG-3171]-Ch1_L03
Digital Signal Processing[ECEG-3171]-Ch1_L03
Ā 
5 DimensionalityReduction.pdf
5 DimensionalityReduction.pdf5 DimensionalityReduction.pdf
5 DimensionalityReduction.pdf
Ā 
Multi Layer Network
Multi Layer NetworkMulti Layer Network
Multi Layer Network
Ā 
4tocontest
4tocontest4tocontest
4tocontest
Ā 
Lecture 4: Classification of system
Lecture 4: Classification of system Lecture 4: Classification of system
Lecture 4: Classification of system
Ā 
Seminar9
Seminar9Seminar9
Seminar9
Ā 
Cs6402 design and analysis of algorithms may june 2016 answer key
Cs6402 design and analysis of algorithms may june 2016 answer keyCs6402 design and analysis of algorithms may june 2016 answer key
Cs6402 design and analysis of algorithms may june 2016 answer key
Ā 
Maneuvering target track prediction model
Maneuvering target track prediction modelManeuvering target track prediction model
Maneuvering target track prediction model
Ā 
Brute force
Brute forceBrute force
Brute force
Ā 
A New Approach to Design a Reduced Order Observer
A New Approach to Design a Reduced Order ObserverA New Approach to Design a Reduced Order Observer
A New Approach to Design a Reduced Order Observer
Ā 
DSP_FOEHU - MATLAB 01 - Discrete Time Signals and Systems
DSP_FOEHU - MATLAB 01 - Discrete Time Signals and SystemsDSP_FOEHU - MATLAB 01 - Discrete Time Signals and Systems
DSP_FOEHU - MATLAB 01 - Discrete Time Signals and Systems
Ā 
lecture4signals-181130200508.pptx
lecture4signals-181130200508.pptxlecture4signals-181130200508.pptx
lecture4signals-181130200508.pptx
Ā 
Wiener Filter Hardware Realization
Wiener Filter Hardware RealizationWiener Filter Hardware Realization
Wiener Filter Hardware Realization
Ā 
Annotations.pdf
Annotations.pdfAnnotations.pdf
Annotations.pdf
Ā 
neural networksNnf
neural networksNnfneural networksNnf
neural networksNnf
Ā 
Artificial Neural Networks Lect7: Neural networks based on competition
Artificial Neural Networks Lect7: Neural networks based on competitionArtificial Neural Networks Lect7: Neural networks based on competition
Artificial Neural Networks Lect7: Neural networks based on competition
Ā 
Performance Analysis of Input Vector Monitoring Concurrent Built In Self Repa...
Performance Analysis of Input Vector Monitoring Concurrent Built In Self Repa...Performance Analysis of Input Vector Monitoring Concurrent Built In Self Repa...
Performance Analysis of Input Vector Monitoring Concurrent Built In Self Repa...
Ā 

Recently uploaded

Application of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptxApplication of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptx959SahilShah
Ā 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024Mark Billinghurst
Ā 
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdfCCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdfAsst.prof M.Gokilavani
Ā 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AIabhishek36461
Ā 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxDeepakSakkari2
Ā 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxPoojaBan
Ā 
HARMONY IN THE HUMAN BEING - Unit-II UHV-2
HARMONY IN THE HUMAN BEING - Unit-II UHV-2HARMONY IN THE HUMAN BEING - Unit-II UHV-2
HARMONY IN THE HUMAN BEING - Unit-II UHV-2RajaP95
Ā 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionDr.Costas Sachpazis
Ā 
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZTE
Ā 
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escortsranjana rawat
Ā 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile servicerehmti665
Ā 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
Ā 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxwendy cai
Ā 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130Suhani Kapoor
Ā 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024hassan khalil
Ā 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSCAESB
Ā 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxpurnimasatapathy1234
Ā 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escortsranjana rawat
Ā 

Recently uploaded (20)

Application of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptxApplication of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptx
Ā 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
Ā 
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdfCCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
Ā 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AI
Ā 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptx
Ā 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptx
Ā 
HARMONY IN THE HUMAN BEING - Unit-II UHV-2
HARMONY IN THE HUMAN BEING - Unit-II UHV-2HARMONY IN THE HUMAN BEING - Unit-II UHV-2
HARMONY IN THE HUMAN BEING - Unit-II UHV-2
Ā 
young call girls in Rajiv ChowkšŸ” 9953056974 šŸ” Delhi escort Service
young call girls in Rajiv ChowkšŸ” 9953056974 šŸ” Delhi escort Serviceyoung call girls in Rajiv ChowkšŸ” 9953056974 šŸ” Delhi escort Service
young call girls in Rajiv ChowkšŸ” 9953056974 šŸ” Delhi escort Service
Ā 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Ā 
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
Ā 
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
Ā 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Ā 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
Ā 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptx
Ā 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
Ā 
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCRCall Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Ā 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024
Ā 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentation
Ā 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptx
Ā 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
Ā 

SYSTEM IDENTIFICATION USING CEREBELLAR MODEL ARITHMETIC COMPUTER

  • 1. VSRD International Journal of Computer Science &Information Technology, Vol. IV Issue X October 2014 / 173 e-ISSN : 2231-2471, p-ISSN : 2319-2224 Ā© VSRD International Journals : www.vsrdjournals.com RESEARCH PAPER SYSTEM IDENTIFICATION USING CEREBELLAR MODEL ARITHMETIC COMPUTER 1Chandradeo Prasad* and 2Tarun Kumar Gupta 1Research Scholar, Department of Computer Science & Engineering, UP Technical University, Lucknow, Uttar Pradesh, INDIA. 2Head of Department, Department of Computer Science & Engineering, RGEC, Meerut, Uttar Pradesh, INDIA. *Corresponding Authorā€™s Email ID: cpsindri@gmail.com ABSTRACT The field of identification is rather complex. This complexity is due to the variety of applications, goals, conditions, etc. Exact specifications of final aim to be attained, analysis of available precision, etc. are some of the basic requirements to guide the work. In this paper we used CMAC look up table for identification of both linear and non-linear systems. It deals with the basic CMAC architectures, its capability and shows the motivations why CMAC net can be used for system identification. CMAC network has hundreds of thousands of adjustable weights that can be trained to approximate non-linearities which are not explicitly written out or even known. Its associative memory has a built in generalization. We have shown that due to its extremely fast mapping and training operations, it is suitable for real time applications. Keywords: CMAC: Cerebellar Model Arithmetic Computer, NN: Neural Networks, MLP: Multilayer Perceptron, BPN: Back Propagation Network. 1. INTRODUCTION System is a deļ¬ne part of the real world. Interactions with the environment are described by input signals, output signals and disturbances. Dynamical System is a system with a memory, i.e. the input value at time t wills inļ¬‚uences the output signal at the future. System identification is defined as the determination on the basis of input and output, of a system within a specified class of systems to which system under test is equivalent. With very mean knowledge about the system and input output values for certain period, system identification determines the followings: ļ‚· Structure, expressed in terms of mathematical identities, block diagram, flow diagrams or graphics, connection matrices. ļ‚· Parameter values. ļ‚· Values of dependent variables (state) at a certain instant or as a function of time. If a certain amount of a priori knowledge is available about any system, then the system identification problem is reduced to the finding of numerical values of the parameters. This is termed as parameter estimation. Parameter estimation is defined as experimental determination of the values of the parameters that governs the dynamic and non-linear behaviour, assuming that the structure of the process model is known. Fig. 1.1: Control Block Diagram Example: Fig. 1.1 illustrates the finding of damping coefficient (Ī¶) and natural frequency (įæ³n) in a second order process. 2. THE CMAC AS A NEURAL NETWORK The basic operation of a two-input two-output CMAC network is illustrated in figure 1.1a. It has three layers, labeled L1, L2, L3 in the figure. The inputs are the values y1 and y2. Layer 1 contains an array of ā€œfeature detectingā€ neurons zij for each input yi. Each of these outputs one for inputs in a limited range, otherwise they output zero (figure 1.1 b). For any input yi, a fixed number of neurons (na ) in each layer 1 array will be
  • 2. Chandradeo Prasad and Tarun Kumar Gupta VSRDIJCSIT, Vol. IV (X) October 2014 / 174 activated (na = 5 in the example). The layer 1 neurons effectively quantize the inputs. Layer 2 contains nv association neurons aij which are connected to one neuron from each layer 1 input array (z1i, z2j ). Each layer 2 neuron outputs 1.0 when all its inputs are nonzero, otherwise it outputs zeroā€” these neurons compute the logical AND of their inputs. They are arranged so exactly na are activated by any input (5 in the example). Layer 3 contains the nx output neurons, each of which computes a weighted sum of all layer 2 outputs, i.e.: xi = āˆ‘wijk ajk The parameters wijk are the weights which parameterize the CMAC mapping (wijk connects ajk to output i). There are nx weights for every layer 2 association neuron, which makes nv nx weights in total. Only a fraction of all the possible association neurons are used. They are distributed in a pattern which conserves weight parameters without degrading the local generalization properties too much (this will be explained later). Each layer 2 neuron has a receptive field that is na Ɨ na units in size, i.e. this is the size of the input space region that activates the neuron. The CMAC was intended by Albus to be a simple model of the cerebellum. Its three layers correspond to sensory feature-detecting neurons, granule cells and Purkinje cells respectively, the last two cell types being dominant in the cortex of the cerebellum. This similarity is rather superficial (as many biological properties of the cerebellum are not modeled) therefore it is not the main reason for using the CMAC in a biologically based control approach. The CMACā€™s implementation speed is a far more useful property. 3. THE CMAC AS A LOOKUP TABLE An alternative table based CMAC which is exactly equivalent to the above neural network version. The inputs y1 and y2 are first scaled and quantized to integers q1 and q2. In this example: q1 = 15 y1 ā€¦ (1.1) q2 = 15 y2 ā€¦ (1.2) as there are 15 inputs quantization steps between 0 and 1. The indexes (q1, q2) are used to look up weights in na two-dimensional lookup tables (since na = 5 is the number of association neurons activated for any input). Each lookup table is called an ā€œassociation unitā€ (AU)ā€”they may be labeled AU1. . .AU5. The AU tables store one weight value in each cell. Each AU has cells which are na times larger than the input quantization cell size, and are also displaced along each axis by some constant. The displacement for AUi along input yj is di . If pi is the table index for AUi along axis yj then, in the example: Pi j = [(qj + di j) / na] = [(qj +di j)/(qj + di j) / 5] ā€¦ (1.3) CMAC mapping algorithm (based on the table form) follows in the next section. Software CMAC implementations never use the neural network form. The CMAC HASH function used by the CMAC QUANTIZE AND ASSOCIATE procedure takes the AU number and the table indexes and generates an index into a weight array. The CMAC is faster than an ā€œequivalentā€ MLP network because only na neurons (i.e. na table entries) need to be considered to compute each output, whereas the MLP must perform calculations for all of its neurons. 4. THE CMAC ALGORITHM Parameters ny Number of inputs (integerā‰„ 1) nx Number of outputs (integerā‰„ 1) na Number of association units (integerā‰„ 1) di j The displacement for AU i along input yj (integer, 0 ā‰¤ di j < na) mini / Input ranges; maxi the minimum and maximum values for yi (real). 
resi Input resolution; the number of quantization steps for input yi (integer > 1). nw The total number of physical CMAC weights per output. Internal state variables Āµi ā€” An index into the weight tables for association unit i (i = 1...na). Wj[i] Weight i in the weight table for output xj, i = 0...(nw āˆ’1), j = 1...nx. āˆ Increment Quantization Inputs: y1 ...yn y Outputs: q1 ...qn x for i ā† 1...ny ā€”for all inputs if yi < mini then : yi ā† mini limit yi if yi > maxi then : yi ā† maxi limit yi qi ā†ā¦‹ resi (( yi -mini )/(maxi ā€“mini)] +1 quantize input if qi ā‰„ resi then : qi ā† resi Output Calculation ā‡’for i ā† 1...nx for all outputs xi ā† 0 for j ā† 1...na xi ā† xi + Wij for all AUs add table entry to xi Training Inputs : t1 ā€¦ tnx , āˆ (scalar) Increment ā† āˆ ((ti ā€“ xi)/na) Wj[i] = Wj[i] + increment Training Procedure: The training process takes a set of desired input to output mappings (training points) and adjusts the weights so that the global mapping fits the set. It will be useful to distinguish between two different CMAC training modes: j
  • 3. Chandradeo Prasad and Tarun Kumar Gupta VSRDIJCSIT, Vol. IV (X) October 2014 / 175 Target Training: First an input is presented to the CMAC and the output is computed. Then each of the na referenced weights (one per lookup table) is increased so that the outputs xi come closer to the desired target values ti , i.e.: wijk ā† wijk + (ti - xi) ā€¦ (1.5) where Ī± is the learning rate constant (0 ā‰¤ Ī± ā‰¤ 1). If Ī± = 0 then the outputs will not change. If Ī± = 1 then the outputs will be set equal to the targets. This is implemented in the CMAC TARGET-TRAIN procedure. Error Training: First an input is presented to the CMAC and the output is computed. Then each of the na referenced weights is incremented so that the output vector [x1 . . . xnx ] increases in the direction of the error vector [e1 . . . enx ], i.e.: wijk ā† wijk + ei ... (1.6) This results in a trivial change to the CMAC TARGETTRAIN procedure. Target training causes the CMAC output to seek a given target, error. Training causes it to grow in a given direction. The input/desired-output pairs (training points) are normally presented to the CMAC in one of two ways: Random Order: If the CMAC is to be trained to represent some function that is known beforehand, then a selection of training points from the function can be presented in random order to minimize learning interference. Trajectory Order: If the CMAC is being used in an online controller then the training point inputs will probably vary gradually, as they are tied to the system sensors. In this case each training point will be close to the previous one in the input space. Comparison with the MLP: To represent a given set of training data the CMAC normally requires more parameters than an MLP. The CMAC is often over- parameterized for the training data, so the MLP has a smoother representation which ignores noise and corresponds better to the dataā€™s underlying structure. The number of CMAC weights required to represent a given training set is proportional to the volume spanned by that data in the input space. The CMAC can be more easily trained on-line than the MLP, for two reasons. First, the training iterations are fast. Second, the rate of convergence to the training data can be made as high as required, up to the instant convergence of Ī± = 1 (although such high learning rates are never used because of training interference and noise problems). The MLP always requires much iteration to converge, regardless of its learning rate. 5. EXPERIMENTAL RESULTS The CMAC network programs developed in MATLAB. These programs were employed to test a single pendulum system. This system has a driving force and viscous damping force. This system is defined by the following equation: Ī˜`` + 2Ī¶Īø + sinĪø = f(t) Where Īø(t) is the angular displacement of pendulum f(t) is the driving force of the system Ī¶ is the damping coefficient This neural network identification block took the present driving force value and present values of Īø` and Īø`` (angular displacement and angular velocity). These are state variables of the system as inputs and give the estimate of the present value of the damping coefficients. The network was trained with learning rates of 0.2, 0.3 and 0.4. The network was also tested with different number of data points. The results of network with learning rate of 0.2 and 126 number of data points is listed in table number 1.1. The result of the randomly chosen 65 data points for a learning rate of 0.2 from the above given data is listed in table number 1.2. Fig. 
1.1: An example two-input two-output CMAC, in Neural Network Form (ny = 2, na = 5, nv = 72, nx = 2)
  • 4. Chandradeo Prasad and Tarun Kumar Gupta VSRDIJCSIT, Vol. IV (X) October 2014 / 176 Table 1.1: Results of Testing with Number of Data Points Ī˜ā€™ Ī˜ā€ f(t) Ī¶ 0 1.0000 1.5000 0 0.0500 0.9988 1.5000 0.0500 0.0998 0.9950 1.5000 0.1000 0.1494 0.9888 1.5000 0.1500 0.1987 0.9801 1.5000 0.2000 0.2474 0.9689 1.5000 0.2500 0.2955 0.9553 1.5000 0.3000 0.3429 0.9394 1.5000 0.3500 0.3894 0.9211 1.5000 0.4000 0.4350 0.9004 1.5000 0.4500 0.4794 0.8776 1.5000 0.5000 0.5227 0.8525 1.5000 0.5500 0.5646 0.8253 1.5000 0.6000 0.6052 0.7961 1.5000 0.6500 0.6442 0.7648 1.5000 0.7000 0.6816 0.7317 1.5000 0.7500 0.7174 0.6967 1.5000 0.8000 0.7513 0.6600 1.5000 0.8500 0.7833 0.6216 1.5000 0.9000 0.8134 0.5817 1.5000 0.9500 0.8415 0.5403 1.5000 1.0000 0.8674 0.4976 1.5000 1.0500 0.8912 0.4536 1.5000 1.1000 0.9128 0.4085 1.5000 1.1500 0.9320 0.3624 1.5000 1.2000 0.9490 0.3153 1.5000 1.2500 0.9636 0.2675 1.5000 1.3000 0.9757 0.2190 1.5000 1.3500 0.9854 0.1700 1.5000 1.4000 0.9927 0.1205 1.5000 1.4500 0.9975 0.0707 1.5000 1.5000 0.9998 0.0208 1.5000 1.5500 0.9996 -0.0292 1.5000 1.6000 0.9969 -0.0791 1.5000 1.6500 0.9917 -0.1288 1.5000 1.7000 0.9840 -0.1782 1.5000 1.7500 0.9738 -0.2272 1.5000 1.8000 0.9613 -0.2756 1.5000 1.8500 0.9463 -0.3233 1.5000 1.9000 0.9290 -0.3702 1.5000 1.9500 0.9093 -0.4161 1.5000 2.0000 0.9093 -0.4161 -2.0000 2.0000 0.8874 -0.4611 -2.0000 1.9000 0.8632 -0.5048 -2.0000 1.8000 0.8369 -0.5474 -2.0000 1.7000 0.8085 -0.5885 -2.0000 1.6000 0.7781 -0.6282 -2.0000 1.5000 0.7457 -0.6663 -2.0000 1.4000 0.7115 -0.7027 -2.0000 1.3000 0.6755 -0.7374 -2.0000 1.2000 0.6378 -0.7702 -2.0000 1.1000 0.5985 -0.8011 -2.0000 1.0000 0.5577 -0.8301 -2.0000 0.9000 0.5155 -0.8569 -2.0000 0.8000
  • 5. Chandradeo Prasad and Tarun Kumar Gupta VSRDIJCSIT, Vol. IV (X) October 2014 / 177 Table 1.2: Test Results Target Values Calculated Outputs 0.0000 0.0000 0.1000 0.1000 0.2000 0.2000 0.3000 0.3000 0.4000 0.4000 0.5000 0.5000 0.6000 0.6000 0.7000 0.7000 0.8000 0.8000 0.9000 0.9000 1.0000 1.0000 1.1000 1.1000 1.2000 1.2000 1.3000 1.3000 1.4000 1.4000 1.5000 1.5000 1.6000 1.6000 1.7000 1.7000 1.8000 1.8000 1.9000 1.9000 2.0000 2.0000 2.1000 2.1000 2.2000 2.2000 2.3000 2.3000 0.4000 0.4000 1.5000 1.5000 0.6000 0.6000 1.7000 1.7000 1.8000 1.8000 0.9000 0.9000 1.0000 1.0000 0.8000 0.8000 0 0 -0.2000 -0.2000 -0.5000 -0.5000 -0.6000 -0.6000 -0.8000 -0.8000 -1.2000 -1.2000 -1.4000 -1.4000 -1.6000 -1.6000 -1.8000 -1.8000 -2.4000 -2.4000 -2.2000 -2.2000 -0.0653 -0.0653 -1.0059 -1.0059 -2.6000 -2.6000 -2.0623 -2.0623 -1.0123 -1.0123 -1.0256 -1.0256
  • 6. Chandradeo Prasad and Tarun Kumar Gupta VSRDIJCSIT, Vol. IV (X) October 2014 / 178 Fig. 1.2: Variance Plot of Omega Fig. 1.3: Variance Plot of Actuation Signal Fig. 1.4: Time Variance Plot of Output Variable
  • 7. Chandradeo Prasad and Tarun Kumar Gupta VSRDIJCSIT, Vol. IV (X) October 2014 / 179 6. CONCLUSION The implementation of CMAC in system identification has been compared with that of other conventional method and other neural network implementations. The self-tuning regulator method calls for large system inputs and yet cannot track the system when the plant is non- linear. The ML, BPN approaches are capable of tracking non-linear systems too but take long time to learn the system. The mapping and training operations of CMAC is extremely fast. Here estimates are generated virtually from a table look up procedure. It performs well with both linear and non-linear systems as well. It has been found to be easier to implement and seems to be most suitable for real time applications. 7. FUTURE SCOPE Hardware Implementation: Hardware implementation of CMAC requires large amount of memory. Once it is achieved, it can be used in missile delivery/defence systems giving a clear and efficient edge over rivals because of its extremely fast mapping. CMAC can also be experimented to be efficiently used in higher order and unstable systems. 8. REFERENCES [1] L. Ljung, System Identification - Theory for the User. Prentice-Hall, N.J. 2nd edition, 1999. [2] J. Schoukens and R. Pintelon, System Identification. A Frequency Domain Approach, IEEE Press, New York, 2001. [3] T. Sƶderstrƶm and P. Stoica, System Indentification, Prentice Hall, Enhlewood Cliffs, NJ. 1989. [4] P. Eykhoff, System Identification, Parameter and State Estimation, Wiley, New York, 1974. [5] A. P. Sage and J. L. Melsa, Estimation Theory with Application to Communications and Control, McGraw- Hill, New York, 1971. [6] H. L. Van Trees, Detection Estimation and Modulation Theory, Part I. Wiley, New York, 1968. [7] G. C. Goodwin and R. L. Payne, Dynamic System Identification, Academic Press, New York, 1977. [8] K. Hornik, M. Stinchcombe and H. White, Multilayer Feed-forward Networks are Universal Approximators", Neural Networks Vol. 2. 1989. pp. 359-366. [9] G. Cybenko, Approximation by Superposition of Sigmoidal Functions, Mathematical Control Signals Systems, Vol. 2. pp. 303-314, 1989. [10] K. I. Funahashi, On the Approximate Realization of Continuous Mappings by Neural Networks", Neural Networks, Vol. 2. No. 3. pp. 1989. 183-192. [11] M. Leshno, V. Y. Lin, A. Pinkus and S. Schocken, Multilayer Feed-forward Networks With a Nonpolynomial Activation Function Can Approximate Any Function, Neural Networks, Vol. 6. 1993. pp. 861-67 [12] J. S. Albus, A New Approach to Manipulator Control: The Cerebellar Model Articulation Controller (CMAC), Transaction of the ASME, Sep. 1975. pp. 220-227. [13] Y. H. Pao, Adaptive Pattern Recognition and Neural Networks, Addison-Wesley, Reading, Mass., 1989, pp. 197-222. [14] D. F. Specht, Polynomial Neural Networks, Neural Networks, Vol.3. No. 1 pp. 1990. pp. 109-118, [15] J. Park and I. W. Sandberg, Approximation and Radial- Basis-Function Networks, Neural Computation, Vol 5. No. 2. ļ¬ļ¬ļ¬ļ€  ļ€ 
  • 8. Chandradeo Prasad and Tarun Kumar Gupta VSRDIJCSIT, Vol. IV (X) October 2014 / 180 ļ€