INTRODUCTION TO MATLAB NEURAL NETWORK TOOLBOX
                                         Budi Santosa, Fall 2002

FEEDFORWARD NEURAL NETWORK
To create a feedforward backpropagation network we can use NEWFF.

Syntax

net = newff(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)
Description

            NEWFF(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes,
             PR - Rx2 matrix of min and max values for R input elements.
             Si - Size of ith layer, for Nl layers.
             TFi - Transfer function of ith layer, default = 'tansig'.
             BTF - Backprop network training function, default = 'trainlm'.
             BLF - Backprop weight/bias learning function, default = 'learngdm'.
             PF - Performance function, default = 'mse'.
            and returns an N layer feed-forward backprop network.

Consider this set of data:
p = [-1 -1 2 2; 0 5 0 5]
t = [-1 -1 1 1]
where p is the input matrix (two input elements, four samples) and t is the target vector.

Suppose we want to create a feedforward neural net with one hidden layer of 3 nodes, a
tangent-sigmoid transfer function in the hidden layer, a linear transfer function in the
output layer, and gradient descent with momentum as the backpropagation training
function. Simply use the following command:

» net=newff([-1 2;0 5],[3 1],{'tansig' 'purelin'},'traingdm');

Note that the first input [-1 2;0 5] holds the minimum and maximum values of the rows of
p. We might use minmax(p) instead, especially for a large data set; the command then becomes:

»net=newff(minmax(p),[3 1],{'tansig' 'purelin'},'traingdm');
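
As a quick check (illustrative, not part of the original listing), minmax returns each row's extremes:

» minmax(p)

ans =

    -1     2
     0     5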

We then train the network net with the following commands and parameter
values1 (or we can save this set of commands as an M-file):

»     net.trainParam.epochs=30;   % number of epochs
»     net.trainParam.lr=0.3;      % learning rate
»     net.trainParam.mc=0.6;      % momentum
»     net=train(net,p,t);


1
    % marks a comment: MATLAB ignores everything after it on that line.


TRAINLM, Epoch 0/30, MSE 1.13535/0, Gradient 5.64818/1e-010
TRAINLM, Epoch 6/30, MSE 2.21698e-025/0, Gradient 4.27745e-012/1e-010
TRAINLM, Minimum gradient reached, performance goal was not met.

(The log reports TRAINLM, the toolbox default training function; a network created with
'traingdm' as above would report TRAINGDM instead.)

The following plot of training error versus epochs results from running the above
commands:


[Figure: training record. "Performance is 2.21698e-025, Goal is 0"; y-axis: Training-Blue (MSE, log scale); x-axis: 6 Epochs]

To simulate the network use the following command:



» y=sim(net,p);

Then see the result:
» [t;y]

ans =

   -1.0000   -1.0000    1.0000    1.0000
   -1.0000   -1.0000    1.0000    1.0000

Notice that t is the actual target and y is the predicted value.
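
To quantify the fit, we can compute the training-set mean squared error directly (a small illustrative check, not part of the original listing):

» e = t - y;     % training-set errors
» mean(e.^2)     % near zero after a successful run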

Pre-processing data

Some transfer functions require the inputs and targets to be scaled so that they fall within
a specified range. To meet this requirement we need to pre-process the data.


PRESTD : preprocesses the data so that the mean is 0 and the standard deviation is 1.
Syntax: [pn,meanp,stdp,tn,meant,stdt] = prestd(p,t)

Description

PRESTD preprocesses the network training set by normalizing the inputs (p) and targets
(t) so that they have means of zero and standard deviations of 1.

PREMNMX: preprocesses data so that the minimum is -1 and the maximum is 1.

Syntax: [pn,minp,maxp,tn,mint,maxt] = premnmx(p,t)

Description
PREMNMX preprocesses the network training set by normalizing the inputs and targets
so that they fall in the interval [-1,1].

Post-processing Data

If we pre-process the data before the experiment, then after simulation we need to convert
the output back to the original scale. The corresponding post-processing subroutine for
prestd is poststd, and for premnmx it is postmnmx.
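
As a quick illustration (a minimal sketch with made-up numbers, not from the original handout), the normalize/denormalize pair is an exact round trip:

» x = [1 3 5 7 9]; t = [2 6 10 14 18];
» [xn,meanx,stdx,tn,meant,stdt] = prestd(x,t);  % zero mean, unit std
» t2 = poststd(tn,meant,stdt);                  % back to original units
» max(abs(t2-t))                                % ~0: round trip is exact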

Now, consider the following real time-series data. We can save this data as a text file
from MS Excel (e.g., data.txt).

  107.1       113.5   112.7
  113.5       112.7   114.7
  112.7       114.7   123.4
  114.7       123.4   123.6
  123.4       123.6   116.3
  123.6       116.3   118.5
  116.3       118.5   119.8
  118.5       119.8   120.3
  119.8       120.3   127.4
  120.3       127.4   125.1
  127.4       125.1   127.6
  125.1       127.6     129
  127.6         129   124.6
    129       124.6   134.1
  124.6       134.1   146.5
  134.1       146.5   171.2
  146.5       171.2   178.6
  171.2       178.6   172.2
  178.6       172.2   171.5
  172.2       171.5   163.6




Now we would like to work on the above real data set. We have a 20-by-3 matrix. Split
the matrix into two new data sets: the first 12 rows as the training set and the remaining
8 rows as the testing set. The following commands load the data and specify our training
and testing data:

»   load data.txt;
»   P=data(1:12,1:2);    % training inputs
»   T=data(1:12,3);      % training targets
»   a=data(13:20,1:2);   % testing inputs
»   s=data(13:20,3);     % testing targets

The subroutine premnmx preprocesses the data so that the converted values fall in the
range [-1,1]. Note that premnmx expects each variable as a row (samples as columns), so
we transpose the data before applying it.

» [pn,minp,maxp,tn,mint,maxt]=premnmx(P',T');
» [an,mina,maxa,sn,mins,maxs]=premnmx(a',s');
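
Here the testing set is rescaled with its own extrema. An alternative worth knowing (not used in this handout) is tramnmx, which applies the training-set extrema to new data so that both sets share one scale:

» an = tramnmx(a',minp,maxp);   % scale test inputs with the training min/max
» sn = tramnmx(s',mint,maxt);   % scale test targets the same way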

We now create a two-layer network with 5 nodes in the hidden layer.

» net=newff(minmax(pn),[5 1],{'tansig','tansig'},'traingdm')

Specify these parameters:

»   net.trainParam.epochs=3000;
»   net.trainParam.lr=0.3;
»   net.trainParam.mc=0.6;
»   net=train(net,pn,tn);

[Figure: training record. "Performance is 0.225051, Goal is 0"; y-axis: Training-Blue (MSE, log scale); x-axis: 3000 Epochs]

After training the network we can simulate our testing data:
» y=sim(net,an)

y =

 Columns 1 through 7

   -0.4697   -0.4748   -0.4500   -0.3973    0.5257    0.6015    0.5142

 Column 8

    0.5280

To convert y back to the original scale use the following command:

» t=postmnmx(y',mins,maxs);
See the results:
» [t s]
ans =
  138.9175  124.6000
  138.7803  134.1000
  139.4506  146.5000
  140.8737  171.2000
  165.7935  178.6000
  167.8394  172.2000
  165.4832  171.5000
  165.8551  163.6000

» plot(t,'r')
» hold
Current plot held
» plot(s)
» title('Comparison between actual targets and predictions')

[Figure: "Comparison between actual targets and predictions". Predictions t (red) and actual targets s (blue) over the 8 test points.]

To calculate the MSE between actual and predicted values use the following commands:

» d=(t-s).^2;     % squared errors
» mse=mean(d)     % note: this variable name shadows the toolbox function mse

mse =

   177.5730
To judge our network performance we can also use regression analysis. After post-
processing the predicted values, we can apply the following command:

» [m,b,r]=postreg(t',s')
m =
     0.5368
b =
    68.1723
r =
     0.7539


[Figure: postreg regression plot. "Best Linear Fit: X = (0.537) T + (68.2)", R = 0.754, showing the data points, the X = T line, and the best linear fit.]

RADIAL BASIS NETWORK

NEWRB Design a radial basis network.

 Synopsis

   net = newrb
   [net,tr] = newrb(P,T,GOAL,SPREAD,MN,DF)

 Description

Radial basis networks can be used to approximate functions. NEWRB adds neurons to
the hidden layer of a radial basis network until it meets the specified mean squared error
goal.
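
As a self-contained illustration (a minimal sketch with made-up data, separate from the experiment below), newrb can approximate a simple function:

>> x = -1:0.1:1;
>> y = sin(2*pi*x);
>> net = newrb(x,y,0.01,0.3);   % stop at MSE 0.01; spread 0.3 (illustrative values)
>> yhat = sim(net,x);
>> plot(x,y,'o',x,yhat,'-')     % targets vs. network fit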
Consider the same data set we used before. Now we run an experiment with a radial basis
network.

>> net = newrb(P',T',0,3,12,1);   % GOAL=0, SPREAD=3, MN=12 (max neurons), DF=1
>> Y=sim(net,a')

Y =

  150.4579  142.0905  166.3241  167.1528  167.1528  167.1528  167.1528  167.1528

>> d=[Y' s]

d =

   150.4579       124.6000
   142.0905       134.1000
   166.3241       146.5000
   167.1528       171.2000
   167.1528       178.6000
   167.1528       172.2000
   167.1528       171.5000
   167.1528       163.6000

>> diff=d(:,1)-d(:,2)   % prediction errors (note: this shadows the built-in diff)

diff =

    25.8579
     7.9905
    19.8241
    -4.0472
   -11.4472
    -5.0472
    -4.3472
     3.5528

>> mse=mean(diff.^2)

mse =

    166.2357

RECURRENT NETWORK

Here we will use Elman network. The Elman network differs from conventional two-
layers network in that the first layer has recurrent connection. Elman networks are two-
layer backpropagation networks, with addition of a feedback connection from the output
of the hidden layer to its input. This feedback path allows Elman networks to recognize
and generate temporal patterns, as well as spatial patterns.
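
As a stand-alone sketch (illustrative values, not part of the handout's experiment), an Elman net is typically trained on a sequence presented as a cell array, so the feedback state carries across time steps:

» p = num2cell(sin(0.3*(1:40)));   % input sequence
» t = num2cell(sin(0.3*(2:41)));   % one-step-ahead targets
» net = newelm([-1 1],[8 1],{'tansig','purelin'});
» net.trainParam.epochs = 200;
» net = train(net,p,t);
» y = sim(net,p);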

The following is an example of applying Elman networks. (Note that no training call
appears in this listing; in practice we would train the network, e.g. net=train(net,pn,Tn),
before simulating.)

»   [an,meana,stda,tn,meant,stdt]=prestd(a',s');
»   [pn,meanp,stdp,Tn,meanT,stdT]=prestd(P',T');
»   net = newelm(minmax(pn),[6 1],{'tansig','logsig'},'traingdm');
»   y=sim(net,an);
»   x=poststd(y',meant,stdt);
»   [x s]

ans =

    158.2551     124.6000
    158.2079     134.1000
    158.3120     146.5000
    159.2024     171.2000
    165.3899     178.6000
    171.6465     172.2000
    170.5381     171.5000
    169.3573     163.6000

» plot(x)
» hold
Current plot held
» plot(t,'r')




[Figure: Elman predictions x (blue) and the earlier feedforward predictions t (red) over the 8 test points.]




Learning Vector Quantization (LVQ)
LVQ networks are used to solve classification problems. Here we use the iris data to
implement this technique. The iris data set, popular in data mining and statistics, is a
150-by-5 matrix: each row is an observation, and the last column indicates the class.
There are 3 classes in the data.
The following set of commands specifies our training and testing data:
»   load iris.txt
»   x=sortrows(iris,2);   % sort by the second column so the classes are interleaved
»   P=x(21:150,1:4);      % training inputs (130 observations)
»   T=x(21:150,5);        % training classes
»   a=x(1:20,1:4);        % testing inputs (20 observations)
»   s=x(1:20,5);          % testing classes

An LVQ network can be created with the function newlvq.

Syntax
net = newlvq(PR,S1,PC,LR,LF)

       NET = NEWLVQ(PR,S1,PC,LR,LF) takes these inputs,
         PR - Rx2 matrix of min and max values for R input elements.
         S1 - Number of hidden neurons.
         PC - S2 element vector of typical class percentages.
              The dimension depends on the number of classes in our data
              set. Each element indicates the percentage of each class.
              The percentages must sum to 1.
         LR - Learning rate, default = 0.01.
         LF - Learning function, default = 'learnlv2'.
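
Since the percentages in PC must sum to 1, they can be computed from the training classes instead of typed by hand (a small illustrative sketch):

PC = zeros(1,3);
for k = 1:3
    PC(k) = sum(T==k)/length(T);   % fraction of class k in the training set
end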




The following code implements the LVQ network.

P=input('input matrix training: ');
T=input('input matrix target training: ');
a=input('input matrix testing: ');
s=input('input matrix target testing: ');
Tc = ind2vec(T');                                  % class indices to target vectors
net=newlvq(minmax(P'),6,[49/130 36/130 45/130]);   % class percentages of the 130 training rows
net.trainParam.epochs=700;
net=train(net,P',Tc);
y=sim(net,a');
lb=vec2ind(y);                                     % output vectors back to class labels
[s lb']

ans =

     2     2
     2     2
     2     2
     3     1
     1     1
     2     2
     2     2
     2     2
     2     2
     2     2
     2     2
     2     2
     2     1
     2     2
     2     2
     3     2
     3     1
     3     1
     3     1
     2     2
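
From the comparison above we can also compute the test-set accuracy directly (an illustrative one-liner, not in the original):

acc = mean(s == lb')*100   % percentage of test points classified correctly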



