A New Learning Method for Single Layer Neural Networks Based on a Regularized Cost Function

Presentation at IWANN 2003
  • Thank you very much. I'm going to present a new learning method for single layer neural networks based on a regularized cost function.
  • Let me first outline the main points of this presentation. I'll start with a brief introduction to single layer neural networks. Next I'll explain supervised learning with regularization in this kind of network, and show an alternative loss function that allows an analytical solution to be obtained. Finally, I'll show the experimental results, the conclusions and the future work.
  • As we can see, our supervised learning algorithm is applied to a single layer neural network with I inputs and J outputs. To train the network we have S examples. Generally, the activation functions used are non-linear. Finally, note that in this kind of network the outputs are independent of one another, because the set of weights associated with each output is independent of the others.
  • So, in order to simplify the explanation, we'll work with one output. [PRESS NEXT KEY] The real outputs of the network are obtained through a non-linear function whose input is the sum of the inputs multiplied by the weights, plus the bias. If the error function used is the MSE, as in our case, then the goal is to obtain the values of the weights and the bias which minimize the MSE between the real and the desired outputs.
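    A minimal sketch of this forward computation and error, assuming a logistic activation and NumPy arrays (the names are illustrative, not taken from the paper):

        import numpy as np

        def forward(X, w, b):
            # weighted sum of the inputs plus the bias, passed through the logistic function
            z = X @ w + b                       # X: (S, I), w: (I,), b: scalar
            return 1.0 / (1.0 + np.exp(-z))     # real outputs y, shape (S,)

        def mse(y, d):
            # mean squared error between real (y) and desired (d) outputs
            return np.mean((d - y) ** 2)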
  • So, adding a regularization term to the cost function, our goal is to minimize a cost function with two terms. [PRESS NEXT KEY] The first term is the loss function, here the MSE, which is the squared difference between the desired output and the real output. [PRESS NEXT KEY] The second term is the regularization term, weighted by the regularization parameter alpha. In our case the regularization term used is weight decay, which tries to smooth the obtained curve. To minimize this cost function, we can differentiate both terms with respect to the weights and the bias and equate the derivatives to zero. [PRESS NEXT KEY] The problem is that, in the first term, the weights are inside the non-linear function, so a unique minimum is not guaranteed, and the minima cannot be obtained with an analytical method, only with an iterative one.
  • In order to solve this problem, we present an alternative loss function that is based on the following theorem. [READ THE THEOREM BRIEFLY]
  • Roughly speaking, the idea is that minimizing the error in the output is equivalent to minimizing the error before the non-linear function, weighted by a factor.
  • So now, applying the theorem, we have the new cost function. [PRESS NEXT KEY] The alternative loss function is the MSE, but measured before the non-linear functions. [PRESS NEXT KEY] The regularization term is the same as in the previous cost function. We can see that now the weights and the bias are outside the non-linear function.
  • So, to minimize the new cost function, we differentiate both terms with respect to the weights and the bias and equate the partial derivatives to zero, obtaining the equations shown in the slide.
  • We can rewrite the previous system to obtain a new system of (I+1) by (I+1) linear equations, where [PRESS NEXT KEY] we have the variables, which are the weights and the bias, [PRESS NEXT KEY] the coefficients, [PRESS NEXT KEY] and the independent terms. [PRESS NEXT KEY] So we can use an analytical method to solve this system of equations, obtaining the optimal weights and bias. This implies that the training is very fast, with a low computational cost. Also, this system of equations has a unique minimum, except for degenerate systems. Finally, we can do incremental learning, and even parallel learning, where the training process is divided among several distributed neural networks and the results are merged to obtain the global training. In both cases, only the coefficients matrix and the independent terms vector must be stored.
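    In code, training one output could look like the following minimal sketch. It assumes a logistic activation and that the bias is included in the regularized linear system; these details, and the exact weighting factor, are assumptions for illustration rather than the paper's definitive formulation:

        import numpy as np

        def train_one_output(X, d, alpha=0.01, eps=1e-7):
            # X: (S, I) inputs, d: (S,) desired outputs in (0, 1)
            S, I = X.shape
            d = np.clip(d, eps, 1.0 - eps)          # keep the inverse of the logistic finite
            d_bar = np.log(d / (1.0 - d))           # f^{-1}(d), desired value before the non-linearity
            w2 = (d * (1.0 - d)) ** 2               # squared f'(f^{-1}(d)), the weighting factor
            Xb = np.hstack([X, np.ones((S, 1))])    # append a column of ones for the bias
            # (I+1) x (I+1) coefficients matrix and independent terms vector
            A = (Xb * w2[:, None]).T @ Xb + alpha * np.eye(I + 1)
            c = (Xb * w2[:, None]).T @ d_bar
            theta = np.linalg.solve(A, c)           # analytical optimum of the alternative cost
            return theta[:I], theta[I]              # weights, bias

    Because A and c are plain sums over the training samples, new samples only add new terms to them, which is what makes the incremental and distributed variants possible.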
  • In order to test our algorithm, we have applied it to a classification problem and to a regression problem. [PRESS NEXT KEY] In both cases we have used the logistic function as the neural functions. [PRESS NEXT KEY] The parameter alpha has been constrained to the interval [0, 1].
  • The first problem, a classification problem, was extracted from the KDD Cup 99 competition. Each sample summarizes a connection between two hosts and is formed by 41 inputs. The goal is to classify each sample into two classes: attack or normal connection. We have 30,000 samples for training and almost 5,000 for testing.
  • In order to study the influence of the training set size and the regularization parameter, we have generated several training sets. To do this, we generated an initial training set of 100 samples; each new training set is formed by adding 100 new samples to the previous one, up to 2,500 samples. In this way we have 25 training sets. For each training set, several neural networks have been trained with different alphas, from 0 to 1 in steps of 0.005. This whole process has been repeated 12 times, to obtain a better estimation of the true error. Finally, the regularization parameter that provides the minimum test classification error is chosen.
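    Schematically, the protocol amounts to the loop below. The data arrays and the classification-error function are placeholders, train_one_output is the sketch shown earlier, and none of these names come from the paper:

        import numpy as np

        def class_error(X, d, w, b, thr=0.5):
            # fraction of misclassified samples, using 0.5 as the decision threshold
            y = 1.0 / (1.0 + np.exp(-(X @ w + b)))
            return np.mean((y >= thr) != (d >= thr))

        alphas = np.arange(0.0, 1.0 + 1e-9, 0.005)          # alpha from 0 to 1 in steps of 0.005
        for size in range(100, 2501, 100):                   # 25 nested training sets
            X_tr, d_tr = X_train[:size], d_train[:size]      # placeholder training data
            errors = [class_error(X_test, d_test,            # placeholder test data
                                  *train_one_output(X_tr, d_tr, a)) for a in alphas]
            best_alpha = alphas[int(np.argmin(errors))]      # alpha with minimum test error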
  • As can be seen in the figure, using regularization produces better results in all cases than not using it, mainly for small training set sizes. In order to check that this difference is really statistically significant, we have applied a statistical test, which confirms it. [PRESS NEXT KEY] Also, we only need 400 samples to stabilize the error when using regularization, while without regularization we need 700 samples.
  • The other problem is a regression problem, specifically the Box-Jenkins problem. The problem consists in estimating the concentration of CO2 in a gas furnace at a given time instant from the 4 previous concentrations and the 6 previous methane flow rates.
  • As in the previous problem, we have generated several training set sizes. Initially we performed a 10-fold cross validation, using 261 samples for training and 29 for testing. In each validation round, several training sets have been generated, from 9 to 261 examples in steps of 9 samples, using the same process as in the intrusion detection problem. Then, for each training set, several neural networks have been trained and tested, varying alpha from 0 to 1 in steps of 0.001. In order to obtain a better estimation of the true error, mainly with small training sets, we have repeated the previous process 10 times. Finally, the alpha that produces the minimum normalized MSE has been chosen.
  • The results are shown in the figure. Although it seems that using regularization is worse than not using it, statistically there is no difference, except for small training sets.
  • In this case the neural network performs very well, and using regularization doesn't enhance the results. In order to check the generalization capability of regularization in the presence of noisy data, we have added two normal random noises: one with a standard deviation that is half the standard deviation of the original time series (gamma 0.5), and the other with the same standard deviation (gamma 1).
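    For reference, adding this kind of noise amounts to something like the sketch below; the seeding and any detail beyond "normal noise with standard deviation gamma times that of the series" are assumptions:

        import numpy as np

        def add_noise(series, gamma, rng=np.random.default_rng()):
            # zero-mean Gaussian noise whose standard deviation is gamma times that of the series
            sigma = gamma * np.std(series)
            return series + rng.normal(0.0, sigma, size=series.shape)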
  • We show these results together with the previous ones. As we can see, using regularization with noisy data improves the results. In fact, in both cases there is a statistically significant difference between using regularization and not using it; in the case of gamma 0.5, this difference only exists up to a training set size of 225 samples. [PRESS NEXT KEY] If we search for the smallest training set from which the error stabilizes, with gamma 0.5 this size is 198, either using regularization or not, but with gamma 1, that is, with noisier data, this size is 189 with regularization and 207 without it.
  • In conclusion, we have proposed a new supervised learning method for single layer neural networks using regularization. Among its features, we can remark that it allows the global optimum to be obtained analytically and, hence, faster than the current iterative methods. It allows incremental learning and distributed learning and, thanks to the regularization term, it has a better generalization capability, mainly with small training sets or noisy data. We have applied it to two kinds of problems, a classification problem and a regression problem, obtaining generally better results. As future work, an analytical method to obtain the regularization parameter is being analyzed.
  • Thank you very much

Presentation Transcript

  • A New Learning Method for Single Layer Neural Networks Based on a Regularized Cost Function Juan A. Suárez-Romero Óscar Fontenla-Romero Bertha Guijarro-Berdiñas Amparo Alonso-Betanzos Laboratory for Research and Development in Artificial Intelligence Department of Computer Science, University of A Coruña, Spain
  • Outline
    • Introduction
    • Supervised learning + regularization
    • Alternative loss function
    • Experimental results
    • Conclusions and Future Work
  • Single layer neural network
    • I inputs
    • J outputs
    • S samples
  • Single layer neural network
  • Cost function
    • Supervised learning + regularization
    MSE + Regularization term (weight decay)
    Non-linear neural functions → not guaranteed to have a unique minimum (local minima)
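    The cost function itself appears as an image on the slide; a reconstruction of the usual weight-decay form it describes (MSE plus an alpha-weighted penalty on the weights) would be, in LaTeX:

        J_j(\mathbf{w}_j, b_j) = \sum_{s=1}^{S} \Big( d_{js} - f\Big( \sum_{i=1}^{I} w_{ij} x_{is} + b_j \Big) \Big)^2 + \alpha \sum_{i=1}^{I} w_{ij}^2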
  • Alternative loss function
    • Theorem: Let x_js be the j-th input of a one-layer neural network, d_js and y_js be the j-th desired and actual outputs, w_ij and b_j be the weights and bias, and f, f^-1, f' be the nonlinear function, its inverse and its derivative. Then minimizing L_j is equivalent to minimizing, up to the first order of the Taylor series expansion, the alternative loss function below.
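      The formula of the alternative loss function is an image on the original slide; a plausible reconstruction, consistent with the speaker's description (the error measured before the non-linearity, weighted by a factor involving the derivative of f), is:

          \bar{L}_j = \sum_{s=1}^{S} \big( f'(\bar{d}_{js}) \big)^2 \Big( \bar{d}_{js} - \sum_{i=1}^{I} w_{ij} x_{is} - b_j \Big)^2, \qquad \text{where } \bar{d}_{js} = f^{-1}(d_{js})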
  • Alternative loss function
  • Alternative cost function
    • Supervised learning + regularization
    Alternative MSE + Regularization term (weight decay)
  • Alternative cost function
    • Optimal weights and bias can be obtained by differentiating it with respect to the weights and the bias of the network and equating the partial derivatives to zero
  • Alternative cost function
    • We can rewrite the previous system to obtain a system of (I+1) × (I+1) linear equations
    • Advantages
      • Solved using a system of linear equations → fast training with low computational cost
      • Convex function → unique minimum
      • Incremental + parallel learning → only the coefficients matrix and the independent terms vector must be stored
    (Slide figure: the variables, coefficients and independent terms of the linear system)
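    A minimal sketch of how incremental or distributed training can work under these assumptions: each data chunk (or node) builds its own coefficients matrix and independent terms vector, and the per-chunk systems are simply summed before solving once. build_system repeats the construction of the earlier sketch and is hypothetical, not code from the paper:

        import numpy as np

        def build_system(X, d, eps=1e-7):
            # coefficients matrix and independent terms for one chunk, logistic activation assumed
            d = np.clip(d, eps, 1.0 - eps)
            d_bar = np.log(d / (1.0 - d))             # desired value before the non-linearity
            w2 = (d * (1.0 - d)) ** 2                 # squared derivative weighting factor
            Xb = np.hstack([X, np.ones((X.shape[0], 1))])
            return (Xb * w2[:, None]).T @ Xb, (Xb * w2[:, None]).T @ d_bar

        def merge_and_solve(chunks, alpha):
            # sum the per-chunk systems (incremental or distributed), then solve once
            A_list, c_list = zip(*(build_system(X, d) for X, d in chunks))
            A = sum(A_list) + alpha * np.eye(A_list[0].shape[0])
            c = sum(c_list)
            return np.linalg.solve(A, c)              # optimal weights and bias, stacked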
  • Experimental results
    • Two kind of problems
    • Intrusion Detection
      • Classification problem
    • Box-Jenkins time series
      • Regression problem
  • Intrusion Detection problem
    • KDD’99 Classifier Learning Contest
    • Two-class classification problem: attack and normal connections
    • Each sample formed by 41 high-level features
    • 30000 samples for training
    • 4996 samples for testing
  • Intrusion Detection problem
    • In order to study the influence of training set size and regularization parameter
      • Initial training set of 100 samples
      • Next training set is obtained adding 100 new samples to previous set, up to 2500 samples
      • For each training set, several neural networks have been trained, with α from 0 (no regularization) to 1, in steps of 5 × 10^-3
    • In order to obtain a better estimation of the true error
      • Repeat this process 12 times with different training set
    • The α with minimum test classification error is chosen
  • Intrusion Detection problem (figure: test error vs. training set size; the error stabilizes at 400 samples with regularization and at 700 without it)
  • Box-Jenkins problem
    • Regression problem
    • Estimate CO2 concentration in a gas furnace from the methane flow rate
    • Predict y(t) from {y(t-1), y(t-2), y(t-3), y(t-4), u(t-1), u(t-2), u(t-3), u(t-4), u(t-5), u(t-6)}
    • 290 samples
  • Box-Jenkins problem
    • In order to study the influence of training set size and regularization parameter
      • 10-fold cross validation (261 examples for training and 29 for testing)
      • For each validation round, generate several training sets, from 9 to 261 examples, in steps of 9 examples
      • For each previous data set, train and test several neural networks varying α from 0 (no regularization) to 1 in steps of 10^-3
    • In order to obtain a better estimation of the true error, mainly with small training sets
      • Repeat validation 10 times with different composition of training sets
    • The α with minimum NMSE is chosen
  • Box-Jenkins problem
  • Box-Jenkins problem
    • There is no statistically significant difference when using regularization (except for small training sets)
    • The neural network performs well, and using regularization does not enhance the results
    • Add normal random noise with σ = γ·σ_t, where σ_t is the standard deviation of the original time series and γ ∈ {0.5, 1}
  • Box-Jenkins problem (figure: NMSE vs. training set size with noisy data; the error stabilizes at 198 samples for γ = 0.5, and for γ = 1 at 189 samples with regularization vs. 207 without)
  • Conclusions and Future Work
    • A new supervised learning method for single layer neural networks using regularization has been introduced
      • Global optimum
      • Fast training
      • Incremental and parallel learning
      • Better generalization capability
    • Applied to two problems: classification and regression
      • Regularization generally obtains a better solution, mainly with small training sets or noisy data
    • As future work, an analytical method to obtain the regularization parameter is being analyzed
  • A New Learning Method for Single Layer Neural Networks Based on a Regularized Cost Function Juan A. Suárez-Romero Óscar Fontenla-Romero Bertha Guijarro-Berdiñas Amparo Alonso-Betanzos Laboratory for Research and Development in Artificial Intelligence Department of Computer Science, University of A Coruña, Spain. Thank you for your attention!