Support Vector Machines for Computing Action Mappings in Learning Classifier Systems

Transcript

  • 1. Support Vector Machines for Computing Action Mappings in Learning Classifier Systems
    Daniele Loiacono, Andrea Marelli, Pier Luca Lanzi
    Politecnico di Milano, Italy
    Illinois Genetic Algorithms Laboratory, University of Illinois at Urbana-Champaign, USA
    CEC 2007, Singapore, September 26, 2007
  • 2. One Minute Intro to Classifier Systems
    [Diagram: the solution to a problem is represented as condition-action rules, evaluated online by reinforcement learning and searched by a genetic algorithm.]
  • 3. One Minute Intro to Classifier Systems If condition C holds in state S, then action A will produce a payoff p, with an error ε and an accuracy F
  • 4. Many actions = many decisions
    The more actions there are, the more difficult the learning.
    Too many actions? Compute actions, don't represent them!
  • 5. (our way to) Computed Actions
    If condition C holds in state S, then the action a, computed as af(st, w), will produce a payoff p, affected by an error ε, and is as accurate as F.
    Classifiers are made of: the condition, the parameter vector w used to compute the action, the error ε, and the fitness F.
    There is no explicit action and there is no stored reward: the action is computed using the action function af(st, w). (A minimal sketch of such a classifier follows.)
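As a minimal sketch of such a classifier (the field names, the condition interface, and the linear-threshold action function are illustrative assumptions, not the paper's exact implementation):

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Classifier:
        condition: object     # the condition C (matching logic is hypothetical)
        w: np.ndarray         # parameter vector used to compute the action
        epsilon: float        # the error ε
        F: float              # the fitness F

        def matches(self, s_t):
            # hypothetical condition interface
            return self.condition.matches(s_t)

        def action(self, s_t):
            # af(s_t, w): an assumed linear-threshold action function for
            # binary actions; in the paper this is where the SVM comes in
            return int(np.dot(self.w, s_t) > 0)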
  • 6. Testing
    Classifiers matching st are put in [M].
    All the actions are computed.
    For each action a in [M], the classification accuracy C(st, a) is computed [formula not captured in this transcript].
    The action with the highest accuracy is selected (see the sketch below).
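A rough Python sketch of this test step, reusing the Classifier sketch from slide 5 and assuming C(st, a) is a fitness-weighted vote over the classifiers in [M] that compute action a (the exact formula was not captured above):

    def select_action(population, s_t):
        # [M]: classifiers whose condition matches the current state s_t
        match_set = [cl for cl in population if cl.matches(s_t)]
        accuracy = {}
        for cl in match_set:
            a = cl.action(s_t)                   # a = af(s_t, w)
            # assumed fitness-weighted accuracy estimate C(s_t, a)
            accuracy[a] = accuracy.get(a, 0.0) + cl.F
        # the action with the highest accuracy is selected
        return max(accuracy, key=accuracy.get)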
  • 7. Learning
    The target action at is used to update w.
    The classifier error is updated according to ε ← ε + β(εf(xt, at, a) − ε), where a is the action computed by the classifier and εf(xt, at, a) is the error function.
    Several error functions are possible; we used the simplest one: 0 if the action is correct (a = at), 1000 otherwise.
    The other parameters are updated as usual.
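A sketch of the update, with the 0/1000 error function from the slide; the Widrow-Hoff form and the learning-rate value β are assumptions in line with standard XCS practice, and the update of w is omitted since it depends on the chosen action function:

    BETA = 0.2  # learning rate (assumed value)

    def error_function(a, a_target):
        # simplest error function: 0 if the action is correct, 1000 otherwise
        return 0.0 if a == a_target else 1000.0

    def update_classifier(cl, x_t, a_target):
        a = cl.action(x_t)  # action computed by the classifier
        # Widrow-Hoff update of the classifier error (assumed form)
        cl.epsilon += BETA * (error_function(a, a_target) - cl.epsilon)
        # the target action a_target would also be used to update w here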
  • 8. Computing Actions with SVMs
    Previously, arrays of perceptrons and neural networks proved successful at computing the classifier action.
    Why Support Vector Machines? Good generalization capabilities, a fast convergence rate, and effectiveness on highly non-linear problems.
  • 9. Support Vector Machines
    When are new points red or blue? An SVM finds the best separating hyperplane.
    The hyperplane is defined by the nearest points; such points are called support vectors.
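For concreteness, a minimal scikit-learn example (illustrative only, not the implementation used in the paper): fit a linear SVM on a few 2-D points and inspect the support vectors that define the hyperplane.

    import numpy as np
    from sklearn import svm

    X = np.array([[0, 0], [1, 1], [1, 0], [3, 3], [4, 3], [3, 4]])
    y = np.array([0, 0, 0, 1, 1, 1])      # "red" vs "blue" points

    clf = svm.SVC(kernel="linear")
    clf.fit(X, y)
    print(clf.support_vectors_)           # the nearest points: the support vectors
    print(clf.predict([[2.0, 2.0]]))      # classify a new point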
  • 10. Non-linearly separable problems
    [Diagram: a mapping Φ: x → φ(x) takes points from the input space to a feature space, where they become linearly separable.]
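A small worked example of this kernel trick: for 2-D inputs, the polynomial kernel K(x, z) = (x·z)² equals the inner product of the explicit degree-2 feature map φ(x) = (x₁², √2·x₁x₂, x₂²), so the SVM never has to compute φ explicitly.

    import numpy as np

    def phi(x):
        # explicit degree-2 feature map for 2-D inputs
        return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

    def kernel(x, z):
        # the corresponding polynomial kernel: K(x, z) = (x . z)^2
        return np.dot(x, z) ** 2

    x, z = np.array([1.0, 2.0]), np.array([3.0, 0.5])
    # same value, without ever mapping into the feature space
    assert np.isclose(np.dot(phi(x), phi(z)), kernel(x, z))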
  • 11. Training SVMs incrementally
    We used a modified chunking algorithm. On the arrival of a new sample:
    i. Build a training set with the most recent ΘSV support vectors and the new sample.
    ii. Train the SVM from scratch on that training set.
    iii. Update the set of support vectors (add the new ones and remove the ones discarded in the last training process).
    Computational complexity (worst case), where n is the size of the inputs:
    Update: O(2nΘSV); Output: O(n(ΘSV)²); Memory: O(ΘSV(n + ΘSV)).
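A rough Python sketch of this chunking scheme, using scikit-learn's batch SVC as the inner trainer; the ΘSV value and the kernel choice are illustrative assumptions:

    import numpy as np
    from sklearn import svm

    class IncrementalSVM:
        def __init__(self, n_features, theta_sv=50):
            self.theta_sv = theta_sv                 # bound ΘSV on retained SVs
            self.sv_X = np.empty((0, n_features))    # retained support vectors
            self.sv_y = np.empty(0)
            self.model = None

        def update(self, x, y):
            # i. training set = most recent ΘSV support vectors + the new sample
            X = np.vstack([self.sv_X[-self.theta_sv:], x.reshape(1, -1)])
            Y = np.append(self.sv_y[-self.theta_sv:], y)
            if len(np.unique(Y)) < 2:                # SVC needs two classes
                self.sv_X, self.sv_y = X, Y
                return
            # ii. train the SVM from scratch on this small training set
            self.model = svm.SVC(kernel="rbf")
            self.model.fit(X, Y)
            # iii. keep only the points retained as support vectors
            self.sv_X = X[self.model.support_]
            self.sv_y = Y[self.model.support_]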
  • 12. Experiments
    Compute actions using SVMs, NNs, and arrays of perceptrons.
    Several binary functions: Boolean multiplexer, binary shift, binary sum.
    Measure the performance as classification accuracy and number of classifiers.
  • 13. Boolean Multiplexer (20 bits)
    Even in simple problems, SVM learns faster than the perceptron and the NN.
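For reference, the 20-bit Boolean multiplexer: the first 4 bits address one of the remaining 16 data bits, and the output is the addressed bit. A minimal Python definition:

    def multiplexer20(bits):
        # bits: list of 20 values in {0, 1}; 4 address bits + 16 data bits
        address = int("".join(map(str, bits[:4])), 2)
        return bits[4 + address]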
  • 14. Binary Shift
    8-bit string as input, 8-bit string as output.
    Each bit in the output string is independent (it depends on a single input bit).
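A minimal definition of this target function (the shift direction is not stated on the slide; a right shift with zero fill is assumed here):

    def binary_shift(bits):
        # 8-bit input -> 8-bit output; each output bit copies one input bit
        return [0] + bits[:7]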
  • 15. Binary Sum
    Input: an 8-bit string (two 4-bit operands). Output: a 5-bit string (the result of the binary sum).
    Each bit in the output depends on several input bits.
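A minimal definition of this target function (the operand packing and bit order, most significant bit first, are assumptions):

    def binary_sum(bits):
        # 8-bit input: two 4-bit operands; 5-bit output: their sum
        a = int("".join(map(str, bits[:4])), 2)
        b = int("".join(map(str, bits[4:])), 2)
        s = a + b
        return [(s >> i) & 1 for i in range(4, -1, -1)]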
  • 16. Binary Sum
    SVM learns faster, but the NN evolves a more compact solution.
  • 17. Conclusions
    Learning actions with SVMs is faster than with NNs and perceptrons.
    In some problems, NNs and perceptrons may offer a more compact representation.
    SVM is not suitable when the action can be decomposed into independent components (e.g., binary shift).
    SVM is particularly suited for solving sparse problems (see more later).