Transcript of "Original SOINN"

  1. An incremental network for on-line unsupervised classification and topology learning. Shen Furao, Osamu Hasegawa. Neural Networks, Vol. 19, No. 1, pp. 90-106 (2006)
  2. Background: Objective of unsupervised learning (1)
     Clustering: construct decision boundaries from unlabeled data.
     – Single-link, complete-link, CURE:
       • heavy computation
       • large memory requirements
       • unsuitable for large data sets or online data
     – K-means:
       • depends on the initial starting conditions
       • tends to converge to local minima
       • the number of clusters k must be fixed in advance
       • suited only to data sets consisting of isotropic clusters
  3. Background: Objective of unsupervised learning (2)
     Topology learning: given some high-dimensional data distribution, find a topological structure that closely reflects the topology of the data distribution.
     – SOM (self-organizing map):
       • predetermined structure and size
       • posterior choice of class labels for the prototypes
     – CHL + NG (competitive Hebbian learning + neural gas):
       • a priori decision about the network size
       • ranking of all nodes in each adaptation step
       • use of an adaptation parameter
     – GNG (growing neural gas):
       • permanent increase in the number of nodes
       • permanent drift of centers to capture the input probability density
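CHL's edge rule, shared by CHL+NG and GNG above, is simple enough to state in a few lines: for each input sample, connect its two nearest reference nodes with an edge; the resulting edge set induces the learned topology. A minimal pure-Python sketch (the node positions and inputs are made-up toy data):

```python
import math

def two_nearest(nodes, x):
    """Return the indices of the two nodes closest to input x (Euclidean)."""
    order = sorted(range(len(nodes)), key=lambda i: math.dist(nodes[i], x))
    return order[0], order[1]

def chl_edges(nodes, inputs):
    """Competitive Hebbian learning: for each input, link its two
    nearest nodes with an (undirected) edge."""
    edges = set()
    for x in inputs:
        a, b = two_nearest(nodes, x)
        edges.add((min(a, b), max(a, b)))
    return edges

# Three reference vectors; both inputs fall near the first two,
# so only nodes 0 and 1 get connected.
nodes = [(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)]
inputs = [(0.2, 0.1), (0.8, -0.1)]
print(chl_edges(nodes, inputs))  # {(0, 1)}
```

Note that CHL never moves the nodes; in CHL+NG the neural-gas update adapts the positions, while CHL only maintains the edges.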
  4. Background: Online or life-long learning
     Fundamental issue (stability-plasticity dilemma): how can a learning system adapt to new information without corrupting or forgetting previously learned information?
     – GNG-U: deletes nodes located in regions of low input probability density
       • previously learned prototype patterns are destroyed
     – Hybrid network: Fuzzy ARTMAP + PNN
     – Life-long learning with improved GNG: learns the number of nodes needed for the current task
       • works only for supervised life-long learning
  5. Objectives of proposed algorithm
     • Process on-line, non-stationary data.
     • Perform unsupervised learning without any a priori conditions such as:
       • a suitable number of nodes
       • a good initial codebook
       • the number of classes
     • Report a suitable number of classes.
     • Represent the topological structure of the input probability density.
     • Separate classes that have some low-density overlaps.
     • Detect the main structure of clusters polluted by noise.
  6. Proposed algorithm (two-layer architecture, shown as a block diagram on the slide):
     Input pattern → First Layer (Growing Network → First Output) → Second Layer (Growing Network → Second Output)
     The network operations are Insert Node, Delete Node, and Classify.
  7. Algorithms
     • Insert new nodes
       – Criterion: nodes with high accumulated error serve as the criterion for inserting a new node
       – The error radius is used to judge whether the insertion was successful
     • Delete nodes
       – Criterion: remove nodes lying in low-probability-density regions
       – Realization: delete nodes with no or only one direct topological neighbor
     • Classify
       – Criterion: all nodes linked by edges form one cluster
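The three operations above can be sketched roughly as follows. This is a simplified illustration, not the paper's full algorithm: the similarity thresholds, accumulated-error updates, and the error-radius success test are omitted, and all data in the usage example are made up.

```python
def insert_node(pos, err, edges):
    """Insert a new node halfway between the node with the largest
    accumulated error and its highest-error neighbor (simplified)."""
    q = max(range(len(pos)), key=lambda i: err[i])
    nbrs = [j for a, b in edges for j in (a, b) if q in (a, b) and j != q]
    f = max(nbrs, key=lambda j: err[j])
    pos.append(tuple((a + b) / 2 for a, b in zip(pos[q], pos[f])))
    err[q] /= 2; err[f] /= 2
    err.append((err[q] + err[f]) / 2)
    r = len(pos) - 1
    edges.discard((min(q, f), max(q, f)))      # split the old edge
    edges.add((min(q, r), max(q, r)))
    edges.add((min(f, r), max(f, r)))

def prune(edges, n):
    """Delete nodes with no or only one topological neighbor
    (assumed to lie in low-density regions); return the survivors."""
    deg = [0] * n
    for a, b in edges:
        deg[a] += 1; deg[b] += 1
    return [i for i in range(n) if deg[i] >= 2]

def clusters(edges, nodes):
    """Classify: each connected component of the graph is one cluster."""
    parent = {i: i for i in nodes}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a, b in edges:
        if a in parent and b in parent:
            parent[find(a)] = find(b)
    return len({find(i) for i in nodes})

# Toy data: node 0 has the highest error, so a node is inserted
# between it and its highest-error neighbor, node 1.
pos = [(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)]
err = [5.0, 2.0, 1.0]
edges = {(0, 1), (0, 2)}
insert_node(pos, err, edges)
print(pos[3])                   # (1.0, 0.0)
alive = prune(edges, len(pos))
print(clusters(edges, alive))   # 1
```

The union-find pass in `clusters` makes the slide's classification rule concrete: clusters are exactly the connected components of the edge graph.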
  8. Experiment
     • Stationary environment: patterns are randomly chosen from all areas A, B, C, D and E.
     • Non-stationary environment: areas of the original data set are presented one or two at a time, switching across seven environments (1 = active, 0 = inactive):

         Area  I  II  III  IV  V  VI  VII
         A     1  0   1    0   0  0   0
         B     0  1   0    1   0  0   0
         C     0  0   1    0   0  1   0
         D     0  0   0    1   1  0   0
         E1    0  0   0    0   1  0   0
         E2    0  0   0    0   0  1   0
         E3    0  0   0    0   0  0   1
  9. Experiment: Stationary environment. Figures: original data set; traditional method (GNG)
  10. Experiment: Stationary environment. Figures: proposed method, first layer; proposed method, final results
  11. Experiment: Non-stationary environment. Figures: GNG result; GNG-U result
  12. Experiment: Non-stationary environment. Figure: proposed method, first layer
  13. Experiment: Non-stationary environment. Figure: proposed method, first layer (continued)
  14. Experiment: Non-stationary environment. Figures: proposed method, first layer; proposed method, final output
  15. Experiment: Non-stationary environment. Figure: number of growing nodes during online learning (Environments I through VII)
  16. Experiment: Real-world data (ATT_FACE), facial images. Figures: (a) 10 classes; (b) 10 samples of class 1
  17. Experiment: Vectors. Figures: vector of (a); vector of (b)
  18. Experiment: Face recognition results. 10 clusters. Stationary correct recognition ratio: 90%. Non-stationary correct recognition ratio: 86%
  19. Experiment: Vector quantization. Stationary environment: decoding the original Lena (512×512×8) image; 130 nodes, 0.45 bpp, PSNR = 30.79 dB
  20. Experiment: Comparison with GNG, stationary environment:

                       Number of nodes   bpp    PSNR (dB)
         First-layer   130               0.45   30.79
         GNG           130               0.45   29.98
         Second-layer  52                0.34   29.29
         GNG           52                0.34   28.61
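For reference, PSNR here follows the standard definition for 8-bit images, PSNR = 10·log10(255²/MSE), and the bpp values are roughly consistent with coding one codebook index per pixel block (a 4×4 block size is an assumption inferred from the numbers, not stated on the slides). A small sketch:

```python
import math

def psnr(original, decoded, peak=255):
    """Peak signal-to-noise ratio in dB for two equal-length
    sequences of 8-bit pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    return 10 * math.log10(peak ** 2 / mse)

def bpp(codebook_size, block_pixels=16):
    """Rough bits per pixel when each block of `block_pixels` pixels is
    coded by one codebook index (4x4 blocks assumed; illustrative)."""
    return math.log2(codebook_size) / block_pixels

# Toy example: a flat 16-pixel block decoded with a uniform error of 2
orig = [100] * 16
dec = [102] * 16
print(round(psnr(orig, dec), 2))   # MSE = 4, so 10*log10(65025/4) ≈ 42.11
print(bpp(64))                     # 6 bits per 16 pixels = 0.375 bpp
```

Under this assumption, `bpp(64) = 0.375` matches the second-layer figure reported for the non-stationary experiment on the next slide.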
  21. Experiment: Non-stationary environment. First layer: 499 nodes, 0.56 bpp, PSNR = 32.91 dB. Second layer: 64 nodes, 0.375 bpp, PSNR = 29.66 dB
  22. Conclusion
      • An autonomous learning system for unsupervised classification and topology representation tasks
      • Grows incrementally and learns the number of nodes needed to solve the current task
      • Accommodates input patterns drawn from an on-line, non-stationary data distribution
      • Eliminates noise in the input data