Original SOINN

Transcript

  • 1. An incremental network for on-line unsupervised classification and topology learning
       Shen Furao, Osamu Hasegawa
       Neural Networks, Vol. 19, No. 1, pp. 90-106 (2006)
  • 2. Background: objectives of unsupervised learning (1)
       Clustering: construct decision boundaries based on unlabeled data.
       - Single-link, complete-link, CURE:
         • heavy computational load
         • large memory requirements
         • unsuitable for large or on-line data sets
       - K-means:
         • depends on the initial starting conditions
         • tends to get stuck in local minima
         • requires the number of clusters k in advance
         • suited only to data sets consisting of isotropic clusters
  • 3. Background: objectives of unsupervised learning (2)
       Topology learning: given a high-dimensional data distribution, find a topological structure that closely reflects the topology of that distribution.
       - SOM (self-organizing map):
         • predetermined structure and size
         • posterior choice of class labels for the prototypes
       - CHL + NG (competitive Hebbian learning + neural gas):
         • a priori decision about the network size
         • ranking of all nodes in each adaptation step
         • use of an adaptation parameter
       - GNG (growing neural gas):
         • permanent increase in the number of nodes
         • permanent drift of centers to capture the input probability density
  • 4. Background: on-line or life-long learning
       Fundamental issue (the stability-plasticity dilemma): how can a learning system adapt to new information without corrupting or forgetting previously learned information?
       - GNG-U: deletes nodes located in regions of low input probability density
         • previously learned prototype patterns are destroyed
       - Hybrid network: Fuzzy ARTMAP + PNN
       - Life-long learning with improved GNG: learns the number of nodes needed for the current task
         • works only for supervised life-long learning
  • 5. Objectives of the proposed algorithm
       - Process on-line, non-stationary data.
       - Perform unsupervised learning without a priori conditions such as:
         • a suitable number of nodes
         • a good initial codebook
         • the number of classes
       - Report a suitable number of classes.
       - Represent the topological structure of the input probability density.
       - Separate classes that have low-density overlaps.
       - Detect the main structure of clusters polluted by noise.
  • 6. Proposed algorithm
       [Block diagram: input pattern → growing network (first layer) → first output → growing network (second layer) → second output. Each layer inserts nodes, deletes nodes, and classifies.]
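A minimal sketch of this pipeline, assuming a hypothetical GrowingNetwork class; learn(), prototypes(), and classify() are illustrative names, not the paper's API:

```python
# Hypothetical rendering of the two-layer pipeline on slide 6.
def two_layer_soinn(patterns, GrowingNetwork):
    first = GrowingNetwork()
    for x in patterns:             # on-line: one input pattern at a time
        first.learn(x)             # inserts/deletes nodes as it goes
    second = GrowingNetwork()
    for w in first.prototypes():   # first-layer nodes feed the second layer
        second.learn(w)
    return second.classify()       # edge-linked components reported as classes
```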
  • 7. Algorithms
       - Insert new nodes
         • Criterion: nodes with high accumulated error serve as the criterion for inserting a new node.
         • The error radius is used to judge whether the insertion was successful.
       - Delete nodes
         • Criterion: remove nodes in low-probability-density regions.
         • Realization: delete nodes with no or only one direct topological neighbor.
       - Classify
         • Criterion: all nodes linked by edges form one cluster.
       (A sketch of these three operations follows.)
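A minimal sketch of the three operations, assuming the topology is kept as an undirected networkx graph with integer node ids and that positions and accumulated errors live in plain dicts; the paper's error-radius test is only indicated by a comment:

```python
import networkx as nx

def insert_node(g, pos, err):
    """Insert a new node between the highest-error node and its
    highest-error neighbor (slide 7, 'insert new nodes')."""
    q = max(g.nodes, key=err.get)             # node with highest error
    f = max(g.neighbors(q), key=err.get)      # its worst neighbor
    r = max(g.nodes) + 1                      # id for the new node
    pos[r] = 0.5 * (pos[q] + pos[f])          # midpoint prototype
    err[r] = 0.5 * (err[q] + err[f])
    g.remove_edge(q, f)
    g.add_edge(q, r)
    g.add_edge(f, r)
    # The error radius would now be checked to judge whether this
    # insertion was successful, undoing it if not (omitted here).
    return r

def delete_sparse_nodes(g):
    """Delete nodes with no or only one direct topological neighbor,
    i.e. nodes assumed to sit in low-density regions."""
    g.remove_nodes_from([n for n in list(g.nodes) if g.degree(n) <= 1])

def classify(g):
    """All nodes linked by edges form one cluster."""
    return [sorted(c) for c in nx.connected_components(g)]
```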
  • 8. Experiment
       - Stationary environment: patterns are randomly chosen from all of areas A, B, C, D, and E.
       - Non-stationary environment: in each environment, patterns come only from the areas marked 1 below (area E is split into sub-areas E1, E2, E3):

         Area   I  II  III  IV   V  VI  VII
         A      1   0    1   0   0   0    0
         B      0   1    0   1   0   0    0
         C      0   0    1   0   0   1    0
         D      0   0    0   1   1   0    0
         E1     0   0    0   0   1   0    0
         E2     0   0    0   0   0   1    0
         E3     0   0    0   0   0   0    1

       [Figure: original data set]
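A sketch of how this schedule can drive training; the area samplers are hypothetical stand-ins for drawing points from regions A through E3, which the slide does not define:

```python
import random

ACTIVE = {            # environment -> areas marked 1 in the table
    "I":   ["A"],       "II": ["B"],
    "III": ["A", "C"],  "IV": ["B", "D"],
    "V":   ["D", "E1"], "VI": ["C", "E2"],
    "VII": ["E3"],
}

def pattern_stream(samplers, steps_per_env=10_000):
    """Yield (environment, pattern) pairs, one environment at a time.
    `samplers` maps an area name to a function drawing one point from
    that area; both the mapping and the step count are illustrative."""
    for env in ("I", "II", "III", "IV", "V", "VI", "VII"):
        for _ in range(steps_per_env):
            yield env, samplers[random.choice(ACTIVE[env])]()
```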
  • 9. Experiment: stationary environment
       [Figures: original data set; result of the traditional method (GNG)]
  • 10. Experiment: stationary environment
        [Figures: proposed method, first-layer result; proposed method, final results]
  • 11. Experiment: non-stationary environment
        [Figures: GNG result; GNG-U result]
  • 12. Experiment: non-stationary environment
        [Figure: proposed method, first-layer result]
  • 13. Experiment: non-stationary environment
        [Figure: proposed method, first-layer result (continued)]
  • 14. Experiment: non-stationary environment
        [Figures: proposed method, first-layer result; proposed method, final output]
  • 15. Experiment: non-stationary environment
        [Figure: number of nodes grown during on-line learning, Environment I through Environment VII]
  • 16. Experiment: real-world data (ATT_FACE)
        [Figure: facial images. (a) the 10 classes; (b) 10 samples from class 1]
  • 17. Experiment: vectors
        [Figure: feature vector of (a); feature vector of (b)]
  • 18. Experiment: face recognition results
        - 10 clusters found
        - Stationary environment: correct recognition ratio 90%
        - Non-stationary environment: correct recognition ratio 86%
  • 19. Experiment: vector quantization, stationary environment
        [Figures: original Lena image (512×512, 8 bits/pixel) and its decoding; 130 nodes, 0.45 bpp, PSNR = 30.79 dB]
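The reported rate is consistent with vector quantization over 4×4 pixel blocks, one codebook index per block. That block size is an assumption (the slide does not state it), though it also reproduces slide 21's 0.375 bpp for 64 nodes exactly:

```python
import math

def bits_per_pixel(num_nodes, block_pixels=16):
    """Bits needed to index a codebook of num_nodes entries, per pixel
    (4x4 blocks assumed; not stated on the slide)."""
    return math.log2(num_nodes) / block_pixels

def psnr_db(mse, peak=255.0):
    """Peak signal-to-noise ratio in dB for an 8-bit image."""
    return 10.0 * math.log10(peak ** 2 / mse)

print(f"{bits_per_pixel(130):.2f} bpp")  # ~0.44, close to the reported 0.45
print(f"{bits_per_pixel(64):.3f} bpp")   # 0.375, matching slide 21
```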
  • 20. Experiment: comparison with GNG, stationary environment

        Method        Nodes  bpp   PSNR (dB)
        First layer    130   0.45    30.79
        GNG            130   0.45    29.98
        Second layer    52   0.34    29.29
        GNG             52   0.34    28.61
  • 21. Experiment: vector quantization, non-stationary environment
        [Figures: first layer, 499 nodes, 0.56 bpp, PSNR = 32.91 dB; second layer, 64 nodes, 0.375 bpp, PSNR = 29.66 dB]
  • 22. Conclusions
        - An autonomous learning system for unsupervised classification and topology-representation tasks.
        - Grows incrementally and learns the number of nodes needed to solve the current task.
        - Accommodates input patterns from an on-line, non-stationary data distribution.
        - Eliminates noise in the input data.