IGARSS2011_ABC Optimized SOM.ppt
1. C. C. Hung, H. Ijaz, E. Jung, and B.-C. Kuo#
   School of Computing and Software Engineering, Southern Polytechnic State University, Marietta, Georgia, USA
   # Graduate Institute of Educational Measurement and Statistics, National Taichung University of Education, Taichung, Taiwan, R.O.C.
2. - Introduction
   - Self-Organizing Maps (SOM)
   - Artificial Bee Colony (ABC) Algorithm
   - Combining SOM and ABC
   - Experimental Results
   - Conclusions & Future Work
3. - The Self-Organizing Map (SOM) has been used for image classification.
   - As with the K-means algorithm, the local minimum problem is unavoidable in a complex problem domain.
   - Several solutions have been proposed for optimizing the SOM in remote sensing applications, for example simulated annealing.
4. - Can we use bee algorithms (BA) to achieve more robust classification?
   - Many bee algorithms have been developed.
   - The Artificial Bee Colony (ABC) algorithm was used with SOM in this study.
5. - The self-organizing map (SOM) is an unsupervised learning method based on a grid of artificial neurons whose weights are adapted to match the input vectors in a training set.
   - It was first described by the Finnish professor Teuvo Kohonen and is thus sometimes referred to as a Kohonen map.
   - SOM is one of the more popular neural computation methods in use, and several thousand scientific articles have been written about it. SOM is especially good at producing visualizations of high-dimensional data.
6. Figure 1: Each X_i represents one component of the pixel vector for the multispectral bands; L denotes the number of bands used. Each neuron in the output layer corresponds to one spectral class, and its spectral means are stored in the connections between the input and output neurons.
7. Figure 2: SOM Neural Network
8. - Each SOM neuron can be seen as representing a cluster containing all the input examples that are mapped to that neuron.
   - For a given input, the output of the SOM is the neuron whose weight vector is most similar (with respect to Euclidean distance) to that input (a small code sketch of this lookup follows).
   - The "trained" classes are represented by the output nodes, and the center of each class is stored in the connection weights between the input and output nodes.
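The winner lookup described on this slide is a nearest-neighbour search over the neuron weight vectors. A minimal Python/NumPy sketch (the function name winning_node and the array layout are illustrative assumptions, not from the slides):

```python
import numpy as np

def winning_node(weights, x):
    """Index of the SOM neuron whose weight vector is closest to the
    input vector x in Euclidean distance.
    weights: (n_neurons, L) array of neuron weights; x: (L,) pixel vector."""
    distances = np.linalg.norm(weights - x, axis=1)
    return int(np.argmin(distances))
```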
9. - Step 4: Update the weights of the winning node j* using
     w_j*(t+1) = w_j*(t) + Δ(t) [x(t) − w_j*(t)]
   - Δ(t) is a monotonically, slowly decreasing function of t (i.e., the learning rate), and its value is between 0 and 1.
10. - The basic process can be summarized by the following steps (a minimal code sketch follows this slide):
    - Step 1: Initialize all nodes to small random values.
    - Step 2: Choose a random data point.
    - Step 3: Calculate the winning node.
    - Step 4: Update the winning node and its neighbors.
    - Step 5: Repeat Steps 2 to 4 for a given number of iterations.
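A minimal sketch of this training loop in Python/NumPy, assuming a 1-D grid of neurons, immediate-neighbour updates, and a linearly decaying learning rate (the grid layout, decay schedule, and function name train_som are assumptions for illustration, not specified on the slides):

```python
import numpy as np

def train_som(data, n_neurons, n_iters, lr0=0.5, seed=None):
    """Minimal 1-D SOM training loop following Steps 1-5 above.
    data: (n_samples, L) array of pixel vectors."""
    rng = np.random.default_rng(seed)
    # Step 1: initialize all nodes to small random values
    weights = rng.uniform(-0.1, 0.1, size=(n_neurons, data.shape[1]))
    for t in range(n_iters):
        # Step 2: choose a random data point
        x = data[rng.integers(len(data))]
        # Step 3: calculate the winning node (nearest weight vector)
        j_star = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
        # Step 4: update the winner and its immediate grid neighbours,
        # with a slowly decreasing learning rate (the Delta(t) of slide 9)
        lr = lr0 * (1.0 - t / n_iters)
        for j in (j_star - 1, j_star, j_star + 1):
            if 0 <= j < n_neurons:
                weights[j] += lr * (x - weights[j])
    return weights
```

For image classification, data would hold the L-band pixel vectors and each output neuron would represent one spectral class, as described on slide 6.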
11. - The Bees Algorithm is a population-based search algorithm, inspired by the natural foraging behaviour of honey bees, for finding an optimal solution.
    - The algorithm performs a kind of neighbourhood search combined with random search.
12. Scout bees search randomly from one patch to another.
13. They deposit their nectar or pollen and go to the "dance floor" to perform a "waggle dance".
    Credit: Masaryk University, Brno, Czech Republic, Wed 08 Apr 2009
14. Bees communicate through the waggle dance, which contains the following information:
    - The direction of flower patches (the angle between the sun and the patch)
    - The distance from the hive (the duration of the dance)
    - The quality rating, or fitness (the frequency of the dance)
    Credit: Masaryk University, Brno, Czech Republic, Wed 08 Apr 2009
15. - There are three types of bees in ABC:
      1) employed bees,
      2) onlooker bees, and
      3) scouts.
    - Employed and onlooker bees perform the exploitation search.
    - Scouts carry out the exploration search.
16. ABC employs four different selection processes:
    1) a global selection process used by onlookers,
    2) a local selection process carried out in a region by employed and onlooker bees,
    3) a greedy selection process used by all bees, and
    4) a random selection process used by scouts.
17. The ABC algorithm consists of the following steps:
    - Step 1: Initialize by picking k random employed bees from the data.
    - Step 2: Send scout bees and test them against the employed bees (replace an employed bee if a better scout is found).
18. - Step 3: Send onlooker bees to the employed bees.
    - Step 4: Test the onlooker bees against the employed bees (replace an employed bee if a better onlooker is found).
    - Step 5: Reduce the radius of the onlooker bees.
    - Step 6: Repeat Steps 2 to 5 for a given number of iterations.
19. The algorithm requires these parameters to be set (a code sketch using them follows this slide):
    - the number of clusters (k),
    - the number of bees, including employed bees, onlookers, and scouts (B),
    - the number of iterations (iter), and
    - the initial radius of the onlookers (ir).
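A sketch of these steps in Python/NumPy, treating the k employed bees as candidate cluster centres. The fitness function, the use of B candidates per bee type, the 0.9 radius-shrink factor, and the function names are all assumptions for illustration; they are not given on the slides:

```python
import numpy as np

def cluster_fitness(centers, data):
    """Illustrative fitness: negative total distance of each data point
    to its nearest centre (higher is better)."""
    d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    return -d.min(axis=1).sum()

def abc_clustering(data, k, B, iters, ir, seed=None):
    """Sketch of the ABC steps on slides 17-18 as a search for k centres."""
    rng = np.random.default_rng(seed)
    # Step 1: initialize by picking k random employed bees from the data
    employed = data[rng.choice(len(data), size=k, replace=False)].copy()
    radius = ir
    best = cluster_fitness(employed, data)
    for _ in range(iters):
        # Step 2: scout bees search randomly; a scout replaces an
        # employed bee whenever it improves the fitness
        for _ in range(B):
            i = rng.integers(k)
            candidate = employed.copy()
            candidate[i] = data[rng.integers(len(data))]
            f = cluster_fitness(candidate, data)
            if f > best:
                employed, best = candidate, f
        # Steps 3-4: onlooker bees search near the employed bees and
        # replace them greedily when they do better
        for _ in range(B):
            i = rng.integers(k)
            candidate = employed.copy()
            candidate[i] = employed[i] + rng.uniform(-radius, radius, size=data.shape[1])
            f = cluster_fitness(candidate, data)
            if f > best:
                employed, best = candidate, f
        # Step 5: reduce the radius of the onlooker bees
        radius *= 0.9
    return employed
```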
20. The proposed algorithm uses the ABC to select some neighboring nodes in the SOM for the weight update.
21. The basic process can be summarized by the following steps (a code sketch follows this slide):
    - Step 1: Initialize all weights to small random values.
    - Step 2: Choose a random data point.
    - Step 3: Calculate the winning node.
    - Step 4: Use the ABC to select neighboring nodes.
    - Step 5: Update the winning node and the selected neighboring nodes.
    - Step 6: Repeat Steps 2 to 5 for a given number of iterations.
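A sketch of the combined loop in Python/NumPy. The helper select_neighbors_abc is a hypothetical placeholder for the ABC-based neighbour selection; the slides say that ABC chooses the neighbouring nodes but do not spell out the selection criterion, so only its interface is shown here:

```python
import numpy as np

def train_som_abc(data, n_neurons, n_iters, select_neighbors_abc,
                  lr0=0.5, seed=None):
    """Sketch of the combined SOM+ABC loop in Steps 1-6 above.
    select_neighbors_abc(weights, j_star, x) -> iterable of node indices
    is a placeholder for the ABC-based neighbour selection."""
    rng = np.random.default_rng(seed)
    # Step 1: initialize all weights to small random values
    weights = rng.uniform(-0.1, 0.1, size=(n_neurons, data.shape[1]))
    for t in range(n_iters):
        # Step 2: choose a random data point
        x = data[rng.integers(len(data))]
        # Step 3: calculate the winning node
        j_star = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
        # Step 4: use ABC to select which neighbouring nodes to update
        neighbors = select_neighbors_abc(weights, j_star, x)
        # Step 5: update the winning node and the selected neighbours
        lr = lr0 * (1.0 - t / n_iters)
        for j in {j_star, *neighbors}:
            weights[j] += lr * (x - weights[j])
    return weights
```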
22. Illustration:
23. Some of the results on the Iris and Glass data sets are shown in Tables 1, 2, and 3. The columns Max, Mean, and Var. (std in Table 1) give the maximum accuracy achieved, the mean accuracy over the runs, and the variance or standard deviation, respectively. The accuracy distribution is also given.
24. Table 1: Results and accuracy distribution of 100 runs on Iris data.

    avg of 100 runs |        BEE          |        SOM           |       BEE+SOM
                    | max   mean   std    | max    mean   std    | max    mean   std
    iris            | 94    69.15  0.3334 | 92.67  85.43  0.0824 | 93.33  90.53  0.014
25. Table 2: Results and accuracy distribution of 500 random experiments on Iris data.

    Algorithm | Max     | Mean    | Var.   | Accur. [0.9, 1] | Accur. [0.85, 0.9) | Accur. [0, 0.85)
    ABC       | 93.33 % | 89.20 % | 0.0557 | 325             | 155                | 20
    SOM       | 93.33 % | 86.35 % | 0.0304 | 97              | 229                | 174
    ABC + SOM | 93.33 % | 90.60 % | 0.0117 | 390             | 110                | 0
26. Table 3: Results and accuracy distribution of 500 random experiments on Glass data.

    Algorithm | Max     | Mean    | Var.   | Accur. [0.55, 1] | Accur. [0.50, 0.55) | Accur. [0, 0.50)
    ABC       | 55.14 % | 52.29 % | 0.0133 | 1                | 493                 | 6
    SOM       | 62.15 % | 48.70 % | 0.0323 | 10               | 157                 | 342
    ABC + SOM | 56.07 % | 52.31 % | 0.0305 | 95               | 286                 | 119
27. Figure 3. (a) An original image; (b), (c), and (d) the results of applying the ABC, SOM, and ABC+SOM algorithms, respectively.
28. - From the results we can see that all three algorithms achieved the same maximum accuracy on the Iris data, but the proposed algorithm is more stable than either of the other two. Furthermore, the algorithm can be effective with almost any parameter settings if one lets it run long enough.
    - The proposed algorithm (i.e., ABC + SOM) is more time-efficient than ABC. The ratio of computation time for SOM, ABC + SOM, and ABC is about 1:3:20.
29. - This is a preliminary experiment; further study and comparison with other similar methods still need to be done.
    - The robustness of the algorithm can be improved by refining the bee model.
