  1. A Bayesian Approach to Localized Multi-Kernel Learning Using the Relevance Vector Machine
     R. Close, J. Wilson, P. Gader
  2. Outline
     - Benefits of kernel methods
     - Multi-kernels and localized multi-kernels
     - Relevance Vector Machines (RVM)
     - Localized multi-kernel RVM (LMK-RVM)
     - Application of LMK-RVM to landmine detection
     - Conclusions
  3. Kernel Methods Overview
     - Using a non-linear mapping Φ, a decision surface can become linear in the transformed space
  4. Kernel Methods Overview
     - If the mapping satisfies Mercer's theorem (i.e., it is finitely positive semi-definite), then it corresponds to an inner-product kernel K
  5. Kernel Methods
     - Feature transformations increase dimensionality to create a linear separation between classes
     - Using the kernel trick, kernel methods construct these feature transformations in a possibly infinite-dimensional space that can be characterized finitely
     - The accuracy and robustness of the model depend directly on the kernel's ability to represent the correlation between data points
     - A side benefit is an increased understanding of the latent relationships between data points once the kernel parameters are learned
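The kernel trick above can be illustrated with a degree-2 polynomial kernel, whose explicit feature map is small enough to write out. This is a minimal sketch (function names are illustrative, not from the slides) verifying that evaluating the kernel equals taking the inner product in the transformed space:

```python
import numpy as np

def phi(x):
    """Explicit feature map for the degree-2 polynomial kernel on 2-D input."""
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2.0) * x1,
                     np.sqrt(2.0) * x2,
                     x1 ** 2,
                     x2 ** 2,
                     np.sqrt(2.0) * x1 * x2])

def poly_kernel(x, z):
    """Kernel trick: the same inner product, without ever building phi."""
    return (1.0 + x @ z) ** 2
```

For an RBF kernel the corresponding feature space is infinite-dimensional, yet the kernel value is still computed in finite form, which is the point of the slide.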
  6. Multi-Kernel Learning
     - When using kernel methods, a specific form of kernel function is chosen (e.g., a radial basis function)
     - Multi-kernel learning uses a linear combination of kernel functions
     - The weights may be constrained if desired
     - As the model is trained, the weights yielding the best input-space to kernel-space mapping are learned
     - Any kernel function whose weight approaches 0 is pruned out of the multi-kernel function
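The linear combination described above can be sketched as a weighted sum of base kernel matrices, with near-zero weights pruned; a minimal sketch assuming RBF base kernels (names and the tolerance are illustrative, not from the slides):

```python
import numpy as np

def rbf(X, Z, sigma):
    """RBF kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def multi_kernel(X, Z, sigmas, weights, tol=1e-8):
    """Weighted sum of base kernels; kernels with near-zero weight are pruned."""
    K = np.zeros((len(X), len(Z)))
    for s, w in zip(sigmas, weights):
        if abs(w) > tol:  # pruned kernels contribute nothing
            K += w * rbf(X, Z, s)
    return K
```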
  7. Localized Multi-Kernel Learning
     - Localized multi-kernel (LMK) learning allows different kernels (or different kernel parameters) to be used in separate areas of the feature space; thus the model is not limited to the assumption that one kernel function can effectively map the entire feature space
     - Many LMK approaches attempt to simultaneously partition the feature space and learn the multi-kernel
     [Figure: different multi-kernels applied in different regions of the feature space]
  8. LMK-RVM
     - A localized multi-kernel relevance vector machine (LMK-RVM) uses the automatic relevance determination (ARD) prior of the RVM to select the kernels to use over a given feature space
     - This allows greater flexibility in the localization of the kernels and increased sparsity
  9. RVM Overview
     - Likelihood: p(t | w, σ²) = N(t | Φw, σ²I)
     - ARD prior: p(w | α) = ∏ᵢ N(wᵢ | 0, αᵢ⁻¹)
     - Posterior: p(w | t, α, σ²) = N(w | m, Σ), with Σ = (σ⁻²ΦᵀΦ + diag(α))⁻¹ and m = σ⁻²ΣΦᵀt
  10. RVM Overview
     - Likelihood, posterior, and ARD prior as on the previous slide
     - Note the vector hyper-parameter α: each weight wᵢ has its own precision αᵢ
  11. Automatic Relevance Determination
     - Values for α and σ² are determined by integrating over the weights and maximizing the resulting marginal distribution
     - Those training samples that do not help predict the output of other training samples have α values that tend toward infinity. Their associated weight priors become δ functions with mean 0; that is, their weight in predicting outcomes at other points should be exactly 0. Thus, these training vectors can be removed
     - We can use the remaining, relevant, vectors to estimate the outputs associated with new data
     - The design matrix K = Φ is now N×M, where M ≪ N
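The re-estimation and pruning loop described above can be sketched with Tipping-style fixed-point updates for regression; a simplified sketch, assuming a fixed pruning threshold standing in for "α tends toward infinity" (names and constants are assumptions, not from the slides):

```python
import numpy as np

def rvm_regression(Phi, t, n_iter=200, prune_thresh=1e6):
    """Simplified RVM regression via ARD fixed-point updates with pruning."""
    N, M = Phi.shape
    alpha = np.ones(M)            # per-weight precisions
    beta = 1.0 / np.var(t)        # noise precision, 1/sigma^2
    keep = np.arange(M)           # original column indices still alive
    for _ in range(n_iter):
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
        m = beta * Sigma @ Phi.T @ t
        gamma = 1.0 - alpha * np.diag(Sigma)       # "well-determined" factors
        alpha = gamma / (m ** 2 + 1e-12)           # fixed-point update
        beta = (N - gamma.sum()) / (np.sum((t - Phi @ m) ** 2) + 1e-12)
        mask = alpha < prune_thresh                # prune alpha -> infinity
        Phi, alpha, keep = Phi[:, mask], alpha[mask], keep[mask]
    Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
    m = beta * Sigma @ Phi.T @ t
    return m, keep
```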
  12. RVM for Classification
     - Start with a two-class problem: t ∈ {0, 1}
     - y(x) = σ(wᵀφ(x)), where σ(·) is the logistic sigmoid
     - Same as RVM for regression, except IRLS must be used to calculate the mode of the posterior distribution
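The IRLS step mentioned above is a Newton iteration on the log-posterior of the logistic model; a minimal sketch assuming a fixed ARD precision vector alpha (function names are illustrative, not from the slides):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def irls_map(Phi, t, alpha, n_iter=25):
    """Posterior mode of logistic-RVM weights via IRLS (Newton's method)."""
    A = np.diag(alpha)
    w = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        y = sigmoid(Phi @ w)
        g = Phi.T @ (y - t) + A @ w      # gradient of negative log-posterior
        B = np.diag(y * (1.0 - y))       # IRLS weighting matrix
        H = Phi.T @ B @ Phi + A          # Hessian
        w = w - np.linalg.solve(H, g)
    return w
```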
  13. LMK-RVM
     - Using the multi-kernel with the RVM model, we start with y(x) = Σₙ wₙ Σᵢ wᵢ kᵢ(x, xₙ), where wₙ is the weight on the multi-kernel associated with vector n and wᵢ is the weight on the i-th component of each multi-kernel
     - Unlike some kernel methods (e.g., the SVM), the RVM is not constrained to use a positive-definite kernel matrix; thus, there is no requirement that the weights be factorized as wₙwᵢ. So, in this setting, y(x) = Σₙ Σᵢ wₙᵢ kᵢ(x, xₙ)
     - We show a sample application of LMK-RVM using two radial basis kernels at each training point with different spreads
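With unfactorized weights wₙᵢ, the model amounts to running an RVM over a design matrix with one column block per kernel spread; a minimal sketch of that construction (names are illustrative, not from the slides):

```python
import numpy as np

def rbf(X, centers, sigma):
    """RBF kernel columns: one column per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lmk_design_matrix(X, centers, sigmas):
    """Stack one RBF column block per spread: Phi is N x (M * len(sigmas)).
    The RVM's ARD prior can then keep a wide kernel at one training point
    and a narrow one at another, independently."""
    return np.hstack([rbf(X, centers, s) for s in sigmas])
```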
  14. Toy Dataset Example
     [Figure: relevant vectors selected with larger-σ kernels vs. smaller-σ kernels]
  15. GPR Data Experiments
     - GPR experiments using data with 120-dimensional spectral features
     - Improvements in classification happen off-diagonal (i.e., when the two kernel spreads differ)
  16. GPR ROC
     [Figure: ROC curves for the GPR experiments]
  17. WEMI Data Experiments
     - WEMI experiments using data with 3-dimensional GRANMA features
     - Improvements in classification happen off-diagonal
  18. WEMI ROC
     [Figure: ROC curves for the WEMI experiments]
  19. Number of Relevant Vectors
     - Number of relevant vectors averaged over all ten folds, for WEMI and GPR
     - The off-diagonal shows a potentially sparser model
  20. Conclusions
     - The experiment using GPR data features showed that LMK-RVM can provide definite improvement in SSE, AUC, and the ROC
     - The experiment using the lower-dimensional WEMI GRANMA features showed that the same LMK-RVM method provided some improvement in SSE and AUC and an inconclusive ROC
     - Both sets of experiments show the potential for sparser models when using the LMK-RVM
     - Open question: is there an effective way to learn values for the spreads in our simple class of localized multi-kernels?
  23. Backup Slides
     Expanded Kernels Discussion
  24. Kernel Methods Example: The Masked Class Problem
     - In both of these problems, linear classification methods have difficulty discriminating the blue class from the others
     - What is the actual problem here? No single line can separate the blue class from the other data points; this is similar to the single-layer perceptron (XOR) problem
  25. Decision Surface in Feature Space
     - The green and black classes can be classified with no problem
     - Problems arise when we try to classify the blue class
  26. Revisit Masked Class Problem
     - Are linear methods completely useless on this data? No: we can perform a non-linear transformation on the data via fixed basis functions
     - Often, after this transformation, features that were not linearly separable in the original feature space become linearly separable in the transformed feature space
  27. Basis Functions
     - Models can be extended by using fixed basis functions, which allows for linear combinations of non-linear functions of the input variables
     - Gaussian (or RBF) basis function: φⱼ(x) = exp(−(x − μⱼ)² / (2s²))
     - Basis vector: φ(x) = (φ₀(x), φ₁(x), …, φ_M(x))ᵀ
     - Dummy basis function used for the bias parameter: φ₀(x) = 1
     - The basis function center (μⱼ) governs location in input space
     - The scale parameter (s) determines the spatial scale
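The basis vector above, including the dummy φ₀ for the bias, can be sketched directly (function names are illustrative, not from the slides):

```python
import numpy as np

def basis_vector(x, centers, s):
    """Gaussian basis vector for scalar input x, with dummy phi_0 = 1 for the bias."""
    phi = np.exp(-((x - centers) ** 2) / (2.0 * s ** 2))
    return np.concatenate(([1.0], phi))
```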
  28. Features in Transformed Space are Linearly Separable
     - Transformed data points are plotted in the new feature space
  29. Transformed Decision Surface in Feature Space
     - Again, the green and black classes can be classified with no problem
     - Now the blue class can also be classified with no problem
  30. Common Kernels
     - Squared exponential: k(x, x′) = exp(−‖x − x′‖² / (2ℓ²))
     - Gaussian process kernel
     - Automatic relevance determination (ARD) kernel: k(x, x′) = σ² exp(−½ Σ_d η_d (x_d − x′_d)²), with one inverse length-scale η_d per input dimension
     - Other kernels: neural network, Matérn, γ-exponential, etc.
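The ARD kernel listed above weights each input dimension separately, so a dimension with η_d = 0 is effectively ignored; a minimal sketch (names are illustrative, not from the slides):

```python
import numpy as np

def ard_kernel(X1, X2, eta, sig2=1.0):
    """ARD squared-exponential kernel: one inverse length-scale eta_d per dimension."""
    diff = X1[:, None, :] - X2[None, :, :]
    return sig2 * np.exp(-0.5 * (eta * diff ** 2).sum(-1))
```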