Machine Learning with WEKA

  1. Machine Learning with WEKA: An Introduction. 林松江, 2005/9/30
  2. WEKA: the bird. Copyright: Martin Kramer (mkramer@wxs.nl)
  3. WEKA is available at http://www.cs.waikato.ac.nz/ml/weka/
  4. The format of a dataset in WEKA (1): an ARFF file names the relation, declares each attribute, then lists the instances after @data, with "?" marking a missing value:
     @relation heart-disease-simplified
     @attribute age numeric
     @attribute sex { female, male }
     @attribute chest_pain_type { typ_angina, asympt, non_anginal, atyp_angina }
     @attribute cholesterol numeric
     @attribute exercise_induced_angina { no, yes }
     @attribute class { present, not_present }
     @data
     63,male,typ_angina,233,no,not_present
     67,male,asympt,286,yes,present
     67,male,asympt,229,yes,present
     38,female,non_anginal,?,no,not_present
     ...
  5. The format of a dataset in WEKA (2): repeats the ARFF example from slide 4.
  6. Data can be imported from a file in various formats: ARFF, CSV, C4.5, binary.
  7. Explorer: pre-processing the data. Data can be imported from a file in various formats (ARFF, CSV, C4.5, binary) or read from a URL or an SQL database (using JDBC). Pre-processing tools in WEKA are called "filters", and WEKA contains filters for discretization, normalization, resampling, attribute selection, transforming and combining attributes, ...
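Outside the Explorer, the same loading step can be scripted against WEKA's Java API. A minimal sketch, assuming a file named heart.arff (a placeholder) whose last attribute is the class:

        import weka.core.Instances;
        import weka.core.converters.ConverterUtils.DataSource;

        public class LoadData {
            public static void main(String[] args) throws Exception {
                // "heart.arff" is a placeholder; DataSource also reads CSV
                // and the other registered converter formats
                DataSource source = new DataSource("heart.arff");
                Instances data = source.getDataSet();
                // Tell WEKA which attribute is the class (the last one here)
                data.setClassIndex(data.numAttributes() - 1);
                System.out.println("Loaded " + data.numInstances() + " instances");
            }
        }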
  8. Pre-processing tools in WEKA are called "filters", including discretization, normalization, resampling, attribute selection, transforming and combining attributes, ...
  9. Visualize the class distribution for each attribute
  10. Click here to choose a filter algorithm
  11. Click here to set the parameters of the filter algorithm
  12. Set parameters
  13. Apply the filter algorithm
  14. Equal-frequency binning
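The equal-frequency discretization from the last step can also be applied programmatically. A sketch using the unsupervised Discretize filter; the bin count and file name are illustrative:

        import weka.core.Instances;
        import weka.core.converters.ConverterUtils.DataSource;
        import weka.filters.Filter;
        import weka.filters.unsupervised.attribute.Discretize;

        public class DiscretizeExample {
            public static void main(String[] args) throws Exception {
                Instances data = new DataSource("heart.arff").getDataSet();

                Discretize disc = new Discretize();
                disc.setBins(10);                 // number of intervals (illustrative)
                disc.setUseEqualFrequency(true);  // equal-frequency rather than equal-width
                disc.setInputFormat(data);        // must be called before filtering

                Instances discretized = Filter.useFilter(data, disc);
                System.out.println(discretized.attribute(0));
            }
        }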
  15. Building "classifiers". Classifiers in WEKA are models for predicting nominal or numeric quantities. Implemented learning schemes include decision trees and lists, instance-based classifiers, support vector machines, multi-layer perceptrons, logistic regression, Bayes nets, ... "Meta"-classifiers include bagging, boosting, stacking, error-correcting output codes, locally weighted learning, ...
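Before the GUI walkthrough in the next slides, here is how the same J48 workflow might look in code; a sketch, with the file name, pruning confidence, and fold count as illustrative values:

        import java.util.Random;
        import weka.classifiers.Evaluation;
        import weka.classifiers.trees.J48;
        import weka.core.Instances;
        import weka.core.converters.ConverterUtils.DataSource;

        public class J48Example {
            public static void main(String[] args) throws Exception {
                Instances data = new DataSource("heart.arff").getDataSet();
                data.setClassIndex(data.numAttributes() - 1);

                J48 tree = new J48();
                tree.setConfidenceFactor(0.25f);  // pruning confidence (the default)

                // 10-fold cross-validation, as in the Explorer's default setting
                Evaluation eval = new Evaluation(data);
                eval.crossValidateModel(tree, data, 10, new Random(1));
                System.out.println(eval.toSummaryString());

                // Build on the full data to inspect the tree itself
                tree.buildClassifier(data);
                System.out.println(tree);
            }
        }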
  16. Choose a classification algorithm
  17. J48: decision tree algorithm
  18. Click here to set the parameters of the decision tree
  19. Information about the decision tree algorithm
  20. Output settings
  21. Start building the classifier
  22. Fitted result; right-click the entry at the bottom to view more information
  23. View the fitted tree
  24. Crosses: correctly classified instances; squares: incorrectly classified instances
  25. SMO: support vector machine algorithm
  26. Choose the RBF kernel
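In code, selecting the RBF kernel for SMO might look like this with the kernel-object API of recent WEKA releases (older versions exposed an RBF switch on SMO itself); the gamma value is illustrative:

        import java.util.Random;
        import weka.classifiers.Evaluation;
        import weka.classifiers.functions.SMO;
        import weka.classifiers.functions.supportVector.RBFKernel;
        import weka.core.Instances;
        import weka.core.converters.ConverterUtils.DataSource;

        public class SMOExample {
            public static void main(String[] args) throws Exception {
                Instances data = new DataSource("heart.arff").getDataSet();
                data.setClassIndex(data.numAttributes() - 1);

                SMO smo = new SMO();
                RBFKernel rbf = new RBFKernel();
                rbf.setGamma(0.01);   // kernel width (illustrative value)
                smo.setKernel(rbf);

                Evaluation eval = new Evaluation(data);
                eval.crossValidateModel(smo, data, 10, new Random(1));
                System.out.println(eval.toSummaryString());
            }
        }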
  27. Right-click the entry at the bottom to view more information
  28. Clustering data. WEKA contains "clusterers" for finding groups of similar instances in a dataset. Implemented schemes are: k-means, EM, Cobweb, X-means, FarthestFirst. Clusters can be visualized and compared to "true" clusters (if given). Evaluation is based on log-likelihood if the clustering scheme produces a probability distribution.
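A minimal k-means sketch via the API; the class attribute is dropped first so it takes no part in the distance computation, and both k and the file name are illustrative:

        import weka.clusterers.SimpleKMeans;
        import weka.core.Instances;
        import weka.core.converters.ConverterUtils.DataSource;
        import weka.filters.Filter;
        import weka.filters.unsupervised.attribute.Remove;

        public class KMeansExample {
            public static void main(String[] args) throws Exception {
                Instances data = new DataSource("heart.arff").getDataSet();

                // Drop the class attribute (the last one here) before clustering
                Remove remove = new Remove();
                remove.setAttributeIndices("last");
                remove.setInputFormat(data);
                Instances input = Filter.useFilter(data, remove);

                SimpleKMeans kmeans = new SimpleKMeans();
                kmeans.setNumClusters(2);   // illustrative k
                kmeans.buildClusterer(input);

                System.out.println(kmeans); // centroids and cluster sizes
            }
        }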
  29. Clustering data
  30. Choose a clustering algorithm
  31. Click here to set parameters
  32. Start clustering
  33. View cluster assignments; right-click the entry at the bottom to view more information
  34. Finding associations. WEKA contains an implementation of the Apriori algorithm for learning association rules. It works only with discrete data and can identify statistical dependencies between groups of attributes: milk, butter ⇒ bread, eggs (with confidence 0.9 and support 2000). Apriori can compute all rules that have a given minimum support and exceed a given confidence.
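Scripted, rule mining might look as follows; the dataset name and the support and confidence thresholds are illustrative, and the data must be all-nominal (discretize numeric attributes first):

        import weka.associations.Apriori;
        import weka.core.Instances;
        import weka.core.converters.ConverterUtils.DataSource;

        public class AprioriExample {
            public static void main(String[] args) throws Exception {
                // Assumes an all-nominal dataset (placeholder file name)
                Instances data = new DataSource("supermarket.arff").getDataSet();

                Apriori apriori = new Apriori();
                apriori.setLowerBoundMinSupport(0.1); // minimum support (illustrative)
                apriori.setMinMetric(0.9);            // minimum confidence (illustrative)
                apriori.setNumRules(10);              // report the 10 best rules

                apriori.buildAssociations(data);
                System.out.println(apriori);
            }
        }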
  35. Finding associations
  36. Attribute selection. A panel for investigating which (subsets of) attributes are the most predictive. Attribute selection methods have two parts: a search method (best-first, forward selection, random, exhaustive, genetic algorithm, ranking) and an evaluation method (correlation-based, wrapper, information gain, chi-squared, ...). Very flexible: WEKA allows (almost) arbitrary combinations of these two.
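One such combination, a correlation-based evaluator with best-first search, scripted as a sketch (the dataset path is a placeholder):

        import weka.attributeSelection.AttributeSelection;
        import weka.attributeSelection.BestFirst;
        import weka.attributeSelection.CfsSubsetEval;
        import weka.core.Instances;
        import weka.core.converters.ConverterUtils.DataSource;

        public class AttSelExample {
            public static void main(String[] args) throws Exception {
                Instances data = new DataSource("heart.arff").getDataSet();
                data.setClassIndex(data.numAttributes() - 1);

                AttributeSelection selector = new AttributeSelection();
                selector.setEvaluator(new CfsSubsetEval()); // correlation-based evaluator
                selector.setSearch(new BestFirst());        // best-first search
                selector.SelectAttributes(data);            // note the capitalized legacy name

                // Selected attribute indices (the class index is included at the end)
                int[] chosen = selector.selectedAttributes();
                for (int i : chosen) {
                    System.out.println(data.attribute(i).name());
                }
            }
        }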
  37. Choose an attribute evaluator and a search method
  38. Data visualization. Visualization is very useful in practice: for example, it helps to determine the difficulty of the learning problem. WEKA can visualize single attributes (1-d) and pairs of attributes (2-d); rotating 3-d visualizations (XGobi-style) are still to do. Class values are color-coded, a "jitter" option deals with nominal attributes (and helps detect "hidden" data points), and there is a "zoom-in" function.
  39. Adjust the view point size; click here to view a single result
  40. Choose a range to magnify the local result
  41. Submit to magnify the local result
  42. Performing experiments. The Experimenter makes it easy to compare the performance of different learning schemes, for both classification and regression problems. Results can be written to a file or a database. Evaluation options: cross-validation, learning curve, hold-out. It can also iterate over different parameter settings. Significance testing is built in!
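The Experimenter is a GUI, but the core of such a comparison can be sketched with the evaluation API. The snippet below cross-validates two schemes on one dataset; the Experimenter additionally handles repeated runs, multiple datasets, and the built-in significance test, which this sketch omits:

        import java.util.Random;
        import weka.classifiers.Classifier;
        import weka.classifiers.Evaluation;
        import weka.classifiers.bayes.NaiveBayes;
        import weka.classifiers.trees.J48;
        import weka.core.Instances;
        import weka.core.converters.ConverterUtils.DataSource;

        public class CompareSchemes {
            public static void main(String[] args) throws Exception {
                Instances data = new DataSource("heart.arff").getDataSet();
                data.setClassIndex(data.numAttributes() - 1);

                Classifier[] schemes = { new J48(), new NaiveBayes() };
                for (Classifier c : schemes) {
                    Evaluation eval = new Evaluation(data);
                    eval.crossValidateModel(c, data, 10, new Random(1));
                    System.out.println(c.getClass().getSimpleName()
                            + ": " + eval.pctCorrect() + "% correct");
                }
            }
        }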
  43. Step 1: set the output file. Step 2: add a dataset. Step 3: choose an algorithm. Then click here to run the experiment
  44. Experiment status
  45. View the experiment results
  46. Click here to perform the test
  47. Knowledge Flow GUI. A new graphical user interface for WEKA: a JavaBeans-based interface for setting up and running machine learning experiments. Data sources, classifiers, etc. are beans and can be connected graphically. Data "flows" through the components, e.g. "data source" -> "filter" -> "classifier" -> "evaluator". Layouts can be saved and loaded again later.
  48. Step 1: choose a data source
  49. Step 2: choose the data source format
  50. Step 3: set the data connection
  51. Visualize the data
  52. Step 4: choose a filter method. Step 5: choose a classifier. Step 6: choose an evaluation method. Set parameters
  53. Final step: start the run
  54. Finished
  55. Thank you
