Evaluating Classification Algorithms Applied To Data Streams Esteban Donato




  1. 1. Evaluating classification algorithms applied to data streams Author: Ing. Esteban D. Donato Advisor: Dr. Fazel Famili Co-Advisor: Dra. Ana S. Haedo Dec-2009 Maestría en Explotación de Datos y Descubrimiento del Conocimiento
  2. 2. Introduction <ul><li>The majority of companies and organizations collect and maintain gigantic databases that grow by millions of records per day. </li></ul><ul><li>Current algorithms for mining complex models from data cannot process even a fraction of these data in useful time. </li></ul><ul><li>Concept drift: occurs when the underlying data distribution changes over time. </li></ul>
  3. 3. Objective <ul><li>To perform a benchmarking analysis among several well-known algorithms applied to data streams. </li></ul><ul><li>The algorithms chosen for this study are: UFFT, CVFDT and VFDTc. </li></ul><ul><li>The analysis focuses on aspects that every algorithm applied to data streams has to deal with. </li></ul>
  4. 4. Related work <ul><li>A data stream is a sequence of data items x1, …, xi, …, xn, read one at a time in increasing order of the indices. </li></ul><ul><li>Off-line learning : assumes that the dataset resides in a static database and has been generated from a static distribution. It also assumes that all the data is available before training and that all the examples fit into memory. </li></ul><ul><li>Incremental learning : the items are time-ordered and the distribution that generates them varies over time. Systems evolve and change a concept definition as new observations are processed. </li></ul>
  5. 5. Related work (Cont.): Data Streams Mining <ul><ul><li>A sub-area of incremental learning. </li></ul></ul><ul><ul><li>Data accumulates faster than it can be mined. </li></ul></ul><ul><ul><li>It must require small constant time per record. </li></ul></ul><ul><ul><li>It must use only a fixed amount of main memory. </li></ul></ul><ul><ul><li>It must be able to build a model using at most one scan of the data. </li></ul></ul><ul><ul><li>It must make a usable model available at any point in time. </li></ul></ul><ul><ul><li>Ideally, it should produce a model equivalent to the one that would be obtained by the corresponding ordinary database mining algorithm. </li></ul></ul><ul><ul><li>The model should be up to date at any time. </li></ul></ul><ul><ul><li>Types of algorithms : rule sets, induction trees and ensemble methods. </li></ul></ul>
  6. 6. Related work (Cont.): Very Fast Decision Tree (VFDT) <ul><ul><li>Requires each example to be read only once, and a small constant time to process it. </li></ul></ul><ul><ul><li>Building process : given a stream of examples, the first ones are used to choose the root, and the following examples are passed down to the corresponding leaves. </li></ul></ul><ul><ul><li>To decide how many examples are needed at each node, the Hoeffding bound is used. </li></ul></ul><ul><ul><li>The Hoeffding bound: for a random variable of range R whose observed mean over n independent observations is r, with probability 1 - φ the true mean is at least r - ε, where ε = √(R² ln(1/φ) / 2n). </li></ul></ul><ul><ul><li>Let ∆G = G(Xa) - G(Xb) >= 0 be the observed difference in the evaluation measure between the two best attributes; if ∆G > ε, then the true difference satisfies ∆G - ε > 0 with probability 1 - φ. </li></ul></ul><ul><ul><li>Other features: pre-pruning, different evaluation measures, ties, memory management, poor attributes, initialization, rescans. </li></ul></ul><ul><ul><li>Drawback : it does not detect concept drift. </li></ul></ul>
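The split decision above can be sketched in a few lines of Python. This is an illustrative sketch of the Hoeffding-bound test only, not VFDT's actual implementation; the function names and the example value of φ (written `delta` here) are assumptions:

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Epsilon such that, with probability 1 - delta, the true mean of a
    random variable with range value_range lies within epsilon of the
    mean observed over n independent examples."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

def should_split(g_best, g_second, value_range, delta, n):
    """Split when the observed gain difference between the two best
    attributes exceeds the bound, so the best attribute is the true
    best with probability 1 - delta."""
    return (g_best - g_second) > hoeffding_bound(value_range, delta, n)
```

For information gain over two classes the range is log2(2) = 1; with n = 1000 examples and delta = 1e-7, the bound is roughly 0.09, so a gain difference of 0.2 triggers a split while 0.05 does not.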
  7. 7. Related work (Cont.): Concept Drift <ul><ul><li>A change in the target concept. </li></ul></ul><ul><ul><li>Depends on some hidden attributes, not given explicitly in the form of predictive features. </li></ul></ul><ul><ul><li>Examples: weather prediction, customers’ buying preferences, etc. </li></ul></ul><ul><ul><li>A concept drift handling system should be able to: </li></ul></ul><ul><ul><ul><li>Quickly adapt to concept drift. </li></ul></ul></ul><ul><ul><ul><li>Be robust to noise and distinguish it from concept drift. </li></ul></ul></ul><ul><ul><ul><li>Recognize and treat recurring contexts. </li></ul></ul></ul><ul><ul><li>Types: sudden, gradual, frequent and virtual concept drift. </li></ul></ul>
  8. 8. Conclusion of literature review <ul><li>A data stream is a sequence of time-ordered items arriving faster than the time needed to mine them. </li></ul><ul><li>Changes in the underlying data distribution may occur, requiring the algorithms to detect and adapt to them. </li></ul><ul><li>The main challenge in incremental learning is how to detect and adapt to a concept drift. </li></ul><ul><li>To deal with data arriving fast, the algorithms must require a small constant processing time per record. </li></ul><ul><li>One of the first algorithms developed was VFDT, which uses the Hoeffding bound. </li></ul><ul><li>In concept drift handling, a difficult problem is distinguishing a true concept drift from noise. </li></ul>
  9. 9. Algorithm: VFDTc <ul><li>Very Fast Decision Tree for Continuous attributes. </li></ul><ul><li>Extends VFDT in three directions: continuous data, functional leaves, and concept drift. </li></ul><ul><li>For a continuous attribute, the split-test is a condition of the form attri <= cut_point. </li></ul><ul><li>Uses information gain to choose the cut_point. </li></ul><ul><li>Functional tree leaves: an innovative aspect of this algorithm is its ability to use naive Bayes classifiers at the tree leaves. </li></ul><ul><li>A leaf must see nmin examples before computing the evaluation function. </li></ul><ul><li>Concept drift handling is based on the assumption that, whatever the cause of the drift, the decision surface moves. It supports two methods: </li></ul><ul><ul><ul><li>Drift detection based on Error Estimates (EE/EBP) </li></ul></ul></ul><ul><ul><ul><li>Drift detection based on the Affinity Coefficient (AC) </li></ul></ul></ul><ul><li>Reacting to drift: the method pushes all the information from the descendant leaves up to the node. This is a forgetting mechanism. </li></ul>
  10. 10. Algorithm: UFFT <ul><ul><ul><li>Ultra Fast Forest Tree. </li></ul></ul></ul><ul><ul><ul><li>Generates a forest of binary trees. </li></ul></ul></ul><ul><ul><ul><li>Processes each example in constant time. </li></ul></ul></ul><ul><ul><ul><li>Uses analytical techniques to choose the splitting criteria, and information gain to estimate the merit of each possible splitting test. </li></ul></ul></ul><ul><ul><ul><li>Maintains a short-term memory for initializing the leaves. </li></ul></ul></ul><ul><ul><ul><li>To expand a leaf node: positive information gain and statistical support. </li></ul></ul></ul><ul><ul><ul><li>Functional leaves. </li></ul></ul></ul><ul><ul><ul><li>Concept drift detection: the error rate of a naive Bayes classifier is monitored at each node. </li></ul></ul></ul><ul><ul><ul><ul><li>The error follows a binomial distribution. </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Two confidence interval levels: warning and drift. </li></ul></ul></ul></ul>
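The error-rate monitoring described above can be sketched as follows. This is a hedged illustration, not UFFT's exact code: the error count is modeled as binomial, and the "warning" and "drift" levels at 2 and 3 standard deviations (and the 30-example warm-up) are assumed thresholds in the style of common drift-detection methods:

```python
import math

class DriftDetector:
    """Monitor a classifier's error rate on the stream. Since the error
    count follows a binomial distribution, its standard deviation is
    sqrt(p * (1 - p) / n); two confidence levels are tracked."""
    def __init__(self, warning_sigmas=2.0, drift_sigmas=3.0):
        self.warning_sigmas = warning_sigmas
        self.drift_sigmas = drift_sigmas
        self.reset()

    def reset(self):
        self.n = 0
        self.errors = 0
        self.p_min = float("inf")   # best (lowest) error rate seen so far
        self.s_min = float("inf")   # its standard deviation

    def update(self, is_error):
        """Feed one prediction outcome; return the current status."""
        self.n += 1
        self.errors += int(is_error)
        p = self.errors / self.n
        s = math.sqrt(p * (1.0 - p) / self.n)
        if self.n < 30:             # too few examples to judge
            return "in-control"
        if p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = p, s
        if p + s >= self.p_min + self.drift_sigmas * self.s_min:
            return "drift"
        if p + s >= self.p_min + self.warning_sigmas * self.s_min:
            return "warning"
        return "in-control"
```

While the concept is stable the error rate keeps decreasing or holds steady; when it rises significantly above its historical minimum, the detector first signals a warning and then a drift, at which point the node's statistics can be reset.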
  11. 11. Algorithm: CVFDT <ul><li>Concept-adapting Very Fast Decision Tree. </li></ul><ul><li>Extends VFDT with support for concept drift. </li></ul><ul><li>Works by keeping its model consistent with a sliding window of examples, updating just the statistics. </li></ul><ul><li>Uses information gain for selecting the best attribute. </li></ul><ul><li>Grows an alternative subtree with the new best attribute at its root. </li></ul><ul><li>Periodically scans HT and all alternate trees, looking for internal nodes whose alternate subtrees perform better than the nodes themselves. </li></ul>
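The sliding-window bookkeeping above can be illustrated with a minimal sketch: counts stay consistent with the window by incrementing them when an example arrives and decrementing them when the oldest example is forgotten. CVFDT applies this same idea to the sufficient statistics at every tree node; the class below is an illustrative simplification, not CVFDT's code:

```python
from collections import Counter, deque

class WindowStats:
    """Keep class counts consistent with a sliding window of examples:
    the newest example is counted in, and once the window is full the
    oldest example is counted out again."""
    def __init__(self, window_size):
        self.window_size = window_size
        self.window = deque()
        self.counts = Counter()

    def add(self, label):
        self.window.append(label)
        self.counts[label] += 1
        if len(self.window) > self.window_size:
            old = self.window.popleft()   # forget the oldest example
            self.counts[old] -= 1
```

Because only counts are updated, each example costs constant time regardless of the window size, which is what lets CVFDT keep its model in step with the window without re-reading old data.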
  12. 12. Performance measures <ul><li>Capacity to detect and respond to concept drift </li></ul><ul><li>Capacity to detect and respond to virtual concept drift </li></ul><ul><li>Capacity to detect and respond to recurring concept drift </li></ul><ul><li>Capacity to adapt to sudden concept drift </li></ul><ul><li>Capacity to adapt to gradual concept drift </li></ul><ul><li>Capacity to adapt to frequent concept drift </li></ul><ul><li>Accuracy of the classification task </li></ul><ul><li>Capacity to deal with outliers </li></ul><ul><li>Capacity to deal with noisy data </li></ul><ul><li>Speed (time taken to process an item in the stream) </li></ul>
  13. 13. Data sets generated <ul><li>Data sets based on a moving hyperplane. </li></ul><ul><li>A hyperplane in d-dimensional space [0, 1]^d is denoted by Σ_{i=1..d} a_i x_i = a_0; examples satisfying Σ a_i x_i >= a_0 are labeled positive. </li></ul><ul><li>MOA (Massive Online Analysis) tool </li></ul><ul><ul><li>http://sourceforge.net/projects/moa-datastream/ </li></ul></ul><ul><ul><li>Released under the GNU license; free and open source. </li></ul></ul><ul><li>Current configurable attributes: </li></ul><ul><ul><li>instanceRandomSeed </li></ul></ul><ul><ul><li>numClasses </li></ul></ul><ul><ul><li>numAtts </li></ul></ul><ul><ul><li>numDriftAtts </li></ul></ul><ul><ul><li>magChange </li></ul></ul><ul><ul><li>noisePercentage </li></ul></ul><ul><ul><li>sigmaPercentage </li></ul></ul><ul><li>New configurable attributes: </li></ul><ul><ul><li>driftFreq </li></ul></ul><ul><ul><li>driftTran </li></ul></ul><ul><ul><li>outlierPercentage </li></ul></ul><ul><ul><li>distributionPercentage </li></ul></ul>
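A minimal sketch of a moving-hyperplane stream generator follows. This is not the MOA implementation: the threshold choice a_0 = ½·Σa_i (which keeps the classes roughly balanced for uniform examples) and the per-example weight perturbation are assumptions for illustration:

```python
import random

def hyperplane_stream(n_examples, d=10, mag_change=0.001, seed=1):
    """Yield (x, y) pairs from a moving hyperplane in [0, 1]^d: an
    example x gets label 1 when sum(a_i * x_i) >= a_0, and the weights
    a_i drift slightly after each example (gradual concept drift)."""
    rng = random.Random(seed)
    a = [rng.random() for _ in range(d)]
    for _ in range(n_examples):
        x = [rng.random() for _ in range(d)]
        a0 = 0.5 * sum(a)                      # threshold: half the weight mass
        y = 1 if sum(w * v for w, v in zip(a, x)) >= a0 else 0
        yield x, y
        # perturb each weight to move the decision surface
        a = [w + mag_change * rng.choice((-1, 1)) for w in a]
```

Raising `mag_change` produces faster drift, setting it to 0 yields a stationary concept, and flipping labels on a random fraction of examples would add the noise controlled by `noisePercentage` in MOA.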
  14. 14. Data sets generated Dataset with no concept drift, outliers or noise. Dataset with 10% of noisy data. Dataset with 1% of outliers. Dataset with 3 concept drifts.
  15. 15. Results Capacity to detect and respond to concept drift
  16. 16. Results Capacity to detect and respond to virtual concept drift
  17. 17. Results Capacity to detect and respond to recurring concept drift
  18. 18. Results Capacity to adapt to sudden concept drift
  19. 19. Results Capacity to adapt to gradual concept drift
  20. 20. Results Capacity to adapt to frequent concept drift
  21. 21. Results Accuracy of the classification task: measures derived from the confusion matrix.

Confusion matrices (rows: actual class; columns: predicted class):

    VFDTc (CA)         Predicted Class 1   Predicted Class 2
    Actual Class 1     44.5% (887)         5.5% (109)
    Actual Class 2     5% (101)            45% (903)

    VFDTc (EBP)        Predicted Class 1   Predicted Class 2
    Actual Class 1     39% (777)           11% (219)
    Actual Class 2     9% (173)            41% (831)

    UFFT               Predicted Class 1   Predicted Class 2
    Actual Class 1     46% (928)           3.5% (68)
    Actual Class 2     2.5% (48)           48% (956)

    CVFDT              Predicted Class 1   Predicted Class 2
    Actual Class 1     34.5% (685)         15.5% (311)
    Actual Class 2     15.5% (312)         34.5% (692)

Derived measures:

                 Accuracy (AC)  True positive (TP)  False Positive (FP)  True Negative (TN)  False Negative (FN)  Precision (P)
    VFDTc (CA)   0.89           0.89                0.10                 0.90                0.11                 0.90
    VFDTc (EBP)  0.80           0.78                0.17                 0.83                0.22                 0.82
    UFFT         0.94           0.93                0.05                 0.95                0.07                 0.95
    CVFDT        0.69           0.69                0.31                 0.69                0.31                 0.69
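The derived measures can be recomputed from the raw counts in the confusion matrices. As a check, plugging in UFFT's counts (TP = 928, FN = 68, FP = 48, TN = 956, taking class 1 as the positive class) reproduces its row; the helper below is an illustrative sketch, with rate definitions as commented:

```python
def confusion_metrics(tp, fn, fp, tn):
    """Rates derived from a 2-class confusion matrix, with class 1 as
    the positive class."""
    pos, neg = tp + fn, fp + tn          # actual positives / negatives
    return {
        "accuracy": (tp + tn) / (pos + neg),
        "tp_rate": tp / pos,             # recall over actual positives
        "fp_rate": fp / neg,
        "tn_rate": tn / neg,
        "fn_rate": fn / pos,
        "precision": tp / (tp + fp),     # over predicted positives
    }
```

For example, `confusion_metrics(928, 68, 48, 956)` gives accuracy 0.94, TP rate 0.93, FP rate 0.05 and precision 0.95, matching the UFFT row of the summary table.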
  22. 22. Results Dealing with outliers
  23. 23. Results Dealing with noisy data
  24. 24. Results Speed (time taken to process an item in the stream)
  25. 25. Conclusions & future work <ul><li>Data can be generated very fast, which gives us a new and challenging scenario for developing data mining algorithms. </li></ul><ul><li>We have to develop them keeping in mind that the training phase may never end. </li></ul><ul><li>Changes in the data distribution are another challenging scenario that data stream mining has to deal with. </li></ul><ul><li>VFDT was one of the first data stream mining algorithms developed; it applied the Hoeffding bound. </li></ul><ul><li>We generated different datasets using the moving hyperplane algorithm. </li></ul><ul><li>UFFT is recommended for short-term predictions. </li></ul><ul><li>CVFDT is recommended for long-term solutions. </li></ul><ul><li>No impact was observed for virtual or recurring concept drift. </li></ul>
  26. 26. Conclusions & future work <ul><li>VFDTc (CA) is not suitable for gradual or sudden concept drift. </li></ul><ul><li>Neither VFDTc (CA) nor UFFT is suitable for frequent concept drift. </li></ul><ul><li>VFDTc (EBP) and CVFDT are recommended for data streams with outliers. </li></ul><ul><li>CVFDT is recommended for data streams with noisy points. </li></ul><ul><li>CVFDT and UFFT were the fastest algorithms. </li></ul><ul><li>Future Work </li></ul><ul><li>Clustering algorithms applied to data streams. </li></ul><ul><li>Classification algorithms applied to data streams of unstructured datasets (text, images, etc.). </li></ul>
  27. 27. <ul><li>Questions ? </li></ul>E-mail: [email_address] Twitter: @eddonato