  1. On Data Mining, Compression, and Kolmogorov Complexity. C. Faloutsos and V. Megalooikonomou. Data Mining and Knowledge Discovery, 2007.
  2. Problems: <ul><li>Can data mining be formalized into a logical system like the relational algebra? </li></ul><ul><li>Can data mining be automated? </li></ul><ul><ul><li>That is, can we remove the need for a human to decide upon a data mining technique? </li></ul></ul><ul><li>Why do so many different approaches to clustering exist? </li></ul><ul><ul><li>(Implicitly, are these approaches all necessary?) </li></ul></ul><ul><ul><li>Partitioning, hierarchical, density-based, grid-based, model-based, … </li></ul></ul>
  3. Motivation: <ul><li>Can data mining be formalized? </li></ul><ul><ul><li>Implications of an affirmative answer: </li></ul></ul><ul><ul><ul><li>An SQL-like language for data mining could be created. </li></ul></ul></ul><ul><ul><ul><li>Logical axioms and set-theoretic concepts could be used to reason about data mining techniques. </li></ul></ul></ul><ul><li>Can data mining be automated? </li></ul><ul><ul><li>Implications: </li></ul></ul><ul><ul><ul><li>A universally optimal approach for data mining could be determined without human intervention. </li></ul></ul></ul><ul><li>Why do so many clustering approaches exist? </li></ul><ul><ul><li>Are some redundant? </li></ul></ul><ul><ul><li>Can we eliminate some of them? </li></ul></ul>
  4. “Data Mining”? <ul><li>What exactly is “data mining”, anyway? </li></ul><ul><ul><li>Lecture notes: Extraction of interesting (non-trivial, implicit, previously unknown, and potentially useful) information or patterns from data in large databases. </li></ul></ul><ul><ul><li>Paper: Supervised or unsupervised concept learning. </li></ul></ul><ul><ul><ul><li>Supervised: </li></ul></ul></ul><ul><ul><ul><ul><li>Classification </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Regression </li></ul></ul></ul></ul><ul><ul><ul><li>Unsupervised: </li></ul></ul></ul><ul><ul><ul><ul><li>Clustering </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Pattern discovery </li></ul></ul></ul></ul><ul><ul><li>Other tasks: forecasting, outlier detection, …? </li></ul></ul><ul><ul><ul><li>Related to or dependent upon concept learning. </li></ul></ul></ul>
  5. Intuitions? <ul><li>Can concept learning be automated? </li></ul><ul><li>In general, can we always find an optimal model? </li></ul><ul><ul><li>Some ideas: </li></ul></ul><ul><ul><ul><li>MDL Principle: The best model results in the best compression. </li></ul></ul></ul><ul><ul><ul><li>Kolmogorov complexity: The length of the shortest description of a string in a fixed description language (equivalently, the shortest program that makes a fixed Universal Turing Machine generate the string): </li></ul></ul></ul><ul><ul><ul><ul><li>E.g. “1 2 4 8 16 32” is less complex than “1 4 2 7 6 3”. </li></ul></ul></ul></ul><ul><ul><ul><li>So data mining is essentially equivalent to data compression! </li></ul></ul></ul><ul><ul><ul><ul><li>Specifically, all we need to do is find the Kolmogorov complexity. </li></ul></ul></ul></ul><ul><ul><ul><ul><li>There’s only one problem… </li></ul></ul></ul></ul>
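The MDL intuition can be probed with an off-the-shelf compressor: since Kolmogorov complexity is uncomputable, a real compressor's output length serves as a crude, computable upper bound. A minimal sketch in Python, with zlib standing in for the ideal compressor and two made-up inputs mirroring the slide's regular vs. irregular sequences:

```python
import random
import zlib

def compressed_len(data: bytes) -> int:
    """Bytes needed by zlib at maximum effort: a computable
    (and loose) stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

n = 2000
patterned = b"01" * (n // 2)  # highly regular, like "1 2 4 8 16 32"
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(n))  # pseudo-random, like "1 4 2 7 6 3"

# The regular string compresses to a tiny fraction of its length;
# the pseudo-random one barely compresses at all.
print(compressed_len(patterned), compressed_len(noisy))
```

The gap between the two compressed sizes is exactly the intuition behind calling the first sequence "less complex" than the second.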
  6. <ul><li>Kolmogorov complexity is undecidable! </li></ul>
  7. A Simple Proof: <ul><li>Let K(x) denote the Kolmogorov complexity of string x. </li></ul><ul><li>To compute K(x), we must execute every program, select those that halt and compute x, and select the one with the smallest size. </li></ul><ul><li>The negative solution to the halting problem shows that the middle step (selecting the programs that halt) is undecidable. </li></ul><ul><li>Thus Kolmogorov complexity is also undecidable. </li></ul>
  8. Answers: <ul><li>Can data mining be formalized into a logical system like the relational algebra? No. </li></ul><ul><li>Can data mining be automated? No. </li></ul><ul><li>Why do so many different approaches to clustering exist? </li></ul><ul><ul><li>Because we can’t automatically choose the best one every time! </li></ul></ul><ul><li>All of these results are due to the undecidability of Kolmogorov complexity. </li></ul>
  9. Formal Definition of Kolmogorov Complexity <ul><li>Conditional Kolmogorov Complexity: </li></ul><ul><ul><li>K_U(x|y) = min{ |p| : p ∈ {0,1}* and U(p,y) = x } </li></ul></ul><ul><ul><ul><li>Such that p is a description of x (such as a program) and a Universal Turing Machine U outputs x when given p and some additional information y. </li></ul></ul></ul><ul><ul><li>In other words, it is the length of the shortest program that, when input to a specific UTM, produces the desired output. </li></ul></ul><ul><li>Unconditional Kolmogorov Complexity: </li></ul><ul><ul><li>K(x) = K(x|ε), where ε is the empty string. </li></ul></ul><ul><ul><li>In other words, no additional information is required. </li></ul></ul><ul><li>Kolmogorov Incompressibility: </li></ul><ul><ul><li>Defined as the condition K(x) ≥ |x|. </li></ul></ul><ul><ul><ul><li>Recall from the SLIQ presentation: Cost(M,D) = Cost(D|M) + Cost(M). </li></ul></ul></ul><ul><ul><ul><li>Cost(M) is not always bounded by |x|, so K(x) > |x| is possible. </li></ul></ul></ul><ul><ul><li>Incompressible strings are random; the converse usually holds, but not always. </li></ul></ul><ul><ul><ul><li>Pi, for example, is random in the traditional sense, but not incompressible. </li></ul></ul></ul><ul><ul><ul><li>In fact, it’s an example of an infinite-length string with a finite (and low) complexity. </li></ul></ul></ul><ul><ul><li>This can be used as a proof technique, per “Kolmogorov Incompressibility Method in Formal Proofs: A Critical Survey” by V. Megalooikonomou. </li></ul></ul>
  10. Some theorems <ul><li>Lempel-Ziv encoding length may be used as an upper bound on Kolmogorov complexity: </li></ul><ul><ul><li>K(x|n) ≤ l_LZ(x) + c, where n is the length of string x and l_LZ(x) is the length of the Lempel-Ziv encoding of x. </li></ul></ul><ul><li>We may also approximate the expected complexity of a random sequence using its entropy: </li></ul><ul><ul><li>(1/n) E[K(x|n)] → H as n → ∞, where H is the per-symbol entropy of the source. </li></ul></ul>
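Both theorems can be illustrated empirically: zlib implements a Lempel-Ziv variant (DEFLATE), so its output length is a computable upper bound in the spirit of l_LZ(x), and its per-symbol cost should track the entropy of the source. A sketch with two made-up i.i.d. binary sources (the sample size and bias are illustrative assumptions):

```python
import math
import random
import zlib

random.seed(42)
n = 20000

def bits_per_symbol(s: bytes) -> float:
    """Compressed size in bits per input symbol: an upper-bound
    proxy for the per-symbol Kolmogorov complexity."""
    return 8 * len(zlib.compress(s, 9)) / len(s)

# Two i.i.d. binary sources: a fair coin and a 90/10 biased coin.
fair = bytes(random.choice(b"01") for _ in range(n))
biased = bytes(random.choices(b"01", weights=[9, 1], k=n))

def H(p: float) -> float:
    """Shannon entropy of a Bernoulli(p) source, in bits per symbol."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(H(0.5), bits_per_symbol(fair))    # entropy 1.0 bit/symbol
print(H(0.1), bits_per_symbol(biased))  # entropy ~0.47 bits/symbol
```

The lower-entropy source compresses further, as the second theorem predicts, while both compressed rates stay above the corresponding entropy, consistent with the upper-bound role of l_LZ(x).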
  11. Why is complexity important? <ul><li>Classification </li></ul><ul><ul><li>Usually uses a measure of homogeneity. </li></ul></ul><ul><ul><ul><li>Entropy gain. </li></ul></ul></ul><ul><ul><ul><ul><li>We already demonstrated that this is related to Kolmogorov complexity. </li></ul></ul></ul></ul><ul><ul><ul><li>Gini index. </li></ul></ul></ul><ul><ul><ul><ul><li>A special case of generalized (Rényi) entropies. </li></ul></ul></ul></ul><ul><li>Clustering </li></ul><ul><ul><li>Optimal number of clusters (k) difficult to determine. </li></ul></ul><ul><ul><li>Parameter-free approaches penalize complexity using: </li></ul></ul><ul><ul><ul><li>Akaike Information Criterion </li></ul></ul></ul><ul><ul><ul><li>Bayesian Information Criterion </li></ul></ul></ul><ul><ul><ul><li>Minimum Description Length </li></ul></ul></ul><ul><ul><li>All three relate to Kolmogorov complexity. </li></ul></ul><ul><li>Distance functions </li></ul><ul><ul><li>Many common distance functions utilize similar principles, but do not explicitly use Kolmogorov complexity (due to its undecidability). </li></ul></ul>
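The homogeneity measures named above are cheap to compute directly. A minimal sketch of entropy, the Gini index, and the entropy gain of a candidate split (the toy labels are made up for illustration):

```python
import math
from collections import Counter

def entropy(labels) -> float:
    """Shannon entropy of a label multiset, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels) -> float:
    """Gini impurity: probability two random draws disagree."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def info_gain(parent, left, right) -> float:
    """Entropy gain of splitting `parent` into `left` and `right`."""
    n = len(parent)
    return (entropy(parent)
            - (len(left) / n) * entropy(left)
            - (len(right) / n) * entropy(right))

parent = list("aaaaabbbbb")
print(entropy(parent), gini(parent))  # 1.0 and 0.5 for a 50/50 mix
print(info_gain(parent, list("aaaaa"), list("bbbbb")))  # perfect split: gain 1.0
print(info_gain(parent, list("aaabb"), list("aabbb")))  # mixed split: much smaller gain
```

A decision-tree learner greedily picks the split with the highest gain (or the lowest weighted impurity), which is how the homogeneity measure drives classification.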
  12. Relevant and Nontrivial Arguments: <ul><li>Do these arguments work for lossy compression? </li></ul><ul><ul><li>Yes. Lossy schemes can be made lossless by encoding the difference between the model and reality. </li></ul></ul><ul><li>What if the simplest model isn’t the best? </li></ul><ul><ul><li>The simplicity of the model could be deceptive. The model may be simple, but the number of parameters required to fit the data to the model (or the number of digits in a single parameter) may result in high complexity. </li></ul></ul>
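The lossy-to-lossless argument is easy to demonstrate: transmit the model's prediction plus the residuals, and the original is recovered exactly, typically in fewer compressed bytes than the raw data. A minimal sketch on a made-up integer series (the trend and noise pattern are illustrative assumptions):

```python
import zlib

# Made-up data: a linear trend plus small integer noise.
noise = [(-1, 0, 1)[i % 3] for i in range(100)]
data = [2 * i + noise[i] for i in range(100)]

# "Lossy" model: predict the trend 2*i and ignore the noise.
predicted = [2 * i for i in range(100)]

# Make the scheme lossless by also encoding the residuals.
residuals = [d - p for d, p in zip(data, predicted)]
reconstructed = [p + r for p, r in zip(predicted, residuals)]
assert reconstructed == data  # exact recovery

# The residuals are small and repetitive, so they compress far better
# than the raw values (the model itself costs only a few bytes).
raw_bytes = bytes(v % 256 for v in data)
res_bytes = bytes(r + 1 for r in residuals)  # shift {-1,0,1} to {0,1,2}
print(len(zlib.compress(raw_bytes, 9)), len(zlib.compress(res_bytes, 9)))
```

The total cost is then Cost(M) plus the compressed residuals, exactly the two-part Cost(D|M) + Cost(M) decomposition from slide 9.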
  13. Positive aspects <ul><li>Thinking about data mining as compression will aid in the discovery of new parameter-free methods of classification, clustering, and distance-function design. </li></ul><ul><li>Data mining will remain an art; even if we do stumble upon the optimal domain-independent model, we cannot prove it. </li></ul><ul><li>Comparing two models, however, is trivial: simply compare the number of bytes each needs to describe the data. </li></ul>
  14. Experiments: Comparing models. Example of metabolic rate vs. mass represented using (a) a piecewise flat approximation, such as CART, (b) a piecewise linear approximation, and (c) a linear approximation in log-log scale. Kleiber’s law: R ∝ M^(3/4). With better models, such as (c), we may achieve better compression. However, this may require domain knowledge: a data miner without knowledge of Kleiber’s law may be tempted to use (b).
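The log-log model in panel (c) is simple to verify: fitting a straight line to the (log M, log R) points recovers the exponent as the slope. A sketch on synthetic data (the constant 3.4, the noise level, and the mass values are made-up assumptions for illustration):

```python
import math
import random

random.seed(1)
# Synthetic metabolic rates following R = c * M^(3/4) with mild
# multiplicative noise (c = 3.4 chosen arbitrarily).
masses = [0.02, 0.5, 2.0, 70.0, 400.0, 4000.0]
rates = [3.4 * m ** 0.75 * math.exp(random.gauss(0, 0.05)) for m in masses]

# Ordinary least squares on the log-log points: the slope is the exponent.
xs = [math.log(m) for m in masses]
ys = [math.log(r) for r in rates]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
print(round(slope, 2))  # close to the 3/4 of Kleiber's law
```

Two fitted parameters (slope and intercept) plus small residuals describe the whole dataset, which is why model (c) yields the best compression.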
  15. Conclusion <ul><li>Data mining as compression. </li></ul><ul><li>Optimal compression is undecidable. </li></ul><ul><ul><li>Equivalent to finding Kolmogorov complexity, also undecidable. </li></ul></ul><ul><li>Consequently, data mining will always remain an art. </li></ul><ul><ul><li>Unprecedented amounts of data and computing power will continue to challenge and inspire data miners. </li></ul></ul><ul><li>The compression-based view leads to many parameter-free data mining techniques. </li></ul>
  16. References <ul><li>C. Faloutsos and V. Megalooikonomou, &quot;On Data Mining, Compression, and Kolmogorov Complexity,&quot; Data Mining and Knowledge Discovery, Tenth Anniversary Issue, 2007. </li></ul><ul><li>V. Megalooikonomou, &quot;Kolmogorov Incompressibility Method in Formal Proofs: A Critical Survey,&quot; Technical Report TR CS-97-01, Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Jan. 1997. </li></ul>
  17. Thanks! <ul><li>Any questions? </li></ul>