Association Rule Mining

1. Data Mining: Concepts and Techniques — Chapter 5
Jiawei Han, Department of Computer Science, University of Illinois at Urbana-Champaign
www.cs.uiuc.edu/~hanj
©2006 Jiawei Han and Micheline Kamber, All rights reserved
Revised by Zhongfei (Mark) Zhang, Department of Computer Science, SUNY Binghamton

2. Chapter 5: Mining Frequent Patterns, Association and Correlations
- Basic concepts and a road map
- Efficient and scalable frequent itemset mining methods
- Mining various kinds of association rules
- From association mining to correlation analysis
- Constraint-based association mining
- Summary

3. What Is Frequent Pattern Analysis?
- Frequent pattern: a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set
- First proposed by Agrawal, Imielinski, and Swami [AIS93] in the context of frequent itemsets and association rule mining
- Motivation: finding inherent regularities in data
  - What products were often purchased together? — Beer and diapers?!
  - What are the subsequent purchases after buying a PC?
  - What kinds of DNA are sensitive to this new drug?
  - Can we automatically classify web documents?
- Applications
  - Basket data analysis, cross-marketing, catalog design, sale campaign analysis, Web log (click stream) analysis, and DNA sequence analysis

4. Why Is Freq. Pattern Mining Important?
- Discloses an intrinsic and important property of data sets
- Forms the foundation for many essential data mining tasks
  - Association, correlation, and causality analysis
  - Sequential, structural (e.g., sub-graph) patterns
  - Pattern analysis in spatiotemporal, multimedia, time-series, and stream data
  - Classification: associative classification
  - Cluster analysis: frequent pattern-based clustering
  - Data warehousing: iceberg cube and cube-gradient
  - Semantic data compression: fascicles
  - Broad applications

5. Basic Concepts: Frequent Patterns and Association Rules
- Itemset X = {x1, …, xk}
- Find all the rules X ⇒ Y with minimum support and confidence
  - support, s: probability that a transaction contains X ∪ Y
  - confidence, c: conditional probability that a transaction having X also contains Y
- Let sup_min = 50%, conf_min = 50%
- Frequent patterns: {A:3, B:3, D:4, E:3, AD:3}
- Association rules:
  - A ⇒ D (60%, 100%)
  - D ⇒ A (60%, 75%)

Transaction-id | Items bought
10 | A, B, D
20 | A, C, D
30 | A, D, E
40 | B, E, F
50 | B, C, D, E, F

(Figure: Venn diagram of customers who buy beer, customers who buy diapers, and customers who buy both)

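As a quick sanity check on these definitions, here is a minimal Python sketch (not from the slides; the names are illustrative) that recomputes the supports and the two rules for the five-transaction database above:

    # Transactions from the slide, keyed by transaction-id.
    db = {10: {"A", "B", "D"}, 20: {"A", "C", "D"}, 30: {"A", "D", "E"},
          40: {"B", "E", "F"}, 50: {"B", "C", "D", "E", "F"}}

    def support(itemset):
        """Fraction of transactions containing every item of `itemset`."""
        return sum(itemset <= t for t in db.values()) / len(db)

    def confidence(x, y):
        """Conditional probability that a transaction having X also contains Y."""
        return support(x | y) / support(x)

    print(support({"A", "D"}))       # 0.6  -> AD appears in 3 of 5 transactions
    print(confidence({"A"}, {"D"}))  # 1.0  -> A => D (60%, 100%)
    print(confidence({"D"}, {"A"}))  # 0.75 -> D => A (60%, 75%)
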
6. Closed Patterns and Max-Patterns
- A long pattern contains a combinatorial number of sub-patterns; e.g., {a1, …, a100} contains C(100,1) + C(100,2) + … + C(100,100) = 2^100 − 1 ≈ 1.27×10^30 sub-patterns!
- Solution: mine closed patterns and max-patterns instead
- An itemset X is closed if X is frequent and there exists no super-pattern Y ⊃ X with the same support as X (proposed by Pasquier, et al. @ICDT'99)
- An itemset X is a max-pattern if X is frequent and there exists no frequent super-pattern Y ⊃ X (proposed by Bayardo @SIGMOD'98)
- Closed patterns are a lossless compression of frequent patterns
  - Reduces the number of patterns and rules

7. Closed Patterns and Max-Patterns
- Exercise. DB = {<a1, …, a100>, <a1, …, a50>}
  - Min_sup = 1
- What is the set of closed itemsets?
  - <a1, …, a100>: 1
  - <a1, …, a50>: 2
- What is the set of max-patterns?
  - <a1, …, a100>: 1
- What is the set of all patterns?
  - Every non-empty subset of {a1, …, a100} — 2^100 − 1 patterns in all!

8. Chapter 5: Mining Frequent Patterns, Association and Correlations
- Basic concepts and a road map
- Efficient and scalable frequent itemset mining methods
- Mining various kinds of association rules
- From association mining to correlation analysis
- Constraint-based association mining
- Summary

9. Scalable Methods for Mining Frequent Patterns
- The downward closure property of frequent patterns
  - Any subset of a frequent itemset must be frequent
  - If {beer, diaper, nuts} is frequent, so is {beer, diaper}
  - i.e., every transaction having {beer, diaper, nuts} also contains {beer, diaper}
- Scalable mining methods: three major approaches
  - Apriori (Agrawal & Srikant @VLDB'94)
  - Frequent pattern growth (FPgrowth — Han, Pei & Yin @SIGMOD'00)
  - Vertical data format approach (Charm — Zaki & Hsiao @SDM'02)

10. Apriori: A Candidate Generation-and-Test Approach
- Apriori pruning principle: if there is any itemset which is infrequent, its supersets should not be generated/tested! (Agrawal & Srikant @VLDB'94, Mannila, et al. @KDD'94)
- Method:
  - Initially, scan DB once to get the frequent 1-itemsets
  - Generate length-(k+1) candidate itemsets from length-k frequent itemsets
  - Test the candidates against DB
  - Terminate when no frequent or candidate set can be generated

11. The Apriori Algorithm — An Example (sup_min = 2)

Database TDB:
Tid | Items
10 | A, C, D
20 | B, C, E
30 | A, B, C, E
40 | B, E

1st scan → C1: {A}:2, {B}:3, {C}:3, {D}:1, {E}:3
L1: {A}:2, {B}:3, {C}:3, {E}:3
C2 (from L1): {A,B}, {A,C}, {A,E}, {B,C}, {B,E}, {C,E}
2nd scan → C2 counts: {A,B}:1, {A,C}:2, {A,E}:1, {B,C}:2, {B,E}:3, {C,E}:2
L2: {A,C}:2, {B,C}:2, {B,E}:3, {C,E}:2
C3 (from L2): {B,C,E}
3rd scan → L3: {B,C,E}:2

12. The Apriori Algorithm
- Pseudo-code (a runnable sketch follows):
    Ck: candidate itemset of size k
    Lk: frequent itemset of size k
    L1 = {frequent items};
    for (k = 1; Lk != ∅; k++) do begin
        Ck+1 = candidates generated from Lk;
        for each transaction t in database do
            increment the count of all candidates in Ck+1 that are contained in t
        Lk+1 = candidates in Ck+1 with min_support
    end
    return ∪k Lk;

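A compact, runnable Python rendering of this pseudo-code (a sketch under the slides' assumptions — transactions as sets, an absolute min_support count — not the book's reference implementation):

    from itertools import combinations

    def apriori(db, min_sup):
        """db: list of item sets; min_sup: absolute count. Returns {itemset: support}."""
        counts = {}
        for t in db:                      # first scan: count 1-itemsets
            for item in t:
                key = frozenset([item])
                counts[key] = counts.get(key, 0) + 1
        L = {s: c for s, c in counts.items() if c >= min_sup}
        result, k = dict(L), 1
        while L:
            prev = set(L)
            # Self-join Lk, then prune by the Apriori property:
            # every k-subset of a (k+1)-candidate must itself be frequent.
            cands = {a | b for a in prev for b in prev if len(a | b) == k + 1}
            cands = {c for c in cands
                     if all(frozenset(s) in prev for s in combinations(c, k))}
            counts = {c: sum(c <= t for t in db) for c in cands}  # one DB pass
            L = {s: n for s, n in counts.items() if n >= min_sup}
            result.update(L)
            k += 1
        return result

    tdb = [{"A", "C", "D"}, {"B", "C", "E"}, {"A", "B", "C", "E"}, {"B", "E"}]
    print(apriori(tdb, 2))  # reproduces slide 11, e.g. frozenset({'B','C','E'}): 2
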
13. Important Details of Apriori
- How to generate candidates?
  - Step 1: self-joining Lk
  - Step 2: pruning
- How to count supports of candidates?
- Example of candidate generation
  - L3 = {abc, abd, acd, ace, bcd}
  - Self-joining: L3 * L3
    - abcd from abc and abd
    - acde from acd and ace
  - Pruning:
    - acde is removed because ade is not in L3
  - C4 = {abcd}

14. How to Generate Candidates?
- Suppose the items in Lk-1 are listed in an order
- Step 1: self-joining Lk-1
    insert into Ck
    select p.item1, p.item2, …, p.itemk-1, q.itemk-1
    from Lk-1 p, Lk-1 q
    where p.item1 = q.item1, …, p.itemk-2 = q.itemk-2, p.itemk-1 < q.itemk-1
- Step 2: pruning
    forall itemsets c in Ck do
        forall (k-1)-subsets s of c do
            if (s is not in Lk-1) then delete c from Ck

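The SQL-style join translates directly into Python; replaying the L3 example from the previous slide (apriori_gen is an illustrative name, not a library function):

    from itertools import combinations

    def apriori_gen(Lk):
        """Lk: set of sorted k-tuples. Self-join pairs agreeing on all but the
        last item, then prune candidates that have an infrequent k-subset."""
        k = len(next(iter(Lk)))
        joined = {p + (q[-1],) for p in Lk for q in Lk
                  if p[:-1] == q[:-1] and p[-1] < q[-1]}
        return {c for c in joined if all(s in Lk for s in combinations(c, k))}

    L3 = {("a","b","c"), ("a","b","d"), ("a","c","d"), ("a","c","e"), ("b","c","d")}
    print(apriori_gen(L3))  # {('a','b','c','d')}: acde is pruned, ade is not in L3
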
15. Challenges of Frequent Pattern Mining
- Challenges
  - Multiple scans of the transaction database
  - Huge number of candidates
  - Tedious workload of support counting for candidates
- Improving Apriori: general ideas
  - Reduce the number of transaction database scans
  - Shrink the number of candidates
  - Facilitate support counting of candidates

16. Partition: Scan Database Only Twice
- Any itemset that is potentially frequent in DB must be frequent in at least one of the partitions of DB
  - Scan 1: partition the database and find local frequent patterns
  - Scan 2: consolidate global frequent patterns
- A. Savasere, E. Omiecinski, and S. Navathe. An efficient algorithm for mining association rules in large databases. VLDB'95

17. DIC: Reduce Number of Scans
- Once both A and D are determined frequent, the counting of AD begins
- Once all length-2 subsets of BCD are determined frequent, the counting of BCD begins
- S. Brin, R. Motwani, J. Ullman, and S. Tsur. Dynamic itemset counting and implication rules for market basket data. SIGMOD'97

(Figure: the itemset lattice from {} through A, B, C, D up to ABCD, plus a transaction-scan timeline contrasting Apriori — which counts 1-itemsets, then 2-itemsets, … one full scan at a time — with DIC, which starts counting 2- and 3-itemsets mid-scan as soon as their subsets are known frequent)

18. Bottleneck of Frequent-pattern Mining
- Multiple database scans are costly
- Mining long patterns needs many passes of scanning and generates lots of candidates
  - To find the frequent itemset i1 i2 … i100
    - Number of scans: 100
    - Number of candidates: C(100,1) + C(100,2) + … + C(100,100) = 2^100 − 1 ≈ 1.27×10^30!
- Bottleneck: candidate generation-and-test
- Can we avoid candidate generation?

19. Mining Frequent Patterns Without Candidate Generation
- Grow long patterns from short ones using local frequent items
  - "abc" is a frequent pattern
  - Get all transactions having "abc": DB|abc
  - "d" is a local frequent item in DB|abc ⇒ abcd is a frequent pattern

20. Construct FP-tree from a Transaction Database (min_support = 3)

TID | Items bought | (Ordered) frequent items
100 | {f, a, c, d, g, i, m, p} | {f, c, a, m, p}
200 | {a, b, c, f, l, m, o} | {f, c, a, b, m}
300 | {b, f, h, j, o, w} | {f, b}
400 | {b, c, k, s, p} | {c, b, p}
500 | {a, f, c, e, l, p, m, n} | {f, c, a, m, p}

1. Scan DB once, find the frequent 1-itemsets (single item patterns)
2. Sort frequent items in frequency descending order — the f-list
3. Scan DB again, construct the FP-tree

F-list = f-c-a-b-m-p
Header table: f:4, c:4, a:3, b:3, m:3, p:3
(Figure: FP-tree rooted at {} with the path f:4-c:3-a:3-m:2-p:2, a branch b:1-m:1 under a:3, a branch b:1 under f:4, and a separate path c:1-b:1-p:1; header-table entries link to each item's nodes)

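A minimal FP-tree construction sketch in Python (illustrative names; the node-link chains of the header table are omitted for brevity):

    from collections import Counter

    class FPNode:
        def __init__(self, item, parent):
            self.item, self.parent = item, parent
            self.count, self.children = 0, {}

    def build_fptree(db, min_sup):
        freq = Counter(item for t in db for item in t)
        flist = [i for i, c in freq.most_common() if c >= min_sup]  # f-list
        rank = {item: r for r, item in enumerate(flist)}
        root = FPNode(None, None)
        for t in db:
            node = root
            # Keep frequent items only, in f-list order, and insert as one path.
            for item in sorted((i for i in t if i in rank), key=rank.get):
                node = node.children.setdefault(item, FPNode(item, node))
                node.count += 1
        return root, flist

    db = [list("facdgimp"), list("abcflmo"), list("bfhjow"),
          list("bcksp"), list("afcelpmn")]
    root, flist = build_fptree(db, 3)
    print(flist)                     # frequency-descending; the tie order among
                                     # the count-3 items may differ from f-c-a-b-m-p
    print(root.children["f"].count)  # 4, as in the slide's tree

Any frequency-descending order works; ties among equally frequent items may be broken differently from the slide without affecting correctness.
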
21. Benefits of the FP-tree Structure
- Completeness
  - Preserves complete information for frequent pattern mining
  - Never breaks a long pattern of any transaction
- Compactness
  - Reduces irrelevant information — infrequent items are gone
  - Items in frequency descending order: the more frequently occurring, the more likely to be shared
  - Never larger than the original database (not counting node-links and the count fields)

22. Partition Patterns and Databases
- Frequent patterns can be partitioned into subsets according to the f-list
  - F-list = f-c-a-b-m-p
  - Patterns containing p
  - Patterns having m but no p
  - …
  - Patterns having c but none of a, b, m, p
  - Pattern f
- Completeness and non-redundancy

23. Find Patterns Having P From P-conditional Database
- Start at the frequent-item header table of the FP-tree
- Traverse the FP-tree by following the node-links of each frequent item p
- Accumulate all of the transformed prefix paths of item p to form p's conditional pattern base

Conditional pattern bases:
item | conditional pattern base
c | f:3
a | fc:3
b | fca:1, f:1, c:1
m | fca:2, fcab:1
p | fcam:2, cb:1

24. From Conditional Pattern-bases to Conditional FP-trees
- For each pattern-base
  - Accumulate the count for each item in the base
  - Construct the FP-tree for the frequent items of the pattern base
- m-conditional pattern base: fca:2, fcab:1
- m-conditional FP-tree: a single path {} - f:3 - c:3 - a:3
- All frequent patterns relating to m: m, fm, cm, am, fcm, fam, cam, fcam

25. Recursion: Mining Each Conditional FP-tree
- m-conditional FP-tree: {} - f:3 - c:3 - a:3
- Conditional pattern base of "am": (fc:3) → am-conditional FP-tree: {} - f:3 - c:3
- Conditional pattern base of "cm": (f:3) → cm-conditional FP-tree: {} - f:3
- Conditional pattern base of "cam": (f:3) → cam-conditional FP-tree: {} - f:3

26. A Special Case: Single Prefix Path in FP-tree
- Suppose a (conditional) FP-tree T has a shared single prefix-path P
- Mining can be decomposed into two parts
  - Reduction of the single prefix path into one node
  - Concatenation of the mining results of the two parts
(Figure: a tree whose prefix path a1:n1 - a2:n2 - a3:n3 fans out into branches b1:m1 and C1:k1, C2:k2, C3:k3 is decomposed into the single-node prefix r1 plus the multi-branch part)

27. Mining Frequent Patterns With FP-trees
- Idea: frequent pattern growth
  - Recursively grow frequent patterns by pattern and database partition
- Method (see the sketch below)
  - For each frequent item, construct its conditional pattern-base, and then its conditional FP-tree
  - Repeat the process on each newly created conditional FP-tree
  - Until the resulting FP-tree is empty, or it contains only one path — a single path generates all the combinations of its sub-paths, each of which is a frequent pattern

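The whole recursion fits in a short sketch. This version (illustrative, not the paper's implementation) works directly on conditional pattern bases represented as (itemset, count) pairs instead of explicit trees, which keeps the code compact while following the same divide-and-conquer idea:

    from collections import Counter

    def fpgrowth(db, min_sup, suffix=frozenset()):
        """db: list of (item set, count) pairs; yields (frequent itemset, support)."""
        freq = Counter()
        for items, n in db:
            for i in items:
                freq[i] += n
        # Frequency-descending order; process items bottom-up so each
        # conditional database keeps only higher-ranked items.
        order = [i for i, c in freq.most_common() if c >= min_sup]
        for pos in range(len(order) - 1, -1, -1):
            item = order[pos]
            pattern = suffix | {item}
            yield pattern, freq[item]
            higher = set(order[:pos])
            cond = [(items & higher, n) for items, n in db if item in items]
            yield from fpgrowth([(s, n) for s, n in cond if s], min_sup, pattern)

    tdb = [(set("facdgimp"), 1), (set("abcflmo"), 1), (set("bfhjow"), 1),
           (set("bcksp"), 1), (set("afcelpmn"), 1)]
    print(sorted(("".join(sorted(p)), s) for p, s in fpgrowth(tdb, 3)))
    # lists f, c, a, b, m, p and their frequent combinations, e.g. ('acfm', 3)
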
28. FP-Growth vs. Apriori: Scalability With the Support Threshold
(Figure: run time as a function of the support threshold on data set T25I20D10K)

29. Why Is FP-Growth the Winner?
- Divide-and-conquer:
  - Decomposes both the mining task and the DB according to the frequent patterns obtained so far
  - Leads to focused search of smaller databases
- Other factors
  - No candidate generation, no candidate test
  - Compressed database: the FP-tree structure
  - No repeated scan of the entire database
  - Basic operations are counting local frequent items and building sub-FP-trees — no pattern search and matching

30. Visualization of Association Rules: Plane Graph
(Figure)

31. Visualization of Association Rules: Rule Graph
(Figure)

32. Visualization of Association Rules (SGI/MineSet 3.0)
(Figure)

33. Chapter 5: Mining Frequent Patterns, Association and Correlations
- Basic concepts and a road map
- Efficient and scalable frequent itemset mining methods
- Mining various kinds of association rules
- From association mining to correlation analysis
- Constraint-based association mining
- Summary

34. Mining Various Kinds of Association Rules
- Mining multilevel association
- Mining multidimensional association
- Mining quantitative association
- Mining interesting correlation patterns

35. Mining Multiple-Level Association Rules
- Items often form hierarchies
- Flexible support settings
  - Items at the lower level are expected to have lower support
- Exploration of shared multi-level mining (Agrawal & Srikant @VLDB'95, Han & Fu @VLDB'95)

Example — uniform vs. reduced support:
  Level 1: Milk [support = 10%], min_sup = 5%
  Level 2: 2% Milk [support = 6%], Skim Milk [support = 4%]
    Uniform support: Level 2 min_sup = 5% (Skim Milk is missed)
    Reduced support: Level 2 min_sup = 3% (both are found)

36. Multi-level Association: Redundancy Filtering
- Some rules may be redundant due to "ancestor" relationships between items
- Example
  - milk ⇒ wheat bread [support = 8%, confidence = 70%]
  - 2% milk ⇒ wheat bread [support = 2%, confidence = 72%]
- We say the first rule is an ancestor of the second rule
- A rule is redundant if its support is close to the "expected" value based on the rule's ancestor — e.g., if 2% milk makes up about a quarter of milk sales, the expected support of the second rule is 8% × 1/4 = 2%, which matches its actual support, so the rule adds nothing

37. Mining Multi-Dimensional Association
- Single-dimensional rules:
    buys(X, "milk") ⇒ buys(X, "bread")
- Multi-dimensional rules: ≥ 2 dimensions or predicates
  - Inter-dimension association rules (no repeated predicates)
      age(X, "19-25") ∧ occupation(X, "student") ⇒ buys(X, "coke")
  - Hybrid-dimension association rules (repeated predicates)
      age(X, "19-25") ∧ buys(X, "popcorn") ⇒ buys(X, "coke")
- Categorical attributes: finite number of possible values, no ordering among values — data cube approach
- Quantitative attributes: numeric, implicit ordering among values — discretization, clustering, and gradient approaches

38. Mining Quantitative Associations
- Techniques can be categorized by how numerical attributes, such as age or salary, are treated
- Static discretization based on predefined concept hierarchies (data cube methods) — see the sketch below
- Dynamic discretization based on data distribution (quantitative rules, e.g., Agrawal & Srikant @SIGMOD'96)
- Clustering: distance-based association (e.g., Yang & Miller @SIGMOD'97)
  - One-dimensional clustering, then association
- Deviation (such as Aumann and Lindell @KDD'99)
  - Sex = female ⇒ Wage: mean = $7/hr (overall mean = $9)

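Static discretization is the simplest case: numeric values are mapped onto predefined hierarchy ranges before any mining happens, so each range behaves like an ordinary item. A tiny sketch (the bin boundaries are made-up examples):

    def discretize_age(age):
        """Map a numeric age onto predefined ranges (illustrative bins)."""
        bins = [(0, 18, "age:<18"), (18, 26, "age:18-25"),
                (26, 36, "age:26-35"), (36, 200, "age:36+")]
        for lo, hi, label in bins:
            if lo <= age < hi:
                return label

    records = [(23, "student", "coke"), (34, "engineer", "TV")]
    # After this transformation, a standard itemset miner (e.g., Apriori) applies.
    print([{discretize_age(a), occ, item} for a, occ, item in records])
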
39. Quantitative Association Rules
- Proposed by Lent, Swami, and Widom, ICDE'97
- Numeric attributes are dynamically discretized
  - Such that the confidence or compactness of the rules mined is maximized
- 2-D quantitative association rules: A_quan1 ∧ A_quan2 ⇒ A_cat
- Cluster adjacent association rules to form general rules using a 2-D grid
- Example:
    age(X, "34-35") ∧ income(X, "30-50K") ⇒ buys(X, "high resolution TV")

40. Chapter 5: Mining Frequent Patterns, Association and Correlations
- Basic concepts and a road map
- Efficient and scalable frequent itemset mining methods
- Mining various kinds of association rules
- From association mining to correlation analysis
- Constraint-based association mining
- Summary

41. Interestingness Measure: Correlations (Lift)
- play basketball ⇒ eat cereal [40%, 66.7%] is misleading
  - The overall percentage of students eating cereal is 75% > 66.7%
- play basketball ⇒ not eat cereal [20%, 33.3%] is more accurate, although with lower support and confidence
- Measure of dependent/correlated events: lift

            | Basketball | Not basketball | Sum (row)
Cereal      | 2000       | 1750           | 3750
Not cereal  | 1000       | 250            | 1250
Sum (col.)  | 3000       | 2000           | 5000

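Plugging the table into the usual definition, lift(A, B) = P(A ∪ B) / (P(A) P(B)), confirms the slide's point; a quick check:

    n = 5000

    def lift(n_ab, n_a, n_b):
        """lift(A,B) = P(A and B) / (P(A) * P(B)), estimated from counts."""
        return (n_ab / n) / ((n_a / n) * (n_b / n))

    print(lift(2000, 3000, 3750))  # basketball, cereal:    0.89 < 1 (negative correlation)
    print(lift(1000, 3000, 1250))  # basketball, no cereal: 1.33 > 1 (positive correlation)
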
42. Are Lift and χ² Good Measures of Correlation?
- "Buy walnuts ⇒ buy milk [1%, 80%]" is misleading
  - if 85% of customers buy milk
- Support and confidence are not good at representing correlations
- So many interestingness measures? (Tan, Kumar, Srivastava @KDD'02)

Contingency table template (m = milk, c = coffee):
           | Milk  | No Milk | Sum (row)
Coffee     | m, c  | ~m, c   | c
No Coffee  | m, ~c | ~m, ~c  | ~c
Sum (col.) | m     | ~m      | Σ

DB | m, c | ~m, c | m, ~c  | ~m, ~c  | lift | all-conf | coh  | χ²
A1 | 1000 | 100   | 100    | 10,000  | 9.26 | 0.91     | 0.83 | 9055
A2 | 100  | 1000  | 1000   | 100,000 | 8.44 | 0.09     | 0.05 | 670
A3 | 1000 | 100   | 10,000 | 100,000 | 9.18 | 0.09     | 0.09 | 8172
A4 | 1000 | 1000  | 1000   | 1000    | 1    | 0.5      | 0.33 | 0

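The measures in the table can be recomputed from the four cell counts. A sketch using the standard definitions (all-confidence as in Omiecinski's paper; coherence taken here as the Jaccard-style measure over the union's support):

    def measures(mc, nmc, mnc, nmnc):
        """Cell counts (m,c), (~m,c), (m,~c), (~m,~c) -> (lift, all-conf, coherence)."""
        n = mc + nmc + mnc + nmnc
        sup_m, sup_c = mc + mnc, mc + nmc
        lift = (mc / n) / ((sup_m / n) * (sup_c / n))
        all_conf = mc / max(sup_m, sup_c)
        coh = mc / (sup_m + sup_c - mc)  # support(m and c) / support(m or c)
        return round(lift, 2), round(all_conf, 2), round(coh, 2)

    print(measures(1000, 100, 100, 10000))   # A1: (9.26, 0.91, 0.83)
    print(measures(1000, 1000, 1000, 1000))  # A4: (1.0, 0.5, 0.33)
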
43. Which Measures Should Be Used?
- Lift and χ² are not good measures for correlations in large transactional DBs
- all-conf or coherence could be good measures (Omiecinski @TKDE'03)
- Both all-conf and coherence have the downward closure property
- Efficient algorithms can be derived for mining (Lee et al. @ICDM'03sub)

44. Chapter 5: Mining Frequent Patterns, Association and Correlations
- Basic concepts and a road map
- Efficient and scalable frequent itemset mining methods
- Mining various kinds of association rules
- From association mining to correlation analysis
- Constraint-based association mining
- Summary

45. Constraint-based (Query-Directed) Mining
- Finding all the patterns in a database autonomously? — unrealistic!
  - The patterns could be too many but not focused!
- Data mining should be an interactive process
  - The user directs what is to be mined using a data mining query language (or a graphical user interface)
- Constraint-based mining
  - User flexibility: provides constraints on what to mine
  - System optimization: explores such constraints for efficient mining — constraint-based mining

46. Constraints in Data Mining
- Knowledge type constraint:
  - classification, association, etc.
- Data constraint — using SQL-like queries
  - find product pairs sold together in stores in Chicago in Dec. '02
- Dimension/level constraint
  - in relevance to region, price, brand, customer category
- Rule (or pattern) constraint
  - small sales (price < $10) triggers big sales (sum > $200)
- Interestingness constraint
  - strong rules: min_support ≥ 3%, min_confidence ≥ 60%

47. Constrained Mining vs. Constraint-Based Search
- Constrained mining vs. constraint-based search/reasoning
  - Both are aimed at reducing the search space
  - Finding all patterns satisfying constraints vs. finding some (or one) answer in constraint-based search in AI
  - Constraint-pushing vs. heuristic search
  - How to integrate them is an interesting research problem
- Constrained mining vs. query processing in DBMS
  - Database query processing requires finding all answers
  - Constrained pattern mining shares a similar philosophy with pushing selections deep into query processing

48. Anti-Monotonicity in Constraint Pushing
- Anti-monotonicity
  - When an itemset S violates the constraint, so does any of its supersets
  - sum(S.Price) ≤ v is anti-monotone
  - sum(S.Price) ≥ v is not anti-monotone
- Example. C: range(S.profit) ≤ 15 is anti-monotone
  - Itemset ab violates C (range = 40)
  - So does every superset of ab

TDB (min_sup = 2):
TID | Transaction
10 | a, b, c, d, f
20 | b, c, d, f, g, h
30 | a, c, d, e, f
40 | c, e, f, g

Item | Profit
a | 40
b | 0
c | -20
d | 10
e | -30
f | 30
g | 20
h | -10

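In an Apriori-style miner, an anti-monotone constraint is checked as soon as a candidate appears; once it fails, the entire branch above it is pruned. A sketch using the slide's profit table:

    profit = {"a": 40, "b": 0, "c": -20, "d": 10,
              "e": -30, "f": 30, "g": 20, "h": -10}

    def violates_range(S, v=15):
        """range(S.profit) <= v is anti-monotone:
        once an itemset violates it, every superset does too."""
        vals = [profit[i] for i in S]
        return max(vals) - min(vals) > v

    print(violates_range({"a", "b"}))       # True: range = 40 - 0 = 40 > 15
    print(violates_range({"a", "b", "c"}))  # True: supersets of ab need no test
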
49. Monotonicity for Constraint Pushing
- Monotonicity
  - When an itemset S satisfies the constraint, so does any of its supersets
  - sum(S.Price) ≥ v is monotone
  - min(S.Price) ≤ v is monotone
- Example. C: range(S.profit) ≥ 15
  - Itemset ab satisfies C
  - So does every superset of ab
(TDB and item profit table as on the previous slide)

50. Succinctness
- Succinctness:
  - Given A1, the set of items satisfying a succinctness constraint C, any set S satisfying C is based on A1, i.e., S contains a subset belonging to A1
  - Idea: whether an itemset S satisfies constraint C can be determined from the selection of items alone, without looking at the transaction database
  - min(S.Price) ≤ v is succinct
  - sum(S.Price) ≤ v is not succinct
- Optimization: if C is succinct, C is pre-counting pushable

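Because a succinct constraint depends only on which items are chosen, the satisfying itemsets can be characterized before any database scan. For min(S.Price) ≤ v, an itemset satisfies the constraint exactly when it contains at least one sufficiently cheap item; a sketch with hypothetical prices:

    price = {"a": 9, "b": 3, "c": 12, "d": 7}  # hypothetical item prices

    def seed_items(v):
        """min(S.price) <= v holds iff S contains an item from this set --
        determined from the item selection alone, no DB scan needed."""
        return {i for i, p in price.items() if p <= v}

    print(seed_items(5))  # {'b'}: any itemset containing b satisfies the constraint
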
51. The Apriori Algorithm — Example
(Figure: an unconstrained Apriori run on database D — Scan D → C1 → L1 → C2 → Scan D → L2 → C3 → Scan D → L3)

52. Naïve Algorithm: Apriori + Constraint
(Figure: the same Apriori run; the constraint Sum{S.price} < 5 is applied only after the frequent itemsets are found)

53. The Constrained Apriori Algorithm: Push an Anti-monotone Constraint Deep
(Figure: the same run, but candidates violating the anti-monotone constraint Sum{S.price} < 5 are pruned as soon as they appear)

54. The Constrained Apriori Algorithm: Push a Succinct Constraint Deep
(Figure: the same run with the succinct constraint min{S.price} <= 1, which is "not immediately to be used")

55. Converting "Tough" Constraints
- Convert tough constraints into anti-monotone or monotone ones by properly ordering items
- Examine C: avg(S.profit) ≥ 25
  - Order items in value-descending order
    - <a, f, g, d, b, h, c, e>
  - If an itemset afb violates C
    - So do afbh and afb* (any itemset with afb as a prefix)
    - It becomes anti-monotone!
(TDB and item profit table as on slide 48)

56. Strongly Convertible Constraints
- avg(X) ≥ 25 is convertible anti-monotone w.r.t. item value-descending order R: <a, f, g, d, b, h, c, e>
  - If an itemset af violates a constraint C, so does every itemset with af as a prefix, such as afd
- avg(X) ≥ 25 is convertible monotone w.r.t. item value-ascending order R⁻¹: <e, c, h, b, d, g, f, a>
  - If an itemset d satisfies a constraint C, so do itemsets df and dfa, which have d as a prefix
- Thus, avg(X) ≥ 25 is strongly convertible
(Item profit table as on slide 48)

57. Mining With Convertible Constraints
- C: avg(X) >= 25, min_sup = 2
- List items in every transaction in value-descending order R: <a, f, g, d, b, h, c, e>
  - C is convertible anti-monotone w.r.t. R
- Scan TDB once
  - Remove infrequent items
    - Item h is dropped
  - Itemsets a and f are good, …
- Projection-based mining
  - Impose an appropriate order on item projection
  - Many tough constraints can be converted into (anti-)monotone ones

TDB (min_sup = 2), transactions listed in R order:
TID | Transaction
10 | a, f, d, b, c
20 | f, g, d, b, c
30 | a, f, d, c, e
40 | f, g, h, c, e

Item | Value
a | 40
b | 0
c | -20
d | 10
e | -30
f | 30
g | 20
h | -10

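With items listed in value-descending order, the running average over a prefix can only fall as the prefix grows, so mining can stop extending a prefix the moment its average drops below 25. A small check against the slide's value table:

    value = {"a": 40, "b": 0, "c": -20, "d": 10,
             "e": -30, "f": 30, "g": 20, "h": -10}
    R = sorted(value, key=value.get, reverse=True)  # <a, f, g, d, b, h, c, e>

    prefix, total = [], 0
    for item in R:
        prefix.append(item)
        total += value[item]
        avg = total / len(prefix)
        print("".join(prefix), avg)
        if avg < 25:   # convertible anti-monotone: no longer prefix can recover
            break      # stops at afgdb (avg = 20)
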
58. Handling Multiple Constraints
- Different constraints may require different, or even conflicting, item orderings
- If there exists an order R such that both C1 and C2 are convertible w.r.t. R, then there is no conflict between the two convertible constraints
- If there is a conflict in the item order
  - Try to satisfy one constraint first
  - Then use the other constraint's order to mine frequent itemsets in the corresponding projected database

59. What Constraints Are Convertible?

Constraint | Convertible anti-monotone | Convertible monotone | Strongly convertible
avg(S) ≤ v, ≥ v | Yes | Yes | Yes
median(S) ≤ v, ≥ v | Yes | Yes | Yes
sum(S) ≤ v (items could be of any value, v ≥ 0) | Yes | No | No
sum(S) ≤ v (items could be of any value, v ≤ 0) | No | Yes | No
sum(S) ≥ v (items could be of any value, v ≥ 0) | No | Yes | No
sum(S) ≥ v (items could be of any value, v ≤ 0) | Yes | No | No
…

60. Constraint-Based Mining — A General Picture

Constraint | Antimonotone | Monotone | Succinct
v ∈ S | no | yes | yes
S ⊇ V | no | yes | yes
S ⊆ V | yes | no | yes
min(S) ≤ v | no | yes | yes
min(S) ≥ v | yes | no | yes
max(S) ≤ v | yes | no | yes
max(S) ≥ v | no | yes | yes
count(S) ≤ v | yes | no | weakly
count(S) ≥ v | no | yes | weakly
sum(S) ≤ v (∀a ∈ S, a ≥ 0) | yes | no | no
sum(S) ≥ v (∀a ∈ S, a ≥ 0) | no | yes | no
range(S) ≤ v | yes | no | no
range(S) ≥ v | no | yes | no
avg(S) θ v, θ ∈ {=, ≤, ≥} | convertible | convertible | no
support(S) ≥ ξ | yes | no | no
support(S) ≤ ξ | no | yes | no

61. A Classification of Constraints
(Figure: diagram relating the constraint classes — antimonotone, monotone, succinct, convertible anti-monotone, convertible monotone, strongly convertible, and inconvertible)

62. Chapter 5: Mining Frequent Patterns, Association and Correlations
- Basic concepts and a road map
- Efficient and scalable frequent itemset mining methods
- Mining various kinds of association rules
- From association mining to correlation analysis
- Constraint-based association mining
- Summary

63. Frequent-Pattern Mining: Summary
- Frequent pattern mining — an important task in data mining
- Scalable frequent pattern mining methods
  - Apriori (candidate generation and test)
  - FP-tree based
- Mining a variety of rules and interesting patterns
- Constraint-based mining
- Mining sequential and structured patterns
- Extensions and applications

64. Frequent-Pattern Mining: Research Problems
- Mining fault-tolerant frequent, sequential, and structured patterns
  - Patterns that allow limited faults (insertion, deletion, mutation)
- Mining truly interesting patterns
  - Surprising, novel, concise, …
- Application exploration
  - E.g., DNA sequence analysis and bio-pattern classification
  - "Invisible" data mining

65. Ref: Basic Concepts of Frequent Pattern Mining
- (Association rules) R. Agrawal, T. Imielinski, and A. Swami. Mining association rules between sets of items in large databases. SIGMOD'93.
- (Max-pattern) R. J. Bayardo. Efficiently mining long patterns from databases. SIGMOD'98.
- (Closed-pattern) N. Pasquier, Y. Bastide, R. Taouil, and L. Lakhal. Discovering frequent closed itemsets for association rules. ICDT'99.
- (Sequential pattern) R. Agrawal and R. Srikant. Mining sequential patterns. ICDE'95.

66. Ref: Apriori and Its Improvements
- R. Agrawal and R. Srikant. Fast algorithms for mining association rules. VLDB'94.
- H. Mannila, H. Toivonen, and A. I. Verkamo. Efficient algorithms for discovering association rules. KDD'94.
- A. Savasere, E. Omiecinski, and S. Navathe. An efficient algorithm for mining association rules in large databases. VLDB'95.
- J. S. Park, M. S. Chen, and P. S. Yu. An effective hash-based algorithm for mining association rules. SIGMOD'95.
- H. Toivonen. Sampling large databases for association rules. VLDB'96.
- S. Brin, R. Motwani, J. D. Ullman, and S. Tsur. Dynamic itemset counting and implication rules for market basket analysis. SIGMOD'97.
- S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule mining with relational database systems: Alternatives and implications. SIGMOD'98.

67. Ref: Depth-First, Projection-Based FP Mining
- R. Agarwal, C. Aggarwal, and V. V. V. Prasad. A tree projection algorithm for generation of frequent itemsets. J. Parallel and Distributed Computing, 2002.
- J. Han, J. Pei, and Y. Yin. Mining frequent patterns without candidate generation. SIGMOD'00.
- J. Pei, J. Han, and R. Mao. CLOSET: An efficient algorithm for mining frequent closed itemsets. DMKD'00.
- J. Liu, Y. Pan, K. Wang, and J. Han. Mining frequent item sets by opportunistic projection. KDD'02.
- J. Han, J. Wang, Y. Lu, and P. Tzvetkov. Mining top-k frequent closed patterns without minimum support. ICDM'02.
- J. Wang, J. Han, and J. Pei. CLOSET+: Searching for the best strategies for mining frequent closed itemsets. KDD'03.
- G. Liu, H. Lu, W. Lou, and J. X. Yu. On computing, storing and querying frequent patterns. KDD'03.

68. Ref: Vertical Format and Row Enumeration Methods
- M. J. Zaki, S. Parthasarathy, M. Ogihara, and W. Li. Parallel algorithms for discovery of association rules. DAMI, 1997.
- M. J. Zaki and C.-J. Hsiao. CHARM: An efficient algorithm for closed itemset mining. SDM'02.
- C. Bucila, J. Gehrke, D. Kifer, and W. White. DualMiner: A dual-pruning algorithm for itemsets with constraints. KDD'02.
- F. Pan, G. Cong, A. K. H. Tung, J. Yang, and M. Zaki. CARPENTER: Finding closed patterns in long biological datasets. KDD'03.

69. Ref: Mining Multi-Level and Quantitative Rules
- R. Srikant and R. Agrawal. Mining generalized association rules. VLDB'95.
- J. Han and Y. Fu. Discovery of multiple-level association rules from large databases. VLDB'95.
- R. Srikant and R. Agrawal. Mining quantitative association rules in large relational tables. SIGMOD'96.
- T. Fukuda, Y. Morimoto, S. Morishita, and T. Tokuyama. Data mining using two-dimensional optimized association rules: Scheme, algorithms, and visualization. SIGMOD'96.
- K. Yoda, T. Fukuda, Y. Morimoto, S. Morishita, and T. Tokuyama. Computing optimized rectilinear regions for association rules. KDD'97.
- R. J. Miller and Y. Yang. Association rules over interval data. SIGMOD'97.
- Y. Aumann and Y. Lindell. A statistical theory for quantitative association rules. KDD'99.

70. Ref: Mining Correlations and Interesting Rules
- M. Klemettinen, H. Mannila, P. Ronkainen, H. Toivonen, and A. I. Verkamo. Finding interesting rules from large sets of discovered association rules. CIKM'94.
- S. Brin, R. Motwani, and C. Silverstein. Beyond market basket: Generalizing association rules to correlations. SIGMOD'97.
- C. Silverstein, S. Brin, R. Motwani, and J. Ullman. Scalable techniques for mining causal structures. VLDB'98.
- P.-N. Tan, V. Kumar, and J. Srivastava. Selecting the right interestingness measure for association patterns. KDD'02.
- E. Omiecinski. Alternative interest measures for mining associations. TKDE, 2003.
- Y. K. Lee, W. Y. Kim, Y. D. Cai, and J. Han. CoMine: Efficient mining of correlated patterns. ICDM'03.

71. Ref: Mining Other Kinds of Rules
- R. Meo, G. Psaila, and S. Ceri. A new SQL-like operator for mining association rules. VLDB'96.
- B. Lent, A. Swami, and J. Widom. Clustering association rules. ICDE'97.
- A. Savasere, E. Omiecinski, and S. Navathe. Mining for strong negative associations in a large database of customer transactions. ICDE'98.
- D. Tsur, J. D. Ullman, S. Abitboul, C. Clifton, R. Motwani, and S. Nestorov. Query flocks: A generalization of association-rule mining. SIGMOD'98.
- F. Korn, A. Labrinidis, Y. Kotidis, and C. Faloutsos. Ratio rules: A new paradigm for fast, quantifiable data mining. VLDB'98.
- K. Wang, S. Zhou, and J. Han. Profit mining: From patterns to actions. EDBT'02.

72. Ref: Constraint-Based Pattern Mining
- R. Srikant, Q. Vu, and R. Agrawal. Mining association rules with item constraints. KDD'97.
- R. Ng, L. V. S. Lakshmanan, J. Han, and A. Pang. Exploratory mining and pruning optimizations of constrained association rules. SIGMOD'98.
- M. N. Garofalakis, R. Rastogi, and K. Shim. SPIRIT: Sequential pattern mining with regular expression constraints. VLDB'99.
- G. Grahne, L. Lakshmanan, and X. Wang. Efficient mining of constrained correlated sets. ICDE'00.
- J. Pei, J. Han, and L. V. S. Lakshmanan. Mining frequent itemsets with convertible constraints. ICDE'01.
- J. Pei, J. Han, and W. Wang. Mining sequential patterns with constraints in large databases. CIKM'02.

73. Ref: Mining Sequential and Structured Patterns
- R. Srikant and R. Agrawal. Mining sequential patterns: Generalizations and performance improvements. EDBT'96.
- H. Mannila, H. Toivonen, and A. I. Verkamo. Discovery of frequent episodes in event sequences. DAMI, 1997.
- M. Zaki. SPADE: An efficient algorithm for mining frequent sequences. Machine Learning, 2001.
- J. Pei, J. Han, H. Pinto, Q. Chen, U. Dayal, and M.-C. Hsu. PrefixSpan: Mining sequential patterns efficiently by prefix-projected pattern growth. ICDE'01.
- M. Kuramochi and G. Karypis. Frequent subgraph discovery. ICDM'01.
- X. Yan, J. Han, and R. Afshar. CloSpan: Mining closed sequential patterns in large datasets. SDM'03.
- X. Yan and J. Han. CloseGraph: Mining closed frequent graph patterns. KDD'03.

74. Ref: Mining Spatial, Multimedia, and Web Data
- K. Koperski and J. Han. Discovery of spatial association rules in geographic information databases. SSD'95.
- O. R. Zaiane, M. Xin, and J. Han. Discovering Web access patterns and trends by applying OLAP and data mining technology on Web logs. ADL'98.
- O. R. Zaiane, J. Han, and H. Zhu. Mining recurrent items in multimedia with progressive resolution refinement. ICDE'00.
- D. Gunopulos and I. Tsoukatos. Efficient mining of spatiotemporal patterns. SSTD'01.

75. Ref: Mining Frequent Patterns in Time-Series Data
- B. Ozden, S. Ramaswamy, and A. Silberschatz. Cyclic association rules. ICDE'98.
- J. Han, G. Dong, and Y. Yin. Efficient mining of partial periodic patterns in time series database. ICDE'99.
- H. Lu, L. Feng, and J. Han. Beyond intra-transaction association analysis: Mining multi-dimensional inter-transaction association rules. TOIS, 2000.
- B.-K. Yi, N. Sidiropoulos, T. Johnson, H. V. Jagadish, C. Faloutsos, and A. Biliris. Online data mining for co-evolving time sequences. ICDE'00.
- W. Wang, J. Yang, and R. Muntz. TAR: Temporal association rules on evolving numerical attributes. ICDE'01.
- J. Yang, W. Wang, and P. S. Yu. Mining asynchronous periodic patterns in time series data. TKDE, 2003.

76. Ref: Iceberg Cube and Cube Computation
- S. Agarwal, R. Agrawal, P. M. Deshpande, A. Gupta, J. F. Naughton, R. Ramakrishnan, and S. Sarawagi. On the computation of multidimensional aggregates. VLDB'96.
- Y. Zhao, P. M. Deshpande, and J. F. Naughton. An array-based algorithm for simultaneous multidimensional aggregates. SIGMOD'97.
- J. Gray, et al. Data cube: A relational aggregation operator generalizing group-by, cross-tab and sub-totals. DAMI, 1997.
- M. Fang, N. Shivakumar, H. Garcia-Molina, R. Motwani, and J. D. Ullman. Computing iceberg queries efficiently. VLDB'98.
- S. Sarawagi, R. Agrawal, and N. Megiddo. Discovery-driven exploration of OLAP data cubes. EDBT'98.
- K. Beyer and R. Ramakrishnan. Bottom-up computation of sparse and iceberg cubes. SIGMOD'99.

77. Ref: Iceberg Cube and Cube Exploration
- J. Han, J. Pei, G. Dong, and K. Wang. Computing iceberg data cubes with complex measures. SIGMOD'01.
- W. Wang, H. Lu, J. Feng, and J. X. Yu. Condensed cube: An effective approach to reducing data cube size. ICDE'02.
- G. Dong, J. Han, J. Lam, J. Pei, and K. Wang. Mining multi-dimensional constrained gradients in data cubes. VLDB'01.
- T. Imielinski, L. Khachiyan, and A. Abdulghani. Cubegrades: Generalizing association rules. DAMI, 2002.
- L. V. S. Lakshmanan, J. Pei, and J. Han. Quotient cube: How to summarize the semantics of a data cube. VLDB'02.
- D. Xin, J. Han, X. Li, and B. W. Wah. Star-Cubing: Computing iceberg cubes by top-down and bottom-up integration. VLDB'03.

78. Ref: FP for Classification and Clustering
- G. Dong and J. Li. Efficient mining of emerging patterns: Discovering trends and differences. KDD'99.
- B. Liu, W. Hsu, and Y. Ma. Integrating classification and association rule mining. KDD'98.
- W. Li, J. Han, and J. Pei. CMAR: Accurate and efficient classification based on multiple class-association rules. ICDM'01.
- H. Wang, W. Wang, J. Yang, and P. S. Yu. Clustering by pattern similarity in large data sets. SIGMOD'02.
- J. Yang and W. Wang. CLUSEQ: Efficient and effective sequence clustering. ICDE'03.
- B. Fung, K. Wang, and M. Ester. Large hierarchical document clustering using frequent itemsets. SDM'03.
- X. Yin and J. Han. CPAR: Classification based on predictive association rules. SDM'03.

79. Ref: Stream and Privacy-Preserving FP Mining
- A. Evfimievski, R. Srikant, R. Agrawal, and J. Gehrke. Privacy preserving mining of association rules. KDD'02.
- J. Vaidya and C. Clifton. Privacy preserving association rule mining in vertically partitioned data. KDD'02.
- G. Manku and R. Motwani. Approximate frequency counts over data streams. VLDB'02.
- Y. Chen, G. Dong, J. Han, B. W. Wah, and J. Wang. Multi-dimensional regression analysis of time-series data streams. VLDB'02.
- C. Giannella, J. Han, J. Pei, X. Yan, and P. S. Yu. Mining frequent patterns in data streams at multiple time granularities. Next Generation Data Mining, 2003.
- A. Evfimievski, J. Gehrke, and R. Srikant. Limiting privacy breaches in privacy preserving data mining. PODS'03.

80. Ref: Other Frequent Pattern Mining Applications
- Y. Huhtala, J. Kärkkäinen, P. Porkka, and H. Toivonen. Efficient discovery of functional and approximate dependencies using partitions. ICDE'98.
- H. V. Jagadish, J. Madar, and R. Ng. Semantic compression and pattern extraction with fascicles. VLDB'99.
- T. Dasu, T. Johnson, S. Muthukrishnan, and V. Shkapenyuk. Mining database structure; or, how to build a data quality browser. SIGMOD'02.