DATA MINING
LECTURE 4
Frequent Itemsets, Association Rules
Evaluation
Alternative Algorithms
Subrata Kumer Paul
Assistant Professor, Dept. of CSE, BAUET
sksubrata96@gmail.com
RECAP
Mining Frequent Itemsets
• Itemset
• A collection of one or more items
• Example: {Milk, Bread, Diaper}
• k-itemset
• An itemset that contains k items
• Support (σ)
• Count: frequency of occurrence of an itemset
• E.g. σ({Milk, Bread, Diaper}) = 2
• Fraction: fraction of transactions that contain an itemset
• E.g. s({Milk, Bread, Diaper}) = 2/5 = 40%
• Frequent Itemset
• An itemset whose support is greater than or equal to a minsup threshold,
s(I) ≥ minsup
• Problem Definition
• Input: A set of transactions T, over a set of items I, minsup value
• Output: All itemsets with items in I having s(I) ≥ minsup
TID Items
1 Bread, Milk
2 Bread, Diaper, Beer, Eggs
3 Milk, Diaper, Beer, Coke
4 Bread, Milk, Diaper, Beer
5 Bread, Milk, Diaper, Coke
The itemset lattice
null
AB AC AD AE BC BD BE CD CE DE
A B C D E
ABC ABD ABE ACD ACE ADE BCD BCE BDE CDE
ABCD ABCE ABDE ACDE BCDE
ABCDE
Given d items, there are
2^d possible itemsets
Too expensive to test all!
The Apriori Principle
• Apriori principle (Main observation):
– If an itemset is frequent, then all of its subsets must also
be frequent
– If an itemset is not frequent, then all of its supersets
cannot be frequent
– The support of an itemset never exceeds the support of
its subsets
– This is known as the anti-monotone property of support
∀X, Y: (X ⊆ Y) ⟹ s(X) ≥ s(Y)
Illustration of the Apriori principle
Found to be frequent
Frequent
subsets
Illustration of the Apriori principle
Found to be
Infrequent
null
AB AC AD AE BC BD BE CD CE DE
A B C D E
ABC ABD ABE ACD ACE ADE BCD BCE BDE CDE
ABCD ABCE ABDE ACDE BCDE
ABCDE
null
AB AC AD AE BC BD BE CD CE DE
A B C D E
ABC ABD ABE ACD ACE ADE BCD BCE BDE CDE
ABCD ABCE ABDE ACDE BCDE
ABCDE
Pruned
Infrequent supersets
R. Agrawal, R. Srikant: "Fast Algorithms for Mining Association Rules",
Proc. of the 20th Int'l Conference on Very Large Databases, 1994.
The Apriori algorithm
Level-wise approach
Ck = candidate itemsets of size k
Lk = frequent itemsets of size k
Candidate
generation
Frequent
itemset
generation
1. k = 1, C1 = all items
2. While Ck not empty
3. Scan the database to find which itemsets in
Ck are frequent and put them into Lk
4. Use Lk to generate a collection of candidate
itemsets Ck+1 of size k+1
5. k = k+1
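This level-wise loop is easy to express in a few lines of Python. The sketch below is illustrative rather than the lecture's own pseudocode: transactions are assumed to be sets of items, and support is counted by brute-force subset tests.

from itertools import combinations

def apriori(transactions, minsup):
    # Level-wise search; returns {frozenset(itemset): support count}
    items = {item for t in transactions for item in t}
    candidates = [frozenset([i]) for i in items]        # C1 = all items
    frequent, k = {}, 1
    while candidates:
        # Scan the database to count each candidate in Ck
        counts = dict.fromkeys(candidates, 0)
        for t in transactions:
            for c in candidates:
                if c <= t:                              # c contained in t
                    counts[c] += 1
        Lk = {c: n for c, n in counts.items() if n >= minsup}
        frequent.update(Lk)
        # Generate Ck+1: join frequent k-itemsets, and prune any
        # candidate that has an infrequent k-subset (Apriori principle)
        candidates = list({a | b for a in Lk for b in Lk
                           if len(a | b) == k + 1
                           and all(frozenset(s) in Lk
                                   for s in combinations(a | b, k))})
        k += 1
    return frequent

On the 5-transaction example above, apriori(T, minsup=3) returns, among others, frozenset({'Milk', 'Diaper'}) with count 3.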
Candidate Generation
• Basic principle (Apriori):
• An itemset of size k+1 is candidate to be frequent only if
all of its subsets of size k are known to be frequent
• Main idea:
• Construct a candidate of size k+1 by combining two
frequent itemsets of size k
• Prune the generated k+1-itemsets that do not have all
k-subsets to be frequent
Computing Frequent Itemsets
• Given the set of candidate itemsets Ck, we need to compute
the support and find the frequent itemsets Lk.
• Scan the data, and use a hash structure to keep a counter
for each candidate itemset that appears in the data
TID Items
1 Bread, Milk
2 Bread, Diaper, Beer, Eggs
3 Milk, Diaper, Beer, Coke
4 Bread, Milk, Diaper, Beer
5 Bread, Milk, Diaper, Coke
(Figure: the N transactions are matched against a hash structure that holds the candidate k-itemsets Ck in buckets, with a counter per candidate.)
A simple hash structure
• Create a dictionary (hash table) that stores the
candidate itemsets as keys, and the number of
appearances as the value.
• Initialize with zero
• Increment the counter for each itemset that you
see in the data
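A minimal Python sketch of such a dictionary-based counter (the candidate list is the 15-itemset example that follows; count_transaction is an illustrative name):

from itertools import combinations

candidates = [{1,4,5}, {1,2,4}, {4,5,7}, {1,2,5}, {4,5,8},
              {1,5,9}, {1,3,6}, {2,3,4}, {5,6,7}, {3,4,5},
              {3,5,6}, {3,5,7}, {6,8,9}, {3,6,7}, {3,6,8}]
counts = {frozenset(c): 0 for c in candidates}          # initialize with zero

def count_transaction(transaction, counts, k=3):
    # Enumerate all k-subsets of the transaction and increment the
    # counter of every subset that is a candidate
    for subset in combinations(sorted(transaction), k):
        key = frozenset(subset)
        if key in counts:
            counts[key] += 1

count_transaction({1, 2, 3, 5, 6}, counts)
# increments {1,3,6}, {1,2,5} and {3,5,6}, as in the example below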
Example
Suppose you have 15 candidate
itemsets of length 3:
{1 4 5}, {1 2 4}, {4 5 7}, {1 2 5}, {4 5 8},
{1 5 9}, {1 3 6}, {2 3 4}, {5 6 7}, {3 4 5},
{3 5 6}, {3 5 7}, {6 8 9}, {3 6 7}, {3 6 8}
Hash table stores the counts of the
candidate itemsets as they have been
computed so far
Key Value
{3 6 7} 0
{3 4 5} 1
{1 3 6} 3
{1 4 5} 5
{2 3 4} 2
{1 5 9} 1
{3 6 8} 0
{4 5 7} 2
{6 8 9} 0
{5 6 7} 3
{1 2 4} 8
{3 5 7} 1
{1 2 5} 0
{3 5 6} 1
{4 5 8} 0
Example
Tuple {1,2,3,5,6} generates the
following itemsets of length 3:
{1 2 3}, {1 2 5}, {1 2 6}, {1 3 5}, {1 3 6},
{1 5 6}, {2 3 5}, {2 3 6}, {2 5 6}, {3 5 6}
Increment the counters for the itemsets
in the dictionary
Key Value
{3 6 7} 0
{3 4 5} 1
{1 3 6} 3
{1 4 5} 5
{2 3 4} 2
{1 5 9} 1
{3 6 8} 0
{4 5 7} 2
{6 8 9} 0
{5 6 7} 3
{1 2 4} 8
{3 5 7} 1
{1 2 5} 0
{3 5 6} 1
{4 5 8} 0
Example
Tuple {1,2,3,5,6} generates the
following itemsets of length 3:
{1 2 3}, {1 2 5}, {1 2 6}, {1 3 5}, {1 3 6},
{1 5 6}, {2 3 5}, {2 3 6}, {2 5 6}, {3 5 6}
Increment the counters for the itemsets
in the dictionary
Key Value
{3 6 7} 0
{3 4 5} 1
{1 3 6} 4
{1 4 5} 5
{2 3 4} 2
{1 5 9} 1
{3 6 8} 0
{4 5 7} 2
{6 8 9} 0
{5 6 7} 3
{1 2 4} 8
{3 5 7} 1
{1 2 5} 1
{3 5 6} 2
{4 5 8} 0
Mining Association Rules
Example: {Milk, Diaper} ⟹ Beer

s = σ(Milk, Diaper, Beer) / |T| = 2/5 = 0.4

c = σ(Milk, Diaper, Beer) / σ(Milk, Diaper) = 2/3 ≈ 0.67

• Association Rule
  – An implication expression of the form X ⟹ Y, where X and Y are itemsets
  – Example: {Milk, Diaper} ⟹ {Beer}
• Rule Evaluation Metrics
  – Support (s): fraction of transactions that contain both X and Y = the probability P(X,Y) that X and Y occur together
  – Confidence (c): how often Y appears in transactions that contain X = the conditional probability P(Y|X) that Y occurs given that X has occurred

TID Items
1 Bread, Milk
2 Bread, Diaper, Beer, Eggs
3 Milk, Diaper, Beer, Coke
4 Bread, Milk, Diaper, Beer
5 Bread, Milk, Diaper, Coke

• Problem Definition
  – Input: A set of transactions T, over a set of items I, minsup and minconf values
  – Output: All rules with items in I having s ≥ minsup and c ≥ minconf
Mining Association Rules
• Two-step approach:
1. Frequent Itemset Generation
– Generate all itemsets whose support ≥ minsup
2. Rule Generation
– Generate high confidence rules from each frequent itemset,
where each rule is a partitioning of a frequent itemset into
Left-Hand-Side (LHS) and Right-Hand-Side (RHS)
Frequent itemset: {A,B,C,D}
Rule: ABCD
Association Rule anti-monotonicity
• Confidence is anti-monotone w.r.t. number of
items on the RHS of the rule (or monotone with
respect to the LHS of the rule)
• e.g., L = {A,B,C,D}:
c(ABC ⟹ D) ≥ c(AB ⟹ CD) ≥ c(A ⟹ BCD)
Rule Generation for Apriori Algorithm
• A candidate rule is generated by merging two rules that
share the same prefix in the RHS
• join(CD ⟹ AB, BD ⟹ AC)
would produce the candidate
rule D ⟹ ABC
• Prune rule D ⟹ ABC if its
subset AD ⟹ BC does not have
high confidence
• Essentially we are doing Apriori on the RHS
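A small Python sketch of this rule-generation step for a single frequent itemset (illustrative; for brevity it tries every possible RHS instead of growing the RHS level-wise, and assumes a dict with the supports of the itemset and all its subsets):

from itertools import combinations

def gen_rules(itemset, support, minconf):
    # All rules LHS -> RHS from one frequent itemset with c >= minconf;
    # support maps frozenset -> support count
    itemset = frozenset(itemset)
    rules = []
    for r in range(1, len(itemset)):
        for rhs in combinations(itemset, r):
            rhs = frozenset(rhs)
            lhs = itemset - rhs
            conf = support[itemset] / support[lhs]      # c(LHS -> RHS)
            if conf >= minconf:
                rules.append((lhs, rhs, conf))
    return rules

A real implementation would exploit the anti-monotonicity above: grow the RHS one item at a time and stop extending any RHS whose rule already fails minconf.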
RESULT
POST-PROCESSING
Compact Representation of Frequent
Itemsets
• Some itemsets are redundant because they have the same
support as their supersets
• Number of frequent itemsets
• Need a compact representation
TID A1 A2 A3 A4 A5 A6 A7 A8 A9 A10 B1 B2 B3 B4 B5 B6 B7 B8 B9 B10 C1 C2 C3 C4 C5 C6 C7 C8 C9 C10
1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
5 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
6 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
7 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
8 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
9 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
10 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
12 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1

Number of frequent itemsets = 3 × Σ_{k=1}^{10} (10 choose k) = 3 × (2^10 − 1) = 3069
Maximal Frequent Itemset
null
AB AC AD AE BC BD BE CD CE DE
A B C D E
ABC ABD ABE ACD ACE ADE BCD BCE BDE CDE
ABCD ABCE ABDE ACDE BCDE
ABCDE
Border
Infrequent
Itemsets
Maximal
Itemsets
An itemset is maximal frequent if none of its immediate supersets is
frequent
Maximal itemsets = positive border
Maximal: no superset has this property
Negative Border
null
AB AC AD AE BC BD BE CD CE DE
A B C D E
ABC ABD ABE ACD ACE ADE BCD BCE BDE CDE
ABCD ABCE ABDE ACDE BCDE
ABCDE
Border
Infrequent
Itemsets
Itemsets that are not frequent, but all their immediate subsets are
frequent.
Minimal: no subset has this property
Border
• Border = Positive Border + Negative Border
• Itemsets such that all their immediate subsets are
frequent and all their immediate supersets are
infrequent.
• Either the positive, or the negative border is
sufficient to summarize all frequent itemsets.
Closed Itemset
• An itemset is closed if none of its immediate supersets
has the same support as the itemset
TID Items
1 {A,B}
2 {B,C,D}
3 {A,B,C,D}
4 {A,B,D}
5 {A,B,C,D}
Itemset Support
{A} 4
{B} 5
{C} 3
{D} 4
{A,B} 4
{A,C} 2
{A,D} 3
{B,C} 3
{B,D} 4
{C,D} 3
Itemset Support
{A,B,C} 2
{A,B,D} 3
{A,C,D} 2
{B,C,D} 3
{A,B,C,D} 2
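Given such a support table, the closed and maximal conditions can be checked directly. A sketch in Python (assuming support is a dict from frozenset to count covering every itemset with nonzero support):

def closed_and_maximal(support, minsup):
    frequent = {s: c for s, c in support.items() if c >= minsup}
    closed, maximal = [], []
    for s, c in frequent.items():
        sups = [t for t in support if len(t) == len(s) + 1 and s < t]
        # Closed: no immediate superset has the same support as s
        if all(support[t] < c for t in sups):
            closed.append(s)
        # Maximal: no immediate superset is frequent
        if all(support[t] < minsup for t in sups):
            maximal.append(s)
    return closed, maximal

For the table above with minsup = 2, {B} comes out closed (support 5, while {A,B}, {B,C}, {B,D} all have less), and {A,B,C,D} is both closed and maximal.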
Maximal vs Closed Itemsets
TID Items
1 ABC
2 ABCD
3 BCE
4 ACDE
5 DE
null
AB AC AD AE BC BD BE CD CE DE
A B C D E
ABC ABD ABE ACD ACE ADE BCD BCE BDE CDE
ABCD ABCE ABDE ACDE BCDE
ABCDE
(In the figure, every itemset in the lattice is annotated with the IDs of the transactions that contain it, e.g. A with 1, 2, 4; itemsets with no annotation are not supported by any transaction.)
Maximal vs Closed Frequent Itemsets
null
AB AC AD AE BC BD BE CD CE DE
A B C D E
ABC ABD ABE ACD ACE ADE BCD BCE BDE CDE
ABCD ABCE ABDE ACDE BCDE
ABCDE
(As before, each itemset in the lattice is annotated with the IDs of its supporting transactions.)
Minimum support = 2
# Closed = 9
# Maximal = 4
Closed
and
maximal
Closed but not
maximal
Maximal vs Closed Itemsets
Frequent
Itemsets
Closed
Frequent
Itemsets
Maximal
Frequent
Itemsets
Pattern Evaluation
• Association rule algorithms tend to produce too many rules but
many of them are uninteresting or redundant
• Redundant if {A,B,C}  {D} and {A,B}  {D} have same support &
confidence
• Summarization techniques
• Uninteresting, if the pattern that is revealed does not offer useful
information.
• Interestingness measures: a hard problem to define
• Interestingness measures can be used to prune/rank the
derived patterns
• Subjective measures: require human analyst
• Objective measures: rely on the data.
• In the original formulation of association rules, support &
confidence are the only measures used
Computing Interestingness Measure
• Given a rule X  Y, information needed to compute rule
interestingness can be obtained from a contingency table
          Y     ¬Y
  X      f11   f10   f1+
 ¬X      f01   f00   f0+
         f+1   f+0    N

Contingency table for X ⟹ Y

f11: support of X and Y
f10: support of X and ¬Y
f01: support of ¬X and Y
f00: support of ¬X and ¬Y

Used to define various measures
• support, confidence, lift, Gini, J-measure, etc.

X: itemset X appears in the tuple; ¬X: X does not appear in the tuple (similarly for Y and ¬Y)
Drawback of Confidence
            Coffee   ¬Coffee
Tea           15        5       20
¬Tea          75        5       80
              90       10      100

(15 = people who drink both tea and coffee; 75 = coffee but not tea; 90 = all coffee drinkers; 20 = all tea drinkers.)

Association Rule: Tea ⟹ Coffee

Confidence = P(Coffee|Tea) = 15/20 = 0.75
but P(Coffee) = 90/100 = 0.9

• Although confidence is high, the rule is misleading
• P(Coffee|¬Tea) = 75/80 = 0.9375
Statistical Independence
• Population of 1000 students
• 600 students know how to swim (S)
• 700 students know how to bike (B)
• 420 students know how to swim and bike (S,B)
• P(S,B) = 420/1000 = 0.42
• P(S) × P(B) = 0.6 × 0.7 = 0.42
• P(S,B) = P(S) × P(B) => statistical independence
Statistical Independence
• Population of 1000 students
• 600 students know how to swim (S)
• 700 students know how to bike (B)
• 500 students know how to swim and bike (S,B)
• P(S,B) = 500/1000 = 0.5
• P(S) × P(B) = 0.6 × 0.7 = 0.42
• P(S,B) > P(S) × P(B) => positively correlated
Statistical Independence
• Population of 1000 students
• 600 students know how to swim (S)
• 700 students know how to bike (B)
• 300 students know how to swim and bike (S,B)
• P(S,B) = 300/1000 = 0.3
• P(S) × P(B) = 0.6 × 0.7 = 0.42
• P(S,B) < P(S) × P(B) => negatively correlated
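The three cases above reduce to comparing P(S,B) with P(S) × P(B); a tiny sketch using exact integer arithmetic to avoid floating-point surprises (function name illustrative):

def dependence(n, n_s, n_b, n_sb):
    # Compare n_sb/n with (n_s/n)*(n_b/n), i.e. n_sb*n with n_s*n_b
    lhs, rhs = n_sb * n, n_s * n_b
    if lhs > rhs:
        return "positively correlated"
    if lhs < rhs:
        return "negatively correlated"
    return "statistically independent"

print(dependence(1000, 600, 700, 420))   # statistically independent
print(dependence(1000, 600, 700, 500))   # positively correlated
print(dependence(1000, 600, 700, 300))   # negatively correlated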
Statistical-based Measures
• Measures that take into account statistical dependence
• Lift/Interest/PMI
Lift = P(Y|X) / P(Y) = P(X,Y) / (P(X) P(Y)) = Interest

In text mining it is called: Pointwise Mutual Information

• Piatetsky-Shapiro

PS = P(X,Y) − P(X) P(Y)
• All these measures measure deviation from independence
• The higher, the better (why?)
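Both measures follow directly from the contingency counts introduced earlier; a minimal Python sketch (illustrative names):

def lift_and_ps(f11, f10, f01, f00):
    n = f11 + f10 + f01 + f00
    p_xy = f11 / n           # P(X,Y)
    p_x = (f11 + f10) / n    # P(X)
    p_y = (f11 + f01) / n    # P(Y)
    return p_xy / (p_x * p_y), p_xy - p_x * p_y   # (Lift, PS)

For the Tea ⟹ Coffee table of the next slide, lift_and_ps(15, 5, 75, 5) gives a lift of about 0.833 and PS = −0.03.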
Example: Lift/Interest
            Coffee   ¬Coffee
Tea           15        5       20
¬Tea          75        5       80
              90       10      100

Association Rule: Tea ⟹ Coffee
Confidence = P(Coffee|Tea) = 0.75
but P(Coffee) = 0.9
⟹ Lift = 0.75/0.9 = 0.15/(0.9 × 0.2) = 0.833 (< 1, therefore negatively associated)
Another Example

                        of     the    of, the
Fraction of documents:  0.9    0.9    0.8

P(of, the) ≈ P(of) P(the)
If I were creating a document by picking words randomly, (of, the) would have more or less the same probability of appearing together by chance → no correlation

                        hong   kong   hong, kong
Fraction of documents:  0.2    0.2    0.19

P(hong, kong) ≫ P(hong) P(kong)
(hong, kong) appear together far more often than they would by chance; the two words appear almost always together → positive correlation

                        obama  karagounis  obama, karagounis
Fraction of documents:  0.2    0.2         0.001

P(obama, karagounis) ≪ P(obama) P(karagounis)
(obama, karagounis) appear together far less often than they would by chance; the two words almost never appear together → negative correlation
Drawbacks of Lift/Interest/Mutual Information

                        honk    konk    honk, konk
Fraction of documents:  0.0001  0.0001  0.0001

MI(honk, konk) = 0.0001 / (0.0001 × 0.0001) = 10000

                        hong    kong    hong, kong
Fraction of documents:  0.2     0.2     0.19

MI(hong, kong) = 0.19 / (0.2 × 0.2) = 4.75
Rare co-occurrences are deemed more interesting.
But this is not always what we want
ALTERNATIVE FREQUENT
ITEMSET COMPUTATION
Slides taken from Mining Massive Datasets course by
Anand Rajaraman and Jeff Ullman.
(Pipeline: C1 = all items → first pass: count the items → filter → L1 = frequent items → construct → C2 = all pairs of items from L1 → second pass: count the pairs → filter → L2 = frequent pairs → construct → C3 → ...)
Finding the frequent pairs is usually
the most expensive operation
Picture of A-Priori
(Main memory in Pass 1 holds the item counts; in Pass 2 it holds the frequent items plus the counts of pairs of frequent items.)
PCY Algorithm
• During Pass 1 (computing frequent
items) of Apriori, most memory is idle.
• Use that memory to keep counts of
buckets into which pairs of items are
hashed.
• Just the count, not the pairs themselves.
(Pass 1 memory picture: item counts, with the rest of memory available for bucket counts.)
Needed Extensions
1. Pairs of items need to be generated from the
input file; they are not present in the file.
2. We are not just interested in the presence of a
pair, but we need to see whether it is present
at least s (support) times.
PCY Algorithm – (2)
• A bucket is frequent if its count is at least the
support threshold.
• If a bucket is not frequent, no pair that hashes to
that bucket could possibly be a frequent pair.
• The opposite is not true, a bucket may be frequent but
hold infrequent pairs
• On Pass 2 (frequent pairs), we only count pairs
that hash to frequent buckets.
PCY Algorithm –
Before Pass 1: Organize Main Memory
• Space to count each item.
• One (typically) 4-byte integer per item.
• Use the rest of the space for as many integers,
representing buckets, as we can.
Picture of PCY
(Pass 1 memory: item counts at the top; the remaining space is the hash table of bucket counts.)
PCY Algorithm – Pass 1
FOR (each basket) {
   FOR (each item in the basket)
      add 1 to item's count;
   FOR (each pair of items in the basket) {
      hash the pair to a bucket;
      add 1 to the count for that bucket
   }
}
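In Python, Pass 1 might look like the following sketch (the hash function and bucket count are illustrative choices):

from itertools import combinations

def pcy_pass1(baskets, num_buckets):
    item_counts = {}
    bucket_counts = [0] * num_buckets
    for basket in baskets:
        for item in basket:
            item_counts[item] = item_counts.get(item, 0) + 1
        # Hash every pair in the basket into a bucket; keep only counts,
        # never the pairs themselves
        for pair in combinations(sorted(basket), 2):
            b = hash(pair) % num_buckets
            bucket_counts[b] += 1
    return item_counts, bucket_counts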
Observations About Buckets
1. A bucket that a frequent pair hashes to is
surely frequent.
• We cannot use the hash table to eliminate any
member of this bucket.
2. Even without any frequent pair, a bucket can
be frequent.
• Again, nothing in the bucket can be eliminated.
3. But in the best case, the count for a bucket is
less than the support s.
• Now, all pairs that hash to this bucket can be eliminated
as candidates, even if the pair consists of two frequent
items.
PCY Algorithm – Between Passes
• Replace the buckets by a bit-vector:
• 1 means the bucket is frequent; 0 means it is not.
• 4-byte integers are replaced by bits, so the bit-
vector requires 1/32 of memory.
• Also, find which items are frequent and list them
for the second pass.
• Same as with Apriori
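A sketch of that compression step (one bit per bucket; helper names illustrative):

def to_bitmap(bucket_counts, s):
    # 1 bit per bucket: 1 if the bucket count reached the threshold s
    bitmap = bytearray((len(bucket_counts) + 7) // 8)
    for i, count in enumerate(bucket_counts):
        if count >= s:
            bitmap[i // 8] |= 1 << (i % 8)
    return bitmap

def bucket_is_frequent(bitmap, i):
    return bool(bitmap[i // 8] & (1 << (i % 8)))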
Picture of PCY
(Pass 1 memory: item counts plus the hash table of bucket counts. Pass 2 memory: frequent items, the bitmap summarizing the buckets, and counts of candidate pairs.)
PCY Algorithm – Pass 2
• Count all pairs {i, j } that meet the conditions
for being a candidate pair:
1. Both i and j are frequent items.
2. The pair {i, j} hashes to a bucket number whose bit
in the bit vector is 1.
• Notice both these conditions are necessary for
the pair to have a chance of being frequent.
All (Or Most) Frequent Itemsets
in less than 2 Passes
• A-Priori, PCY, etc., take k passes to find
frequent itemsets of size k.
• Other techniques use 2 or fewer passes for all
sizes:
• Simple sampling algorithm.
• SON (Savasere, Omiecinski, and Navathe).
• Toivonen.
Simple Sampling Algorithm – (1)
• Take a random sample of the market baskets.
• Run Apriori or one of its improvements (for sets
of all sizes, not just pairs) in main memory, so
you don’t pay for disk I/O each time you increase
the size of itemsets.
• Make sure the sample is such that there is enough
space for counts.
Main-Memory Picture
(Memory holds a copy of the sample baskets plus space for the counts.)
Simple Algorithm – (2)
• Use as your support threshold a suitable,
scaled-back number.
• E.g., if your sample is 1/100 of the baskets, use
s /100 as your support threshold instead of s.
• You could stop here (single pass)
• What could be the problem?
Simple Algorithm – Option
• Optionally, verify that your guesses are truly
frequent in the entire data set by a second
pass (eliminate false positives)
• But you don’t catch sets frequent in the whole
but not in the sample. (false negatives)
• Smaller threshold, e.g., s /125, helps catch more
truly frequent itemsets.
• But requires more space.
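Putting the sampling idea together as a sketch (reusing the apriori() sketch from earlier; the sample fraction and the scaled-back factor are illustrative):

import random

def sample_frequent(baskets, s, fraction=0.01, verify=True):
    sample = [b for b in baskets if random.random() < fraction]
    # Threshold scaled to the sample, lowered a bit (e.g. s/125 vs s/100)
    scaled = max(int(s * fraction * 0.8), 1)
    guesses = apriori(sample, scaled)
    if not verify:
        return guesses            # single pass; may contain false positives
    # Optional second pass removes false positives (false negatives,
    # itemsets frequent in the whole but not in the sample, remain)
    counts = dict.fromkeys(guesses, 0)
    for b in baskets:
        for g in counts:
            if g <= b:
                counts[g] += 1
    return {g: c for g, c in counts.items() if c >= s}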
SON Algorithm – (1)
• First pass: Break the data into chunks that can be
processed in main memory.
• Read one chunk at a time
• Find all frequent itemsets for each chunk.
• Threshold = s/number of chunks
• An itemset becomes a candidate if it is found to
be frequent in any one or more chunks of the
baskets.
SON Algorithm – (2)
• Second pass: count all the candidate itemsets
and determine which are frequent in the entire
set.
• Key “monotonicity” idea: an itemset cannot be
frequent in the entire set of baskets unless it is
frequent in at least one subset.
• Why?
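A sketch of the two SON passes (chunking by slicing; reusing the earlier apriori() sketch):

def son(baskets, s, num_chunks):
    chunk_size = (len(baskets) + num_chunks - 1) // num_chunks
    candidates = set()
    for i in range(0, len(baskets), chunk_size):
        chunk = baskets[i:i + chunk_size]
        # Frequent in some chunk => candidate overall (monotonicity)
        candidates |= set(apriori(chunk, max(s // num_chunks, 1)))
    counts = dict.fromkeys(candidates, 0)
    for b in baskets:                       # second full pass
        for c in counts:
            if c <= b:
                counts[c] += 1
    return {c: n for c, n in counts.items() if n >= s}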
SON Algorithm – Distributed Version
• This idea lends itself to distributed data
mining.
• If baskets are distributed among many nodes,
compute frequent itemsets at each node, then
distribute the candidates from each node.
• Finally, accumulate the counts of all
candidates.
Toivonen’s Algorithm – (1)
• Start as in the simple sampling algorithm, but
lower the threshold slightly for the sample.
• Example: if the sample is 1% of the baskets, use s /125
as the support threshold rather than s /100.
• Goal is to avoid missing any itemset that is frequent in
the full set of baskets.
Toivonen’s Algorithm – (2)
• Add to the itemsets that are frequent in the
sample the negative border of these itemsets.
• An itemset is in the negative border if it is not
deemed frequent in the sample, but all its
immediate subsets are.
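Computing the negative border from the sample's frequent itemsets can be sketched as follows (frequent is a set of frozensets; items is the item universe; names illustrative):

def negative_border(frequent, items):
    # Candidates: every frequent itemset extended by one item, plus
    # all singletons (their only subset, the empty set, is always frequent)
    candidates = {f | {i} for f in frequent for i in items if i not in f}
    candidates |= {frozenset([i]) for i in items}
    border = set()
    for c in candidates:
        if c in frequent:
            continue
        if all(len(c) == 1 or (c - {x}) in frequent for x in c):
            border.add(c)   # infrequent, all immediate subsets frequent
    return border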
Reminder: Negative Border
• ABCD is in the negative border if and only if:
1. It is not frequent in the sample, but
2. All of ABC, BCD, ACD, and ABD are.
• A is in the negative border if and only if it is
not frequent in the sample.
  • Because the empty set is always frequent.
  • Unless there are fewer baskets than the support
threshold (silly case).
Picture of Negative Border
(The negative border surrounds the frequent itemsets from the sample at every level: singletons, pairs, triples, ...)
Toivonen’s Algorithm – (3)
• In a second pass, count all candidate frequent
itemsets from the first pass, and also count their
negative border.
• If no itemset from the negative border turns out to
be frequent, then the candidates found to be
frequent in the whole data are exactly the
frequent itemsets.
Toivonen’s Algorithm – (4)
• What if we find that something in the negative
border is actually frequent?
• We must start over again!
• Try to choose the support threshold so the
probability of failure is low, while the number of
itemsets checked on the second pass fits in main-
memory.
If Something in the Negative Border is Frequent . . .
(Same picture as before: we broke through the negative border. How far does the problem go?)
Theorem:
• If there is an itemset that is frequent in the whole,
but not frequent in the sample, then there is a
member of the negative border for the sample
that is frequent in the whole.
• Proof: Suppose not; i.e.,
1. There is an itemset S frequent in the whole but not
frequent in the sample, and
2. Nothing in the negative border is frequent in the
whole.
• Let T be a smallest subset of S that is not
frequent in the sample.
• T is frequent in the whole (S is frequent +
monotonicity).
• T is in the negative border (else not
“smallest”).
Example
null
AB AC AD AE BC BD BE CD CE DE
A B C D E
ABC ABD ABE ACD ACE ADE BCD BCE BDE CDE
ABCD ABCE ABDE ACDE BCDE
ABCDE
Border
THE FP-TREE AND THE
FP-GROWTH ALGORITHM
Slides from course lecture of E. Pitoura
Overview
• The FP-tree contains a compressed
representation of the transaction database.
• A trie (prefix-tree) data structure is used
• Each transaction is a path in the tree – paths can
overlap.
• Once the FP-tree is constructed, the recursive,
divide-and-conquer FP-Growth algorithm is used
to enumerate all frequent itemsets.
FP-tree Construction
• The FP-tree is a trie (prefix tree)
• Since transactions are sets of
items, we need to transform them
into ordered sequences so that
we can have prefixes
• Otherwise, there is no common prefix
between sets {A,B} and {B,C,A}
• We need to impose an order to
the items
• Initially, assume a lexicographic order.
TID Items
1 {A,B}
2 {B,C,D}
3 {A,C,D,E}
4 {A,D,E}
5 {A,B,C}
6 {A,B,C,D}
7 {B,C}
8 {A,B,C}
9 {A,B,D}
10 {B,C,E}
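A trie-based construction along these lines can be sketched in Python (node and header structures are illustrative; order is the chosen item ordering):

class FPNode:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}                  # item -> FPNode

def build_fptree(transactions, order):
    root, header = FPNode(None, None), {}
    for t in transactions:
        node = root
        # Sort each transaction by the global order so sets share prefixes
        for item in sorted(t, key=order.index):
            if item not in node.children:
                child = FPNode(item, node)
                node.children[item] = child
                header.setdefault(item, []).append(child)   # the pointers
            node = node.children[item]
            node.count += 1   # number of transactions through this node
    return root, header

# e.g. build_fptree(transactions, order=['A', 'B', 'C', 'D', 'E'])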
FP-tree Construction
• Initially the tree is empty
null
TID Items
1 {A,B}
2 {B,C,D}
3 {A,C,D,E}
4 {A,D,E}
5 {A,B,C}
6 {A,B,C,D}
7 {B,C}
8 {A,B,C}
9 {A,B,D}
10 {B,C,E}
FP-tree Construction
• Reading transaction TID = 1
• Each node in the tree has a label consisting of the item
and the support (number of transactions that reach that
node, i.e. follow that path)
TID Items
1 {A,B}
2 {B,C,D}
3 {A,C,D,E}
4 {A,D,E}
5 {A,B,C}
6 {A,B,C,D}
7 {B,C}
8 {A,B,C}
9 {A,B,D}
10 {B,C,E}
null
A:1
B:1
Node label = item:support
FP-tree Construction
• Reading transaction TID = 2
• We add pointers between nodes that refer to the
same item
TID Items
1 {A,B}
2 {B,C,D}
3 {A,C,D,E}
4 {A,D,E}
5 {A,B,C}
6 {A,B,C,D}
7 {B,C}
8 {A,B,C}
9 {A,B,D}
10 {B,C,E}
null
A:1
B:1
B:1
C:1
D:1
Each transaction is a path in the tree
FP-tree Construction
TID Items
1 {A,B}
2 {B,C,D}
3 {A,C,D,E}
4 {A,D,E}
5 {A,B,C}
6 {A,B,C,D}
7 {B,C}
8 {A,B,C}
9 {A,B,D}
10 {B,C,E}
null
A:1
B:1
B:1
C:1
D:1
After reading
transactions TID=1, 2:
Item Pointer
A
B
C
D
E
Header Table
The Header Table and the
pointers assist in
computing the itemset
support
FP-tree Construction
• Reading transaction TID = 3
TID Items
1 {A,B}
2 {B,C,D}
3 {A,C,D,E}
4 {A,D,E}
5 {A,B,C}
6 {A,B,C,D}
7 {B,C}
8 {A,B,C}
9 {A,B,D}
10 {B,C,E}
null
A:1
B:1
B:1
C:1
Item Pointer
A
B
C
D
E
A:1
D:1
FP-tree Construction
• Reading transaction TID = 3
TID Items
1 {A,B}
2 {B,C,D}
3 {A,C,D,E}
4 {A,D,E}
5 {A,B,C}
6 {A,B,C,D}
7 {B,C}
8 {A,B,C}
9 {A,B,D}
10 {B,C,E}
null
B:1
B:1
C:1
D:1
Item Pointer
A
B
C
D
E
A:2
C:1
D:1
E:1
Each transaction is a path in the tree
FP-Tree Construction
null
A:7
B:5
B:3
C:3
D:1
C:1
D:1
C:3
D:1
D:1
E:1
E:1
TID Items
1 {A,B}
2 {B,C,D}
3 {A,C,D,E}
4 {A,D,E}
5 {A,B,C}
6 {A,B,C,D}
7 {B,C}
8 {A,B,C}
9 {A,B,D}
10 {B,C,E}
Pointers are used to assist
frequent itemset generation
D:1
E:1
Transaction
Database
Item Pointer
A
B
C
D
E
Header table
Each transaction is a path in the tree
FP-tree size
• Every transaction is a path in the FP-tree
• The size of the tree depends on the
compressibility of the data
• Extreme case: All transactions are the same, the FP-
tree is a single branch
• Extreme case: All transactions are different; the size of
the tree is the same as that of the database (actually
bigger, since we need additional pointers)
Item ordering
• The size of the tree also depends on the ordering of the items.
• Heuristic: order the items according to their frequency, from
larger to smaller.
• We would need to do an extra pass over the dataset to count
frequencies
• Example:
TID Items
1 {A,B}
2 {B,C,D}
3 {A,C,D,E}
4 {A,D,E}
5 {A,B,C}
6 {A,B,C,D}
7 {B,C}
8 {A,B,C}
9 {A,B,D}
10 {B,C,E}
TID Items
1 {B,A}
2 {B,C,D}
3 {A,C,D,E}
4 {A,D,E}
5 {B,A,C}
6 {B,A,C,D}
7 {B,C}
8 {B,A,C}
9 {B,A,D}
10 {B,C,E}
σ(A)=7, σ(B)=8,
σ(C)=7, σ(D)=5,
σ(E)=3
Ordering: B, A, C, D, E
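The extra counting pass is a one-liner with a Counter (a sketch; ties between equally frequent items, like A and C above, are broken arbitrarily here):

from collections import Counter

def frequency_order(transactions):
    freq = Counter(item for t in transactions for item in t)
    return [item for item, _ in freq.most_common()]   # most frequent first

# For the table above this yields B, A, C, D, E (up to the A/C tie)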
Finding Frequent Itemsets
• Input: The FP-tree
• Output: All Frequent Itemsets and their support
• Method:
• Divide and Conquer:
• Consider all itemsets that end in: E, D, C, B, A
• For each possible ending item, consider the itemsets whose
next-to-last item is one of the items preceding it in the ordering
• E.g., for E, consider all itemsets with last item D, C, B, A. This
way we get all the itemsets ending at DE, CE, BE, AE
• Proceed recursively this way.
• Do this for all items.
Frequent itemsets
All Itemsets
Ε D C B A
DE CE BE AE CD BD AD BC AC AB
CDE BDE ADE BCE ACE ABE BCD ACD ABD ABC
ACDE BCDE ABDE ABCE ABCD
ABCDE
We can generate all itemsets this way
We expect the FP-tree to contain far fewer itemsets than the full lattice
Using the FP-tree to find frequent itemsets
null
A:7
B:5
B:3
C:3
D:1
C:1
D:1
C:3
D:1
D:1
E:1
E:1
TID Items
1 {A,B}
2 {B,C,D}
3 {A,C,D,E}
4 {A,D,E}
5 {A,B,C}
6 {A,B,C,D}
7 {B,C}
8 {A,B,C}
9 {A,B,D}
10 {B,C,E}
Bottom-up traversal of the tree.
First, itemsets ending in E, then in D,
etc.; each time we process a suffix-based class
D:1
E:1
Transaction
Database
Item Pointer
A
B
C
D
E
Header table
null
A:7
B:5
B:3
C:3
D:1
C:1
D:1
C:3
D:1
D:1
E:1 E:1
D:1
E:1
Item Pointer
A
B
C
D
E
Header table
Subproblem: find frequent
itemsets ending in E
• We will then see how to compute the support for the possible itemsets
Finding Frequent Itemsets
null
A:7
B:5
B:3
C:3
D:1
C:1
D:1
C:3
D:1
D:1
E:1 E:1
D:1
E:1
Item Pointer
A
B
C
D
E
Header table
Ending in D
Finding Frequent Itemsets
null
A:7
B:5
B:3
C:3
D:1
C:1
D:1
C:3
D:1
D:1
E:1 E:1
D:1
E:1
Item Pointer
A
B
C
D
E
Header table
Ending in C
Finding Frequent Itemsets
null
A:7
B:5
B:3
C:3
D:1
C:1
D:1
C:3
D:1
D:1
E:1 E:1
D:1
E:1
Item Pointer
A
B
C
D
E
Header table
Ending in B
Finding Frequent Itemsets
null
A:7
B:5
B:3
C:3
D:1
C:1
D:1
C:3
D:1
D:1
E:1 E:1
D:1
E:1
Item Pointer
A
B
C
D
E
Header table
Ending in A
Finding Frequent Itemsets
Algorithm
• For each suffix X
• Phase 1
• Construct the prefix tree for X as shown before, and
compute the support using the header table and the
pointers
• Phase 2
• If X is frequent, construct the conditional FP-tree for X in
the following steps
1. Recompute support
2. Prune infrequent items
3. Prune leaves and recurse
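The whole recursion can be sketched compactly if the tree is represented simply as its branches, i.e. a pattern base of (path, count) pairs, instead of linked nodes with header pointers. This is a simplification of the procedure above, with illustrative names:

def fpgrowth(pattern_base, minsup, suffix=frozenset()):
    # Phase 1: (re)compute item supports within this conditional base
    support = {}
    for path, count in pattern_base:
        for item in path:
            support[item] = support.get(item, 0) + count
    for item, s in support.items():
        if s < minsup:
            continue                 # prune infrequent items
        yield suffix | {item}, s     # a frequent itemset and its support
        # Phase 2: conditional pattern base for this item = the prefix of
        # every path containing it, carrying that path's count; recurse
        conditional = [(path[:path.index(item)], count)
                       for path, count in pattern_base if item in path]
        yield from fpgrowth(conditional, minsup, suffix | {item})

# Initial call: every transaction as a path sorted by the item order, e.g.
# base = [(tuple(sorted(t, key=order.index)), 1) for t in transactions]
# dict(fpgrowth(base, 2)) maps each frequent itemset to its support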
null
A:7
B:5
B:3
C:3
D:1
C:1
D:1
C:3
D:1
D:1
E:1 E:1
D:1
E:1
Item Pointer
A
B
C
D
E
Header table
Phase 1 – construct
prefix tree
Find all prefix paths that
contain E
Prefix paths for E:
{A,C,D,E}, {A,D,E}, {B,C,E}
Example
null
A:7 B:3
C:3
C:1
D:1
D:1
E:1 E:1
E:1
Phase 1 – construct
prefix tree
Find all prefix paths that
contain E
Prefix Paths for E:
{A,C,D,E}, {A,D,E}, {B,C,E}
Example
null
A:7 B:3
C:3
C:1
D:1
D:1
E:1 E:1
E:1
Compute Support for E
(minsup = 2)
How?
Follow pointers while
summing up counts:
1+1+1 = 3 > 2
E is frequent
{E} is frequent so we can now consider suffixes DE, CE, BE, AE
Example
null
A:7 B:3
C:3
C:1
D:1
D:1
E:1 E:1
E:1
Phase 2
Convert the prefix tree of E into a
conditional FP-tree
Two changes
(1) Recompute support
(2) Prune infrequent
Example
E is frequent so we proceed with Phase 2
null
A:7 B:3
C:3
C:1
D:1
D:1
E:1 E:1
E:1
Example
Recompute Support
The support counts for some of the
nodes include transactions that do
not end in E
For example in null->B->C->E we
count {B, C}
The support of any node is equal to
the sum of the support of leaves
with label E in its subtree
null
B:3
C:3
C:1
D:1
D:1
E:1 E:1
E:1
A:7
Example
null
B:3
C:1
C:1
D:1
D:1
E:1 E:1
E:1
A:7
Example
null
A:7 B:1
C:1
C:1
D:1
D:1
E:1 E:1
E:1
Example
null
A:2 B:1
C:1
C:1
D:1
D:1
E:1 E:1
E:1
Example
null
A:2 B:1
C:1
C:1
D:1
D:1
E:1 E:1
E:1
Truncate
Delete the nodes of E
Example
null
A:2 B:1
C:1
C:1
D:1
D:1
Truncate
Delete the nodes of E
Example
null
A:2 B:1
C:1
C:1
D:1
D:1
Prune infrequent
In the conditional FP-tree
some nodes may have
support less than minsup
e.g., B needs to be
pruned
This means that B
appears with E less than
minsup times
Example
null
A:2 C:1
C:1
D:1
D:1
Example
null
A:2 C:1
C:1
D:1
D:1
The conditional FP-tree for E
Repeat the algorithm for {D, E}, {C, E}, {A, E}
Example
Phase 1
Find all prefix paths that contain D (DE) in the conditional FP-tree
null
A:2
C:1
D:1
D:1
Example
Compute the support of {D,E} by following the pointers in the tree
1+1 = 2 ≥ 2 = minsup
{D,E} is frequent
null
A:2
C:1
D:1
D:1
Example
Phase 2
Construct the conditional FP-tree
1. Recompute Support
2. Prune nodes
null
A:2
C:1
D:1
D:1
Example
Recompute support
null
A:2
C:1
D:1
D:1
Example
Prune nodes
null
A:2
C:1
Example
Prune nodes
null
A:2
C:1 Small support
Example
Prune nodes
null
A:2
Example
Final conditional FP-tree for {D,E}
The support of A is ≥ minsup so {A,D,E} is frequent
Since the tree has a single node we return to the next
subproblem
null
A:2 C:1
C:1
D:1
D:1
Example
The conditional FP-tree for E
We repeat the algorithm for {D,E}, {C,E}, {A,E}
null
A:2 C:1
C:1
D:1
D:1
Example
Phase 1
Find all prefix paths that contain C (CE) in the conditional FP-tree
null
A:2 C:1
C:1
Example
Compute the support of {C,E} by following the pointers in the tree
1+1 = 2 ≥ 2 = minsup
{C,E} is frequent
null
A:2 C:1
C:1
Example
Phase 2
Construct the conditional FP-tree
1. Recompute Support
2. Prune nodes
null
A:1 C:1
C:1
Example
Recompute support
null
A:1 C:1
C:1
Example
Prune nodes
null
A:1
Example
Prune nodes
null
Example
Prune nodes
Return to the previous subproblem
null
A:2 C:1
C:1
D:1
D:1
Example
The conditional FP-tree for E
We repeat the algorithm for {D,E}, {C,E}, {A,E}
null
A:2 C:1
C:1
D:1
D:1
Example
Phase 1
Find all prefix paths that contain A (AE) in the conditional FP-tree
null
A:2
Example
Compute the support of {A,E} by following the pointers in the tree
2 ≥ minsup
{A,E} is frequent
There is no conditional FP-tree for {A,E}
Example
• So for E we have the following frequent itemsets
{E}, {D,E}, {C,E}, {A,E}
• We proceed with D
null
A:7
B:5
B:3
C:3
D:1
C:1
D:1
C:3
D:1
D:1
E:1 E:1
D:1
E:1
Item Pointer
A
B
C
D
E
Header table
Ending in D
Example
null
A:7
B:5
B:3
C:3
D:1
C:1
D:1
C:3
D:1
D:1
D:1
Phase 1 – construct
prefix tree
Find all prefix paths that
contain D
Support 5 > minsup, D is
frequent
Phase 2
Convert prefix tree into
conditional FP-tree
Example
null
A:7
B:5
B:3
C:3
D:1
C:1
D:1
C:1
D:1
D:1
D:1
Recompute support
Example
null
A:7
B:2
B:3
C:3
D:1
C:1
D:1
C:1
D:1
D:1
D:1
Recompute support
Example
null
A:3
B:2
B:3
C:3
D:1
C:1
D:1
C:1
D:1
D:1
D:1
Recompute support
Example
null
A:3
B:2
B:3
C:1
D:1
C:1
D:1
C:1
D:1
D:1
D:1
Recompute support
Example
null
A:3
B:2
B:1
C:1
D:1
C:1
D:1
C:1
D:1
D:1
D:1
Recompute support
Example
null
A:3
B:2
B:1
C:1
D:1
C:1
D:1
C:1
D:1
D:1
D:1
Prune nodes
Example
null
A:3
B:2
B:1
C:1
C:1
C:1
Construct conditional FP-trees for {C,D}, {B,D}, {A,D}
And so on….
Example
Observations
• At each recursive step we solve a subproblem
• Construct the prefix tree
• Compute the new support
• Prune nodes
• Subproblems are disjoint so we never consider
the same itemset twice
• Support computation is efficient – happens
together with the computation of the frequent
itemsets.
Observations
• The efficiency of the algorithm depends on the
compaction factor of the dataset
• If the tree is bushy then the algorithm does not
work well, as it greatly increases the number of
subproblems that need to be solved.
FREQUENT ITEMSET
RESEARCH
Data Mining Lecture_4.pptx

More Related Content

What's hot

Cap. I - Fotogrametria II
Cap. I - Fotogrametria IICap. I - Fotogrametria II
Cap. I - Fotogrametria IIUFPR
 
Unutarnji dijelovi računala 1
Unutarnji dijelovi računala 1Unutarnji dijelovi računala 1
Unutarnji dijelovi računala 1LovroBevanda
 
Prática de laboratório utilizando views, stored procedures e triggers
Prática de laboratório   utilizando views, stored procedures e triggersPrática de laboratório   utilizando views, stored procedures e triggers
Prática de laboratório utilizando views, stored procedures e triggersDaniel Maia
 
Telekomunikasi Analog & Digital - Slide week 4 - modulasi amplitudo
Telekomunikasi Analog & Digital - Slide week 4 - modulasi amplitudoTelekomunikasi Analog & Digital - Slide week 4 - modulasi amplitudo
Telekomunikasi Analog & Digital - Slide week 4 - modulasi amplitudoBeny Nugraha
 
Utilizando views, stored procedures e triggers
Utilizando views, stored procedures e triggersUtilizando views, stored procedures e triggers
Utilizando views, stored procedures e triggersDaniel Maia
 
Modulasi frekuensi dan modulasi phase (Fm dan Pm)
Modulasi frekuensi dan modulasi phase (Fm dan Pm)Modulasi frekuensi dan modulasi phase (Fm dan Pm)
Modulasi frekuensi dan modulasi phase (Fm dan Pm)Ishardi Nassogi
 
Deep Learning A-Z™: Regression & Classification - Simple Linear Regression - ...
Deep Learning A-Z™: Regression & Classification - Simple Linear Regression - ...Deep Learning A-Z™: Regression & Classification - Simple Linear Regression - ...
Deep Learning A-Z™: Regression & Classification - Simple Linear Regression - ...Kirill Eremenko
 
Konsep Sinyal dan Sistem
Konsep Sinyal dan SistemKonsep Sinyal dan Sistem
Konsep Sinyal dan Sistemyusufbf
 
Data warehousing - Técnicas e procedimentos
Data warehousing - Técnicas e procedimentosData warehousing - Técnicas e procedimentos
Data warehousing - Técnicas e procedimentosMarcos Pessoa
 
Grafos De Bruijn para identificação de variações genéticas usando GBS
Grafos De Bruijn para identificação de variações genéticas usando GBSGrafos De Bruijn para identificação de variações genéticas usando GBS
Grafos De Bruijn para identificação de variações genéticas usando GBSMarcos Castro
 
Introdução à visualização espacial em R
Introdução à visualização espacial em RIntrodução à visualização espacial em R
Introdução à visualização espacial em RVitor Vieira Vasconcelos
 
Informe ospf
Informe ospfInforme ospf
Informe ospfEFJJavier
 
Rangkaian sekuensial
Rangkaian sekuensialRangkaian sekuensial
Rangkaian sekuensialKhairil Anwar
 
Banco de Dados II Aula 14 - Projeto de Banco de Dados e Estudo de Caso (Postg...
Banco de Dados II Aula 14 - Projeto de Banco de Dados e Estudo de Caso (Postg...Banco de Dados II Aula 14 - Projeto de Banco de Dados e Estudo de Caso (Postg...
Banco de Dados II Aula 14 - Projeto de Banco de Dados e Estudo de Caso (Postg...Leinylson Fontinele
 
Bab 8 kode konverter (pertemuan 11, 12)
Bab 8 kode konverter (pertemuan 11, 12)Bab 8 kode konverter (pertemuan 11, 12)
Bab 8 kode konverter (pertemuan 11, 12)personal
 

What's hot (20)

Cap. I - Fotogrametria II
Cap. I - Fotogrametria IICap. I - Fotogrametria II
Cap. I - Fotogrametria II
 
PostgreSQL
PostgreSQLPostgreSQL
PostgreSQL
 
Unutarnji dijelovi računala 1
Unutarnji dijelovi računala 1Unutarnji dijelovi računala 1
Unutarnji dijelovi računala 1
 
Prática de laboratório utilizando views, stored procedures e triggers
Prática de laboratório   utilizando views, stored procedures e triggersPrática de laboratório   utilizando views, stored procedures e triggers
Prática de laboratório utilizando views, stored procedures e triggers
 
Telekomunikasi Analog & Digital - Slide week 4 - modulasi amplitudo
Telekomunikasi Analog & Digital - Slide week 4 - modulasi amplitudoTelekomunikasi Analog & Digital - Slide week 4 - modulasi amplitudo
Telekomunikasi Analog & Digital - Slide week 4 - modulasi amplitudo
 
Utilizando views, stored procedures e triggers
Utilizando views, stored procedures e triggersUtilizando views, stored procedures e triggers
Utilizando views, stored procedures e triggers
 
Modulasi frekuensi dan modulasi phase (Fm dan Pm)
Modulasi frekuensi dan modulasi phase (Fm dan Pm)Modulasi frekuensi dan modulasi phase (Fm dan Pm)
Modulasi frekuensi dan modulasi phase (Fm dan Pm)
 
Deep Learning A-Z™: Regression & Classification - Simple Linear Regression - ...
Deep Learning A-Z™: Regression & Classification - Simple Linear Regression - ...Deep Learning A-Z™: Regression & Classification - Simple Linear Regression - ...
Deep Learning A-Z™: Regression & Classification - Simple Linear Regression - ...
 
Konsep Sinyal dan Sistem
Konsep Sinyal dan SistemKonsep Sinyal dan Sistem
Konsep Sinyal dan Sistem
 
Data warehousing - Técnicas e procedimentos
Data warehousing - Técnicas e procedimentosData warehousing - Técnicas e procedimentos
Data warehousing - Técnicas e procedimentos
 
Grafos De Bruijn para identificação de variações genéticas usando GBS
Grafos De Bruijn para identificação de variações genéticas usando GBSGrafos De Bruijn para identificação de variações genéticas usando GBS
Grafos De Bruijn para identificação de variações genéticas usando GBS
 
Slide minggu ke 4 pertemuan 1
Slide minggu ke 4 pertemuan 1Slide minggu ke 4 pertemuan 1
Slide minggu ke 4 pertemuan 1
 
Introdução à visualização espacial em R
Introdução à visualização espacial em RIntrodução à visualização espacial em R
Introdução à visualização espacial em R
 
Dispositivos de Bloco
Dispositivos de BlocoDispositivos de Bloco
Dispositivos de Bloco
 
Informe ospf
Informe ospfInforme ospf
Informe ospf
 
POO - 21 - Java e Banco de Dados
POO - 21 - Java e Banco de DadosPOO - 21 - Java e Banco de Dados
POO - 21 - Java e Banco de Dados
 
Rangkaian sekuensial
Rangkaian sekuensialRangkaian sekuensial
Rangkaian sekuensial
 
Banco de Dados II Aula 14 - Projeto de Banco de Dados e Estudo de Caso (Postg...
Banco de Dados II Aula 14 - Projeto de Banco de Dados e Estudo de Caso (Postg...Banco de Dados II Aula 14 - Projeto de Banco de Dados e Estudo de Caso (Postg...
Banco de Dados II Aula 14 - Projeto de Banco de Dados e Estudo de Caso (Postg...
 
Dsp manual print
Dsp manual printDsp manual print
Dsp manual print
 
Bab 8 kode konverter (pertemuan 11, 12)
Bab 8 kode konverter (pertemuan 11, 12)Bab 8 kode konverter (pertemuan 11, 12)
Bab 8 kode konverter (pertemuan 11, 12)
 

Similar to Data Mining Lecture_4.pptx

Association Analysis in Data Mining
Association Analysis in Data MiningAssociation Analysis in Data Mining
Association Analysis in Data MiningKamal Acharya
 
The comparative study of apriori and FP-growth algorithm
The comparative study of apriori and FP-growth algorithmThe comparative study of apriori and FP-growth algorithm
The comparative study of apriori and FP-growth algorithmdeepti92pawar
 
Rules of data mining
Rules of data miningRules of data mining
Rules of data miningSulman Ahmed
 
Apriori algorithm
Apriori algorithm Apriori algorithm
Apriori algorithm DHIVYADEVAKI
 
Market Basket Analysis in SQL Server Machine Learning Services
Market Basket Analysis in SQL Server Machine Learning ServicesMarket Basket Analysis in SQL Server Machine Learning Services
Market Basket Analysis in SQL Server Machine Learning ServicesLuca Zavarella
 
DM -Unit 2-PPT.ppt
DM -Unit 2-PPT.pptDM -Unit 2-PPT.ppt
DM -Unit 2-PPT.pptraju980973
 
Mining Frequent Patterns, Association and Correlations
Mining Frequent Patterns, Association and CorrelationsMining Frequent Patterns, Association and Correlations
Mining Frequent Patterns, Association and CorrelationsJustin Cletus
 
Rules of data mining
Rules of data miningRules of data mining
Rules of data miningSulman Ahmed
 
AssociationRule.pdf
AssociationRule.pdfAssociationRule.pdf
AssociationRule.pdfWailaBaba
 
ASSOCIATION Rule plus MArket basket Analysis.pptx
ASSOCIATION Rule plus MArket basket Analysis.pptxASSOCIATION Rule plus MArket basket Analysis.pptx
ASSOCIATION Rule plus MArket basket Analysis.pptxSherishJaved
 
SPIE Conference V3.0
SPIE Conference V3.0SPIE Conference V3.0
SPIE Conference V3.0Robert Fry
 
Strata NYC 2015: Sketching Big Data with Spark: randomized algorithms for lar...
Strata NYC 2015: Sketching Big Data with Spark: randomized algorithms for lar...Strata NYC 2015: Sketching Big Data with Spark: randomized algorithms for lar...
Strata NYC 2015: Sketching Big Data with Spark: randomized algorithms for lar...Databricks
 
Mining Frequent Patterns And Association Rules
Mining Frequent Patterns And Association RulesMining Frequent Patterns And Association Rules
Mining Frequent Patterns And Association RulesRashmi Bhat
 
Sean Kandel - Data profiling: Assessing the overall content and quality of a ...
Sean Kandel - Data profiling: Assessing the overall content and quality of a ...Sean Kandel - Data profiling: Assessing the overall content and quality of a ...
Sean Kandel - Data profiling: Assessing the overall content and quality of a ...huguk
 

Similar to Data Mining Lecture_4.pptx (20)

Data Mining Lecture_3.pptx
Data Mining Lecture_3.pptxData Mining Lecture_3.pptx
Data Mining Lecture_3.pptx
 
Association Analysis in Data Mining
Association Analysis in Data MiningAssociation Analysis in Data Mining
Association Analysis in Data Mining
 
The comparative study of apriori and FP-growth algorithm
The comparative study of apriori and FP-growth algorithmThe comparative study of apriori and FP-growth algorithm
The comparative study of apriori and FP-growth algorithm
 
Rules of data mining
Rules of data miningRules of data mining
Rules of data mining
 
06FPBasic02.pdf
06FPBasic02.pdf06FPBasic02.pdf
06FPBasic02.pdf
 
Apriori algorithm
Apriori algorithm Apriori algorithm
Apriori algorithm
 
Market Basket Analysis in SQL Server Machine Learning Services
Market Basket Analysis in SQL Server Machine Learning ServicesMarket Basket Analysis in SQL Server Machine Learning Services
Market Basket Analysis in SQL Server Machine Learning Services
 
DM -Unit 2-PPT.ppt
DM -Unit 2-PPT.pptDM -Unit 2-PPT.ppt
DM -Unit 2-PPT.ppt
 
Mining Frequent Patterns, Association and Correlations
Mining Frequent Patterns, Association and CorrelationsMining Frequent Patterns, Association and Correlations
Mining Frequent Patterns, Association and Correlations
 
DMML4_apriori (1).ppt
DMML4_apriori (1).pptDMML4_apriori (1).ppt
DMML4_apriori (1).ppt
 
Rules of data mining
Rules of data miningRules of data mining
Rules of data mining
 
AssociationRule.pdf
AssociationRule.pdfAssociationRule.pdf
AssociationRule.pdf
 
My6asso
My6assoMy6asso
My6asso
 
ASSOCIATION Rule plus MArket basket Analysis.pptx
ASSOCIATION Rule plus MArket basket Analysis.pptxASSOCIATION Rule plus MArket basket Analysis.pptx
ASSOCIATION Rule plus MArket basket Analysis.pptx
 
SPIE Conference V3.0
SPIE Conference V3.0SPIE Conference V3.0
SPIE Conference V3.0
 
Association-Analysis.pdf
Association-Analysis.pdfAssociation-Analysis.pdf
Association-Analysis.pdf
 
Strata NYC 2015: Sketching Big Data with Spark: randomized algorithms for lar...
Strata NYC 2015: Sketching Big Data with Spark: randomized algorithms for lar...Strata NYC 2015: Sketching Big Data with Spark: randomized algorithms for lar...
Strata NYC 2015: Sketching Big Data with Spark: randomized algorithms for lar...
 
Apriori algorithm
Apriori algorithmApriori algorithm
Apriori algorithm
 
Mining Frequent Patterns And Association Rules
Mining Frequent Patterns And Association RulesMining Frequent Patterns And Association Rules
Mining Frequent Patterns And Association Rules
 
Sean Kandel - Data profiling: Assessing the overall content and quality of a ...
Sean Kandel - Data profiling: Assessing the overall content and quality of a ...Sean Kandel - Data profiling: Assessing the overall content and quality of a ...
Sean Kandel - Data profiling: Assessing the overall content and quality of a ...
 

More from Subrata Kumer Paul

Chapter 9. Classification Advanced Methods.ppt
Chapter 9. Classification Advanced Methods.pptChapter 9. Classification Advanced Methods.ppt
Chapter 9. Classification Advanced Methods.pptSubrata Kumer Paul
 
Chapter 13. Trends and Research Frontiers in Data Mining.ppt
Chapter 13. Trends and Research Frontiers in Data Mining.pptChapter 13. Trends and Research Frontiers in Data Mining.ppt
Chapter 13. Trends and Research Frontiers in Data Mining.pptSubrata Kumer Paul
 
Chapter 8. Classification Basic Concepts.ppt
Chapter 8. Classification Basic Concepts.pptChapter 8. Classification Basic Concepts.ppt
Chapter 8. Classification Basic Concepts.pptSubrata Kumer Paul
 
Chapter 12. Outlier Detection.ppt
Chapter 12. Outlier Detection.pptChapter 12. Outlier Detection.ppt
Chapter 12. Outlier Detection.pptSubrata Kumer Paul
 
Chapter 7. Advanced Frequent Pattern Mining.ppt
Chapter 7. Advanced Frequent Pattern Mining.pptChapter 7. Advanced Frequent Pattern Mining.ppt
Chapter 7. Advanced Frequent Pattern Mining.pptSubrata Kumer Paul
 
Chapter 11. Cluster Analysis Advanced Methods.ppt
Chapter 11. Cluster Analysis Advanced Methods.pptChapter 11. Cluster Analysis Advanced Methods.ppt
Chapter 11. Cluster Analysis Advanced Methods.pptSubrata Kumer Paul
 
Chapter 10. Cluster Analysis Basic Concepts and Methods.ppt
Chapter 10. Cluster Analysis Basic Concepts and Methods.pptChapter 10. Cluster Analysis Basic Concepts and Methods.ppt
Chapter 10. Cluster Analysis Basic Concepts and Methods.pptSubrata Kumer Paul
 
Chapter 6. Mining Frequent Patterns, Associations and Correlations Basic Conc...
Chapter 6. Mining Frequent Patterns, Associations and Correlations Basic Conc...Chapter 6. Mining Frequent Patterns, Associations and Correlations Basic Conc...
Chapter 6. Mining Frequent Patterns, Associations and Correlations Basic Conc...Subrata Kumer Paul
 
Chapter 5. Data Cube Technology.ppt
Chapter 5. Data Cube Technology.pptChapter 5. Data Cube Technology.ppt
Chapter 5. Data Cube Technology.pptSubrata Kumer Paul
 
Chapter 3. Data Preprocessing.ppt
Chapter 3. Data Preprocessing.pptChapter 3. Data Preprocessing.ppt
Chapter 3. Data Preprocessing.pptSubrata Kumer Paul
 
Chapter 4. Data Warehousing and On-Line Analytical Processing.ppt
Chapter 4. Data Warehousing and On-Line Analytical Processing.pptChapter 4. Data Warehousing and On-Line Analytical Processing.ppt
Chapter 4. Data Warehousing and On-Line Analytical Processing.pptSubrata Kumer Paul
 
Data Mining Lecture_10(b).pptx
Data Mining Lecture_10(b).pptxData Mining Lecture_10(b).pptx
Data Mining Lecture_10(b).pptxSubrata Kumer Paul
 

More from Subrata Kumer Paul (20)

Chapter 9. Classification Advanced Methods.ppt
Chapter 9. Classification Advanced Methods.pptChapter 9. Classification Advanced Methods.ppt
Chapter 9. Classification Advanced Methods.ppt
 
Chapter 13. Trends and Research Frontiers in Data Mining.ppt
Chapter 13. Trends and Research Frontiers in Data Mining.pptChapter 13. Trends and Research Frontiers in Data Mining.ppt
Chapter 13. Trends and Research Frontiers in Data Mining.ppt
 
Chapter 8. Classification Basic Concepts.ppt
Chapter 8. Classification Basic Concepts.pptChapter 8. Classification Basic Concepts.ppt
Chapter 8. Classification Basic Concepts.ppt
 
Chapter 2. Know Your Data.ppt
Chapter 2. Know Your Data.pptChapter 2. Know Your Data.ppt
Chapter 2. Know Your Data.ppt
 
Chapter 12. Outlier Detection.ppt
Chapter 12. Outlier Detection.pptChapter 12. Outlier Detection.ppt
Chapter 12. Outlier Detection.ppt
 
Chapter 7. Advanced Frequent Pattern Mining.ppt
Chapter 7. Advanced Frequent Pattern Mining.pptChapter 7. Advanced Frequent Pattern Mining.ppt
Chapter 7. Advanced Frequent Pattern Mining.ppt
 
Chapter 11. Cluster Analysis Advanced Methods.ppt
Chapter 11. Cluster Analysis Advanced Methods.pptChapter 11. Cluster Analysis Advanced Methods.ppt
Chapter 11. Cluster Analysis Advanced Methods.ppt
 
Chapter 10. Cluster Analysis Basic Concepts and Methods.ppt
Chapter 10. Cluster Analysis Basic Concepts and Methods.pptChapter 10. Cluster Analysis Basic Concepts and Methods.ppt
Chapter 10. Cluster Analysis Basic Concepts and Methods.ppt
 
Chapter 6. Mining Frequent Patterns, Associations and Correlations Basic Conc...
Chapter 6. Mining Frequent Patterns, Associations and Correlations Basic Conc...Chapter 6. Mining Frequent Patterns, Associations and Correlations Basic Conc...
Chapter 6. Mining Frequent Patterns, Associations and Correlations Basic Conc...
 
Chapter 5. Data Cube Technology.ppt
Chapter 5. Data Cube Technology.pptChapter 5. Data Cube Technology.ppt
Chapter 5. Data Cube Technology.ppt
 
Chapter 3. Data Preprocessing.ppt
Chapter 3. Data Preprocessing.pptChapter 3. Data Preprocessing.ppt
Chapter 3. Data Preprocessing.ppt
 
Chapter 4. Data Warehousing and On-Line Analytical Processing.ppt
Chapter 4. Data Warehousing and On-Line Analytical Processing.pptChapter 4. Data Warehousing and On-Line Analytical Processing.ppt
Chapter 4. Data Warehousing and On-Line Analytical Processing.ppt
 
Chapter 1. Introduction.ppt
Chapter 1. Introduction.pptChapter 1. Introduction.ppt
Chapter 1. Introduction.ppt
 
Data Mining Lecture_8(a).pptx
Data Mining Lecture_8(a).pptxData Mining Lecture_8(a).pptx
Data Mining Lecture_8(a).pptx
 
Data Mining Lecture_13.pptx
Data Mining Lecture_13.pptxData Mining Lecture_13.pptx
Data Mining Lecture_13.pptx
 
Data Mining Lecture_9.pptx
Data Mining Lecture_9.pptxData Mining Lecture_9.pptx
Data Mining Lecture_9.pptx
 
Data Mining Lecture_7.pptx
Data Mining Lecture_7.pptxData Mining Lecture_7.pptx
Data Mining Lecture_7.pptx
 
Data Mining Lecture_10(b).pptx
Data Mining Lecture_10(b).pptxData Mining Lecture_10(b).pptx
Data Mining Lecture_10(b).pptx
 
Data Mining Lecture_8(b).pptx
Data Mining Lecture_8(b).pptxData Mining Lecture_8(b).pptx
Data Mining Lecture_8(b).pptx
 
Data Mining Lecture_6.pptx
Data Mining Lecture_6.pptxData Mining Lecture_6.pptx
Data Mining Lecture_6.pptx
 

Recently uploaded

HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
 
Introduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECHIntroduction to Machine Learning Unit-3 for II MECH
Introduction to Machine Learning Unit-3 for II MECHC Sai Kiran
 
An experimental study in using natural admixture as an alternative for chemic...
An experimental study in using natural admixture as an alternative for chemic...An experimental study in using natural admixture as an alternative for chemic...
An experimental study in using natural admixture as an alternative for chemic...Chandu841456
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxJoão Esperancinha
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AIabhishek36461
 
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)Dr SOUNDIRARAJ N
 
pipeline in computer architecture design
pipeline in computer architecture  designpipeline in computer architecture  design
pipeline in computer architecture designssuser87fa0c1
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionDr.Costas Sachpazis
 
Data Mining Lecture_4.pptx

  • 1. DATA MINING LECTURE 4 Frequent Itemsets, Association Rules Evaluation Alternative Algorithms Subrata Kumer Paul Assistant Professor, Dept. of CSE, BAUET sksubrata96@gmail.com
  • 3. Mining Frequent Itemsets • Itemset • A collection of one or more items • Example: {Milk, Bread, Diaper} • k-itemset • An itemset that contains k items • Support (σ) • Count: Frequency of occurrence of an itemset • E.g. σ({Milk, Bread, Diaper}) = 2 • Fraction: Fraction of transactions that contain an itemset • E.g. s({Milk, Bread, Diaper}) = 40% • Frequent Itemset • An itemset whose support is greater than or equal to a minsup threshold, s(I) ≥ minsup • Problem Definition • Input: A set of transactions T, over a set of items I, minsup value • Output: All itemsets with items in I having s(I) ≥ minsup TID Items 1 Bread, Milk 2 Bread, Diaper, Beer, Eggs 3 Milk, Diaper, Beer, Coke 4 Bread, Milk, Diaper, Beer 5 Bread, Milk, Diaper, Coke
  • 4. The itemset lattice null AB AC AD AE BC BD BE CD CE DE A B C D E ABC ABD ABE ACD ACE ADE BCD BCE BDE CDE ABCD ABCE ABDE ACDE BCDE ABCDE Given d items, there are 2^d possible itemsets Too expensive to test all!
  • 5. The Apriori Principle • Apriori principle (Main observation): – If an itemset is frequent, then all of its subsets must also be frequent – If an itemset is not frequent, then all of its supersets cannot be frequent – The support of an itemset never exceeds the support of its subsets – This is known as the anti-monotone property of support: ∀ X, Y: (X ⊆ Y) ⇒ s(X) ≥ s(Y)
  • 6. Illustration of the Apriori principle Found to be frequent Frequent subsets
  • 7. Illustration of the Apriori principle: in the itemset lattice, once an itemset is found to be infrequent, all of its supersets are pruned (infrequent supersets)
  • 8. R. Agrawal, R. Srikant: "Fast Algorithms for Mining Association Rules", Proc. of the 20th Int'l Conference on Very Large Databases, 1994. The Apriori algorithm Level-wise approach Ck = candidate itemsets of size k Lk = frequent itemsets of size k Candidate generation Frequent itemset generation 1. k = 1, C1 = all items 2. While Ck not empty 3. Scan the database to find which itemsets in Ck are frequent and put them into Lk 4. Use Lk to generate a collection of candidate itemsets Ck+1 of size k+1 5. k = k+1
  • 9. Candidate Generation • Basic principle (Apriori): • An itemset of size k+1 is a candidate to be frequent only if all of its subsets of size k are known to be frequent • Main idea: • Construct a candidate of size k+1 by combining two frequent itemsets of size k • Prune the generated (k+1)-itemsets for which not all k-subsets are frequent
  • 10. Computing Frequent Itemsets • Given the set of candidate itemsets Ck, we need to compute the support and find the frequent itemsets Lk. • Scan the data, and use a hash structure to keep a counter for each candidate itemset that appears in the data TID Items 1 Bread, Milk 2 Bread, Diaper, Beer, Eggs 3 Milk, Diaper, Beer, Coke 4 Bread, Milk, Diaper, Beer 5 Bread, Milk, Diaper, Coke (Diagram: N transactions are streamed against a hash structure with k buckets holding the candidates Ck)
  • 11. A simple hash structure • Create a dictionary (hash table) that stores the candidate itemsets as keys, and the number of appearances as the value. • Initialize with zero • Increment the counter for each itemset that you see in the data
  • 12. Example Suppose you have 15 candidate itemsets of length 3: {1 4 5}, {1 2 4}, {4 5 7}, {1 2 5}, {4 5 8}, {1 5 9}, {1 3 6}, {2 3 4}, {5 6 7}, {3 4 5}, {3 5 6}, {3 5 7}, {6 8 9}, {3 6 7}, {3 6 8} Hash table stores the counts of the candidate itemsets as they have been computed so far Key Value {3 6 7} 0 {3 4 5} 1 {1 3 6} 3 {1 4 5} 5 {2 3 4} 2 {1 5 9} 1 {3 6 8} 0 {4 5 7} 2 {6 8 9} 0 {5 6 7} 3 {1 2 4} 8 {3 5 7} 1 {1 2 5} 0 {3 5 6} 1 {4 5 8} 0
  • 13. Example Tuple {1,2,3,5,6} generates the following itemsets of length 3: {1 2 3}, {1 2 5}, {1 2 6}, {1 3 5}, {1 3 6}, {1 5 6}, {2 3 5}, {2 3 6}, {3 5 6}, Increment the counters for the itemsets in the dictionary Key Value {3 6 7} 0 {3 4 5} 1 {1 3 6} 3 {1 4 5} 5 {2 3 4} 2 {1 5 9} 1 {3 6 8} 0 {4 5 7} 2 {6 8 9} 0 {5 6 7} 3 {1 2 4} 8 {3 5 7} 1 {1 2 5} 0 {3 5 6} 1 {4 5 8} 0
  • 14. Example Tuple {1,2,3,5,6} generates the following itemsets of length 3: {1 2 3}, {1 2 5}, {1 2 6}, {1 3 5}, {1 3 6}, {1 5 6}, {2 3 5}, {2 3 6}, {3 5 6}, Increment the counters for the itemsets in the dictionary Key Value {3 6 7} 0 {3 4 5} 1 {1 3 6} 4 {1 4 5} 5 {2 3 4} 2 {1 5 9} 1 {3 6 8} 0 {4 5 7} 2 {6 8 9} 0 {5 6 7} 3 {1 2 4} 8 {3 5 7} 1 {1 2 5} 1 {3 5 6} 2 {4 5 8} 0
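A minimal Python sketch of this counting scheme (illustrative, not from the slides): the candidates are stored as sorted tuples in a dictionary, and for each transaction we enumerate its 3-subsets and increment only the ones that are candidates.

    from itertools import combinations

    candidates = [(1,4,5), (1,2,4), (4,5,7), (1,2,5), (4,5,8),
                  (1,5,9), (1,3,6), (2,3,4), (5,6,7), (3,4,5),
                  (3,5,6), (3,5,7), (6,8,9), (3,6,7), (3,6,8)]
    counts = {c: 0 for c in candidates}            # initialize with zero

    for t in [(1,2,3,5,6)]:                        # e.g. the tuple above
        for subset in combinations(sorted(t), 3):  # all 3-subsets of t
            if subset in counts:                   # count candidates only
                counts[subset] += 1                # hits {1,3,6},{1,2,5},{3,5,6}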
  • 15. Mining Association Rules • Association Rule – An implication expression of the form X → Y, where X and Y are itemsets – Example: {Milk, Diaper} → {Beer} • Rule Evaluation Metrics – Support (s) • Fraction of transactions that contain both X and Y = the probability P(X,Y) that X and Y occur together • s = σ(Milk, Diaper, Beer) / |T| = 2/5 = 0.4 – Confidence (c) • How often Y appears in transactions that contain X = the conditional probability P(Y|X) that Y occurs given that X has occurred • c = σ(Milk, Diaper, Beer) / σ(Milk, Diaper) = 2/3 ≈ 0.67 TID Items 1 Bread, Milk 2 Bread, Diaper, Beer, Eggs 3 Milk, Diaper, Beer, Coke 4 Bread, Milk, Diaper, Beer 5 Bread, Milk, Diaper, Coke • Problem Definition – Input: A set of transactions T, over a set of items I, minsup, minconf values – Output: All rules with items in I having s ≥ minsup and c ≥ minconf
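To make the two metrics concrete, here is a small sketch (assumptions: transactions as Python sets, support as a fraction) computing s and c for {Milk, Diaper} → {Beer} on the table above:

    T = [{"Bread", "Milk"},
         {"Bread", "Diaper", "Beer", "Eggs"},
         {"Milk", "Diaper", "Beer", "Coke"},
         {"Bread", "Milk", "Diaper", "Beer"},
         {"Bread", "Milk", "Diaper", "Coke"}]

    X, Y = {"Milk", "Diaper"}, {"Beer"}
    n_xy = sum(1 for t in T if X | Y <= t)  # transactions with X and Y: 2
    n_x  = sum(1 for t in T if X <= t)      # transactions with X: 3
    s = n_xy / len(T)                       # support    = 2/5 = 0.4
    c = n_xy / n_x                          # confidence = 2/3 ≈ 0.67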
  • 16. Mining Association Rules • Two-step approach: 1. Frequent Itemset Generation – Generate all itemsets whose support ≥ minsup 2. Rule Generation – Generate high confidence rules from each frequent itemset, where each rule is a partitioning of a frequent itemset into Left-Hand-Side (LHS) and Right-Hand-Side (RHS) Frequent itemset: {A,B,C,D} Rule: AB → CD
  • 17. Association Rule anti-monotonicity • Confidence is anti-monotone w.r.t. number of items on the RHS of the rule (or monotone with respect to the LHS of the rule) • e.g., L = {A,B,C,D}: c(ABC → D) ≥ c(AB → CD) ≥ c(A → BCD)
  • 18. Rule Generation for Apriori Algorithm • Candidate rule is generated by merging two rules that share the same prefix in the RHS • join(CD → AB, BD → AC) would produce the candidate rule D → ABC • Prune rule D → ABC if its subset AD → BC does not have high confidence • Essentially we are doing Apriori on the RHS (BD → AC and CD → AB merge into D → ABC)
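A hedged sketch of this rule-generation step for a single frequent itemset L: the function below assumes a support oracle sup(itemset) is available from the mining phase (the name is illustrative), grows the RHS level by level, and merges only confident rules, exactly because confidence is anti-monotone in the RHS.

    def gen_rules(L, sup, minconf):
        L = frozenset(L)
        rules = []
        rhs_level = [frozenset([i]) for i in L]     # 1-item RHSs first
        while rhs_level:
            confident = []
            for rhs in rhs_level:
                lhs = L - rhs
                if lhs and sup(L) / sup(lhs) >= minconf:
                    rules.append((lhs, rhs, sup(L) / sup(lhs)))
                    confident.append(rhs)
            # Apriori on the RHS: merge confident RHSs that differ in one item
            rhs_level = list({a | b for a in confident for b in confident
                              if len(a | b) == len(a) + 1})
        return rules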
  • 20. Compact Representation of Frequent Itemsets • Some itemsets are redundant because they have the same support as their supersets • Example: 30 items A1,…,A10, B1,…,B10, C1,…,C10; transactions 1–5 contain exactly {A1,…,A10}, transactions 6–10 exactly {B1,…,B10}, and transactions 11–15 exactly {C1,…,C10} • Number of frequent itemsets = 3 × Σ_{k=1}^{10} C(10,k) • Need a compact representation
  • 21. Maximal Frequent Itemset null AB AC AD AE BC BD BE CD CE DE A B C D E ABC ABD ABE ACD ACE ADE BCD BCE BDE CDE ABCD ABCE ABDE ACDE BCDE ABCDE Border Infrequent Itemsets Maximal Itemsets An itemset is maximal frequent if none of its immediate supersets is frequent Maximal itemsets = positive border Maximal: no superset has this property
  • 22. Negative Border null AB AC AD AE BC BD BE CD CE DE A B C D E ABC ABD ABE ACD ACE ADE BCD BCE BDE CDE ABCD ABCE ABDE ACDE BCDE ABCDE Border Infrequent Itemsets Itemsets that are not frequent, but all their immediate subsets are frequent. Minimal: no subset has this property
  • 23. Border • Border = Positive Border + Negative Border • Itemsets such that all their immediate subsets are frequent and all their immediate supersets are infrequent. • Either the positive, or the negative border is sufficient to summarize all frequent itemsets.
  • 24. Closed Itemset • An itemset is closed if none of its immediate supersets has the same support as the itemset TID Items 1 {A,B} 2 {B,C,D} 3 {A,B,C,D} 4 {A,B,D} 5 {A,B,C,D} Itemset Support {A} 4 {B} 5 {C} 3 {D} 4 {A,B} 4 {A,C} 2 {A,D} 3 {B,C} 3 {B,D} 4 {C,D} 3 Itemset Support {A,B,C} 2 {A,B,D} 3 {A,C,D} 2 {B,C,D} 3 {A,B,C,D} 2
  • 25. Maximal vs Closed Itemsets TID Items 1 ABC 2 ABCD 3 BCE 4 ACDE 5 DE (Lattice figure: each itemset is annotated with the ids of the transactions that contain it; some itemsets are not supported by any transaction)
  • 26. Maximal vs Closed Frequent Itemsets (Same lattice, minimum support = 2: # Closed = 9, # Maximal = 4; the maximal itemsets are both closed and maximal, the remaining closed itemsets are closed but not maximal)
  • 27. Maximal vs Closed Itemsets: Maximal Frequent Itemsets ⊆ Closed Frequent Itemsets ⊆ Frequent Itemsets
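Given the full set of frequent itemsets with their supports, both compact families are easy to extract. A small sketch (assuming freq is a dict mapping frozenset → support, containing every frequent itemset):

    def closed_and_maximal(freq):
        closed, maximal = set(), set()
        for X, s in freq.items():
            # frequent immediate supersets of X
            supersets = [Y for Y in freq if X < Y and len(Y) == len(X) + 1]
            if not supersets:                        # none is frequent
                maximal.add(X)
            if all(freq[Y] < s for Y in supersets):  # none has equal support
                closed.add(X)
        return closed, maximal

Note that an infrequent superset always has strictly smaller support than a frequent itemset, so checking only the frequent immediate supersets suffices for closedness; every maximal itemset also lands in the closed set, matching the containment above.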
  • 28. Pattern Evaluation • Association rule algorithms tend to produce too many rules but many of them are uninteresting or redundant • Redundant if {A,B,C}  {D} and {A,B}  {D} have same support & confidence • Summarization techniques • Uninteresting, if the pattern that is revealed does not offer useful information. • Interestingness measures: a hard problem to define • Interestingness measures can be used to prune/rank the derived patterns • Subjective measures: require human analyst • Objective measures: rely on the data. • In the original formulation of association rules, support & confidence are the only measures used
  • 29. Computing Interestingness Measure • Given a rule X → Y, information needed to compute rule interestingness can be obtained from a contingency table for X → Y:
                Y     ¬Y
        X      f11   f10  | f1+
       ¬X      f01   f00  | f0+
               f+1   f+0  | N
    f11: support of X and Y; f10: support of X and ¬Y; f01: support of ¬X and Y; f00: support of ¬X and ¬Y (X: itemset X appears in the tuple; ¬X: itemset X does not appear in the tuple, and similarly for Y) • Used to define various measures: support, confidence, lift, Gini, J-measure, etc.
  • 30. Drawback of Confidence • Association Rule: Tea → Coffee
              Coffee   ¬Coffee
       Tea      15        5    |  20
      ¬Tea      75        5    |  80
                90       10    | 100
    (15 = number of people that drink both coffee and tea; 75 = drink coffee but not tea; 90 = drink coffee; 20 = drink tea) • Confidence = P(Coffee|Tea) = 15/20 = 0.75, but P(Coffee) = 90/100 = 0.9 • Although confidence is high, the rule is misleading: P(Coffee|¬Tea) = 75/80 = 0.9375
  • 31. Statistical Independence • Population of 1000 students • 600 students know how to swim (S) • 700 students know how to bike (B) • 420 students know how to swim and bike (S,B) • P(S,B) = 420/1000 = 0.42 • P(S) × P(B) = 0.6 × 0.7 = 0.42 • P(S,B) = P(S) × P(B) => Statistical independence
  • 32. Statistical Independence • Population of 1000 students • 600 students know how to swim (S) • 700 students know how to bike (B) • 500 students know how to swim and bike (S,B) • P(S,B) = 500/1000 = 0.5 • P(S) × P(B) = 0.6 × 0.7 = 0.42 • P(S,B) > P(S) × P(B) => Positively correlated
  • 33. Statistical Independence • Population of 1000 students • 600 students know how to swim (S) • 700 students know how to bike (B) • 300 students know how to swim and bike (S,B) • P(S,B) = 300/1000 = 0.3 • P(S) × P(B) = 0.6 × 0.7 = 0.42 • P(S,B) < P(S) × P(B) => Negatively correlated
  • 34. Statistical-based Measures • Measures that take into account statistical dependence • Lift/Interest/PMI: Lift = P(Y|X) / P(Y) = P(X,Y) / (P(X) P(Y)) = Interest (in text mining it is called Pointwise Mutual Information) • Piatetsky-Shapiro: PS = P(X,Y) − P(X) P(Y) • All these measures measure deviation from independence • The higher, the better (why?)
  • 35. Example: Lift/Interest • Association Rule: Tea → Coffee (same contingency table as before: Tea & Coffee = 15, Tea & ¬Coffee = 5, ¬Tea & Coffee = 75, ¬Tea & ¬Coffee = 5) • Confidence = P(Coffee|Tea) = 0.75, but P(Coffee) = 0.9 • Lift = 0.75/0.9 = 0.15/(0.9 × 0.2) = 0.8333 (< 1, therefore negatively associated)
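The same numbers, computed from the four contingency-table counts (a small sketch):

    f11, f10, f01, f00 = 15, 5, 75, 5      # Tea/Coffee table above
    N = f11 + f10 + f01 + f00              # 100

    p_xy = f11 / N                         # P(Tea, Coffee) = 0.15
    p_x  = (f11 + f10) / N                 # P(Tea)         = 0.20
    p_y  = (f11 + f01) / N                 # P(Coffee)      = 0.90

    lift = p_xy / (p_x * p_y)              # 0.15/0.18 ≈ 0.8333
    ps   = p_xy - p_x * p_y                # -0.03, also below independence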
  • 36. Another Example • Fraction of documents containing 'of' = 0.9, 'the' = 0.9, both ('of', 'the') = 0.8: P(of, the) ≈ P(of) P(the). If I were creating a document by picking words randomly, (of, the) would have more or less the same probability of appearing together by chance. No correlation. • Fraction of documents containing 'hong' = 0.2, 'kong' = 0.2, both ('hong', 'kong') = 0.19: P(hong, kong) ≫ P(hong) P(kong). (hong, kong) has a much lower probability of appearing together by chance; the two words appear almost always only together. Positive correlation. • Fraction of documents containing 'obama' = 0.2, 'karagounis' = 0.2, both ('obama', 'karagounis') = 0.001: P(obama, karagounis) ≪ P(obama) P(karagounis). (obama, karagounis) would have a much higher probability of appearing together by chance; the two words appear almost never together. Negative correlation.
  • 37. Drawbacks of Lift/Interest/Mutual Information • Fraction of documents containing 'honk' = 0.0001, 'konk' = 0.0001, both ('honk', 'konk') = 0.0001: MI(honk, konk) = 0.0001 / (0.0001 × 0.0001) = 10000 • Fraction of documents containing 'hong' = 0.2, 'kong' = 0.2, both ('hong', 'kong') = 0.19: MI(hong, kong) = 0.19 / (0.2 × 0.2) = 4.75 • Rare co-occurrences are deemed more interesting. But this is not always what we want
  • 38. ALTERNATIVE FREQUENT ITEMSET COMPUTATION Slides taken from Mining Massive Datasets course by Anand Rajaraman and Jeff Ullman.
  • 39. (Diagram) All items → C1 → Filter (count the items) → L1 (frequent items) → Construct → C2 (all pairs of items from L1) → Filter (count the pairs) → L2 (frequent pairs) → Construct → C3. First pass: items; second pass: pairs. Finding the frequent pairs is usually the most expensive operation
  • 40. 40 Picture of A-Priori Item counts Pass 1 Pass 2 Frequent items Counts of pairs of frequent items
  • 41. 41 PCY Algorithm • During Pass 1 (computing frequent items) of Apriori, most memory is idle. • Use that memory to keep counts of buckets into which pairs of items are hashed. • Just the count, not the pairs themselves. Item counts Pass 1
  • 42. 42 Needed Extensions 1. Pairs of items need to be generated from the input file; they are not present in the file. 2. We are not just interested in the presence of a pair, but we need to see whether it is present at least s (support) times.
  • 43. 43 PCY Algorithm – (2) • A bucket is frequent if its count is at least the support threshold. • If a bucket is not frequent, no pair that hashes to that bucket could possibly be a frequent pair. • The opposite is not true: a bucket may be frequent yet hold only infrequent pairs. • On Pass 2 (frequent pairs), we only count pairs that hash to frequent buckets.
  • 44. 44 PCY Algorithm – Before Pass 1 Organize Main Memory • Space to count each item. • One (typically) 4-byte integer per item. • Use the rest of the space for as many integers, representing buckets, as we can.
  • 46. 46 PCY Algorithm – Pass 1 FOR (each basket) { FOR (each item in the basket) add 1 to item’s count; FOR (each pair of items in the basket) { hash the pair to a bucket; add 1 to the count for that bucket } }
  • 47. 47 Observations About Buckets 1. A bucket that a frequent pair hashes to is surely frequent. • We cannot use the hash table to eliminate any member of this bucket. 2. Even without any frequent pair, a bucket can be frequent. • Again, nothing in the bucket can be eliminated. 3. But in the best case, the count for a bucket is less than the support s. • Now, all pairs that hash to this bucket can be eliminated as candidates, even if the pair consists of two frequent items.
  • 48. 48 PCY Algorithm – Between Passes • Replace the buckets by a bit-vector: • 1 means the bucket is frequent; 0 means it is not. • 4-byte integers are replaced by bits, so the bit-vector requires 1/32 of the memory. • Also, find which items are frequent and list them for the second pass. • Same as with Apriori
  • 49. 49 Picture of PCY Hash table Item counts Bitmap Pass 1 Pass 2 Frequent items Counts of candidate pairs
  • 50. 50 PCY Algorithm – Pass 2 • Count all pairs {i, j} that meet the conditions for being a candidate pair: 1. Both i and j are frequent items. 2. The pair {i, j} hashes to a bucket number whose bit in the bit vector is 1. • Notice both these conditions are necessary for the pair to have a chance of being frequent.
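Putting the two passes together, here is a hedged Python sketch of PCY (baskets is assumed to be a list of item sets and s a support count threshold; the bucket count and hash function are illustrative choices; a real implementation would pack the bitmap into actual bits to get the 1/32 saving):

    from collections import defaultdict
    from itertools import combinations

    NUM_BUCKETS = 1_000_003
    buckets = [0] * NUM_BUCKETS
    item_count = defaultdict(int)

    def h(pair):                            # hash a sorted pair to a bucket
        return hash(pair) % NUM_BUCKETS

    # Pass 1: count items, and hash every pair into a bucket counter
    for basket in baskets:
        for i in basket:
            item_count[i] += 1
        for pair in combinations(sorted(basket), 2):
            buckets[h(pair)] += 1

    frequent_items = {i for i, c in item_count.items() if c >= s}
    bitmap = [c >= s for c in buckets]      # one flag per bucket

    # Pass 2: count only candidate pairs (both items frequent, bucket bit set)
    pair_count = defaultdict(int)
    for basket in baskets:
        for pair in combinations(sorted(basket & frequent_items), 2):
            if bitmap[h(pair)]:
                pair_count[pair] += 1
    frequent_pairs = {p for p, c in pair_count.items() if c >= s}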
  • 51. 51 All (Or Most) Frequent Itemsets in less than 2 Passes • A-Priori, PCY, etc., take k passes to find frequent itemsets of size k. • Other techniques use 2 or fewer passes for all sizes: • Simple sampling algorithm. • SON (Savasere, Omiecinski, and Navathe). • Toivonen.
  • 52. 52 Simple Sampling Algorithm – (1) • Take a random sample of the market baskets. • Run Apriori or one of its improvements (for sets of all sizes, not just pairs) in main memory, so you don’t pay for disk I/O each time you increase the size of itemsets. • Make sure the sample is such that there is enough space for counts.
  • 54. 54 Simple Algorithm – (2) • Use as your support threshold a suitable, scaled-back number. • E.g., if your sample is 1/100 of the baskets, use s /100 as your support threshold instead of s. • You could stop here (single pass) • What could be the problem?
  • 55. 55 Simple Algorithm – Option • Optionally, verify that your guesses are truly frequent in the entire data set by a second pass (eliminate false positives) • But you don’t catch sets frequent in the whole but not in the sample. (false negatives) • Smaller threshold, e.g., s /125, helps catch more truly frequent itemsets. • But requires more space.
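A sketch of the whole sampling scheme (apriori() and support() are assumed helpers, not defined in the slides; the sample rate and the scaled threshold follow the example above):

    import random

    sample = [b for b in baskets if random.random() < 0.01]  # ~1% of baskets
    candidates = apriori(sample, s / 125)  # below s/100: fewer false negatives

    # Optional second pass over the full data removes the false positives
    confirmed = {X for X in candidates if support(X, baskets) >= s}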
  • 56. 56 SON Algorithm – (1) • First pass: Break the data into chunks that can be processed in main memory. • Read one chunk at a time • Find all frequent itemsets for each chunk. • Threshold = s/number of chunks • An itemset becomes a candidate if it is found to be frequent in any one or more chunks of the baskets.
  • 57. 57 SON Algorithm – (2) • Second pass: count all the candidate itemsets and determine which are frequent in the entire set. • Key “monotonicity” idea: an itemset cannot be frequent in the entire set of baskets unless it is frequent in at least one subset. • Why?
  • 58. 58 SON Algorithm – Distributed Version • This idea lends itself to distributed data mining. • If baskets are distributed among many nodes, compute frequent itemsets at each node, then distribute the candidates from each node. • Finally, accumulate the counts of all candidates.
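A minimal single-machine sketch of SON (same assumed helpers as above; on a cluster, each chunk would simply live on a different node):

    chunk_size = 10_000                     # illustrative
    chunks = [baskets[i:i + chunk_size]
              for i in range(0, len(baskets), chunk_size)]

    # Pass 1: frequent somewhere => candidate everywhere (monotonicity)
    candidates = set()
    for chunk in chunks:
        candidates |= apriori(chunk, s / len(chunks))

    # Pass 2: exact counts over the entire dataset
    frequent = {X for X in candidates if support(X, baskets) >= s}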
  • 59. 59 Toivonen’s Algorithm – (1) • Start as in the simple sampling algorithm, but lower the threshold slightly for the sample. • Example: if the sample is 1% of the baskets, use s /125 as the support threshold rather than s /100. • Goal is to avoid missing any itemset that is frequent in the full set of baskets.
  • 60. 60 Toivonen’s Algorithm – (2) • Add to the itemsets that are frequent in the sample the negative border of these itemsets. • An itemset is in the negative border if it is not deemed frequent in the sample, but all its immediate subsets are.
  • 61. 61 Reminder: Negative Border • ABCD is in the negative border if and only if: 1. It is not frequent in the sample, but 2. All of ABC, BCD, ACD, and ABD are. • A is in the negative border if and only if it is not frequent in the sample.  Because the empty set is always frequent.  Unless there are fewer baskets than the support threshold (silly case).
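The negative border can be computed directly from the frequent collection. A sketch (F is a set of frozensets found frequent in the sample, items the item universe; the empty set is treated as always frequent, which is exactly what makes infrequent singletons border members):

    def negative_border(F, items):
        # every border itemset extends a frequent itemset (or the empty set)
        candidates = {X | {i} for X in (F | {frozenset()}) for i in items - X}
        return {Y for Y in candidates
                if Y not in F                              # not frequent
                and all((Y - {i}) in F or not (Y - {i})    # all immediate
                        for i in Y)}                       # subsets frequent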
  • 62. 62 Picture of Negative Border … triples pairs singletons Negative Border Frequent Itemsets from Sample
  • 63. 63 Toivonen’s Algorithm – (3) • In a second pass, count all candidate frequent itemsets from the first pass, and also count their negative border. • If no itemset from the negative border turns out to be frequent, then the candidates found to be frequent in the whole data are exactly the frequent itemsets.
  • 64. 64 Toivonen’s Algorithm – (4) • What if we find that something in the negative border is actually frequent? • We must start over again! • Try to choose the support threshold so the probability of failure is low, while the number of itemsets checked on the second pass fits in main memory.
  • 65. 65 If Something in the Negative Border is Frequent . . . … tripletons doubletons singletons Negative Border Frequent Itemsets from Sample We broke through the negative border. How far does the problem go?
  • 66. 66 Theorem: • If there is an itemset that is frequent in the whole, but not frequent in the sample, then there is a member of the negative border for the sample that is frequent in the whole.
  • 67. 67 • Proof: Suppose not; i.e., 1. There is an itemset S frequent in the whole but not frequent in the sample, and 2. Nothing in the negative border is frequent in the whole. • Let T be a smallest subset of S that is not frequent in the sample. • T is frequent in the whole (S is frequent + monotonicity). • T is in the negative border (else not “smallest”).
  • 68. Example null AB AC AD AE BC BD BE CD CE DE A B C D E ABC ABD ABE ACD ACE ADE BCD BCE BDE CDE ABCD ABCE ABDE ACDE BCDE ABCDE Border
  • 69. THE FP-TREE AND THE FP-GROWTH ALGORITHM Slides from course lecture of E. Pitoura
  • 70. Overview • The FP-tree contains a compressed representation of the transaction database. • A trie (prefix-tree) data structure is used • Each transaction is a path in the tree – paths can overlap. • Once the FP-tree is constructed the recursive, divide-and-conquer FP-Growth algorithm is used to enumerate all frequent itemsets.
  • 71. FP-tree Construction • The FP-tree is a trie (prefix tree) • Since transactions are sets of items, we need to transform them into ordered sequences so that we can have prefixes • Otherwise, there is no common prefix between sets {A,B} and {B,C,A} • We need to impose an order to the items • Initially, assume a lexicographic order. TID Items 1 {A,B} 2 {B,C,D} 3 {A,C,D,E} 4 {A,D,E} 5 {A,B,C} 6 {A,B,C,D} 7 {B,C} 8 {A,B,C} 9 {A,B,D} 10 {B,C,E}
  • 72. FP-tree Construction • Initially the tree is empty null TID Items 1 {A,B} 2 {B,C,D} 3 {A,C,D,E} 4 {A,D,E} 5 {A,B,C} 6 {A,B,C,D} 7 {B,C} 8 {A,B,C} 9 {A,B,D} 10 {B,C,E}
  • 73. FP-tree Construction • Reading transaction TID = 1 • Each node in the tree has a label consisting of the item and the support (number of transactions that reach that node, i.e. follow that path) TID Items 1 {A,B} 2 {B,C,D} 3 {A,C,D,E} 4 {A,D,E} 5 {A,B,C} 6 {A,B,C,D} 7 {B,C} 8 {A,B,C} 9 {A,B,D} 10 {B,C,E} null A:1 B:1 Node label = item:support
  • 74. FP-tree Construction • Reading transaction TID = 2 • We add pointers between nodes that refer to the same item TID Items 1 {A,B} 2 {B,C,D} 3 {A,C,D,E} 4 {A,D,E} 5 {A,B,C} 6 {A,B,C,D} 7 {B,C} 8 {A,B,C} 9 {A,B,D} 10 {B,C,E} null A:1 B:1 B:1 C:1 D:1 Each transaction is a path in the tree
  • 75. FP-tree Construction TID Items 1 {A,B} 2 {B,C,D} 3 {A,C,D,E} 4 {A,D,E} 5 {A,B,C} 6 {A,B,C,D} 7 {B,C} 8 {A,B,C} 9 {A,B,D} 10 {B,C,E} null A:1 B:1 B:1 C:1 D:1 After reading transactions TID=1, 2: Item Pointer A B C D E Header Table The Header Table and the pointers assist in computing the itemset support
  • 76. FP-tree Construction • Reading transaction TID = 3 TID Items 1 {A,B} 2 {B,C,D} 3 {A,C,D,E} 4 {A,D,E} 5 {A,B,C} 6 {A,B,C,D} 7 {B,C} 8 {A,B,C} 9 {A,B,D} 10 {B,C,E} null A:1 B:1 B:1 C:1 Item Pointer A B C D E A:1 D:1
  • 77. FP-tree Construction • Reading transaction TID = 3 TID Items 1 {A,B} 2 {B,C,D} 3 {A,C,D,E} 4 {A,D,E} 5 {A,B,C} 6 {A,B,C,D} 7 {B,C} 8 {A,B,C} 9 {A,B,D} 10 {B,C,E} null B:1 B:1 C:1 D:1 Item Pointer A B C D E A:2 C:1 D:1 E:1
  • 78. FP-tree Construction • Reading transaction TID = 3 TID Items 1 {A,B} 2 {B,C,D} 3 {A,C,D,E} 4 {A,D,E} 5 {A,B,C} 6 {A,B,C,D} 7 {B,C} 8 {A,B,C} 9 {A,B,D} 10 {B,C,E} null B:1 B:1 C:1 D:1 Item Pointer A B C D E A:2 C:1 D:1 E:1 Each transaction is a path in the tree
  • 79. FP-Tree Construction null A:7 B:5 B:3 C:3 D:1 C:1 D:1 C:3 D:1 D:1 E:1 E:1 TID Items 1 {A,B} 2 {B,C,D} 3 {A,C,D,E} 4 {A,D,E} 5 {A,B,C} 6 {A,B,C,D} 7 {B,C} 8 {A,B,C} 9 {A,B,D} 10 {B,C,E} Pointers are used to assist frequent itemset generation D:1 E:1 Transaction Database Item Pointer A B C D E Header table Each transaction is a path in the tree
  • 80. FP-tree size • Every transaction is a path in the FP-tree • The size of the tree depends on the compressibility of the data • Extreme case: All transactions are the same; the FP-tree is a single branch • Extreme case: All transactions are different; the size of the tree is the same as that of the database (bigger actually, since we need additional pointers)
  • 81. Item ordering • The size of the tree also depends on the ordering of the items. • Heuristic: order the items according to their frequency, from largest to smallest. • We would need to do an extra pass over the dataset to count frequencies • Example: TID Items 1 {A,B} 2 {B,C,D} 3 {A,C,D,E} 4 {A,D,E} 5 {A,B,C} 6 {A,B,C,D} 7 {B,C} 8 {A,B,C} 9 {A,B,D} 10 {B,C,E} becomes TID Items 1 {Β,Α} 2 {B,C,D} 3 {A,C,D,E} 4 {A,D,E} 5 {Β,Α,C} 6 {Β,Α,C,D} 7 {B,C} 8 {Β,Α,C} 9 {Β,Α,D} 10 {B,C,E} σ(Α)=7, σ(Β)=8, σ(C)=7, σ(D)=5, σ(Ε)=3 Ordering: Β, Α, C, D, E
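The construction described in the last few slides fits in a short sketch (class and function names are illustrative; ties in frequency are broken by item name to keep the order consistent across transactions):

    from collections import defaultdict

    class Node:
        def __init__(self, item, parent):
            self.item, self.parent = item, parent
            self.count = 0
            self.children = {}                    # item -> Node

    def build_fp_tree(transactions, minsup):
        freq = defaultdict(int)
        for t in transactions:                    # extra pass: item counts
            for i in t:
                freq[i] += 1
        root, header = Node(None, None), defaultdict(list)
        for t in transactions:
            items = sorted((i for i in t if freq[i] >= minsup),
                           key=lambda i: (-freq[i], i))
            node = root
            for i in items:                       # the transaction is a path
                if i not in node.children:
                    node.children[i] = Node(i, node)
                    header[i].append(node.children[i])  # pointer list
                node = node.children[i]
                node.count += 1
        return root, header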
  • 82. Finding Frequent Itemsets • Input: The FP-tree • Output: All Frequent Itemsets and their support • Method: • Divide and Conquer: • Consider all itemsets that end in: E, D, C, B, A • For each possible ending item, consider the itemsets with last items one of the items preceding it in the ordering • E.g., for E, consider all itemsets with last item D, C, B, A. This way we get all the itemsets ending at DE, CE, BE, AE • Proceed recursively this way. • Do this for all items.
  • 83. Frequent itemsets All Itemsets Ε D C B A DE CE BE AE CD BD AD BC AC AB CDE BDE ADE BCE ACE ABE BCD ACD ABD ABC ACDE BCDE ABDE ABCE ABCD ABCDE
  • 84. Frequent Itemsets All Itemsets Ε D C B A DE CE BE AE CD BD AD BC AC AB CDE BDE ADE BCE ACE ABE BCD ACD ABD ABC ACDE BCDE ABDE ABCE ABCD ABCDE Frequent?; Frequent?; Frequent? Frequent?
  • 85. Frequent Itemsets All Itemsets Ε D C B A DE CE BE AE CD BD AD BC AC AB CDE BDE ADE BCE ACE ABE BCD ACD ABD ABC ACDE BCDE ABDE ABCE ABCD ABCDE Frequent? Frequent? Frequent? Frequent? Frequent?
  • 86. Frequent Itemsets All Itemsets Ε D C B A DE CE BE AE CD BD AD BC AC AB CDE BDE ADE BCE ACE ABE BCD ACD ABD ABC ACDE BCDE ABDE ABCE ABCD ABCDE Frequent? Frequent? Frequent? We can generate all itemsets this way We expect the FP-tree to contain a lot less
  • 87. Using the FP-tree to find frequent itemsets null A:7 B:5 B:3 C:3 D:1 C:1 D:1 C:3 D:1 D:1 E:1 E:1 TID Items 1 {A,B} 2 {B,C,D} 3 {A,C,D,E} 4 {A,D,E} 5 {A,B,C} 6 {A,B,C,D} 7 {B,C} 8 {A,B,C} 9 {A,B,D} 10 {B,C,E} Bottom-up traversal of the tree. First, itemsets ending in E, then D, etc, each time a suffix-based class D:1 E:1 Transaction Database Item Pointer A B C D E Header table
  • 88. null A:7 B:5 B:3 C:3 D:1 C:1 D:1 C:3 D:1 D:1 E:1 E:1 D:1 E:1 Item Pointer A B C D E Header table Subproblem: find frequent itemsets ending in E  We will then see how to compute the support for the possible itemsets Finding Frequent Itemsets
  • 93. Algorithm • For each suffix X • Phase 1 • Construct the prefix tree for X as shown before, and compute the support using the header table and the pointers • Phase 2 • If X is frequent, construct the conditional FP-tree for X in the following steps 1. Recompute support 2. Prune infrequent items 3. Prune leaves and recurse
  • 94. null A:7 B:5 B:3 C:3 D:1 C:1 D:1 C:3 D:1 D:1 E:1 E:1 D:1 E:1 Item Pointer A B C D E Header table Phase 1 – construct prefix tree Find all prefix paths that contain E Suffix Paths for Ε: {A,C,D,E}, {A,D,Ε}, {B,C,E} Example
  • 95. null A:7 B:3 C:3 C:1 D:1 D:1 E:1 E:1 E:1 Phase 1 – construct prefix tree Find all prefix paths that contain E Prefix Paths for Ε: {A,C,D,E}, {A,D,Ε}, {B,C,E} Example
  • 96. null A:7 B:3 C:3 C:1 D:1 D:1 E:1 E:1 E:1 Compute Support for E (minsup = 2) How? Follow pointers while summing up counts: 1+1+1 = 3 > 2 E is frequent {E} is frequent so we can now consider suffixes DE, CE, BE, AE Example
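With the header table from the construction sketch above, “follow pointers while summing up counts” is one line:

    def item_support(header, item):
        # sum the counts of all nodes labeled `item`, e.g. 1+1+1 = 3 for E
        return sum(node.count for node in header[item])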
  • 97. null A:7 B:3 C:3 C:1 D:1 D:1 E:1 E:1 E:1 Phase 2 Convert the prefix tree of E into a conditional FP-tree Two changes (1) Recompute support (2) Prune infrequent Example E is frequent so we proceed with Phase 2
  • 98. null A:7 B:3 C:3 C:1 D:1 D:1 E:1 E:1 E:1 Example Recompute Support: The support counts for some of the nodes include transactions that do not end in E. For example, in null → B → C → E we also count the transaction {B, C}. The support of any node is equal to the sum of the support of the leaves with label E in its subtree
  • 109. null A:2 B:1 C:1 C:1 D:1 D:1 Prune infrequent In the conditional FP-tree some nodes may have support less than minsup e.g., B needs to be pruned This means that B appears with E less than minsup times Example
  • 112. null A:2 C:1 C:1 D:1 D:1 The conditional FP-tree for E Repeat the algorithm for {D, E}, {C, E}, {A, E} Example
  • 113. null A:2 C:1 C:1 D:1 D:1 Example Phase 1 Find all prefix paths that contain D (DE) in the conditional FP-tree
  • 114. null A:2 C:1 D:1 D:1 Example Phase 1 Find all prefix paths that contain D (DE) in the conditional FP-tree
  • 115. null A:2 C:1 D:1 D:1 Example Compute the support of {D,E} by following the pointers in the tree 1+1 = 2 ≥ 2 = minsup {D,E} is frequent
  • 116. null A:2 C:1 D:1 D:1 Example Phase 2 Construct the conditional FP-tree 1. Recompute Support 2. Prune nodes
  • 121. null A:2 Example Final conditional FP-tree for {D,E} The support of A is ≥ minsup so {A,D,E} is frequent Since the tree has a single node we return to the next subproblem
  • 122. null A:2 C:1 C:1 D:1 D:1 Example The conditional FP-tree for E We repeat the algorithm for {D,E}, {C,E}, {A,E}
  • 123. null A:2 C:1 C:1 D:1 D:1 Example Phase 1 Find all prefix paths that contain C (CE) in the conditional FP-tree
  • 124. null A:2 C:1 C:1 Example Phase 1 Find all prefix paths that contain C (CE) in the conditional FP-tree
  • 125. null A:2 C:1 C:1 Example Compute the support of {C,E} by following the pointers in the tree 1+1 = 2 ≥ 2 = minsup {C,E} is frequent
  • 126. null A:2 C:1 C:1 Example Phase 2 Construct the conditional FP-tree 1. Recompute Support 2. Prune nodes
  • 131. null Example Prune nodes Return to the previous subproblem
  • 132. null A:2 C:1 C:1 D:1 D:1 Example The conditional FP-tree for E We repeat the algorithm for {D,E}, {C,E}, {A,E}
  • 133. null A:2 C:1 C:1 D:1 D:1 Example Phase 1 Find all prefix paths that contain A (AE) in the conditional FP-tree
  • 134. null A:2 Example Phase 1 Find all prefix paths that contain A (AE) in the conditional FP-tree
  • 135. null A:2 Example Compute the support of {A,E} by following the pointers in the tree 2 ≥ minsup {A,E} is frequent There is no conditional FP-tree for {A,E}
  • 136. Example • So for E we have the following frequent itemsets {E}, {D,E}, {C,E}, {A,E} • We proceed with D
  • 138. null A:7 B:5 B:3 C:3 D:1 C:1 D:1 C:3 D:1 D:1 D:1 Phase 1 – construct prefix tree Find all prefix paths that contain D Support 5 > minsup, D is frequent Phase 2 Convert prefix tree into conditional FP-tree Example
  • 146. null A:3 B:2 B:1 C:1 C:1 C:1 Construct conditional FP-trees for {C,D}, {B,D}, {A,D} And so on…. Example
  • 147. Observations • At each recursive step we solve a subproblem • Construct the prefix tree • Compute the new support • Prune nodes • Subproblems are disjoint so we never consider the same itemset twice • Support computation is efficient – happens together with the computation of the frequent itemsets.
  • 148. Observations • The efficiency of the algorithm depends on the compaction factor of the dataset • If the tree is bushy then the algorithm does not work well: it greatly increases the number of subproblems that need to be solved.
