Data Analytics (KIT-601)
Unit-4: Frequent Itemsets and Clustering
Dr. Radhey Shyam
Professor
Department of Information Technology
SRMCEM Lucknow
(Affiliated to Dr. A.P.J. Abdul Kalam Technical University, Lucknow)
Unit-4 has been prepared and compiled by Dr. Radhey Shyam, with grateful acknowledgment to those who made their course contents freely available or contributed directly or indirectly. Feel free to use this study material for your own academic purposes. For any query, communication can be made through this email: shyam0058@gmail.com.
April 12, 2024
Data Analytics (KIT 601)
Course Outcome (CO) and Bloom's Knowledge Level (KL)
At the end of the course, the student will be able to:
CO 1 Discuss various concepts of the data analytics pipeline (K1, K2)
CO 2 Apply classification and regression techniques (K3)
CO 3 Explain and apply mining techniques on streaming data (K2, K3)
CO 4 Compare different clustering and frequent pattern mining algorithms (K4)
CO 5 Describe the concept of R programming and implement analytics on Big Data using R (K2, K3)
DETAILED SYLLABUS (3-0-0)

Unit I (08 proposed lectures)
Introduction to Data Analytics: Sources and nature of data, classification of data (structured, semi-structured, unstructured), characteristics of data, introduction to Big Data platform, need of data analytics, evolution of analytic scalability, analytic process and tools, analysis vs reporting, modern data analytic tools, applications of data analytics.
Data Analytics Lifecycle: Need, key roles for successful analytic projects, various phases of the data analytics lifecycle: discovery, data preparation, model planning, model building, communicating results, operationalization.

Unit II (08 proposed lectures)
Data Analysis: Regression modeling, multivariate analysis, Bayesian modeling, inference and Bayesian networks, support vector and kernel methods, analysis of time series: linear systems analysis & nonlinear dynamics, rule induction, neural networks: learning and generalisation, competitive learning, principal component analysis and neural networks, fuzzy logic: extracting fuzzy models from data, fuzzy decision trees, stochastic search methods.

Unit III (08 proposed lectures)
Mining Data Streams: Introduction to streams concepts, stream data model and architecture, stream computing, sampling data in a stream, filtering streams, counting distinct elements in a stream, estimating moments, counting oneness in a window, decaying window, Real-time Analytics Platform (RTAP) applications, case studies: real-time sentiment analysis, stock market predictions.

Unit IV (08 proposed lectures)
Frequent Itemsets and Clustering: Mining frequent itemsets, market based modelling, Apriori algorithm, handling large data sets in main memory, limited pass algorithm, counting frequent itemsets in a stream, clustering techniques: hierarchical, K-means, clustering high dimensional data, CLIQUE and ProCLUS, frequent pattern based clustering methods, clustering in non-Euclidean space, clustering for streams and parallelism.

Unit V (08 proposed lectures)
Frame Works and Visualization: MapReduce, Hadoop, Pig, Hive, HBase, MapR, Sharding, NoSQL Databases, S3, Hadoop Distributed File Systems, Visualization: visual data analysis techniques, interaction techniques, systems and applications.
Introduction to R: R graphical user interfaces, data import and export, attribute and data types, descriptive statistics, exploratory data analysis, visualization before analysis, analytics for unstructured data.
Textbooks and References:
1. Michael Berthold, David J. Hand, Intelligent Data Analysis, Springer.
2. Anand Rajaraman and Jeffrey David Ullman, Mining of Massive Datasets, Cambridge University Press.
3. John Garrett, Data Analytics for IT Networks: Developing Innovative Use Cases, Pearson Education.
Unit-IV: Frequent Itemsets and
Clustering
1 Mining Frequent Itemsets
Frequent itemset mining is a popular data mining task that involves identifying sets of items that frequently
co-occur in a given dataset. In other words, it involves finding the items that occur together frequently and
then grouping them into sets of items. One way to approach this problem is by using the Apriori algorithm,
which is one of the most widely used algorithms for frequent itemset mining.
The Apriori algorithm works by iteratively generating candidate itemsets and then checking their fre-
quency against a minimum support threshold. The algorithm starts by generating all possible itemsets of
size 1 and counting their frequencies in the dataset. The itemsets that meet the minimum support threshold
are then selected as frequent itemsets. The algorithm then proceeds to generate candidate itemsets of size
2 from the frequent itemsets of size 1 and counts their frequencies. This process is repeated until no more
frequent itemsets can be generated.
However, when dealing with large datasets, this approach can become computationally expensive due
to the potentially large number of candidate itemsets that need to be generated and counted. Point-wise
frequent itemset mining is a more efficient alternative that can reduce the computational complexity of the
Apriori algorithm by exploiting the sparsity of the dataset.
Point-wise frequent itemset mining works by iterating over the transactions in the dataset and identifying
the itemsets that occur in each transaction. For each transaction, the algorithm generates a bitmap vector
where each bit corresponds to an item in the dataset, and its value is set to 1 if the item occurs in the
transaction and 0 otherwise. The algorithm then performs a bitwise AND operation between the bitmap
vectors of each transaction to identify the itemsets that occur in all the transactions. The itemsets that meet
the minimum support threshold are then selected as frequent itemsets.
The advantage of point-wise frequent itemset mining is that it avoids generating candidate itemsets that
are not present in the dataset, thereby reducing the number of itemsets that need to be generated and
counted. Additionally, point-wise frequent itemset mining can be parallelized, making it suitable for mining
large datasets on distributed systems.
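As an illustration of the bitmap idea, here is a minimal R sketch (the transaction list and item names are made up for the example): each item is given a logical bit vector with one position per transaction, and the support of an itemset is obtained by AND-ing the bit vectors of its items.

# Toy transaction list (hypothetical data, for illustration only)
transactions <- list(c("A","B","C"), c("A","C"), c("B","C"), c("A","B","C","D"))
items <- sort(unique(unlist(transactions)))

# Bitmap: rows = transactions, columns = items; TRUE if the item occurs in the transaction
bitmap <- sapply(items, function(it) sapply(transactions, function(t) it %in% t))

# Support of an itemset = number of rows in which all of its bits are set (bitwise AND)
support_bits <- function(itemset) sum(apply(bitmap[, itemset, drop = FALSE], 1, all))

support_bits(c("A", "C"))   # 3 of the 4 toy transactions contain both A and C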
In summary, point-wise frequent itemset mining is an efficient alternative to the Apriori algorithm for
frequent itemset mining. It works by iterating over the transactions in the dataset and identifying the
itemsets that occur in each transaction, thereby avoiding the generation of candidate itemsets that are not
present in the dataset.
2 Market Based Modelling
Market-based modeling is a technique used in economics and business to analyze and simulate the behavior of
markets, particularly in relation to the supply and demand of goods and services. This modeling technique
involves creating mathematical models that can simulate how different market participants (consumers,
producers, and intermediaries) interact with each other in a market setting.
One of the most common market-based models is the supply and demand model, which assumes that the
price of a good or service is determined by the balance between its supply and demand. In this model, the
price of a good or service will rise if the demand for it exceeds its supply, and will fall if the supply exceeds
the demand.
Another popular market-based model is the game theory model, which is used to analyze how different
participants in a market interact with each other. Game theory models assume that market participants are
rational and act in their own self-interest, and seek to identify the strategies that each participant is likely
to adopt in a given situation.
Market-based models can be used to analyze a wide range of economic phenomena, from the pricing
of individual goods and services to the behavior of entire industries and markets. They can also be used
to test the potential impact of various policies and interventions on the behavior of markets and market
participants.
Overall, market-based modeling is a powerful tool for understanding and predicting the behavior of
markets and the economy as a whole. By creating mathematical models that simulate the behavior of
market participants and the interactions between them, economists and business analysts can gain valuable
insights into the workings of markets, and develop strategies for managing and optimizing their performance.
3 Apriori Algorithm
The Apriori algorithm is a popular algorithm used in data mining and machine learning to discover frequent
itemsets in large transactional datasets. It was proposed by Agrawal and Srikant in 1994 and is widely used
4
A
p
r
i
l
1
2
,
2
0
2
4
/
D
r
.
R
S
in association rule mining, market basket analysis, and other data mining applications.
The Apriori algorithm uses a bottom-up approach to generate all frequent itemsets by first identifying
frequent individual items and then using those items to generate larger itemsets. The algorithm works by
performing the following steps:
• First, the algorithm scans the entire dataset to identify all individual items and their frequency of occurrence. This information is used to generate the initial set of frequent itemsets.
• Next, the algorithm uses a level-wise search strategy to generate larger itemsets by combining frequent itemsets from the previous level. The algorithm starts with two-itemsets and then progressively generates larger itemsets until no more frequent itemsets can be found.
• At each level, the algorithm prunes the search space by eliminating itemsets that cannot be frequent based on the minimum support threshold. This is done using the Apriori principle, which states that any subset of a frequent itemset must also be frequent.
The algorithm terminates when no more frequent itemsets can be generated or when the maximum
itemset size is reached.
Once all frequent itemsets have been identified, the Apriori algorithm can be used to generate association
rules that describe the relationships between different items in the dataset. An association rule is a statement of the form X → Y, where X and Y are disjoint itemsets. The rule indicates that transactions containing the items in X tend also to contain the items in Y.
The strength of an association rule is measured using two metrics: support and confidence. Support is
the percentage of transactions in the dataset that contain both X and Y, while confidence is the percentage
of transactions that contain Y given that they also contain X.
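The following base-R sketch computes support and confidence directly from a transaction list; the five transactions are taken from the worked example and the question paper given later in this unit.

# Transactions of the worked Apriori example (5 baskets)
transactions <- list(
  c("M","O","N","K","E","Y"),
  c("D","O","N","K","E","Y"),
  c("M","A","K","E"),
  c("M","U","C","K","Y"),
  c("C","O","O","K","I","E")
)

# support(X): fraction of transactions containing every item of X
support <- function(X) mean(sapply(transactions, function(t) all(X %in% t)))

# confidence(X -> Y) = support(X union Y) / support(X)
confidence <- function(X, Y) support(c(X, Y)) / support(X)

support(c("E", "K"))            # 0.8  -> frequent at minSup = 60%
confidence(c("E", "K"), "O")    # 0.75 -> rule [E, K] -> O has 75% confidence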
Overall, the Apriori algorithm is a powerful tool for discovering frequent itemsets and association rules
in large datasets. By identifying patterns and relationships between different items in the dataset, it can
be used to gain valuable insights into consumer behavior, market trends, and other important business and
economic phenomena.
4 Handling Large Datasets in Main Memory
Handling large datasets in main memory can be a challenging task, as the amount of memory available on
most computer systems is often limited. However, there are several techniques and strategies that can be
used to effectively manage and analyze large datasets in main memory:
• Use data compression: Data compression techniques can be used to reduce the amount of memory required to store a dataset. Techniques such as gzip or bzip2 can compress text data, while binary data can be compressed using libraries like LZ4 or Snappy.
• Use data partitioning: Large datasets can be partitioned into smaller, more manageable subsets, which can be processed and analyzed in main memory. This can be done using techniques such as horizontal partitioning, vertical partitioning, or hybrid partitioning.
• Use data sampling: Data sampling can be used to select a representative subset of data for analysis, without requiring the entire dataset to be loaded into memory. Random sampling, stratified sampling, and cluster sampling are some of the commonly used sampling techniques.
• Use in-memory databases: In-memory databases can be used to store large datasets in main memory for faster querying and analysis. Examples of in-memory databases include Apache Ignite, SAP HANA, and VoltDB.
• Use parallel processing: Parallel processing techniques can be used to distribute the processing of large datasets across multiple processors or cores. This can be done using libraries like Apache Spark, which provides distributed data processing capabilities.
• Use data streaming: Data streaming techniques can be used to process large datasets in real-time by processing data as it is generated, rather than storing it in memory. Apache Kafka, Apache Flink, and Apache Storm are some of the popular data streaming platforms.
Overall, effective management of large datasets in main memory requires a combination of data compres-
sion, partitioning, sampling, in-memory databases, parallel processing, and data streaming techniques. By
leveraging these techniques, it is possible to effectively analyze and process large datasets in main memory,
without requiring expensive hardware upgrades or specialized software tools.
5 Limited Pass Algorithm
A limited pass algorithm is a technique used in data processing and analysis to efficiently process large
datasets with limited memory resources.
In a limited pass algorithm, the dataset is processed in a fixed number of passes or iterations, where each
pass involves processing a subset of the data. The algorithm ensures that each pass is designed to capture
the relevant information needed for the analysis, while minimizing the memory required to store the data.
For example, a limited pass algorithm for processing a large text file could involve reading the file in chunks
or sections, processing each section in memory, and then discarding the processed data before moving onto
the next section. This approach enables the algorithm to handle large datasets that cannot be loaded entirely
into memory.
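A minimal R sketch of this chunked, limited-pass style of processing is shown below; the file name, chunk size, and one-transaction-per-line format are assumptions made only for the illustration.

# Count how often a single item occurs, reading the file in fixed-size chunks.
# "transactions.txt" is a hypothetical file with one comma-separated basket per line.
count_item_chunked <- function(path, item, chunk_size = 10000) {
  con <- file(path, open = "r")
  on.exit(close(con))
  total <- 0
  repeat {
    lines <- readLines(con, n = chunk_size)
    if (length(lines) == 0) break                 # end of file reached
    baskets <- strsplit(lines, ",")
    total <- total + sum(sapply(baskets, function(b) item %in% b))
    # the processed chunk is discarded before the next chunk is read
  }
  total
}

# count_item_chunked("transactions.txt", "milk")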
Limited pass algorithms are often used in situations where the data cannot be stored in main memory,
or when the processing of the data requires significant computational resources. Examples of applications
that use limited pass algorithms include text processing, machine learning, and data mining.
While limited pass algorithms can be useful for processing large datasets with limited memory resources,
they can also be less efficient than algorithms that can process the entire dataset in a single pass. Therefore,
it is important to carefully design the algorithm to ensure that it can capture the relevant information needed
for the analysis, while minimizing the number of passes required to process the data.
6 Counting Frequent Itemsets in a Stream
Counting frequent itemsets in a stream is a problem of finding the most frequent itemsets in a continuous
stream of transactions. This problem is commonly known as the Frequent Itemset Mining problem. Here
are the steps involved in counting frequent itemsets in a stream:
1. Initialize a hash table to store the counts of each itemset. The size of the hash table should be limited
to prevent it from becoming too large.
2. Read each transaction in the stream one at a time.
3. Generate all the possible itemsets from the transaction. This can be done using the Apriori algorithm,
which generates candidate itemsets by combining smaller frequent itemsets.
4. Increment the count of each itemset in the hash table.
5. Prune infrequent itemsets from the hash table. An itemset is infrequent if its count is less than a
predefined threshold.
6. Repeat steps 2-5 for each transaction in the stream.
7. Output the frequent itemsets that remain in the hash table after processing all the transactions.
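A minimal R sketch of the hash-table idea follows; the stream is simulated by a small list of transactions, and only single items are counted to keep the example short (itemsets would be handled analogously, using one string key per itemset).

counts <- new.env(hash = TRUE)        # hash table: key = item, value = count

update_counts <- function(txn) {
  for (item in unique(txn)) {
    cur <- if (exists(item, envir = counts, inherits = FALSE))
      get(item, envir = counts) else 0
    assign(item, cur + 1, envir = counts)
  }
}

prune <- function(min_count) {        # drop keys whose count is below the threshold
  for (item in ls(counts)) {
    if (get(item, envir = counts) < min_count) rm(list = item, envir = counts)
  }
}

stream <- list(c("E","K","M"), c("E","K","O"), c("K","Y"), c("E","K"))
for (txn in stream) update_counts(txn)
prune(min_count = 2)
mget(ls(counts), envir = counts)      # items still counted as frequent: E and K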
The main challenge in counting frequent itemsets in a stream is to keep track of the changing frequencies
of the itemsets as new transactions arrive. This can be done efficiently using the hash table to store the
counts of the itemsets. However, the hash table can become too large if the number of distinct itemsets is
too large. To prevent this, the hash table can be limited in size by using a hash function that maps each
itemset to a fixed number of hash buckets. The size of the hash table can be adjusted dynamically based on
the number of items and transactions in the stream.
Another challenge in counting frequent itemsets in a stream is to choose the threshold for the minimum
count of an itemset to be considered frequent. The threshold should be set high enough to exclude infrequent
itemsets, but low enough to include all the important frequent itemsets. The threshold can be determined
using heuristics or by using machine learning techniques to learn the optimal threshold from the data.
7 Clustering Techniques
Clustering techniques are used to group similar data points together in a dataset based on their similarity
or distance measures. Here are some popular clustering techniques:
7.1 K-Means Clustering:
This is a popular clustering algorithm that partitions a dataset into K clusters based on the mean dis-
tance of the data points to their assigned cluster centers. It involves an iterative process of assigning data
points to clusters and updating the cluster centers until convergence. K-Means is commonly used in image
segmentation, marketing, and customer segmentation.
7.1.1 K-means Clustering algorithm
K-Means clustering is a popular unsupervised machine learning algorithm that partitions a dataset into k
clusters, where k is a pre-defined number of clusters. The algorithm works as follows:
1. Initialize the k cluster centroids randomly.
2. Assign each data point to the nearest cluster centroid based on its distance.
3. Calculate the new cluster centroids based on the mean of all data points assigned to that cluster.
4. Repeat steps 2-3 until the cluster centroids no longer change significantly, or a maximum number of iterations is reached.

• The distance metric used for step 2 is typically the Euclidean distance, but other distance metrics can be used as well.
• The K-Means algorithm aims to minimize the sum of squared distances between each data point and its assigned cluster centroid. This objective function is known as the within-cluster sum of squares (WCSS) or the sum of squared errors (SSE).
• To determine the optimal number of clusters, a common approach is to use the elbow method. This involves plotting the WCSS or SSE against the number of clusters and selecting the number of clusters at the "elbow" point, where the rate of decrease in WCSS or SSE begins to level off.
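The steps above map directly onto the built-in kmeans() function in R; the sketch below, on synthetic two-dimensional data, also illustrates the elbow method by plotting the WCSS for several values of k.

set.seed(42)
x <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),   # two synthetic 2-D clusters
           matrix(rnorm(100, mean = 4), ncol = 2))

# Elbow method: WCSS (tot.withinss) for k = 1..8
wcss <- sapply(1:8, function(k) kmeans(x, centers = k, nstart = 10)$tot.withinss)
plot(1:8, wcss, type = "b", xlab = "k", ylab = "WCSS")   # look for the "elbow"

fit <- kmeans(x, centers = 2, nstart = 10)   # nstart > 1 reduces sensitivity to initial centroids
fit$centers                                  # estimated cluster centroids
table(fit$cluster)                           # cluster sizes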
K-Means is a computationally efficient algorithm that can scale to large datasets. It is particularly useful
when the data is high-dimensional and traditional clustering algorithms may be too slow. However, K-Means
requires the number of clusters to be pre-defined and may converge to a suboptimal solution if the initial
cluster centroids are not well chosen. It is also sensitive to non-linear data and may not work well with such
data. Here are some of its advantages and disadvantages:
Advantages:

• Simple and easy to understand: K-Means is easy to understand and implement, making it a popular choice for clustering tasks.
• Fast and scalable: K-Means is a computationally efficient algorithm that can scale to large datasets. It is particularly useful when the data is high-dimensional and traditional clustering algorithms may be too slow.
• Works well with circular or spherical clusters: K-Means works well with circular or spherical clusters, making it suitable for datasets that exhibit these types of shapes.
• Provides a clear and interpretable result: K-Means provides a clear and interpretable clustering result, where each data point is assigned to one of the k clusters.

Disadvantages:

• Requires a pre-defined number of clusters: K-Means requires the number of clusters to be pre-defined, which can be a challenge when the number of clusters is unknown or difficult to determine.
• Sensitive to initial cluster centers: K-Means is sensitive to the initial placement of cluster centers and can converge to a suboptimal solution if the initial centers are not well chosen.
• Can converge to a local minimum: K-Means can converge to a local minimum rather than the global minimum, resulting in a suboptimal clustering solution.
• Not suitable for non-linear data: K-Means assumes that the data is linearly separable and may not work well with non-linear data.
In summary, K-Means is a simple and fast clustering algorithm that works well with circular or spherical
clusters. However, it requires the number of clusters to be pre-defined and may converge to a suboptimal
solution if the initial cluster centers are not well chosen. It is also sensitive to non-linear data and may not
work well with such data.
7.2 Hierarchical Clustering:
This technique builds a hierarchy of clusters by recursively dividing or merging clusters based on their
similarity. It can be agglomerative (bottom-up) or divisive (top-down). In agglomerative clustering, each
data point starts in its own cluster, and then pairs of clusters are successively merged until all data points
belong to a single cluster. Divisive clustering starts with all data points in a single cluster and recursively
divides them into smaller clusters. Hierarchical clustering is useful in gene expression analysis, social network
analysis, and image analysis.
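A minimal R sketch of agglomerative hierarchical clustering on synthetic data, using the built-in hclust() function:

set.seed(1)
x <- rbind(matrix(rnorm(40, mean = 0), ncol = 2),
           matrix(rnorm(40, mean = 4), ncol = 2))

hc <- hclust(dist(x), method = "average")   # agglomerative clustering, average linkage
plot(hc)                                    # dendrogram (the cluster hierarchy)
cutree(hc, k = 2)                           # cut the tree to obtain 2 flat clusters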
7.3 Density-based Clustering:
This technique identifies clusters based on the density of data points. It assumes that clusters are areas of
higher density separated by areas of lower density. Density-based clustering algorithms, such as DBSCAN
(Density-Based Spatial Clustering of Applications with Noise), group together data points that are closely
packed together and separate outliers. Density-based clustering is commonly used in image processing,
geospatial data analysis, and anomaly detection.
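A minimal sketch of DBSCAN in R, assuming the third-party 'dbscan' package is installed; the data and parameter values are chosen only for illustration.

library(dbscan)

set.seed(9)
x <- rbind(matrix(rnorm(100, mean = 0, sd = 0.3), ncol = 2),   # two dense clusters
           matrix(rnorm(100, mean = 3, sd = 0.3), ncol = 2),
           matrix(runif(20, min = -2, max = 5), ncol = 2))     # scattered noise points

db <- dbscan(x, eps = 0.5, minPts = 5)   # eps = neighbourhood radius, minPts = density threshold
table(db$cluster)                        # cluster 0 collects the points labelled as noise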
7.4 Gaussian Mixture Models:
This technique models the distribution of data points using a mixture of Gaussian probability distributions.
Each component of the mixture represents a cluster, and the algorithm estimates the parameters of the
mixture using the Expectation-Maximization algorithm. Gaussian Mixture Models are commonly used in
image segmentation, handwriting recognition, and speech recognition.
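A minimal sketch of fitting a Gaussian mixture in R, assuming the third-party 'mclust' package is installed; the one-dimensional data are synthetic.

library(mclust)

set.seed(5)
x <- c(rnorm(100, mean = 0, sd = 1), rnorm(100, mean = 5, sd = 1))  # two 1-D Gaussian components

fit <- Mclust(x, G = 2)        # fit a 2-component Gaussian mixture via Expectation-Maximization
summary(fit)                   # estimated mixture parameters and cluster sizes
head(fit$classification)       # hard cluster assignments of the first data points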
7.5 Spectral Clustering:
This technique converts the data points into a graph and then partitions the graph into clusters based
on the eigenvalues and eigenvectors of the graph Laplacian matrix. Spectral clustering is useful in image
segmentation, community detection in social networks, and document clustering.
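A minimal base-R sketch of spectral clustering on synthetic data (Gaussian-kernel similarity graph, unnormalised Laplacian); it is meant only to make the eigenvector step concrete.

set.seed(7)
x <- rbind(matrix(rnorm(60, mean = 0), ncol = 2),
           matrix(rnorm(60, mean = 4), ncol = 2))

S <- exp(-as.matrix(dist(x))^2 / 2)            # similarity (weighted adjacency) matrix
D <- diag(rowSums(S))                          # degree matrix
L <- D - S                                     # unnormalised graph Laplacian

k  <- 2
ev <- eigen(L)                                 # eigenvalues are returned in decreasing order
U  <- ev$vectors[, (nrow(S) - k + 1):nrow(S)]  # eigenvectors of the k smallest eigenvalues

kmeans(U, centers = k, nstart = 10)$cluster    # cluster in the low-dimensional embedding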
Each clustering technique has its own strengths and weaknesses, and the choice of clustering algorithm
depends on the nature of the data, the clustering objective, and the computational resources available.
8 Clustering high-dimensional data
Clustering high-dimensional data is a challenging task because the distance or similarity measures used in
most clustering algorithms become less meaningful in high-dimensional space. Here are some techniques for
clustering high-dimensional data:
8.1 Dimensionality Reduction:
High-dimensional data can be transformed into a lower-dimensional space using dimensionality reduction
techniques, such as Principal Component Analysis (PCA) or t-SNE (t-distributed Stochastic Neighbor Em-
bedding). Dimensionality reduction can help to reduce the curse of dimensionality and make the clustering
algorithms more effective.
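A minimal R sketch of dimensionality reduction with PCA (the built-in prcomp() function) followed by clustering in the reduced space; the 10-dimensional data set is synthetic.

set.seed(11)
x <- matrix(rnorm(500), ncol = 10)        # 50 observations in 10 dimensions

pr <- prcomp(x, scale. = TRUE)            # PCA on standardised variables
summary(pr)$importance[, 1:3]             # variance explained by the first components

reduced <- pr$x[, 1:2]                    # keep only the first two principal components
kmeans(reduced, centers = 3, nstart = 10)$size   # cluster in the reduced space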
8.2 Feature Selection:
Not all features in high-dimensional data are equally informative. Feature selection techniques can be used
to identify the most relevant features for clustering and discard the redundant or noisy features. This can
help to improve the clustering accuracy and reduce the computational cost.
8.3 Subspace Clustering:
Subspace clustering is a clustering technique that identifies clusters in subspaces of the high-dimensional
space. This technique assumes that the data points lie in a union of subspaces, each of which represents
a cluster. Subspace clustering algorithms, such as CLIQUE (CLustering In QUEst), identify the subspaces
and clusters simultaneously.
8.4 Density-Based Clustering:
Density-based clustering algorithms, such as DBSCAN, can be used for clustering high-dimensional data by
defining the density of data points in each dimension. The clustering algorithm identifies regions of high
density in the multidimensional space, which correspond to clusters.
8.5 Ensemble Clustering:
Ensemble clustering combines multiple clustering algorithms or different parameter settings of the same
algorithm to improve the clustering performance. Ensemble clustering can help to reduce the sensitivity of
the clustering results to the choice of algorithm or parameter settings.
8.6 Deep Learning-Based Clustering:
Deep learning-based clustering techniques, such as Deep Embedded Clustering (DEC) and Autoencoder-
based Clustering (AE-Clustering), use neural networks to learn a low-dimensional representation of high-
dimensional data and cluster the data in the reduced space. These techniques have shown promising results in
clustering high-dimensional data in various domains, including image analysis and gene expression analysis.
Clustering high-dimensional data requires careful consideration of the choice of clustering algorithm,
feature selection or dimensionality reduction technique, and parameter settings. A combination of different
techniques may be required to achieve the best clustering performance.
8.7 CLIQUE and ProCLUS
CLIQUE (CLustering In QUEst) and ProCLUS are two popular subspace clustering algorithms for high-
dimensional data.
CLIQUE is a density-based algorithm that works by identifying dense subspaces in the data. It assumes
that clusters exist in subspaces of the data that are dense in at least k dimensions, where k is a user-defined
parameter. The algorithm identifies all possible dense subspaces by enumerating all combinations of k
dimensions and checking if the corresponding subspaces are dense. It then merges the overlapping subspaces
to form clusters. CLIQUE is efficient for high-dimensional data because it only considers a small number of
dimensions at a time.
ProCLUS (PROjective CLUSters) is a subspace clustering algorithm that works by identifying clusters
in a low-dimensional projection of the data. It first selects a random projection matrix and projects the data
onto a lower-dimensional space. It then uses K-Means clustering to cluster the projected data. The algorithm
iteratively refines the projection matrix and re-clusters the data until convergence. The final clusters are
projected back to the original high-dimensional space. ProCLUS is effective for high-dimensional data
because it reduces the dimensionality of the data while preserving the clustering structure.
Both CLIQUE and ProCLUS are designed to handle high-dimensional data by identifying clusters in
subspaces of the data. They are effective for clustering data that have a natural subspace structure. However,
they may not work well for data that do not have a clear subspace structure or when the data points are
widely spread out in the high-dimensional space. It is important to carefully choose the appropriate algorithm
based on the characteristics of the data and the clustering objectives.
9 Frequent pattern-based clustering methods
Frequent pattern-based clustering methods combine frequent pattern mining with clustering techniques to
identify clusters based on frequent patterns in the data. Here are some examples of frequent pattern-based
clustering methods:
1. Frequent Pattern-based Clustering is a clustering algorithm that uses frequent pattern mining to
identify clusters in transactional data. The algorithm first identifies frequent itemsets in the data
using Apriori or FP-Growth algorithms. It then constructs a graph where each frequent itemset is a
node, and the edges represent the overlap between the itemsets. The graph is partitioned into clusters
using a graph clustering algorithm. The resulting clusters are then used to assign objects to clusters
based on their membership in the frequent itemsets.
2. Frequent Pattern-based Clustering Method is a clustering algorithm that uses frequent pattern mining
to identify clusters in high-dimensional data. The algorithm first discretizes the continuous data into
categorical data. It then uses Apriori or FP-Growth algorithms to identify frequent itemsets in the
categorical data. The frequent itemsets are used to construct a binary matrix that represents the
membership of objects in the frequent itemsets. The binary matrix is clustered using a standard
clustering algorithm, such as K-Means or Hierarchical clustering. The resulting clusters are then used
to assign objects to clusters based on their membership in the frequent itemsets.
3. Clustering based on Frequent Pattern Combination is a clustering algorithm that combines frequent
pattern mining with pattern combination techniques to identify clusters in transactional data. The
algorithm first identifies frequent itemsets in the data using Apriori or FP-Growth algorithms. It
then uses pattern combination techniques, such as Minimum Description Length (MDL) or Bayesian
Information Criterion (BIC), to generate composite patterns from the frequent itemsets. The composite
patterns are then used to construct a graph, which is partitioned into clusters using a graph clustering
algorithm.
Frequent pattern-based clustering methods are effective for identifying clusters based on frequent patterns
in the data. They can be applied to a wide range of data types, including transactional data and high-
dimensional data. However, these methods may suffer from the curse of dimensionality when applied to
high-dimensional data. It is important to carefully select the appropriate frequent pattern mining and
clustering techniques based on the characteristics of the data and the clustering objectives.
10 Clustering in non-Euclidean space
Clustering in non-Euclidean space refers to the clustering of data points that are not represented in the
Euclidean space, such as graphs, time series, or text data. Traditional clustering algorithms, such as K-
Means and Hierarchical clustering, assume that the data points are represented in the Euclidean space and
use distance metrics, such as Euclidean distance or cosine similarity, to measure the similarity between data
points. However, in non-Euclidean spaces, the notion of distance is different, and distance-based clustering
methods may not be suitable.
Here are some approaches for clustering in non-Euclidean spaces:
1. Spectral clustering: Spectral clustering is a popular clustering algorithm that can be applied to data
represented in non-Euclidean spaces, such as graphs or time series. It uses the eigenvalues and eigen-
vectors of the Laplacian matrix of the data to identify clusters. Spectral clustering converts the data
points into a graph representation and then computes the Laplacian matrix of the graph. The eigen-
vectors of the Laplacian matrix are used to embed the data points into a lower-dimensional space,
where clustering is performed using a standard clustering algorithm, such as K-Means or Hierarchical
clustering.
2. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a density-based clustering algorithm
that can be applied to data represented in non-Euclidean spaces. It does not rely on a distance
metric and can cluster data points based on their density. DBSCAN identifies clusters by defining two
parameters: the minimum number of points required to form a cluster and a radius that determines
the neighborhood of a point. DBSCAN labels each point as either a core point, a border point, or a
noise point, based on its neighborhood. The core points are used to form clusters.
3. Topic modeling: Topic modeling is a clustering method that can be applied to text data, which is
typically represented in a non-Euclidean space. Topic modeling identifies latent topics in the text data
by analyzing the co-occurrence of words. It represents each document as a distribution over topics,
and each topic as a distribution over words. The resulting topic distribution of each document can be
used to cluster the documents based on their similarity.
Clustering in non-Euclidean spaces requires careful consideration of the appropriate algorithms and tech-
niques that are suitable for the specific data type. Spectral clustering and DBSCAN are effective for clustering
data represented as graphs or time series, while topic modeling is suitable for text data. Other approaches,
such as manifold learning and kernel methods, can also be used for clustering in non-Euclidean spaces.
11 Clustering for streams and parallelism
Clustering for streams and parallelism are two important considerations for clustering large datasets. Stream
data refers to data that arrives continuously and in real-time, while parallelism refers to the ability to
distribute the clustering task across multiple computing resources.
Here are some approaches for clustering streams and parallelism:
1. Online clustering: Online clustering is a technique that can be applied to streaming data. It updates
the clustering model continuously as new data arrives. Online clustering algorithms, such as BIRCH
and CluStream, are designed to handle data streams and can scale to large datasets. These algo-
rithms incrementally update the cluster model as new data arrives and discard outdated data points
to maintain the cluster model's accuracy and efficiency.
2. Parallel clustering: Parallel clustering refers to the use of multiple computing resources, such as multiple
processors or computing clusters, to speed up the clustering process. Parallel clustering algorithms,
such as K-Means Parallel, Hierarchical Parallel, and DBSCAN Parallel, distribute the clustering task
across multiple computing resources. These algorithms partition the data into smaller subsets and
assign each subset to a separate computing resource. The resulting clusters are then merged to produce
the final clustering result.
3. Distributed clustering: Distributed clustering refers to the use of multiple computing resources that
are distributed across different physical locations, such as different data centers or cloud resources.
Distributed clustering algorithms, such as MapReduce and Hadoop, distribute the clustering task
across multiple computing resources and handle data that is too large to fit into a single computing
resource's memory. These algorithms partition the data into smaller subsets and assign each subset to
a separate computing resource. The resulting clusters are then merged to produce the final clustering
result.
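A minimal sketch of parallel clustering in R using the base 'parallel' package; here the parallelism is over independent random restarts of K-Means, and the best result (lowest WCSS) is kept.

library(parallel)

set.seed(3)
x <- matrix(rnorm(2000), ncol = 2)

cl <- makeCluster(2)                      # two worker processes
clusterExport(cl, "x")                    # ship the data to the workers
fits <- parLapply(cl, 1:4, function(i) kmeans(x, centers = 3, nstart = 1))
stopCluster(cl)

best <- fits[[which.min(sapply(fits, function(f) f$tot.withinss))]]
best$size                                 # cluster sizes of the best restart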
Clustering for streams and parallelism requires careful consideration of the appropriate algorithms and
techniques that are suitable for the specific clustering objectives and data types. Online clustering is effective
for clustering streaming data, while parallel clustering and distributed clustering can speed up the clustering
process for large datasets.
Q1: Write an R function to check whether a given number is prime or not.
# Program to check whether the input number is prime or not
# take input from the user
num = as.integer(readline(prompt = "Enter a number: "))
flag = 0
# prime numbers are greater than 1
if (num > 1) {
  # assume prime until a factor is found
  flag = 1
  for (i in 2:(num - 1)) {
    if ((num %% i) == 0) {
      flag = 0
      break
    }
  }
}
# 2 is prime (the loop above wrongly tests 2 %% 2, so correct it here)
if (num == 2) flag = 1
if (flag == 1) {
  print(paste(num, "is a prime number"))
} else {
  print(paste(num, "is not a prime number"))
}
Apriori algorithm: The Apriori algorithm solves the frequent itemsets problem. The algorithm analyzes a data set to determine which combinations of items occur together frequently. The Apriori algorithm is at the core of various algorithms for data mining problems. The best-known problem is finding the association rules that hold in a basket-item relation.
Numerical:
Given (the five transactions of the database in Q. 6(b) of the question paper below):
Support = 60% = (60/100) × 5 = 3 transactions
Confidence = 70%
ITERATION 1:
STEP 1: (C1)
Itemsets Counts
A 1
C 2
D 1
E 4
I 1
K 5
M 3
N 2
O 3
U 1
Y 3
STEP 2: (L1)
Itemsets Counts
E 4
K 5
M 3
O 3
Y 3
ITERATION 2:
STEP 3: (C2)
Itemsets Counts
E, K 4
E, M 2
E, O 3
E, Y 2
K, M 3
K, O 3
K, Y 3
M, O 1
M, Y 2
O, Y 2
STEP 4: (L2)
Itemsets Counts
E, K 4
E, O 3
K, M 3
K, O 3
K, Y 3
ITERATION 3:
STEP 5: (C3)
Itemsets Counts
E, K, O 3
K, M, O 1
K, M, Y 2
STEP 6: (L3)
Itemsets Counts
E, K, O 3
Now, stop since no more combinations can be made in L3.
ASSOCIATION RULES:
1. [E, K] → O = 3/4 = 75%
2. [K, O] → E = 3/3 = 100%
3. [E, O] → K = 3/3 = 100%
4. E → [K, O] = 3/4 = 75%
5. K → [E, O] = 3/5 = 60%
6. O → [E, K] = 3/3 = 100%
Therefore, Rule no. 5 is discarded because its confidence is below 70%.
So, Rules 1, 2, 3, 4 and 6 are selected.
BTECH (SEM VI) THEORY EXAMINATION 2021-22
DATA ANALYTICS (Subject Code: KIT601)
Time: 3 Hours                                    Total Marks: 100
Note: Attempt all Sections. If you require any missing data, then choose suitably.
SECTION A
1. Attempt all questions in brief. 2*10 = 20
Qno Questions CO
(a) Discuss the need of data analytics. 1
(b) Give the classification of data. 1
(c) Define neural network. 2
(d) What is multivariate analysis? 2
(e) Give the full form of RTAP and discuss its application. 3
(f) What is the role of sampling data in a stream? 3
(g) Discuss the use of limited pass algorithm. 4
(h) What is the principle behind hierarchical clustering technique? 4
(i) List five R functions used in descriptive statistics. 5
(j) List the names of any 2 visualization tools. 5
SECTION B
2. Attempt any three of the following: 10*3 = 30
Qno Questions CO
(a) Explain the process model and computation model for Big data platform. 1
(b) Explain the use and advantages of decision trees. 2
(c) Explain the architecture of data stream model. 3
(d) Illustrate the K-means algorithm in detail with its advantages. 4
(e) Differentiate between NoSQL and RDBMS databases. 5
SECTION C
3. Attempt any one part of the following: 10*1 = 10
Qno Questions CO
(a) Explain the various phases of data analytics life cycle. 1
(b) Explain modern data analytics tools in detail. 1
4. Attempt any one part of the following: 10 *1 = 10
Qno Questions CO
(a) Compare various types of support vector and kernel methods of data analysis. 2
(b) Given data = {2,3,4,5,6,7; 1,5,3,6,7,8}. Compute the principal component using PCA algorithm. 2
5. Attempt any one part of the following: 10*1 = 10
Qno Questions CO
(a) Explain any one algorithm to count number of distinct elements in a data stream. 3
(b) Discuss the case study of stock market predictions in detail. 3
6. Attempt any one part of the following: 10*1 = 10
Qno Questions CO
(a) Differentiate between CLIQUE and ProCLUS clustering. 4
(b) A database has 5 transactions. Let min_sup=60% and min_conf=80%.
TID Items_Bought
T100 {M, O, N, K, E, Y}
T200 {D, O, N, K, E, Y}
T300 {M, A, K, E}
T400 {M, U, C, K, Y}
T500 {C, O, O, K, I, E}
i) Find all frequent itemsets using Apriori algorithm.
ii) List all the strong association rules (with support s and confidence c). 4
7. Attempt any one part of the following: 10*1 = 10
Qno Questions CO
(a) Explain the HIVE architecture with its features in detail. 5
(b) Write R function to check whether the given number is prime or not. 5
Appendix [17]: Additional Study Material For
Numerical Perspectives
What is Frequent Itemset Mining?
Frequent Itemset Mining: finding frequent patterns, associations, correlations, or causal structures among sets of items or objects in transaction databases, relational databases, and other information repositories.

• Given:
  – A set of items I = {i1, i2, ..., im}
  – A database of transactions D, where a transaction T ⊆ I is a set of items

• Task 1: find all subsets of items that occur together in many transactions.
  – E.g.: 85% of transactions contain the itemset {milk, bread, butter}

• Task 2: find all rules that correlate the presence of one set of items with that of another set of items in the transaction database.
  – E.g.: 98% of people buying tires and auto accessories also get automotive service done

• Applications: basket data analysis, cross-marketing, catalog design, loss-leader analysis, clustering, classification, recommendation systems, etc.
Example: Basket Data Analysis
• Transaction database:
  D = { {butter, bread, milk, sugar};
        {butter, flour, milk, sugar};
        {butter, eggs, milk, salt};
        {eggs};
        {butter, flour, milk, salt, sugar} }

• Question of interest: which items are bought together frequently?

• Applications:
  – Improved store layout
  – Cross marketing
  – Focused attached mailings / add-on sales
  – * ⇒ Maintenance Agreement (what should the store do to boost Maintenance Agreement sales?)
  – Home Electronics ⇒ * (what other products should the store stock up on?)

Itemset frequencies in D:
  {butter} 4, {milk} 4, {butter, milk} 4, {sugar} 3, {butter, sugar} 3, {milk, sugar} 3, {butter, milk, sugar} 3, {eggs} 2, ...
Chapter 3: Frequent Itemset Mining
1) Introduction
   – Transaction databases, market basket data analysis
2) Mining Frequent Itemsets
   – Apriori algorithm, hash trees, FP-tree
3) Simple Association Rules
   – Basic notions, rule generation, interestingness measures
4) Further Topics
   – Hierarchical Association Rules: motivation, notions, algorithms, interestingness
   – Quantitative Association Rules: motivation, basic idea, partitioning numerical attributes, adaptation of the Apriori algorithm, interestingness
5) Extensions and Summary
Mining Frequent Itemsets: Basic Notions

• Items I = {i1, i2, ..., im}: a set of literals (denoting items)
• Itemset X: a set of items X ⊆ I
• Database D: a set of transactions T, where each transaction is a set of items T ⊆ I
• A transaction T contains an itemset X iff X ⊆ T
• The items in transactions and itemsets are sorted lexicographically: itemset X = (x1, x2, ..., xk), where x1 ≤ x2 ≤ ... ≤ xk
• Length of an itemset: the number of elements in the itemset
• k-itemset: an itemset of length k
• The support of an itemset X is defined as support(X) = |{T ∈ D | X ⊆ T}|
• Frequent itemset: an itemset X is called frequent for database D iff it is contained in at least minSup many transactions: support(X) ≥ minSup
• Goal 1: given a database D and a threshold minSup, find all frequent itemsets X ∈ Pot(I) (the power set of I).
Mining Frequent Itemsets: Basic Idea
• Naïve algorithm
  – Count the frequency of all possible subsets of I in the database
  → too expensive, since there are 2^m such itemsets for |I| = m items

• The Apriori principle (anti-monotonicity):
  – Any non-empty subset of a frequent itemset is frequent, too:
    A ⊆ I with support(A) ≥ minSup ⇒ ∀A′ ⊂ A, A′ ≠ ∅: support(A′) ≥ minSup
  – Any superset of a non-frequent itemset is non-frequent, too:
    A ⊆ I with support(A) < minSup ⇒ ∀A′ ⊃ A: support(A′) < minSup

• Method based on the Apriori principle:
  – First count the 1-itemsets, then the 2-itemsets, then the 3-itemsets, and so on
  – When counting (k+1)-itemsets, only consider those (k+1)-itemsets for which all subsets of length k have been determined to be frequent in the previous step
[Figure: the lattice of all subsets of {A, B, C, D} (∅; A, B, C, D; AB, AC, AD, BC, BD, CD; ABC, ABD, ACD, BCD; ABCD), illustrating the cardinality of the power set; ABCD is marked as not frequent.]
The Apriori Algorithm
variable Ck: candidate itemsets of size k
variable Lk: frequent itemsets of size k

L1 = {frequent items}
for (k = 1; Lk != ∅; k++) do begin
    // JOIN STEP: join Lk with itself to produce Ck+1
    // PRUNE STEP: discard (k+1)-itemsets from Ck+1 that contain non-frequent k-itemsets as subsets
    Ck+1 = candidates generated from Lk
    for each transaction t in the database do
        increment the count of all candidates in Ck+1 that are contained in t
    Lk+1 = candidates in Ck+1 with min_support
end
return ∪k Lk
Generating Candidates (Join Step)
• Requirements for the set of all candidate (k+1)-itemsets Ck+1:
  – Completeness: must contain all frequent (k+1)-itemsets (superset property Ck+1 ⊇ Lk+1)
  – Selectiveness: significantly smaller than the set of all (k+1)-subsets
  – Suppose the items are sorted by some order (e.g., lexicographically)

• Step 1: Joining (Ck+1 = Lk ⋈ Lk)
  – Consider frequent k-itemsets p and q
  – p and q are joined if they share the same first k−1 items

  insert into Ck+1
  select p.i1, p.i2, ..., p.ik–1, p.ik, q.ik
  from Lk : p, Lk : q
  where p.i1 = q.i1, ..., p.ik–1 = q.ik–1, p.ik < q.ik

  Example: p = (A, C, F) ∈ L3 and q = (A, C, G) ∈ L3 are joined to (A, C, F, G) ∈ C4.
Generating Candidates (Prune Step)
• Step 2: Pruning (Lk+1 = {X ∈ Ck+1 | support(X) ≥ minSup})
  – Naïve approach: check the support of every itemset in Ck+1 (inefficient for a huge Ck+1)
  – Instead, apply the Apriori principle first: remove candidate (k+1)-itemsets that contain a non-frequent k-subset s, i.e., s ∉ Lk

  forall itemsets c in Ck+1 do
      forall k-subsets s of c do
          if (s is not in Lk) then delete c from Ck+1

• Example 1
  – L3 = {(ACF), (ACG), (AFG), (AFH), (CFG)}
  – Candidates after the join step: {(ACFG), (AFGH)}
  – In the pruning step, delete (AFGH) because (FGH) ∉ L3, i.e., (FGH) is not a frequent 3-itemset; also (AGH) ∉ L3
  → C4 = {(ACFG)} → check the support to generate L4
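A minimal R sketch of the join and prune steps, using Example 1 above; itemsets are represented as lexicographically sorted character vectors (an assumption made only for this illustration).

join_step <- function(Lk) {               # Ck+1 = Lk joined with itself
  k <- length(Lk[[1]])
  out <- list()
  for (p in Lk) for (q in Lk) {
    if (identical(p[-k], q[-k]) && p[k] < q[k])   # same first k-1 items, p.ik < q.ik
      out[[length(out) + 1]] <- c(p, q[k])
  }
  out
}

prune_step <- function(Ck1, Lk) {         # keep candidates whose k-subsets are all frequent
  keep <- sapply(Ck1, function(cand) {
    subs <- lapply(seq_along(cand), function(i) cand[-i])
    all(sapply(subs, function(s) any(sapply(Lk, identical, s))))
  })
  Ck1[keep]
}

L3 <- list(c("A","C","F"), c("A","C","G"), c("A","F","G"), c("A","F","H"), c("C","F","G"))
C4 <- prune_step(join_step(L3), L3)       # only ("A","C","F","G") survives, as in Example 1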
Apriori Algorithm – Full Example

Database D (minSup = 0.5, i.e., an itemset must occur in at least 2 of the 4 transactions):
TID   items
100   1 3 4 6
200   2 3 5
300   1 2 3 5
400   1 5 6

Scan D → C1 (candidate 1-itemsets with counts):
{1} 3, {2} 2, {3} 3, {4} 1, {5} 3, {6} 2

Keep the itemsets meeting minSup → L1 (frequent 1-itemsets):
{1} 3, {2} 2, {3} 3, {5} 3, {6} 2

L1 ⋈ L1 → C2 (candidate 2-itemsets):
{1 2}, {1 3}, {1 5}, {1 6}, {2 3}, {2 5}, {2 6}, {3 5}, {3 6}, {5 6}

Scan D → C2 with counts:
{1 2} 1, {1 3} 2, {1 5} 2, {1 6} 2, {2 3} 2, {2 5} 2, {2 6} 0, {3 5} 2, {3 6} 1, {5 6} 1

Keep the itemsets meeting minSup → L2 (frequent 2-itemsets):
{1 3} 2, {1 5} 2, {1 6} 2, {2 3} 2, {2 5} 2, {3 5} 2

L2 ⋈ L2 → C3 (candidate 3-itemsets): {1 3 5}, {1 3 6}, {1 5 6}, {2 3 5}
Prune C3 using L2: {1 3 6} ✗ and {1 5 6} ✗ are deleted because they contain non-frequent 2-subsets ({3 6} and {5 6}), leaving {1 3 5} and {2 3 5}

Scan D → C3 with counts: {1 3 5} 1, {2 3 5} 2
Keep the itemsets meeting minSup → L3 (frequent 3-itemsets): {2 3 5} 2

L3 ⋈ L3 → C4 is empty, so the algorithm terminates.
How to Count Supports of Candidates?

• Why is counting the supports of candidates a problem?
  – The total number of candidates can be very huge
  – One transaction may contain many candidates

• Method: hash-tree
  – Candidate itemsets are stored in a hash-tree
  – Leaf nodes of the hash-tree contain lists of itemsets and their supports (i.e., counts)
  – Interior nodes contain hash tables
  – A subset function finds all the candidates contained in a transaction
[Figure: hash-tree for 3-itemsets with hash function h(K) = K mod 3; interior nodes are hash tables with buckets 0, 1, 2, and leaf nodes store lists of candidate 3-itemsets such as (3 6 7), (3 5 7), (3 5 11), (7 9 12), (2 5 6), (5 7 10), ...]
Hash-Tree – Construction

• Searching for an itemset
  – Start at the root (level 1)
  – At level d: apply the hash function h to the d-th item of the itemset

• Insertion of an itemset
  – Search for the corresponding leaf node and insert the itemset into that leaf
  – If an overflow occurs:
    • Transform the leaf node into an internal node
    • Distribute the entries to the new leaf nodes according to the hash function
[Figure: the same hash-tree for 3-itemsets with h(K) = K mod 3, showing how the candidate itemsets are distributed over the leaf nodes during construction.]
Hash-Tree – Counting

• Search for all candidate itemsets contained in a transaction T = (t1 t2 ... tn), for a current itemset length of k
• At the root:
  – Determine the hash values for each item t1, t2, ..., tn−k+1 in T
  – Continue the search in the resulting child nodes
• At an internal node at level d (reached after hashing item ti):
  – Determine the hash values and continue the search for each item tj with i < j ≤ n − k + d
• At a leaf node:
  – Check whether the itemsets in the leaf node are contained in transaction T
[Figure: counting with the hash-tree (h(K) = K mod 3) for the transaction (1, 3, 7, 9, 12) with n = 5 and k = 3; only a few leaf nodes have to be tested, and the remaining subtrees are pruned.]
Is Apriori Fast Enough? – Performance Bottlenecks

• The core of the Apriori algorithm:
  – Use frequent (k–1)-itemsets to generate candidate frequent k-itemsets
  – Use database scans and pattern matching to collect counts for the candidate itemsets

• The bottleneck of Apriori: candidate generation
  – Huge candidate sets:
    • 10^4 frequent 1-itemsets will generate 10^7 candidate 2-itemsets
    • To discover a frequent pattern of size 100, e.g., {a1, a2, ..., a100}, one needs to generate 2^100 ≈ 10^30 candidates
  – Multiple scans of the database:
    • Needs n or n+1 scans, where n is the length of the longest pattern

→ Is it possible to mine the complete set of frequent itemsets without candidate generation?
Mining Frequent Patterns Without Candidate Generation

• Compress a large database into a compact Frequent-Pattern tree (FP-tree) structure
  – highly condensed, but complete for frequent pattern mining
  – avoids costly database scans

• Develop an efficient, FP-tree-based frequent pattern mining method
  – a divide-and-conquer methodology: decompose mining tasks into smaller ones
  – avoid candidate generation: sub-database test only!

• Idea:
  – Compress the database into an FP-tree, retaining the itemset association information
  – Divide the compressed database into conditional databases, each associated with one frequent item, and mine each such database separately.
Construct FP-tree from a Transaction DB

Steps for compressing the database into an FP-tree:
1. Scan the DB once and find the frequent 1-itemsets (single items)
2. Order the frequent items in descending order of frequency

Transaction database (minSup = 0.5):
TID   items bought
100   {f, a, c, d, g, i, m, p}
200   {a, b, c, f, l, m, o}
300   {b, f, h, j, o}
400   {b, c, k, s, p}
500   {a, f, c, e, l, p, m, n}

Header table (steps 1 & 2: frequent items sorted in descending order of support):
item   frequency
f      4
c      4
a      3
b      3
m      3
p      3
Construct FP-tree from a Transaction DB (continued)

3. Scan the DB again and construct the FP-tree, starting with the most frequent item of each transaction

Step 3a: for each transaction, keep only its frequent items, sorted in descending order of their frequencies:
TID   items bought                  (ordered) frequent items
100   {f, a, c, d, g, i, m, p}      {f, c, a, m, p}
200   {a, b, c, f, l, m, o}         {f, c, a, b, m}
300   {b, f, h, j, o}               {f, b}
400   {b, c, k, s, p}               {c, b, p}
500   {a, f, c, e, l, p, m, n}      {f, c, a, m, p}

For each transaction, build a path in the FP-tree:
– If a path with a common prefix already exists, increment the frequency of the nodes on this path and append the suffix
– Otherwise, create a new branch
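A minimal R sketch of steps 1, 2 and 3a above (finding the frequent single items and reducing each transaction to its frequent items in descending order of support), using the example database of this slide; the tree-building step 3b itself is not shown.

db <- list(
  c("f","a","c","d","g","i","m","p"),
  c("a","b","c","f","l","m","o"),
  c("b","f","h","j","o"),
  c("b","c","k","s","p"),
  c("a","f","c","e","l","p","m","n")
)

freq <- sort(table(unlist(db)), decreasing = TRUE)   # step 1: item frequencies
freq <- freq[freq >= 3]              # keep frequent items (minSup = 0.5 of 5, i.e. count >= 3)

ordered <- lapply(db, function(t) {  # steps 2 & 3a: filter and reorder each transaction
  keep <- t[t %in% names(freq)]
  keep[order(match(keep, names(freq)))]
})
ordered[[1]]   # e.g. "f" "c" "a" "m" "p" (the order of equally frequent items may differ)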
Construct FP-tree from a Transaction DB (continued)

Step 3b: build the tree. For the example database, the resulting FP-tree has the root {} with two branches:
– f:4 – c:3 – a:3 – m:2 – p:2, with side branches a:3 – b:1 – m:1 and f:4 – b:1
– c:1 – b:1 – p:1

The header table (f 4, c 4, a 3, b 3, m 3, p 3) references the occurrences of each frequent item in the FP-tree.
Benefits of the FP-tree Structure
• Completeness:
  – never breaks a long pattern of any transaction
  – preserves complete information for frequent pattern mining

• Compactness:
  – reduces irrelevant information: infrequent items are gone
  – frequency-descending ordering: more frequent items are more likely to be shared
  – never larger than the original database (if node-links and counts are not counted)
  – experiments demonstrate compression ratios of over 100
Mining Frequent Patterns Using the FP-tree

• General idea (divide-and-conquer):
  – Recursively grow frequent pattern paths using the FP-tree

• Method:
  – For each item, construct its conditional pattern base (prefix paths), and then its conditional FP-tree
  – Repeat the process on each newly created conditional FP-tree ...
  – ... until the resulting FP-tree is empty, or it contains only one path (a single path generates all the combinations of its sub-paths, each of which is a frequent pattern)
Major Steps to Mine FP-tree
1) Construct a conditional pattern base for each node in the FP-tree
2) Construct a conditional FP-tree from each conditional pattern base
3) Recursively mine the conditional FP-trees and grow the frequent patterns obtained so far
   – If a conditional FP-tree contains a single path, simply enumerate all the patterns
Major Steps to Mine FP-tree:
Conditional Pattern Base
1) Construct conditional pattern base for each node in the FP-tree
โ€“ Starting at the frequent header table in the FP-tree
โ€“ Traverse FP-tree by following the link of each frequent item (dashed lines)
โ€“ Accumulate all of transformed prefix paths of that item to form a conditional
pattern base
โ€ข For each item its prefixes are regarded as condition for it being a suffix. These
prefixes form the conditional pattern base. The frequency of the prefixes can be
read in the node of the item.
(The FP-tree and header table constructed above are shown again here; dashed node-links connect each header-table entry to the occurrences of its item in the tree.)
conditional pattern base:
item  cond. pattern base
f     {}
c     f:3, {}
a     fc:3
b     fca:1, f:1, c:1
m     fca:2, fcab:1
p     fcam:2, cb:1
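In the algorithm itself the conditional pattern bases are collected by following the node-links of the header table and walking each occurrence up to the root. Because every ordered transaction corresponds to one root path of the FP-tree, the same bases can also be read off the ordered transactions directly, which the following standalone sketch does for the running example (the helper name conditional_pattern_bases is ours):

```python
from collections import defaultdict

def conditional_pattern_bases(transactions, item_order):
    """Prefix paths (conditional pattern bases) per item, read off the ordered transactions.
    item_order: the frequent items in descending frequency, as in the header table."""
    rank = {i: r for r, i in enumerate(item_order)}
    bases = defaultdict(lambda: defaultdict(int))
    for t in transactions:
        path = sorted((i for i in t if i in rank), key=rank.get)
        for pos, item in enumerate(path):
            prefix = tuple(path[:pos])
            bases[item][prefix] += 1      # the prefix is the condition for this suffix
    return {i: dict(b) for i, b in bases.items()}

db = [list("facdgimp"), list("abcflmo"), list("bfhjo"), list("bcksp"), list("afcelpmn")]
header_order = ["f", "c", "a", "b", "m", "p"]       # header-table order from the slide
cpb = conditional_pattern_bases(db, header_order)
print(cpb["m"])   # {('f','c','a'): 2, ('f','c','a','b'): 1}  ->  fca:2, fcab:1
print(cpb["p"])   # {('f','c','a','m'): 2, ('c','b'): 1}      ->  fcam:2, cb:1
```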
Properties of FP-tree for Conditional
Pattern Bases
โ€ข Node-link property
โ€“ For any frequent item ai, all the possible frequent patterns that contain ai
can be obtained by following ai's node-links, starting from ai's head in the
FP-tree header
โ€ข Prefix path property
โ€“ To calculate the frequent patterns for a node ai in a path P, only the prefix
sub-path of ai in P needs to be accumulated, and its frequency count should
carry the same count as node ai.
Major Steps to Mine FP-tree:
Conditional FP-tree
1) Construct conditional pattern base for each node in the FP-tree โœ”
2) Construct conditional FP-tree from each conditional pattern-base
– The prefix paths of a suffix represent the conditional basis.
→ They can be regarded as the transactions of a (conditional) database.
– The items of the prefix paths whose accumulated support ≥ minSup induce the conditional FP-tree
โ€“ For each pattern-base
โ€ข Accumulate the count for each item in the base
โ€ข Construct the FP-tree for the frequent items of the pattern base
conditional pattern base:
item  cond. pattern base
f     {}
c     f:3
a     fc:3
b     fca:1, f:1, c:1
m     fca:2, fcab:1
p     fcam:2, cb:1

m-conditional FP-tree (item frequencies in m's pattern base: f 3 ✔, c 3 ✔, a 3 ✔, b 1 ✗):
{}|m
  f:3
    c:3
      a:3
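Building a conditional FP-tree from a pattern base amounts to accumulating the item counts in the base, dropping the items below minSup, and re-inserting the reduced prefix paths. A minimal sketch (our own helper name; minSup as an absolute count):

```python
from collections import Counter

def conditional_fptree(pattern_base, min_support):
    """pattern_base: {prefix_tuple: count}. Returns the item counts of the base and
    the base restricted to frequent items, i.e. the paths of the conditional FP-tree."""
    # accumulate the count for each item in the base
    counts = Counter()
    for prefix, cnt in pattern_base.items():
        for item in prefix:
            counts[item] += cnt
    keep = {i for i, c in counts.items() if c >= min_support}
    # drop infrequent items from every prefix path
    tree_paths = Counter()
    for prefix, cnt in pattern_base.items():
        reduced = tuple(i for i in prefix if i in keep)
        if reduced:
            tree_paths[reduced] += cnt
    return counts, tree_paths

# m's conditional pattern base from the slide: fca:2, fcab:1
m_base = {("f", "c", "a"): 2, ("f", "c", "a", "b"): 1}
counts, paths = conditional_fptree(m_base, min_support=3)
print(counts)   # f:3, c:3, a:3, b:1  -> b is dropped
print(paths)    # {('f','c','a'): 3}  -> the single-path m-conditional FP-tree
```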
Major Steps to Mine FP-tree:
Conditional FP-tree
1) Construct conditional pattern base for each node in the FP-tree โœ”
2) Construct conditional FP-tree from each conditional pattern-base
conditional pattern base:
item  cond. pattern base
f     {}
c     f:3
a     fc:3
b     fca:1, f:1, c:1
m     fca:2, fcab:1
p     fcam:2, cb:1

conditional FP-trees:
{}|f = {}
{}|c: f:3
{}|a: f:3 - c:3
{}|b = {}
{}|m: f:3 - c:3 - a:3
{}|p: c:3
Major Steps to Mine FP-tree
1) Construct conditional pattern base for each node in the FP-tree โœ”
2) Construct conditional FP-tree from each conditional pattern-base โœ”
3) Recursively mine conditional FP-trees and grow frequent patterns
obtained so far
โ€“ If the conditional FP-tree contains a single path, simply enumerate all the
patterns (enumerate all combinations of sub-paths)
example: the m-conditional FP-tree is just a single path
{}|m
  f:3
    c:3
      a:3
All frequent patterns concerning m are obtained by enumerating all combinations of its sub-paths:
m, fm, cm, am, fcm, fam, cam, fcam
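For a single-path conditional FP-tree the enumeration step is just the power set of the path joined with the suffix. A short sketch using itertools (the helper name is ours):

```python
from itertools import combinations

def patterns_from_single_path(path, suffix):
    """path: the items on the single path of the conditional FP-tree; suffix: the item(s)
    the tree is conditioned on. Yields every frequent pattern containing the suffix."""
    for r in range(len(path) + 1):
        for combo in combinations(path, r):
            yield tuple(combo) + tuple(suffix)

# the m-conditional FP-tree is the single path f-c-a
for p in patterns_from_single_path(["f", "c", "a"], ["m"]):
    print("".join(p))
# m, fm, cm, am, fcm, fam, cam, fcam
```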
FP-tree: Full Example
database (minSup = 0.4):
TID  items bought   (ordered) frequent items
100  {b, c, f}      {f, b, c}
200  {a, b, c}      {b, c}
300  {d, f}         {f}
400  {b, c, e, f}   {f, b, c}
500  {f, g}         {f}

header table:
item  frequency  head
f     4
b     3
c     3

FP-tree:
{} (root)
  f:4
    b:2
      c:2
  b:1
    c:1

conditional pattern base:
item  cond. pattern base
f     {}
b     f:2, {}
c     fb:2, b:1
FP-tree: Full Example
FP-tree (from the previous slide):
{} (root)
  f:4
    b:2
      c:2
  b:1
    c:1

conditional pattern base 1:
item  cond. pattern base
f     {}
b     f:2
c     fb:2, b:1

conditional FP-trees: {}|f = {},  {}|b: f:2,  {}|c: f:2 - b:2 and a separate branch b:1

conditional pattern base 2 (within the c-conditional tree):
item  cond. pattern base
b     f:2
f     {}

conditional FP-trees: {}|fc = {},  {}|bc: f:2

frequent itemsets found: {{f}}, {{b}, {fb}}, {{fc}}, {{bc}, {fbc}} (together with the frequent 1-itemsets f, b, c)
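Because the example database is tiny, the FP-growth result can be cross-checked by brute force: enumerate every candidate itemset over the frequent items and count its support directly (minSup = 0.4 of 5 transactions, i.e. an absolute count of 2). A sketch:

```python
from itertools import combinations

db = [{"b", "c", "f"}, {"a", "b", "c"}, {"d", "f"}, {"b", "c", "e", "f"}, {"f", "g"}]
min_count = 2                      # minSup = 0.4 of 5 transactions

def support(itemset):
    return sum(1 for t in db if itemset <= t)

items = sorted({i for t in db for i in t if support({i}) >= min_count})   # ['b', 'c', 'f']
frequent = {}
for r in range(1, len(items) + 1):
    for combo in combinations(items, r):
        s = support(set(combo))
        if s >= min_count:
            frequent[combo] = s

print(frequent)
# {('b',): 3, ('c',): 3, ('f',): 4, ('b','c'): 3, ('b','f'): 2, ('c','f'): 2, ('b','c','f'): 2}
```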
Principles of Frequent Pattern
Growth
โ€ข Pattern growth property
– Let α be a frequent itemset in DB, B be α's conditional pattern base, and β be an itemset in B. Then α ∪ β is a frequent itemset in DB iff β is frequent in B.
• "abcdef" is a frequent pattern, if and only if
– "abcde" is a frequent pattern, and
– "f" is frequent in the set of transactions containing "abcde"
(Figure: run time in seconds vs. support threshold in % on data set D1, comparing "D1 FP-growth runtime" with "D1 Apriori runtime".)
Why Is Frequent Pattern Growth
Fast?
โ€ข Performance study in [Han, Pei&Yin โ€™00] shows
โ€“ FP-growth is an order of
magnitude faster than Apriori,
and is also faster than
tree-projection
โ€ข Reasoning
โ€“ No candidate generation, no candidate test
โ€ข Apriori algorithm has to proceed breadth-first
โ€“ Use compact data structure
โ€“ Eliminate repeated database scan
โ€“ Basic operation is counting and FP-tree building
Data set T25I20D10K: T = 25 (avg. length of transactions), I = 20 (avg. length of frequent itemsets), D = 10K (database size, i.e. number of transactions).
Maximal or Closed Frequent Itemsets
โ€ข Big challenge: database contains potentially a huge number of frequent
itemsets (especially if minSup is set too low).
– A frequent itemset of length 100 contains 2^100 − 1 frequent subsets
โ€ข Closed frequent itemset:
An itemset X is closed in a data set D if there exists no proper super-
itemset Y such that ๐‘ ๐‘ข๐‘๐‘๐‘œ๐‘Ÿ๐‘ก(๐‘‹) = ๐‘ ๐‘ข๐‘๐‘๐‘œ๐‘Ÿ๐‘ก(๐‘Œ) in D.
โ€“ The set of closed frequent itemsets contains complete information regarding
its corresponding frequent itemsets.
• Maximal frequent itemset:
A frequent itemset X is maximal in a data set D if there exists no proper super-itemset Y such that support(Y) ≥ minSup in D.
โ€“ The set of maximal itemsets does not contain the complete support
information
โ€“ More compact representation
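Both notions can be checked mechanically once all frequent itemsets and their supports are known. A sketch over a small support table (the counts are those of the full example above; itemsets are represented as frozensets):

```python
def closed_and_maximal(support):
    """support: dict {frozenset: support count} of all frequent itemsets."""
    closed, maximal = [], []
    for X, sX in support.items():
        supersets = [Y for Y in support if X < Y]          # proper frequent supersets of X
        if all(support[Y] != sX for Y in supersets):       # no superset with the same support
            closed.append(X)
        if not supersets:                                  # no frequent proper superset at all
            maximal.append(X)
    return closed, maximal

support = {frozenset("f"): 4, frozenset("b"): 3, frozenset("c"): 3,
           frozenset("bc"): 3, frozenset("bf"): 2, frozenset("cf"): 2,
           frozenset("bcf"): 2}
closed, maximal = closed_and_maximal(support)
print([set(x) for x in closed])    # {f}, {b, c}, {b, c, f}  (every maximal itemset is also closed)
print([set(x) for x in maximal])   # only {b, c, f}
```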
Chapter 3: Frequent Itemset Mining
1) Introduction
โ€“ Transaction databases, market basket data analysis
2) Mining Frequent Itemsets
โ€“ Apriori algorithm, hash trees, FP-tree
3) Simple Association Rules
โ€“ Basic notions, rule generation, interestingness measures
4) Further Topics
โ€“ Hierarchical Association Rules
โ€ข Motivation, notions, algorithms, interestingness
โ€“ Quantitative Association Rules
โ€ข Motivation, basic idea, partitioning numerical attributes, adaptation of
apriori algorithm, interestingness
5) Extensions and Summary
Simple Association Rules:
Introduction
โ€ข Transaction database:
D= {{butter, bread, milk, sugar};
{butter, flour, milk, sugar};
{butter, eggs, milk, salt};
{eggs};
{butter, flour, milk, salt, sugar}}
โ€ข Frequent itemsets:
โ€ข Question of interest:
โ€“ If milk and sugar are bought, will the customer always buy butter as well?
๐‘š๐‘–๐‘™๐‘˜, ๐‘ ๐‘ข๐‘”๐‘Ž๐‘Ÿ โ‡’ ๐‘๐‘ข๐‘ก๐‘ก๐‘’๐‘Ÿ ?
โ€“ In this case, what would be the probability of buying butter?
items                  support
{butter}               4
{milk}                 4
{butter, milk}         4
{sugar}                3
{butter, sugar}        3
{milk, sugar}          3
{butter, milk, sugar}  3
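The question above is answered directly by the support counts: confidence({milk, sugar} ⇒ {butter}) = support({milk, sugar, butter}) / support({milk, sugar}) = 3/3 = 1. A short check over the transaction database:

```python
D = [{"butter", "bread", "milk", "sugar"},
     {"butter", "flour", "milk", "sugar"},
     {"butter", "eggs", "milk", "salt"},
     {"eggs"},
     {"butter", "flour", "milk", "salt", "sugar"}]

support = lambda X: sum(1 for T in D if X <= T)          # absolute support count
conf = support({"milk", "sugar", "butter"}) / support({"milk", "sugar"})
print(conf)   # 1.0 -> whenever milk and sugar are bought, butter is bought as well
```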
Simple Association Rules: Basic
Notions
๏‚ง Items ๐ผ = {๐‘–1, ๐‘–2, โ€ฆ , ๐‘–๐‘š} : a set of literals (denoting items)
โ€ข Itemset ๐‘‹: Set of items ๐‘‹ โŠ† ๐ผ
โ€ข Database ๐ท: Set of transactions ๐‘‡, each transaction is a set of items T โŠ† ๐ผ
โ€ข Transaction ๐‘‡ contains an itemset ๐‘‹: ๐‘‹ โŠ† ๐‘‡
• The items in transactions and itemsets are sorted lexicographically:
– itemset X = (x1, x2, …, xk), where x1 ≤ x2 ≤ … ≤ xk
• Length of an itemset: cardinality of the itemset (k-itemset: itemset of length k)
• The support of an itemset X is defined as: support(X) = |{T ∈ D | X ⊆ T}|
โ€ข Frequent itemset: an itemset X is called frequent iff ๐‘ ๐‘ข๐‘๐‘๐‘œ๐‘Ÿ๐‘ก(๐‘‹) โ‰ฅ ๐‘š๐‘–๐‘›๐‘†๐‘ข๐‘
โ€ข Association rule: An association rule is an implication of the form ๐‘‹ โ‡’ ๐‘Œ
where ๐‘‹, ๐‘Œ โŠ† ๐ผ are two itemsets with ๐‘‹ โˆฉ ๐‘Œ = โˆ….
โ€ข Note: simply enumerating all possible association rules is not reasonable!
→ What are the interesting association rules w.r.t. D?
Interestingness of Association Rules
โ€ข Interestingness of an association rule:
Quantify the interestingness of an association rule with respect to a
transaction database D:
– Support: frequency (probability) of the entire rule with respect to D:
support(X ⇒ Y) = P(X ∪ Y) = |{T ∈ D | X ∪ Y ⊆ T}| / |D| = support(X ∪ Y)
"probability that a transaction in D contains the itemset X ∪ Y"
– Confidence: indicates the strength of the implication in the rule:
confidence(X ⇒ Y) = P(Y | X) = |{T ∈ D | X ∪ Y ⊆ T}| / |{T ∈ D | X ⊆ T}| = support(X ∪ Y) / support(X)
"conditional probability that a transaction in D containing the itemset X also contains itemset Y"
– Rule form: "Body ⇒ Head [support, confidence]"
• Association rule examples:
– buys diapers ⇒ buys beers [0.5%, 60%]
– major in CS ∧ takes DB ⇒ avg. grade A [1%, 75%]
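Both measures translate directly into code. A minimal sketch over a list of transactions, using relative support as in the formulas above:

```python
def support(D, X):
    """Relative support of itemset X in transaction database D."""
    return sum(1 for T in D if X <= T) / len(D)

def confidence(D, X, Y):
    """Confidence of the rule X => Y, i.e. support(X u Y) / support(X)."""
    return support(D, X | Y) / support(D, X)

D = [{"butter", "bread", "milk", "sugar"},
     {"butter", "flour", "milk", "sugar"},
     {"butter", "eggs", "milk", "salt"},
     {"eggs"},
     {"butter", "flour", "milk", "salt", "sugar"}]
print(support(D, {"milk", "sugar"}))                    # 0.6
print(confidence(D, {"milk", "sugar"}, {"butter"}))     # 1.0
```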
Mining of Association Rules
โ€ข Task of mining association rules:
Given a database ๐ท, determine all association rules having a ๐‘ ๐‘ข๐‘๐‘๐‘œ๐‘Ÿ๐‘ก โ‰ฅ
๐‘š๐‘–๐‘›๐‘†๐‘ข๐‘ and a ๐‘๐‘œ๐‘›๐‘“๐‘–๐‘‘๐‘’๐‘›๐‘๐‘’ โ‰ฅ ๐‘š๐‘–๐‘›๐ถ๐‘œ๐‘›๐‘“ (so-called strong association
rules).
โ€ข Key steps of mining association rules:
1) Find frequent itemsets, i.e., itemsets that have at least support = ๐‘š๐‘–๐‘›๐‘†๐‘ข๐‘
2) Use the frequent itemsets to generate association rules
โ€ข For each itemset ๐‘‹ and every nonempty subset Y โŠ‚ ๐‘‹ generate rule Y โ‡’ (๐‘‹ โˆ’
๐‘Œ) if ๐‘š๐‘–๐‘›๐‘†๐‘ข๐‘ and ๐‘š๐‘–๐‘›๐ถ๐‘œ๐‘›๐‘“ are fulfilled
• we have 2^|X| − 2 association rule candidates for each itemset X
โ€ข Example
frequent itemsets
rule candidates: A โ‡’ ๐ต; ๐ต โ‡’ ๐ด; A โ‡’ ๐ถ; ๐ถ โ‡’ A; ๐ต โ‡’ ๐ถ; C โ‡’ ๐ต;
๐ด, ๐ต โ‡’ ๐ถ; ๐ด, ๐ถ โ‡’ ๐ต; ๐ถ, ๐ต โ‡’ ๐ด; ๐ด โ‡’ ๐ต, ๐ถ; ๐ต โ‡’ ๐ด, ๐ถ; ๐ถ โ‡’ ๐ด, ๐ต
1-itemset  count    2-itemset  count    3-itemset    count
{A}        3        {A, B}     3        {A, B, C}    2
{B}        4        {A, C}     2
{C}        5        {B, C}     4
Generating Rules from Frequent
Itemsets
โ€ข For each frequent itemset X
โ€“ For each nonempty subset Y of X, form a rule Y โ‡’ (๐‘‹ โˆ’ ๐‘Œ)
โ€“ Delete those rules that do not have minimum confidence
Note: 1) support always exceeds ๐‘š๐‘–๐‘›๐‘†๐‘ข๐‘
2) the support values of the frequent itemsets suffice to calculate the
confidence
โ€ข Example: ๐‘‹ = {๐ด, ๐ต, ๐ถ}, ๐‘š๐‘–๐‘›๐ถ๐‘œ๐‘›๐‘“ = 60%
– conf (A ⇒ B) = 3/3 ✔
– conf (B ⇒ A) = 3/4 ✔
– conf (A ⇒ C) = 2/3 ✔
– conf (C ⇒ A) = 2/5 ✗
– conf (B ⇒ C) = 4/4 ✔
– conf (C ⇒ B) = 4/5 ✔
– conf (A ⇒ B, C) = 2/3 ✔    conf (B, C ⇒ A) = 2/4 ✗
– conf (B ⇒ A, C) = 2/4 ✗    conf (A, C ⇒ B) = 2/2 ✔
– conf (C ⇒ A, B) = 2/5 ✗    conf (A, B ⇒ C) = 2/3 ✔
โ€ข Exploit anti-monotonicity for generating candidates for strong
association rules!
itemset    count
{A}        3
{B}        4
{C}        5
{A, B}     3
{A, C}     2
{B, C}     4
{A, B, C}  2
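As noted above, the support values of the frequent itemsets suffice to generate the rules. A sketch that reproduces the A/B/C example with minConf = 60% (itemsets are given as frozensets with their counts):

```python
from itertools import combinations

supports = {frozenset("A"): 3, frozenset("B"): 4, frozenset("C"): 5,
            frozenset("AB"): 3, frozenset("AC"): 2, frozenset("BC"): 4,
            frozenset("ABC"): 2}
min_conf = 0.6

rules = []
for X, sX in supports.items():
    if len(X) < 2:
        continue
    for r in range(1, len(X)):                       # every nonempty proper subset Y of X
        for Y in map(frozenset, combinations(X, r)):
            conf = sX / supports[Y]                  # conf(Y => X-Y) = supp(X) / supp(Y)
            if conf >= min_conf:
                rules.append((set(Y), set(X - Y), round(conf, 2)))

for body, head, conf in rules:
    print(body, "=>", head, conf)
# keeps e.g. {A}=>{B} 1.0, {B}=>{A} 0.75, {B}=>{C} 1.0, {A,C}=>{B} 1.0, {A,B}=>{C} 0.67, ...
```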
Interestingness Measurements
โ€ข Objective measures
โ€“ Two popular measurements:
โ€“ support and
โ€“ confidence
โ€ข Subjective measures [Silberschatz & Tuzhilin, KDD95]
โ€“ A rule (pattern) is interesting if it is
โ€“ unexpected (surprising to the user) and/or
โ€“ actionable (the user can do something with it)
Criticism to Support and Confidence
Example 1 [Aggarwal & Yu, PODS98]
โ€ข Among 5000 students
โ€“ 3000 play basketball (=60%)
โ€“ 3750 eat cereal (=75%)
– 2000 both play basketball and eat cereal (=40%)
• The rule play basketball ⇒ eat cereal [40%, 66.7%] is misleading, because the overall percentage of students eating cereal is 75%, which is higher than 66.7%
• The rule play basketball ⇒ not eat cereal [20%, 33.3%] is far more accurate, although it has lower support and confidence
• Observation: play basketball and eat cereal are negatively correlated
➢ Not all strong association rules are interesting and some can be misleading.
→ Augment the support and confidence values with interestingness measures such as the correlation: A ⇒ B [supp, conf, corr]
Other Interestingness Measures:
Correlation
โ€ข Lift is a simple correlation measure between two items A and B:
! The two rules ๐ด โ‡’ ๐ต and ๐ต โ‡’ ๐ด have the same correlation coefficient.
• takes both P(A) and P(B) into consideration
โ€ข ๐‘๐‘œ๐‘Ÿ๐‘Ÿ๐ด,๐ต > 1 the two items A and B are positively correlated
โ€ข ๐‘๐‘œ๐‘Ÿ๐‘Ÿ๐ด,๐ต = 1 there is no correlation between the two items A and B
โ€ข ๐‘๐‘œ๐‘Ÿ๐‘Ÿ๐ด,๐ต < 1 the two items A and B are negatively correlated
corr(A,B) = P(A ∪ B) / (P(A) · P(B)) = P(B | A) / P(B) = conf(A ⇒ B) / supp(B)
Other Interestingness Measures:
Correlation
โ€ข Example 2:
• X and Y: positively correlated
• X and Z: negatively correlated
• The support and confidence of X ⇒ Z dominate,
• but items X and Z are negatively correlated,
• while items X and Y are positively correlated.
transactions (8 in total):
X: 1 1 1 1 0 0 0 0
Y: 1 1 0 0 0 0 0 0
Z: 0 1 1 1 1 1 1 1

rule     support   confidence   correlation
X ⇒ Y    25%       50%          2
X ⇒ Z    37.5%     75%          0.86
Y ⇒ Z    12.5%     50%          0.57
Chapter 3: Frequent Itemset Mining
1) Introduction
โ€“ Transaction databases, market basket data analysis
2) Mining Frequent Itemsets
โ€“ Apriori algorithm, hash trees, FP-tree
3) Simple Association Rules
โ€“ Basic notions, rule generation, interestingness measures
4) Further Topics
โ€“ Hierarchical Association Rules
โ€ข Motivation, notions, algorithms, interestingness
โ€“ Quantitative Association Rules
โ€ข Motivation, basic idea, partitioning numerical attributes, adaptation of
apriori algorithm, interestingness
5) Extensions and Summary
Hierarchical Association Rules:
Motivation
โ€ข Problem of association rules in plain itemsets
– High minSup: Apriori finds only a few rules
– Low minSup: Apriori finds unmanageably many rules
โ€ข Exploit item taxonomies (generalizations, is-a hierarchies) which exist
in many applications
• New task: find all generalized association rules between generalized items → the Body and Head of a rule may contain items of any level of the hierarchy
โ€ข Generalized association rule: ๐‘‹ โ‡’ ๐‘Œ
with ๐‘‹, ๐‘Œ โŠ‚ ๐ผ, ๐‘‹ โˆฉ ๐‘Œ = โˆ… and no item in ๐‘Œ is an ancestor of any item in ๐‘‹
i.e., ๐‘—๐‘Ž๐‘๐‘˜๐‘’๐‘ก๐‘  โ‡’ ๐‘๐‘™๐‘œ๐‘กโ„Ž๐‘’๐‘  is essentially true
(Figure: item taxonomy – clothes generalizes outerwear and shirts, outerwear generalizes jackets and jeans; shoes generalizes sports shoes and boots.)
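A common way to mine generalized rules, following the basic idea of [SA'95], is to extend every transaction by the ancestors of its items and then run an ordinary frequent itemset algorithm on the extended transactions. A sketch of the extension step, with the taxonomy above encoded as a child → parent mapping (all names are ours):

```python
parent = {"jackets": "outerwear", "jeans": "outerwear",
          "outerwear": "clothes", "shirts": "clothes",
          "sports shoes": "shoes", "boots": "shoes"}

def ancestors(item):
    """All ancestors of an item in the is-a hierarchy."""
    result = []
    while item in parent:
        item = parent[item]
        result.append(item)
    return result

def extend(transaction):
    """Add every ancestor of every item; generalized itemsets can then be mined directly."""
    extended = set(transaction)
    for item in transaction:
        extended.update(ancestors(item))
    return extended

print(extend({"jackets", "boots"}))
# {'jackets', 'outerwear', 'clothes', 'boots', 'shoes'}
```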
Hierarchical Association Rules:
Motivating Example
• Examples
jeans ⇒ boots (support < minSup)
jackets ⇒ boots (support < minSup)
outerwear ⇒ boots (support > minSup)
• Characteristics
– support("outerwear ⇒ boots") is not necessarily equal to the sum support("jackets ⇒ boots") + support("jeans ⇒ boots"), e.g. if a transaction with jackets, jeans and boots exists
– support for sets of generalizations (e.g., product groups) is higher than support for sets of individual items: if the support of rule "outerwear ⇒ boots" exceeds minSup, then the support of rule "clothes ⇒ boots" does, too
Mining Multi-Level Associations
โ€ข A top_down, progressive deepening approach:
– First find high-level strong rules:
• milk ⇒ bread [20%, 60%].
– Then find their lower-level "weaker" rules:
• 1.5% milk ⇒ wheat bread [6%, 50%].
• Different min_support thresholds across the levels lead to different algorithms:
โ€“ adopting the same min_support across multi-levels
โ€“ adopting reduced min_support at lower levels
(Figure: item taxonomy with root Food, children milk (sub-categories 3.5% and 1.5%) and bread (sub-categories white and wheat), and brands such as Sunset, Fraser and Wonder at the leaf level.)
Minimum Support for Multiple Levels
โ€ข Uniform Support
+ the search procedure is simplified (monotonicity)
+ the user is required to specify only one support threshold
โ€ข Reduced Support
(Variable Support)
+ takes the lower frequency of items in lower levels into consideration
Example item supports: milk = 10%, 3.5% milk = 6%, 1.5% milk = 4%.
– Uniform support: minSup = 5% at both levels.
– Reduced support (variable support): minSup = 5% at the upper level and minSup = 3% at the lower level.
Multilevel Association Mining using
Reduced Support
โ€ข A top_down, progressive deepening approach:
– First find high-level strong rules:
• milk ⇒ bread [20%, 60%].
– Then find their lower-level "weaker" rules:
• 1.5% milk ⇒ wheat bread [6%, 50%].
3 approaches using reduced Support:
โ€ข Level-by-level independent method:
โ€“ Examine each node in the hierarchy, regardless of whether or not its parent
node is found to be frequent
โ€ข Level-cross-filtering by single item:
โ€“ Examine a node only if its parent node at the preceding level is frequent
• Level-cross-filtering by k-itemset:
โ€“ Examine a k-itemset at a given level only if its parent k-itemset at the
preceding level is frequent
(Figure: the same Food / milk / bread taxonomy as above; the hierarchy is processed level-wise, i.e. breadth-first.)
Multilevel Associations: Variants
โ€ข A top_down, progressive deepening approach:
– First find high-level strong rules:
• milk ⇒ bread [20%, 60%].
– Then find their lower-level "weaker" rules:
• 1.5% milk ⇒ wheat bread [6%, 50%].
• Variations of mining multiple-level association rules:
– Level-crossed association rules:
• 1.5% milk ⇒ Wonder wheat bread
– Association rules with multiple, alternative hierarchies:
• 1.5% milk ⇒ Wonder bread
Multi-level Association: Redundancy
Filtering
โ€ข Some rules may be redundant due to โ€œancestorโ€ relationships between
items.
โ€ข Example
โ€“ ๐‘…1: milk ๏ƒž wheat bread [support = 8%, confidence = 70%]
โ€“ ๐‘…2: 1.5% milk ๏ƒž wheat bread [support = 2%, confidence = 72%]
โ€ข We say that rule 1 is an ancestor of rule 2.
โ€ข Redundancy:
A rule is redundant if its support is close to the โ€œexpectedโ€ value, based
on the ruleโ€™s ancestor.
Interestingness of Hierarchical
Association Rules: Notions
Let ๐‘‹, ๐‘‹โ€ฒ, ๐‘Œ, ๐‘Œโ€ฒ โŠ† ๐ผ be itemsets.
โ€ข An itemset ๐‘‹โ€ฒ is an ancestor of ๐‘‹ iff there exist ancestors ๐‘ฅ1
โ€ฒ
, โ€ฆ , ๐‘ฅ๐‘˜
โ€ฒ
of
๐‘ฅ1, โ€ฆ , ๐‘ฅ๐‘˜ โˆˆ ๐‘‹ and ๐‘ฅ๐‘˜+1, โ€ฆ , ๐‘ฅ๐‘› with ๐‘› = ๐‘‹ such that
๐‘‹โ€ฒ = {๐‘ฅ1
โ€ฒ
, โ€ฆ , ๐‘ฅ๐‘˜
โ€ฒ
, ๐‘ฅ๐‘˜+1, โ€ฆ , ๐‘ฅ๐‘›}.
โ€ข Let ๐‘‹โ€ฒ
and ๐‘Œโ€ฒ be ancestors of ๐‘‹ and ๐‘Œ. Then we call the rules ๐‘‹โ€ฒ ๏ƒž ๐‘Œโ€ฒ,
๐‘‹๏ƒž๐‘Œโ€ฒ, and ๐‘‹โ€ฒ๏ƒž๐‘Œ ancestors of the rule X ๏ƒž Y .
โ€ข The rule Xยด ๏ƒž Yยด is a direct ancestor of rule X ๏ƒž Y in a set of rules if:
โ€“ Rule Xยด ๏ƒž Yโ€˜ is an ancestor of rule X ๏ƒž Y, and
โ€“ There is no rule Xโ€œ ๏ƒž Yโ€œ such that Xโ€œ ๏ƒž Yโ€œ is an ancestor of
X ๏ƒž Y and Xยด ๏ƒž Yยด is an ancestor of Xโ€œ ๏ƒž Yโ€œ
โ€ข A hierarchical association rule X ๏ƒž Y is called R-interesting if:
โ€“ There are no direct ancestors of X ๏ƒž Y or
โ€“ The actual support is larger than R times the expected support or
โ€“ The actual confidence is larger than R times the expected confidence
Expected Support and Expected
Confidence
โ€ข How to compute the expected support?
Given the rule X ⇒ Y and its ancestor rule X′ ⇒ Y′, the expected support of X ⇒ Y is defined as:

E_{Z′}[P(Z)] = (P(z1) / P(z1′)) × ⋯ × (P(zj) / P(zj′)) × P(Z′)

where Z = X ∪ Y = {z1, …, zn}, Z′ = X′ ∪ Y′ = {z1′, …, zj′, zj+1, …, zn}, and each zi′ ∈ Z′ is an ancestor of zi ∈ Z.
[SAโ€™95] R. Srikant, R. Agrawal: Mining Generalized Association Rules. In VLDB, 1995.
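The formula can be evaluated mechanically once the item probabilities and the support of the ancestor rule are known. A sketch with hypothetical numbers (the function simply implements the product above):

```python
def expected_support(p_items, p_ancestor_items, support_ancestor_rule):
    """E_{Z'}[P(Z)] = prod_j P(z_j)/P(z_j') * P(Z'), where only the first j items of Z
    are replaced by ancestors in Z'. p_items / p_ancestor_items hold the probabilities
    of those j items and of their ancestors, respectively."""
    e = support_ancestor_rule
    for p, p_anc in zip(p_items, p_ancestor_items):
        e *= p / p_anc
    return e

# hypothetical example: rule "1.5% milk => wheat bread" with ancestor rule "milk => bread";
# P(1.5% milk) = 0.04, P(milk) = 0.10, bread is not generalized; support(milk => bread) = 0.08
print(expected_support([0.04], [0.10], 0.08))   # 0.032 -> expected support of the specialized rule
```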
Expected Support and Expected
Confidence
โ€ข How to compute the expected confidence?
Given the rule X ⇒ Y and its ancestor rule X′ ⇒ Y′, the expected confidence of X ⇒ Y is defined as:

E_{X′⇒Y′}[P(Y | X)] = (P(y1) / P(y1′)) × ⋯ × (P(yj) / P(yj′)) × P(Y′ | X′)

where Y = {y1, …, yn}, Y′ = {y1′, …, yj′, yj+1, …, yn}, and each yi′ ∈ Y′ is an ancestor of yi ∈ Y.
[SAโ€™95] R. Srikant, R. Agrawal: Mining Generalized Association Rules. In VLDB, 1995.
Interestingness of Hierarchical
Association Rules:Example
• Example (let R = 1.6):

Item       support
clothes    20
outerwear  10
jackets    4

No  rule               support  R-interesting?
1   clothes ⇒ shoes    10       yes: no ancestors
2   outerwear ⇒ shoes  9        yes: support > R × expected support (w.r.t. rule 1) = 1.6 × ((10/20) × 10) = 8
3   jackets ⇒ shoes    4        not w.r.t. support: support > R × expected support (w.r.t. rule 1) = 3.2, but support < R × expected support (w.r.t. rule 2) = 5.75 → still need to check the confidence!
Chapter 3: Frequent Itemset Mining
1) Introduction
โ€“ Transaction databases, market basket data analysis
2) Simple Association Rules
โ€“ Basic notions, rule generation, interestingness measures
3) Mining Frequent Itemsets
โ€“ Apriori algorithm, hash trees, FP-tree
4) Further Topics
โ€“ Hierarchical Association Rules
โ€ข Motivation, notions, algorithms, interestingness
โ€“ Multidimensional and Quantitative Association Rules
โ€ข Motivation, basic idea, partitioning numerical attributes, adaptation of
apriori algorithm, interestingness
5) Summary
Multi-Dimensional Association:
Concepts
• Single-dimensional rules:
– buys milk ⇒ buys bread
• Multi-dimensional rules: ≥ 2 dimensions
– Inter-dimension association rules (no repeated dimensions)
• age between 19-25 ∧ status is student ⇒ buys coke
– Hybrid-dimension association rules (repeated dimensions)
• age between 19-25 ∧ buys popcorn ⇒ buys coke
Techniques for Mining Multi-
Dimensional Associations
โ€ข Search for frequent k-predicate set:
โ€“ Example: {age, occupation, buys} is a 3-predicate set.
โ€“ Techniques can be categorized by how age is treated.
1. Using static discretization of quantitative attributes
โ€“ Quantitative attributes are statically discretized by using predefined concept
hierarchies.
2. Quantitative association rules
– Quantitative attributes are dynamically discretized into "bins" based on the distribution of the data.
3. Distance-based association rules
โ€“ This is a dynamic discretization process that considers the distance between
data points.
Quantitative Association Rules
โ€ข Up to now: associations of boolean attributes only
โ€ข Now: numerical attributes, too
โ€ข Example:
โ€“ Original database
โ€“ Boolean database
Original database:
ID  age  marital status  # cars
1   23   single          0
2   38   married         2

Boolean database:
ID  age: 20..29  age: 30..39  m-status: single  m-status: married  ...
1   1            0            1                 0                  ...
2   0            1            0                 1                  ...
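The conversion from the original to the boolean database is a static discretization followed by one-hot encoding. A sketch without external libraries, using the age bins shown above:

```python
records = [{"ID": 1, "age": 23, "status": "single",  "cars": 0},
           {"ID": 2, "age": 38, "status": "married", "cars": 2}]

age_bins = [("age: 20..29", 20, 29), ("age: 30..39", 30, 39)]
statuses = ["single", "married"]

boolean_db = []
for r in records:
    row = {"ID": r["ID"]}
    for label, lo, hi in age_bins:                 # discretize the numerical attribute
        row[label] = int(lo <= r["age"] <= hi)
    for s in statuses:                             # one boolean column per category
        row[f"m-status: {s}"] = int(r["status"] == s)
    boolean_db.append(row)

for row in boolean_db:
    print(row)
# {'ID': 1, 'age: 20..29': 1, 'age: 30..39': 0, 'm-status: single': 1, 'm-status: married': 0}
# {'ID': 2, 'age: 20..29': 0, 'age: 30..39': 1, 'm-status: single': 0, 'm-status: married': 1}
```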
Quantitative Association Rules: Ideas
โ€ข Static discretization
โ€“ Discretization of all attributes before mining the association rules
โ€“ E.g. by using a generalization hierarchy for each attribute
โ€“ Substitute numerical attribute values by ranges or intervals
โ€ข Dynamic discretization
โ€“ Discretization of the attributes during association rule mining
โ€“ Goal (e.g.): maximization of confidence
โ€“ Unification of neighboring association rules to a generalized rule
Partitioning of Numerical Attributes
โ€ข Problem: Minimum support
– Too many intervals → too small support for each individual interval
– Too few intervals → too small confidence of the rules
• Solution
– First, partition the domain into many intervals
– Afterwards, create new intervals by merging adjacent intervals
โ€ข Numeric attributes are dynamically discretized such that the confidence
or compactness of the rules mined is maximized.
Quantitative Association Rules
• 2-D quantitative association rules: Aquan1 ∧ Aquan2 ⇒ Acat
โ€ข Cluster โ€œadjacentโ€ association
rules to form general rules
using a 2-D grid.
โ€ข Example:
age(X, "30-34") ∧ income(X, "24K - 48K") ⇒ buys(X, "high resolution TV")
DATABASE
SYSTEMS
GROUP
Chapter 3: Frequent Itemset Mining
1) Introduction
โ€“ Transaction databases, market basket data analysis
2) Mining Frequent Itemsets
โ€“ Apriori algorithm, hash trees, FP-tree
3) Simple Association Rules
โ€“ Basic notions, rule generation, interestingness measures
4) Further Topics
โ€“ Hierarchical Association Rules
โ€ข Motivation, notions, algorithms, interestingness
โ€“ Quantitative Association Rules
โ€ข Motivation, basic idea, partitioning numerical attributes, adaptation of
apriori algorithm, interestingness
5) Summary
12 Reference
[1] https://www.jigsawacademy.com/blogs/hr-analytics/data-analytics-lifecycle/
[2] https://statacumen.com/teach/ADA1/ADA1_notes_F14.pdf
[3] https://www.youtube.com/watch?v=fDRa82lxzaU
[4] https://www.investopedia.com/terms/d/data-analytics.asp
[5] http://egyankosh.ac.in/bitstream/123456789/10935/1/Unit-2.pdf
[6] http://epgp.inflibnet.ac.in/epgpdata/uploads/epgp_content/computer_science/16._data_analytics/03._evolution_of_analytical_scalability/et/9280_et_3_et.pdf
[7] https://bhavanakhivsara.files.wordpress.com/2018/06/data-science-and-big-data-analy-nieizv_book.pdf
[8] https://www.researchgate.net/publication/317214679_Sentiment_Analysis_for_Effective_Stock_Market_Prediction
[9] https://snscourseware.org/snscenew/files/1569681518.pdf
[10] http://csis.pace.edu/ctappert/cs816-19fall/books/2015DataScience&BigDataAnalytics.pdf
[11] https://www.youtube.com/watch?v=mccsmoh2_3c
[12] https://mentalmodels4life.net/2015/11/18/agile-data-science-applying-kanban-in-the-analytics-life-cycle/
[13] https://www.sas.com/en_in/insights/big-data/what-is-big-data.html#:~:text=Big%20data%20refers%20to%20data,around%20for%20a%20long%20time.
[14] https://www.javatpoint.com/big-data-characteristics
[15] Liu, S., Wang, M., Zhan, Y., & Shi, J. (2009). Daily work stress and alcohol use: Testing the cross-level moderation effects of neuroticism and job involvement. Personnel Psychology, 62(3), 575–597. http://dx.doi.org/10.1111/j.1744-6570.2009.01149.x
[16] https://www.google.com/search?q=architecture+of+data+stream+model&sxsrf=APwXEdf9LJ8NXMypRU-Sg28SH8m_pwiUDA:1679823244352&source=lnms&tbm=isch&sa=X&ved=2ahUKEwjGgY-epfn9AhX5xTgGHRWjDmMQ_AUoAXoECAEQAw&biw=1366&bih=622#imgrc=wnFWJQ01p-w_jM
[17] Prof. Dr. Thomas Seidl, Frequent Itemset Mining, Knowledge Discovery in Databases, SS 2016.
********************
56

More Related Content

Similar to KIT-601 Lecture Notes-UNIT-4.pdf Frequent Itemsets and Clustering

Volume 2-issue-6-2081-2084
Volume 2-issue-6-2081-2084Volume 2-issue-6-2081-2084
Volume 2-issue-6-2081-2084
Editor IJARCET
ย 
Volume 2-issue-6-2081-2084
Volume 2-issue-6-2081-2084Volume 2-issue-6-2081-2084
Volume 2-issue-6-2081-2084
Editor IJARCET
ย 
Ijcatr04051008
Ijcatr04051008Ijcatr04051008
Ijcatr04051008
Editor IJCATR
ย 
Simulation and Performance Analysis of Long Term Evolution (LTE) Cellular Net...
Simulation and Performance Analysis of Long Term Evolution (LTE) Cellular Net...Simulation and Performance Analysis of Long Term Evolution (LTE) Cellular Net...
Simulation and Performance Analysis of Long Term Evolution (LTE) Cellular Net...
ijsrd.com
ย 
A Brief Overview On Frequent Pattern Mining Algorithms
A Brief Overview On Frequent Pattern Mining AlgorithmsA Brief Overview On Frequent Pattern Mining Algorithms
A Brief Overview On Frequent Pattern Mining Algorithms
Sara Alvarez
ย 
Hadoop Map-Reduce To Generate Frequent Item Set on Large Datasets Using Impro...
Hadoop Map-Reduce To Generate Frequent Item Set on Large Datasets Using Impro...Hadoop Map-Reduce To Generate Frequent Item Set on Large Datasets Using Impro...
Hadoop Map-Reduce To Generate Frequent Item Set on Large Datasets Using Impro...
BRNSSPublicationHubI
ย 
A NOVEL APPROACH TO MINE FREQUENT PATTERNS FROM LARGE VOLUME OF DATASET USING...
A NOVEL APPROACH TO MINE FREQUENT PATTERNS FROM LARGE VOLUME OF DATASET USING...A NOVEL APPROACH TO MINE FREQUENT PATTERNS FROM LARGE VOLUME OF DATASET USING...
A NOVEL APPROACH TO MINE FREQUENT PATTERNS FROM LARGE VOLUME OF DATASET USING...
IAEME Publication
ย 
Association and Correlation analysis.....
Association and Correlation analysis.....Association and Correlation analysis.....
Association and Correlation analysis.....
anjanasharma77573
ย 
Trading outlier detection machine learning approach
Trading outlier detection  machine learning approachTrading outlier detection  machine learning approach
Trading outlier detection machine learning approach
EditorIJAERD
ย 
BINARY DECISION TREE FOR ASSOCIATION RULES MINING IN INCREMENTAL DATABASES
BINARY DECISION TREE FOR ASSOCIATION RULES MINING IN INCREMENTAL DATABASESBINARY DECISION TREE FOR ASSOCIATION RULES MINING IN INCREMENTAL DATABASES
BINARY DECISION TREE FOR ASSOCIATION RULES MINING IN INCREMENTAL DATABASES
IJDKP
ย 
BINARY DECISION TREE FOR ASSOCIATION RULES MINING IN INCREMENTAL DATABASES
BINARY DECISION TREE FOR ASSOCIATION RULES MINING IN INCREMENTAL DATABASES BINARY DECISION TREE FOR ASSOCIATION RULES MINING IN INCREMENTAL DATABASES
BINARY DECISION TREE FOR ASSOCIATION RULES MINING IN INCREMENTAL DATABASES
IJDKP
ย 
F033026029
F033026029F033026029
F033026029
ijceronline
ย 
A comprehensive study of major techniques of multi level frequent pattern min...
A comprehensive study of major techniques of multi level frequent pattern min...A comprehensive study of major techniques of multi level frequent pattern min...
A comprehensive study of major techniques of multi level frequent pattern min...
eSAT Journals
ย 
A comprehensive study of major techniques of multi level frequent pattern min...
A comprehensive study of major techniques of multi level frequent pattern min...A comprehensive study of major techniques of multi level frequent pattern min...
A comprehensive study of major techniques of multi level frequent pattern min...
eSAT Publishing House
ย 
International journal of computer science and innovation vol 2015-n1-paper4
International journal of computer science and innovation  vol 2015-n1-paper4International journal of computer science and innovation  vol 2015-n1-paper4
International journal of computer science and innovation vol 2015-n1-paper4
sophiabelthome
ย 
Output Privacy Protection With Pattern-Based Heuristic Algorithm
Output Privacy Protection With Pattern-Based Heuristic AlgorithmOutput Privacy Protection With Pattern-Based Heuristic Algorithm
Output Privacy Protection With Pattern-Based Heuristic Algorithm
ijcsit
ย 
An improved apriori algorithm for association rules
An improved apriori algorithm for association rulesAn improved apriori algorithm for association rules
An improved apriori algorithm for association rules
ijnlc
ย 
Ijariie1129
Ijariie1129Ijariie1129
Ijariie1129
IJARIIE JOURNAL
ย 
Ijcatr04051004
Ijcatr04051004Ijcatr04051004
Ijcatr04051004
Editor IJCATR
ย 
IMPROVED APRIORI ALGORITHM FOR ASSOCIATION RULES
IMPROVED APRIORI ALGORITHM FOR ASSOCIATION RULESIMPROVED APRIORI ALGORITHM FOR ASSOCIATION RULES
IMPROVED APRIORI ALGORITHM FOR ASSOCIATION RULES
International Journal of Technical Research & Application
ย 

Similar to KIT-601 Lecture Notes-UNIT-4.pdf Frequent Itemsets and Clustering (20)

Volume 2-issue-6-2081-2084
Volume 2-issue-6-2081-2084Volume 2-issue-6-2081-2084
Volume 2-issue-6-2081-2084
ย 
Volume 2-issue-6-2081-2084
Volume 2-issue-6-2081-2084Volume 2-issue-6-2081-2084
Volume 2-issue-6-2081-2084
ย 
Ijcatr04051008
Ijcatr04051008Ijcatr04051008
Ijcatr04051008
ย 
Simulation and Performance Analysis of Long Term Evolution (LTE) Cellular Net...
Simulation and Performance Analysis of Long Term Evolution (LTE) Cellular Net...Simulation and Performance Analysis of Long Term Evolution (LTE) Cellular Net...
Simulation and Performance Analysis of Long Term Evolution (LTE) Cellular Net...
ย 
A Brief Overview On Frequent Pattern Mining Algorithms
A Brief Overview On Frequent Pattern Mining AlgorithmsA Brief Overview On Frequent Pattern Mining Algorithms
A Brief Overview On Frequent Pattern Mining Algorithms
ย 
Hadoop Map-Reduce To Generate Frequent Item Set on Large Datasets Using Impro...
Hadoop Map-Reduce To Generate Frequent Item Set on Large Datasets Using Impro...Hadoop Map-Reduce To Generate Frequent Item Set on Large Datasets Using Impro...
Hadoop Map-Reduce To Generate Frequent Item Set on Large Datasets Using Impro...
ย 
A NOVEL APPROACH TO MINE FREQUENT PATTERNS FROM LARGE VOLUME OF DATASET USING...
A NOVEL APPROACH TO MINE FREQUENT PATTERNS FROM LARGE VOLUME OF DATASET USING...A NOVEL APPROACH TO MINE FREQUENT PATTERNS FROM LARGE VOLUME OF DATASET USING...
A NOVEL APPROACH TO MINE FREQUENT PATTERNS FROM LARGE VOLUME OF DATASET USING...
ย 
Association and Correlation analysis.....
Association and Correlation analysis.....Association and Correlation analysis.....
Association and Correlation analysis.....
ย 
Trading outlier detection machine learning approach
Trading outlier detection  machine learning approachTrading outlier detection  machine learning approach
Trading outlier detection machine learning approach
ย 
BINARY DECISION TREE FOR ASSOCIATION RULES MINING IN INCREMENTAL DATABASES
BINARY DECISION TREE FOR ASSOCIATION RULES MINING IN INCREMENTAL DATABASESBINARY DECISION TREE FOR ASSOCIATION RULES MINING IN INCREMENTAL DATABASES
BINARY DECISION TREE FOR ASSOCIATION RULES MINING IN INCREMENTAL DATABASES
ย 
BINARY DECISION TREE FOR ASSOCIATION RULES MINING IN INCREMENTAL DATABASES
BINARY DECISION TREE FOR ASSOCIATION RULES MINING IN INCREMENTAL DATABASES BINARY DECISION TREE FOR ASSOCIATION RULES MINING IN INCREMENTAL DATABASES
BINARY DECISION TREE FOR ASSOCIATION RULES MINING IN INCREMENTAL DATABASES
ย 
F033026029
F033026029F033026029
F033026029
ย 
A comprehensive study of major techniques of multi level frequent pattern min...
A comprehensive study of major techniques of multi level frequent pattern min...A comprehensive study of major techniques of multi level frequent pattern min...
A comprehensive study of major techniques of multi level frequent pattern min...
ย 
A comprehensive study of major techniques of multi level frequent pattern min...
A comprehensive study of major techniques of multi level frequent pattern min...A comprehensive study of major techniques of multi level frequent pattern min...
A comprehensive study of major techniques of multi level frequent pattern min...
ย 
International journal of computer science and innovation vol 2015-n1-paper4
International journal of computer science and innovation  vol 2015-n1-paper4International journal of computer science and innovation  vol 2015-n1-paper4
International journal of computer science and innovation vol 2015-n1-paper4
ย 
Output Privacy Protection With Pattern-Based Heuristic Algorithm
Output Privacy Protection With Pattern-Based Heuristic AlgorithmOutput Privacy Protection With Pattern-Based Heuristic Algorithm
Output Privacy Protection With Pattern-Based Heuristic Algorithm
ย 
An improved apriori algorithm for association rules
An improved apriori algorithm for association rulesAn improved apriori algorithm for association rules
An improved apriori algorithm for association rules
ย 
Ijariie1129
Ijariie1129Ijariie1129
Ijariie1129
ย 
Ijcatr04051004
Ijcatr04051004Ijcatr04051004
Ijcatr04051004
ย 
IMPROVED APRIORI ALGORITHM FOR ASSOCIATION RULES
IMPROVED APRIORI ALGORITHM FOR ASSOCIATION RULESIMPROVED APRIORI ALGORITHM FOR ASSOCIATION RULES
IMPROVED APRIORI ALGORITHM FOR ASSOCIATION RULES
ย 

More from Dr. Radhey Shyam

KIT-601 Lecture Notes-UNIT-5.pdf Frame Works and Visualization
KIT-601 Lecture Notes-UNIT-5.pdf Frame Works and VisualizationKIT-601 Lecture Notes-UNIT-5.pdf Frame Works and Visualization
KIT-601 Lecture Notes-UNIT-5.pdf Frame Works and Visualization
Dr. Radhey Shyam
ย 
KIT-601 Lecture Notes-UNIT-3.pdf Mining Data Stream
KIT-601 Lecture Notes-UNIT-3.pdf Mining Data StreamKIT-601 Lecture Notes-UNIT-3.pdf Mining Data Stream
KIT-601 Lecture Notes-UNIT-3.pdf Mining Data Stream
Dr. Radhey Shyam
ย 
IT-601 Lecture Notes-UNIT-2.pdf Data Analysis
IT-601 Lecture Notes-UNIT-2.pdf Data AnalysisIT-601 Lecture Notes-UNIT-2.pdf Data Analysis
IT-601 Lecture Notes-UNIT-2.pdf Data Analysis
Dr. Radhey Shyam
ย 
KIT-601-L-UNIT-1 (Revised) Introduction to Data Analytcs.pdf
KIT-601-L-UNIT-1 (Revised) Introduction to Data Analytcs.pdfKIT-601-L-UNIT-1 (Revised) Introduction to Data Analytcs.pdf
KIT-601-L-UNIT-1 (Revised) Introduction to Data Analytcs.pdf
Dr. Radhey Shyam
ย 
SE-UNIT-3-II-Software metrics, numerical and their solutions.pdf
SE-UNIT-3-II-Software metrics, numerical and their solutions.pdfSE-UNIT-3-II-Software metrics, numerical and their solutions.pdf
SE-UNIT-3-II-Software metrics, numerical and their solutions.pdf
Dr. Radhey Shyam
ย 
Introduction to Data Analytics and data analytics life cycle
Introduction to Data Analytics and data analytics life cycleIntroduction to Data Analytics and data analytics life cycle
Introduction to Data Analytics and data analytics life cycle
Dr. Radhey Shyam
ย 
KCS-501-3.pdf
KCS-501-3.pdfKCS-501-3.pdf
KCS-501-3.pdf
Dr. Radhey Shyam
ย 
KIT-601 Lecture Notes-UNIT-2.pdf
KIT-601 Lecture Notes-UNIT-2.pdfKIT-601 Lecture Notes-UNIT-2.pdf
KIT-601 Lecture Notes-UNIT-2.pdf
Dr. Radhey Shyam
ย 
KIT-601 Lecture Notes-UNIT-1.pdf
KIT-601 Lecture Notes-UNIT-1.pdfKIT-601 Lecture Notes-UNIT-1.pdf
KIT-601 Lecture Notes-UNIT-1.pdf
Dr. Radhey Shyam
ย 
KCS-055 U5.pdf
KCS-055 U5.pdfKCS-055 U5.pdf
KCS-055 U5.pdf
Dr. Radhey Shyam
ย 
KCS-055 MLT U4.pdf
KCS-055 MLT U4.pdfKCS-055 MLT U4.pdf
KCS-055 MLT U4.pdf
Dr. Radhey Shyam
ย 
Deep-Learning-2017-Lecture5CNN.pptx
Deep-Learning-2017-Lecture5CNN.pptxDeep-Learning-2017-Lecture5CNN.pptx
Deep-Learning-2017-Lecture5CNN.pptx
Dr. Radhey Shyam
ย 
SE UNIT-3 (Software metrics).pdf
SE UNIT-3 (Software metrics).pdfSE UNIT-3 (Software metrics).pdf
SE UNIT-3 (Software metrics).pdf
Dr. Radhey Shyam
ย 
SE UNIT-2.pdf
SE UNIT-2.pdfSE UNIT-2.pdf
SE UNIT-2.pdf
Dr. Radhey Shyam
ย 
SE UNIT-1 Revised.pdf
SE UNIT-1 Revised.pdfSE UNIT-1 Revised.pdf
SE UNIT-1 Revised.pdf
Dr. Radhey Shyam
ย 
SE UNIT-3.pdf
SE UNIT-3.pdfSE UNIT-3.pdf
SE UNIT-3.pdf
Dr. Radhey Shyam
ย 
Ip unit 5
Ip unit 5Ip unit 5
Ip unit 5
Dr. Radhey Shyam
ย 
Ip unit 4 modified on 22.06.21
Ip unit 4 modified on 22.06.21Ip unit 4 modified on 22.06.21
Ip unit 4 modified on 22.06.21
Dr. Radhey Shyam
ย 
Ip unit 3 modified of 26.06.2021
Ip unit 3 modified of 26.06.2021Ip unit 3 modified of 26.06.2021
Ip unit 3 modified of 26.06.2021
Dr. Radhey Shyam
ย 
Ip unit 2 modified on 8.6.2021
Ip unit 2 modified on 8.6.2021Ip unit 2 modified on 8.6.2021
Ip unit 2 modified on 8.6.2021
Dr. Radhey Shyam
ย 

More from Dr. Radhey Shyam (20)

KIT-601 Lecture Notes-UNIT-5.pdf Frame Works and Visualization
KIT-601 Lecture Notes-UNIT-5.pdf Frame Works and VisualizationKIT-601 Lecture Notes-UNIT-5.pdf Frame Works and Visualization
KIT-601 Lecture Notes-UNIT-5.pdf Frame Works and Visualization
ย 
KIT-601 Lecture Notes-UNIT-3.pdf Mining Data Stream
KIT-601 Lecture Notes-UNIT-3.pdf Mining Data StreamKIT-601 Lecture Notes-UNIT-3.pdf Mining Data Stream
KIT-601 Lecture Notes-UNIT-3.pdf Mining Data Stream
ย 
IT-601 Lecture Notes-UNIT-2.pdf Data Analysis
IT-601 Lecture Notes-UNIT-2.pdf Data AnalysisIT-601 Lecture Notes-UNIT-2.pdf Data Analysis
IT-601 Lecture Notes-UNIT-2.pdf Data Analysis
ย 
KIT-601-L-UNIT-1 (Revised) Introduction to Data Analytcs.pdf
KIT-601-L-UNIT-1 (Revised) Introduction to Data Analytcs.pdfKIT-601-L-UNIT-1 (Revised) Introduction to Data Analytcs.pdf
KIT-601-L-UNIT-1 (Revised) Introduction to Data Analytcs.pdf
ย 
SE-UNIT-3-II-Software metrics, numerical and their solutions.pdf
SE-UNIT-3-II-Software metrics, numerical and their solutions.pdfSE-UNIT-3-II-Software metrics, numerical and their solutions.pdf
SE-UNIT-3-II-Software metrics, numerical and their solutions.pdf
ย 
Introduction to Data Analytics and data analytics life cycle
Introduction to Data Analytics and data analytics life cycleIntroduction to Data Analytics and data analytics life cycle
Introduction to Data Analytics and data analytics life cycle
ย 
KCS-501-3.pdf
KCS-501-3.pdfKCS-501-3.pdf
KCS-501-3.pdf
ย 
KIT-601 Lecture Notes-UNIT-2.pdf
KIT-601 Lecture Notes-UNIT-2.pdfKIT-601 Lecture Notes-UNIT-2.pdf
KIT-601 Lecture Notes-UNIT-2.pdf
ย 
KIT-601 Lecture Notes-UNIT-1.pdf
KIT-601 Lecture Notes-UNIT-1.pdfKIT-601 Lecture Notes-UNIT-1.pdf
KIT-601 Lecture Notes-UNIT-1.pdf
ย 
KCS-055 U5.pdf
KCS-055 U5.pdfKCS-055 U5.pdf
KCS-055 U5.pdf
ย 
KCS-055 MLT U4.pdf
KCS-055 MLT U4.pdfKCS-055 MLT U4.pdf
KCS-055 MLT U4.pdf
ย 
Deep-Learning-2017-Lecture5CNN.pptx
Deep-Learning-2017-Lecture5CNN.pptxDeep-Learning-2017-Lecture5CNN.pptx
Deep-Learning-2017-Lecture5CNN.pptx
ย 
SE UNIT-3 (Software metrics).pdf
SE UNIT-3 (Software metrics).pdfSE UNIT-3 (Software metrics).pdf
SE UNIT-3 (Software metrics).pdf
ย 
SE UNIT-2.pdf
SE UNIT-2.pdfSE UNIT-2.pdf
SE UNIT-2.pdf
ย 
SE UNIT-1 Revised.pdf
SE UNIT-1 Revised.pdfSE UNIT-1 Revised.pdf
SE UNIT-1 Revised.pdf
ย 
SE UNIT-3.pdf
SE UNIT-3.pdfSE UNIT-3.pdf
SE UNIT-3.pdf
ย 
Ip unit 5
Ip unit 5Ip unit 5
Ip unit 5
ย 
Ip unit 4 modified on 22.06.21
Ip unit 4 modified on 22.06.21Ip unit 4 modified on 22.06.21
Ip unit 4 modified on 22.06.21
ย 
Ip unit 3 modified of 26.06.2021
Ip unit 3 modified of 26.06.2021Ip unit 3 modified of 26.06.2021
Ip unit 3 modified of 26.06.2021
ย 
Ip unit 2 modified on 8.6.2021
Ip unit 2 modified on 8.6.2021Ip unit 2 modified on 8.6.2021
Ip unit 2 modified on 8.6.2021
ย 

Recently uploaded

spirit beverages ppt without graphics.pptx
spirit beverages ppt without graphics.pptxspirit beverages ppt without graphics.pptx
spirit beverages ppt without graphics.pptx
Madan Karki
ย 
Mechanical Engineering on AAI Summer Training Report-003.pdf
Mechanical Engineering on AAI Summer Training Report-003.pdfMechanical Engineering on AAI Summer Training Report-003.pdf
Mechanical Engineering on AAI Summer Training Report-003.pdf
21UME003TUSHARDEB
ย 
cnn.pptx Convolutional neural network used for image classication
cnn.pptx Convolutional neural network used for image classicationcnn.pptx Convolutional neural network used for image classication
cnn.pptx Convolutional neural network used for image classication
SakkaravarthiShanmug
ย 
2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf
2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf
2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf
Yasser Mahgoub
ย 
Curve Fitting in Numerical Methods Regression
Curve Fitting in Numerical Methods RegressionCurve Fitting in Numerical Methods Regression
Curve Fitting in Numerical Methods Regression
Nada Hikmah
ย 
Advanced control scheme of doubly fed induction generator for wind turbine us...
Advanced control scheme of doubly fed induction generator for wind turbine us...Advanced control scheme of doubly fed induction generator for wind turbine us...
Advanced control scheme of doubly fed induction generator for wind turbine us...
IJECEIAES
ย 
ISPM 15 Heat Treated Wood Stamps and why your shipping must have one
ISPM 15 Heat Treated Wood Stamps and why your shipping must have oneISPM 15 Heat Treated Wood Stamps and why your shipping must have one
ISPM 15 Heat Treated Wood Stamps and why your shipping must have one
Las Vegas Warehouse
ย 
Data Driven Maintenance | UReason Webinar
Data Driven Maintenance | UReason WebinarData Driven Maintenance | UReason Webinar
Data Driven Maintenance | UReason Webinar
UReason
ย 
LLM Fine Tuning with QLoRA Cassandra Lunch 4, presented by Anant
LLM Fine Tuning with QLoRA Cassandra Lunch 4, presented by AnantLLM Fine Tuning with QLoRA Cassandra Lunch 4, presented by Anant
LLM Fine Tuning with QLoRA Cassandra Lunch 4, presented by Anant
Anant Corporation
ย 
Manufacturing Process of molasses based distillery ppt.pptx
Manufacturing Process of molasses based distillery ppt.pptxManufacturing Process of molasses based distillery ppt.pptx
Manufacturing Process of molasses based distillery ppt.pptx
Madan Karki
ย 
CompEx~Manual~1210 (2).pdf COMPEX GAS AND VAPOURS
CompEx~Manual~1210 (2).pdf COMPEX GAS AND VAPOURSCompEx~Manual~1210 (2).pdf COMPEX GAS AND VAPOURS
CompEx~Manual~1210 (2).pdf COMPEX GAS AND VAPOURS
RamonNovais6
ย 
CEC 352 - SATELLITE COMMUNICATION UNIT 1
CEC 352 - SATELLITE COMMUNICATION UNIT 1CEC 352 - SATELLITE COMMUNICATION UNIT 1
CEC 352 - SATELLITE COMMUNICATION UNIT 1
PKavitha10
ย 
Certificates - Mahmoud Mohamed Moursi Ahmed
Certificates - Mahmoud Mohamed Moursi AhmedCertificates - Mahmoud Mohamed Moursi Ahmed
Certificates - Mahmoud Mohamed Moursi Ahmed
Mahmoud Morsy
ย 
Transformers design and coooling methods
Transformers design and coooling methodsTransformers design and coooling methods
Transformers design and coooling methods
Roger Rozario
ย 
Embedded machine learning-based road conditions and driving behavior monitoring
Embedded machine learning-based road conditions and driving behavior monitoringEmbedded machine learning-based road conditions and driving behavior monitoring
Embedded machine learning-based road conditions and driving behavior monitoring
IJECEIAES
ย 
Properties Railway Sleepers and Test.pptx
Properties Railway Sleepers and Test.pptxProperties Railway Sleepers and Test.pptx
Properties Railway Sleepers and Test.pptx
MDSABBIROJJAMANPAYEL
ย 
22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt
KrishnaveniKrishnara1
ย 
CHINAโ€™S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT
CHINAโ€™S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTCHINAโ€™S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT
CHINAโ€™S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT
jpsjournal1
ย 
Welding Metallurgy Ferrous Materials.pdf
Welding Metallurgy Ferrous Materials.pdfWelding Metallurgy Ferrous Materials.pdf
Welding Metallurgy Ferrous Materials.pdf
AjmalKhan50578
ย 
artificial intelligence and data science contents.pptx
artificial intelligence and data science contents.pptxartificial intelligence and data science contents.pptx
artificial intelligence and data science contents.pptx
GauravCar
ย 

Recently uploaded (20)

spirit beverages ppt without graphics.pptx
spirit beverages ppt without graphics.pptxspirit beverages ppt without graphics.pptx
spirit beverages ppt without graphics.pptx
ย 
Mechanical Engineering on AAI Summer Training Report-003.pdf
Mechanical Engineering on AAI Summer Training Report-003.pdfMechanical Engineering on AAI Summer Training Report-003.pdf
Mechanical Engineering on AAI Summer Training Report-003.pdf
ย 
cnn.pptx Convolutional neural network used for image classication
cnn.pptx Convolutional neural network used for image classicationcnn.pptx Convolutional neural network used for image classication
cnn.pptx Convolutional neural network used for image classication
ย 
2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf
2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf
2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf
ย 
Curve Fitting in Numerical Methods Regression
Curve Fitting in Numerical Methods RegressionCurve Fitting in Numerical Methods Regression
Curve Fitting in Numerical Methods Regression
ย 
Advanced control scheme of doubly fed induction generator for wind turbine us...
Advanced control scheme of doubly fed induction generator for wind turbine us...Advanced control scheme of doubly fed induction generator for wind turbine us...
Advanced control scheme of doubly fed induction generator for wind turbine us...
ย 
ISPM 15 Heat Treated Wood Stamps and why your shipping must have one
ISPM 15 Heat Treated Wood Stamps and why your shipping must have oneISPM 15 Heat Treated Wood Stamps and why your shipping must have one
ISPM 15 Heat Treated Wood Stamps and why your shipping must have one
ย 
Data Driven Maintenance | UReason Webinar
Data Driven Maintenance | UReason WebinarData Driven Maintenance | UReason Webinar
Data Driven Maintenance | UReason Webinar
ย 
LLM Fine Tuning with QLoRA Cassandra Lunch 4, presented by Anant
LLM Fine Tuning with QLoRA Cassandra Lunch 4, presented by AnantLLM Fine Tuning with QLoRA Cassandra Lunch 4, presented by Anant
LLM Fine Tuning with QLoRA Cassandra Lunch 4, presented by Anant
ย 
Manufacturing Process of molasses based distillery ppt.pptx
Manufacturing Process of molasses based distillery ppt.pptxManufacturing Process of molasses based distillery ppt.pptx
Manufacturing Process of molasses based distillery ppt.pptx
ย 
CompEx~Manual~1210 (2).pdf COMPEX GAS AND VAPOURS
CompEx~Manual~1210 (2).pdf COMPEX GAS AND VAPOURSCompEx~Manual~1210 (2).pdf COMPEX GAS AND VAPOURS
CompEx~Manual~1210 (2).pdf COMPEX GAS AND VAPOURS
ย 
CEC 352 - SATELLITE COMMUNICATION UNIT 1
CEC 352 - SATELLITE COMMUNICATION UNIT 1CEC 352 - SATELLITE COMMUNICATION UNIT 1
CEC 352 - SATELLITE COMMUNICATION UNIT 1
ย 
Certificates - Mahmoud Mohamed Moursi Ahmed
Certificates - Mahmoud Mohamed Moursi AhmedCertificates - Mahmoud Mohamed Moursi Ahmed
Certificates - Mahmoud Mohamed Moursi Ahmed
ย 
Transformers design and coooling methods
Transformers design and coooling methodsTransformers design and coooling methods
Transformers design and coooling methods
ย 
Embedded machine learning-based road conditions and driving behavior monitoring
Embedded machine learning-based road conditions and driving behavior monitoringEmbedded machine learning-based road conditions and driving behavior monitoring
Embedded machine learning-based road conditions and driving behavior monitoring
ย 
Properties Railway Sleepers and Test.pptx
Properties Railway Sleepers and Test.pptxProperties Railway Sleepers and Test.pptx
Properties Railway Sleepers and Test.pptx
ย 
22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt
ย 
CHINAโ€™S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT
CHINAโ€™S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTCHINAโ€™S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT
CHINAโ€™S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT
ย 
Welding Metallurgy Ferrous Materials.pdf
Welding Metallurgy Ferrous Materials.pdfWelding Metallurgy Ferrous Materials.pdf
Welding Metallurgy Ferrous Materials.pdf
ย 
artificial intelligence and data science contents.pptx
artificial intelligence and data science contents.pptxartificial intelligence and data science contents.pptx
artificial intelligence and data science contents.pptx
ย 

KIT-601 Lecture Notes-UNIT-4.pdf Frequent Itemsets and Clustering

  • 1. A p r i l 1 2 , 2 0 2 4 / D r . R S Data Analytics (KIT-601) Unit-4: Frequent Itemsets and Clustering Dr. Radhey Shyam Professor Department of Information Technology SRMCEM Lucknow (Affiliated to Dr. A.P.J. Abdul Kalam Technical University, Lucknow) Unit-4 has been prepared and compiled by Dr. Radhey Shyam, with grateful acknowledgment to those who made their course contents freely available or (Contributed directly or indirectly). Feel free to use this study material for your own academic purposes. For any query, communication can be made through this email : shyam0058@gmail.com. April 12, 2024
  • 2. Data Analytics (KIT 601) Course Outcome ( CO) Bloomโ€™s Knowledge Level (KL) At the end of course , the student will be able to CO 1 Discuss various concepts of data analytics pipeline K1, K2 CO 2 Apply classification and regression techniques K3 CO 3 Explain and apply mining techniques on streaming data K2, K3 CO 4 Compare different clustering and frequent pattern mining algorithms K4 CO 5 Describe the concept of R programming and implement analytics on Big data using R. K2,K3 DETAILED SYLLABUS 3-0-0 Unit Topic Proposed Lecture I Introduction to Data Analytics: Sources and nature of data, classification of data (structured, semi-structured, unstructured), characteristics of data, introduction to Big Data platform, need of data analytics, evolution of analytic scalability, analytic process and tools, analysis vs reporting, modern data analytic tools, applications of data analytics. Data Analytics Lifecycle: Need, key roles for successful analytic projects, various phases of data analytics lifecycle โ€“ discovery, data preparation, model planning, model building, communicating results, operationalization. 08 II Data Analysis: Regression modeling, multivariate analysis, Bayesian modeling, inference and Bayesian networks, support vector and kernel methods, analysis of time series: linear systems analysis & nonlinear dynamics, rule induction, neural networks: learning and generalisation, competitive learning, principal component analysis and neural networks, fuzzy logic: extracting fuzzy models from data, fuzzy decision trees, stochastic search methods. 08 III Mining Data Streams: Introduction to streams concepts, stream data model and architecture, stream computing, sampling data in a stream, filtering streams, counting distinct elements in a stream, estimating moments, counting oneness in a window, decaying window, Real-time Analytics Platform ( RTAP) applications, Case studies โ€“ real time sentiment analysis, stock market predictions. 08 IV Frequent Itemsets and Clustering: Mining frequent itemsets, market based modelling, Apriori algorithm, handling large data sets in main memory, limited pass algorithm, counting frequent itemsets in a stream, clustering techniques: hierarchical, K-means, clustering high dimensional data, CLIQUE and ProCLUS, frequent pattern based clustering methods, clustering in non-euclidean space, clustering for streams and parallelism. 08 V Frame Works and Visualization: MapReduce, Hadoop, Pig, Hive, HBase, MapR, Sharding, NoSQL Databases, S3, Hadoop Distributed File Systems, Visualization: visual data analysis techniques, interaction techniques, systems and applications. Introduction to R - R graphical user interfaces, data import and export, attribute and data types, descriptive statistics, exploratory data analysis, visualization before analysis, analytics for unstructured data. 08 Text books and References: 1. Michael Berthold, David J. Hand, Intelligent Data Analysis, Springer 2. Anand Rajaraman and Jeffrey David Ullman, Mining of Massive Datasets, Cambridge University Press. 3. John Garrett,Data Analytics for IT Networks : Developing Innovative Use Cases, Pearson Education Curriculum & Evaluation Scheme IT & CSI (V & VI semester) 23 A p r i l 1 2 , 2 0 2 4 / D r . R S
  • 3. Unit-IV: Frequent Itemsets and Clustering
1 Mining Frequent Itemsets
Frequent itemset mining is a popular data mining task that involves identifying sets of items that frequently co-occur in a given dataset. In other words, it involves finding the items that occur together frequently and then grouping them into sets of items. One way to approach this problem is with the Apriori algorithm, one of the most widely used algorithms for frequent itemset mining.
The Apriori algorithm works by iteratively generating candidate itemsets and then checking their frequency against a minimum support threshold. The algorithm starts by generating all possible itemsets of size 1 and counting their frequencies in the dataset. The itemsets that meet the minimum support threshold are selected as frequent itemsets. The algorithm then generates candidate itemsets of size 2 from the frequent itemsets of size 1 and counts their frequencies. This process is repeated until no more frequent itemsets can be generated. However, when dealing with large datasets, this approach can become computationally expensive because of the potentially large number of candidate itemsets that must be generated and counted.
Point-wise frequent itemset mining is a more efficient alternative that can reduce the computational cost of the Apriori algorithm by exploiting the sparsity of the dataset. It works by iterating over the transactions in the dataset and identifying the itemsets that occur in each transaction. For each transaction, the algorithm builds a bitmap vector in which each bit corresponds to an item in the dataset; the bit is set to 1 if the item occurs in the transaction and 0 otherwise. To count the support of a candidate itemset, the algorithm takes the bitwise AND of the itemset's bit mask with each transaction's bitmap: if all of the itemset's bits survive, the itemset occurs in that transaction. The number of transactions in which an itemset occurs is its support, and the itemsets that meet the minimum support threshold are selected as frequent itemsets.
The advantage of point-wise frequent itemset mining is that these bitwise operations are very fast and that it avoids generating candidate itemsets that are not present in the dataset, thereby reducing the number of itemsets that need to be generated and counted. Additionally, point-wise frequent itemset mining can be parallelized, making it suitable for mining large datasets on distributed systems.
In summary, point-wise frequent itemset mining is an efficient alternative to the Apriori algorithm for
  • 4. A p r i l 1 2 , 2 0 2 4 / D r . R S frequent itemset mining. It works by iterating over the transactions in the dataset and identifying the itemsets that occur in each transaction, thereby avoiding the generation of candidate itemsets that are not present in the dataset. 2 Market Based Modelling Market-based modeling is a technique used in economics and business to analyze and simulate the behavior of markets, particularly in relation to the supply and demand of goods and services. This modeling technique involves creating mathematical models that can simulate how different market participants (consumers, producers, and intermediaries) interact with each other in a market setting. One of the most common market-based models is the supply and demand model, which assumes that the price of a good or service is determined by the balance between its supply and demand. In this model, the price of a good or service will rise if the demand for it exceeds its supply, and will fall if the supply exceeds the demand. Another popular market-based model is the game theory model, which is used to analyze how different participants in a market interact with each other. Game theory models assume that market participants are rational and act in their own self-interest, and seek to identify the strategies that each participant is likely to adopt in a given situation. Market-based models can be used to analyze a wide range of economic phenomena, from the pricing of individual goods and services to the behavior of entire industries and markets. They can also be used to test the potential impact of various policies and interventions on the behavior of markets and market participants. Overall, market-based modeling is a powerful tool for understanding and predicting the behavior of markets and the economy as a whole. By creating mathematical models that simulate the behavior of market participants and the interactions between them, economists and business analysts can gain valuable insights into the workings of markets, and develop strategies for managing and optimizing their performance. 3 Apriori Algorithm The Apriori algorithm is a popular algorithm used in data mining and machine learning to discover frequent itemsets in large transactional datasets. It was proposed by Agrawal and Srikant in 1994 and is widely used 4
  • 5. in association rule mining, market basket analysis, and other data mining applications.
The Apriori algorithm uses a bottom-up approach to generate all frequent itemsets by first identifying frequent individual items and then using those items to generate larger itemsets. The algorithm works by performing the following steps:
• First, the algorithm scans the entire dataset to identify all individual items and their frequency of occurrence. This information is used to generate the initial set of frequent itemsets.
• Next, the algorithm uses a level-wise search strategy to generate larger itemsets by combining frequent itemsets from the previous level. The algorithm starts with two-itemsets and then progressively generates larger itemsets until no more frequent itemsets can be found.
• At each level, the algorithm prunes the search space by eliminating itemsets that cannot be frequent based on the minimum support threshold. This is done using the Apriori principle, which states that any subset of a frequent itemset must also be frequent.
The algorithm terminates when no more frequent itemsets can be generated or when the maximum itemset size is reached.
Once all frequent itemsets have been identified, the Apriori algorithm can be used to generate association rules that describe the relationships between different items in the dataset. An association rule is a statement of the form X ⇒ Y, where X and Y are disjoint itemsets whose union is a frequent itemset. The rule indicates that there is a strong relationship between the items in X and the items in Y. The strength of an association rule is measured using two metrics: support and confidence. Support is the percentage of transactions in the dataset that contain both X and Y, while confidence is the percentage of transactions that contain Y given that they also contain X.
Overall, the Apriori algorithm is a powerful tool for discovering frequent itemsets and association rules in large datasets. By identifying patterns and relationships between different items in the dataset, it can be used to gain valuable insights into consumer behavior, market trends, and other important business and economic phenomena.
4 Handling Large Datasets in Main Memory
Handling large datasets in main memory can be a challenging task, as the amount of memory available on most computer systems is often limited. However, there are several techniques and strategies that can be
  • 6. A p r i l 1 2 , 2 0 2 4 / D r . R S used to effectively manage and analyze large datasets in main memory: ห† Use data compression: Data compression techniques can be used to reduce the amount of memory required to store a dataset. Techniques such as gzip or bzip2 can compress text data, while binary data can be compressed using libraries like LZ4 or Snappy. ห† Use data partitioning: Large datasets can be partitioned into smaller, more manageable subsets, which can be processed and analyzed in main memory. This can be done using techniques such as horizontal partitioning, vertical partitioning, or hybrid partitioning. ห† Use data sampling: Data sampling can be used to select a representative subset of data for analysis, without requiring the entire dataset to be loaded into memory. Random sampling, stratified sampling, and cluster sampling are some of the commonly used sampling techniques. ห† Use in-memory databases: In-memory databases can be used to store large datasets in main memory for faster querying and analysis. Examples of in-memory databases include Apache Ignite, SAP HANA, and VoltDB. ห† Use parallel processing: Parallel processing techniques can be used to distribute the processing of large datasets across multiple processors or cores. This can be done using libraries like Apache Spark, which provides distributed data processing capabilities. ห† Use data streaming: Data streaming techniques can be used to process large datasets in real-time by processing data as it is generated, rather than storing it in memory. Apache Kafka, Apache Flink, and Apache Storm are some of the popular data streaming platforms. Overall, effective management of large datasets in main memory requires a combination of data compres- sion, partitioning, sampling, in-memory databases, parallel processing, and data streaming techniques. By leveraging these techniques, it is possible to effectively analyze and process large datasets in main memory, without requiring expensive hardware upgrades or specialized software tools. 5 Limited Pass Algorithm A limited pass algorithm is a technique used in data processing and analysis to efficiently process large datasets with limited memory resources. 6
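The partitioning and streaming techniques above, like the limited pass idea just introduced, often come down to processing the data in manageable chunks. The following is a minimal base-R sketch of that style of processing; the file name and chunk size are illustrative assumptions, not part of the original notes.

# Minimal sketch: process a large file chunk by chunk so that only one
# chunk is ever held in main memory at a time.
con <- file("big_transactions.txt", open = "r")
total_lines <- 0
repeat {
  chunk <- readLines(con, n = 100000)   # read up to 100,000 lines per pass
  if (length(chunk) == 0) break         # stop at end of file
  total_lines <- total_lines + length(chunk)
  # ... summarise or aggregate the chunk here; it is discarded on the next pass
}
close(con)
total_lines                             # the running summary survives, the raw data does not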
  • 7. A p r i l 1 2 , 2 0 2 4 / D r . R S In a limited pass algorithm, the dataset is processed in a fixed number of passes or iterations, where each pass involves processing a subset of the data. The algorithm ensures that each pass is designed to capture the relevant information needed for the analysis, while minimizing the memory required to store the data. For example, a limited pass algorithm for processing a large text file could involve reading the file in chunks or sections, processing each section in memory, and then discarding the processed data before moving onto the next section. This approach enables the algorithm to handle large datasets that cannot be loaded entirely into memory. Limited pass algorithms are often used in situations where the data cannot be stored in main memory, or when the processing of the data requires significant computational resources. Examples of applications that use limited pass algorithms include text processing, machine learning, and data mining. While limited pass algorithms can be useful for processing large datasets with limited memory resources, they can also be less efficient than algorithms that can process the entire dataset in a single pass. Therefore, it is important to carefully design the algorithm to ensure that it can capture the relevant information needed for the analysis, while minimizing the number of passes required to process the data. 6 Counting Frequent Itemsets in a Stream Counting frequent itemsets in a stream is a problem of finding the most frequent itemsets in a continuous stream of transactions. This problem is commonly known as the Frequent Itemset Mining problem. Here are the steps involved in counting frequent itemsets in a stream: 1. Initialize a hash table to store the counts of each itemset. The size of the hash table should be limited to prevent it from becoming too large. 2. Read each transaction in the stream one at a time. 3. Generate all the possible itemsets from the transaction. This can be done using the Apriori algorithm, which generates candidate itemsets by combining smaller frequent itemsets. 4. Increment the count of each itemset in the hash table. 5. Prune infrequent itemsets from the hash table. An itemset is infrequent if its count is less than a predefined threshold. 6. Repeat steps 2-5 for each transaction in the stream. 7
  • 8. A p r i l 1 2 , 2 0 2 4 / D r . R S 7. Output the frequent itemsets that remain in the hash table after processing all the transactions. The main challenge in counting frequent itemsets in a stream is to keep track of the changing frequencies of the itemsets as new transactions arrive. This can be done efficiently using the hash table to store the counts of the itemsets. However, the hash table can become too large if the number of distinct itemsets is too large. To prevent this, the hash table can be limited in size by using a hash function that maps each itemset to a fixed number of hash buckets. The size of the hash table can be adjusted dynamically based on the number of items and transactions in the stream. Another challenge in counting frequent itemsets in a stream is to choose the threshold for the minimum count of an itemset to be considered frequent. The threshold should be set high enough to exclude infrequent itemsets, but low enough to include all the important frequent itemsets. The threshold can be determined using heuristics or by using machine learning techniques to learn the optimal threshold from the data. 7 Clustering Techniques Clustering techniques are used to group similar data points together in a dataset based on their similarity or distance measures. Here are some popular clustering techniques: 7.1 K-Means Clustering: This is a popular clustering algorithm that partitions a dataset into K clusters based on the mean dis- tance of the data points to their assigned cluster centers. It involves an iterative process of assigning data points to clusters and updating the cluster centers until convergence. K-Means is commonly used in image segmentation, marketing, and customer segmentation. 7.1.1 K-means Clustering algorithm K-Means clustering is a popular unsupervised machine learning algorithm that partitions a dataset into k clusters, where k is a pre-defined number of clusters. The algorithm works as follows: ห† Initialize the k cluster centroids randomly. ห† Assign each data point to the nearest cluster centroid based on its distance. ห† Calculate the new cluster centroids based on the mean of all data points assigned to that cluster. 8
  • 9. • Repeat the assignment and update steps (steps 2–3) until the cluster centroids no longer change significantly, or a maximum number of iterations is reached.
Some practical notes on the algorithm:
• The distance metric used in the assignment step is typically the Euclidean distance, but other distance metrics can be used as well.
• The K-Means algorithm aims to minimize the sum of squared distances between each data point and its assigned cluster centroid. This objective function is known as the within-cluster sum of squares (WCSS) or the sum of squared errors (SSE).
• To determine the optimal number of clusters, a common approach is to use the elbow method. This involves plotting the WCSS or SSE against the number of clusters and selecting the number of clusters at the "elbow" point, where the rate of decrease in WCSS or SSE begins to level off.
K-Means is a computationally efficient algorithm that can scale to large datasets. It is particularly useful when the data is high-dimensional and traditional clustering algorithms may be too slow. However, K-Means requires the number of clusters to be pre-defined and may converge to a suboptimal solution if the initial cluster centroids are not well chosen. It is also sensitive to non-linear data and may not work well with such data. A short base-R sketch of K-Means and the elbow method follows below; after it, the main advantages and disadvantages are listed:
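This sketch uses the built-in kmeans() function on simulated 2-D data; the data, the number of clusters and the parameter values are illustrative assumptions only.

# Minimal base-R sketch: K-Means on simulated 2-D data, plus an elbow plot
# of the within-cluster sum of squares (WCSS) for k = 1..6.
set.seed(42)
x <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),
           matrix(rnorm(100, mean = 4), ncol = 2))   # two well-separated groups

fit <- kmeans(x, centers = 2, nstart = 10)   # nstart: several random restarts
fit$centers                                  # estimated cluster centroids
fit$cluster                                  # cluster assignment of each point
fit$tot.withinss                             # WCSS / SSE of this solution

wcss <- sapply(1:6, function(k) kmeans(x, centers = k, nstart = 10)$tot.withinss)
plot(1:6, wcss, type = "b", xlab = "number of clusters k", ylab = "WCSS")  # look for the elbow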
  • 10. A p r i l 1 2 , 2 0 2 4 / D r . R S Advantages: ห† Simple and easy to understand: K-Means is easy to understand and implement, making it a popular choice for clustering tasks. ห† Fast and scalable: K-Means is a computation- ally efficient algorithm that can scale to large datasets. It is particularly useful when the data is high-dimensional and traditional clustering algorithms may be too slow. ห† Works well with circular or spherical clusters: K-Means works well with circular or spherical clusters, making it suitable for datasets that ex- hibit these types of shapes. ห† Provides a clear and interpretable result: K- Means provides a clear and interpretable clus- tering result, where each data point is assigned to one of the k clusters. Disadvantages: ห† Requires pre-defined number of clusters: K- Means requires the number of clusters to be pre-defined, which can be a challenge when the number of clusters is unknown or difficult to determine. ห† Sensitive to initial cluster centers: K-Means is sensitive to the initial placement of cluster cen- ters and can converge to a suboptimal solution if the initial centers are not well chosen. ห† Can converge to a local minimum: K-Means can converge to a local minimum rather than the global minimum, resulting in a suboptimal clustering solution. ห† Not suitable for non-linear data: K-Means as- sumes that the data is linearly separable and may not work well with non-linear data. In summary, K-Means is a simple and fast clustering algorithm that works well with circular or spherical clusters. However, it requires the number of clusters to be pre-defined and may converge to a suboptimal solution if the initial cluster centers are not well chosen. It is also sensitive to non-linear data and may not work well with such data. 7.2 Hierarchical Clustering: This technique builds a hierarchy of clusters by recursively dividing or merging clusters based on their similarity. It can be agglomerative (bottom-up) or divisive (top-down). In agglomerative clustering, each data point starts in its own cluster, and then pairs of clusters are successively merged until all data points belong to a single cluster. Divisive clustering starts with all data points in a single cluster and recursively divides them into smaller clusters. Hierarchical clustering is useful in gene expression analysis, social network 10
  • 11. A p r i l 1 2 , 2 0 2 4 / D r . R S analysis, and image analysis. 7.3 Density-based Clustering: This technique identifies clusters based on the density of data points. It assumes that clusters are areas of higher density separated by areas of lower density. Density-based clustering algorithms, such as DBSCAN (Density-Based Spatial Clustering of Applications with Noise), group together data points that are closely packed together and separate outliers. Density-based clustering is commonly used in image processing, geospatial data analysis, and anomaly detection. 7.4 Gaussian Mixture Models: This technique models the distribution of data points using a mixture of Gaussian probability distributions. Each component of the mixture represents a cluster, and the algorithm estimates the parameters of the mixture using the Expectation-Maximization algorithm. Gaussian Mixture Models are commonly used in image segmentation, handwriting recognition, and speech recognition. 7.5 Spectral Clustering: This technique converts the data points into a graph and then partitions the graph into clusters based on the eigenvalues and eigenvectors of the graph Laplacian matrix. Spectral clustering is useful in image segmentation, community detection in social networks, and document clustering. Each clustering technique has its own strengths and weaknesses, and the choice of clustering algorithm depends on the nature of the data, the clustering objective, and the computational resources available. 8 Clustering high-dimensional data Clustering high-dimensional data is a challenging task because the distance or similarity measures used in most clustering algorithms become less meaningful in high-dimensional space. Here are some techniques for clustering high-dimensional data: 11
  • 12. A p r i l 1 2 , 2 0 2 4 / D r . R S 8.1 Dimensionality Reduction: High-dimensional data can be transformed into a lower-dimensional space using dimensionality reduction techniques, such as Principal Component Analysis (PCA) or t-SNE (t-distributed Stochastic Neighbor Em- bedding). Dimensionality reduction can help to reduce the curse of dimensionality and make the clustering algorithms more effective. 8.2 Feature Selection: Not all features in high-dimensional data are equally informative. Feature selection techniques can be used to identify the most relevant features for clustering and discard the redundant or noisy features. This can help to improve the clustering accuracy and reduce the computational cost. 8.3 Subspace Clustering: Subspace clustering is a clustering technique that identifies clusters in subspaces of the high-dimensional space. This technique assumes that the data points lie in a union of subspaces, each of which represents a cluster. Subspace clustering algorithms, such as CLIQUE (CLustering In QUEst), identify the subspaces and clusters simultaneously. 8.4 Density-Based Clustering: Density-based clustering algorithms, such as DBSCAN, can be used for clustering high-dimensional data by defining the density of data points in each dimension. The clustering algorithm identifies regions of high density in the multidimensional space, which correspond to clusters. 8.5 Ensemble Clustering: Ensemble clustering combines multiple clustering algorithms or different parameter settings of the same algorithm to improve the clustering performance. Ensemble clustering can help to reduce the sensitivity of the clustering results to the choice of algorithm or parameter settings. 8.6 Deep Learning-Based Clustering: Deep learning-based clustering techniques, such as Deep Embedded Clustering (DEC) and Autoencoder- based Clustering (AE-Clustering), use neural networks to learn a low-dimensional representation of high- 12
  • 13. A p r i l 1 2 , 2 0 2 4 / D r . R S dimensional data and cluster the data in the reduced space. These techniques have shown promising results in clustering high-dimensional data in various domains, including image analysis and gene expression analysis. Clustering high-dimensional data requires careful consideration of the choice of clustering algorithm, feature selection or dimensionality reduction technique, and parameter settings. A combination of different techniques may be required to achieve the best clustering performance. 8.7 CLIQUE and ProCLUS CLIQUE (CLustering In QUEst) and ProCLUS are two popular subspace clustering algorithms for high- dimensional data. CLIQUE is a density-based algorithm that works by identifying dense subspaces in the data. It assumes that clusters exist in subspaces of the data that are dense in at least k dimensions, where k is a user-defined parameter. The algorithm identifies all possible dense subspaces by enumerating all combinations of k dimensions and checking if the corresponding subspaces are dense. It then merges the overlapping subspaces to form clusters. CLIQUE is efficient for high-dimensional data because it only considers a small number of dimensions at a time. ProCLUS (PROjective CLUSters) is a subspace clustering algorithm that works by identifying clusters in a low-dimensional projection of the data. It first selects a random projection matrix and projects the data onto a lower-dimensional space. It then uses K-Means clustering to cluster the projected data. The algorithm iteratively refines the projection matrix and re-clusters the data until convergence. The final clusters are projected back to the original high-dimensional space. ProCLUS is effective for high-dimensional data because it reduces the dimensionality of the data while preserving the clustering structure. Both CLIQUE and ProCLUS are designed to handle high-dimensional data by identifying clusters in subspaces of the data. They are effective for clustering data that have a natural subspace structure. However, they may not work well for data that do not have a clear subspace structure or when the data points are widely spread out in the high-dimensional space. It is important to carefully choose the appropriate algorithm based on the characteristics of the data and the clustering objectives. 9 Frequent pattern-based clustering methods Frequent pattern-based clustering methods combine frequent pattern mining with clustering techniques to identify clusters based on frequent patterns in the data. Here are some examples of frequent pattern-based 13
  • 14. A p r i l 1 2 , 2 0 2 4 / D r . R S clustering methods: 1. Frequent Pattern-based Clustering: is a clustering algorithm that uses frequent pattern mining to identify clusters in transactional data. The algorithm first identifies frequent itemsets in the data using Apriori or FP-Growth algorithms. It then constructs a graph where each frequent itemset is a node, and the edges represent the overlap between the itemsets. The graph is partitioned into clusters using a graph clustering algorithm. The resulting clusters are then used to assign objects to clusters based on their membership in the frequent itemsets. 2. Frequent Pattern-based Clustering Method: is a clustering algorithm that uses frequent pattern mining to identify clusters in high-dimensional data. The algorithm first discretizes the continuous data into categorical data. It then uses Apriori or FP-Growth algorithms to identify frequent itemsets in the categorical data. The frequent itemsets are used to construct a binary matrix that represents the membership of objects in the frequent itemsets. The binary matrix is clustered using a standard clustering algorithm, such as K-Means or Hierarchical clustering. The resulting clusters are then used to assign objects to clusters based on their membership in the frequent itemsets. 3. Clustering based on Frequent Pattern Combination: is a clustering algorithm that combines frequent pattern mining with pattern combination techniques to identify clusters in transactional data. The algorithm first identifies frequent itemsets in the data using Apriori or FP-Growth algorithms. It then uses pattern combination techniques, such as Minimum Description Length (MDL) or Bayesian Information Criterion (BIC), to generate composite patterns from the frequent itemsets. The composite patterns are then used to construct a graph, which is partitioned into clusters using a graph clustering algorithm. Frequent pattern-based clustering methods are effective for identifying clusters based on frequent patterns in the data. They can be applied to a wide range of data types, including transactional data and high- dimensional data. However, these methods may suffer from the curse of dimensionality when applied to high-dimensional data. It is important to carefully select the appropriate frequent pattern mining and clustering techniques based on the characteristics of the data and the clustering objectives. 14
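As a rough illustration of the second method above (representing objects by a binary membership matrix and clustering it with a standard algorithm), the following base-R sketch builds a binary transaction-by-item matrix over the frequent single items and clusters the transactions hierarchically. The toy baskets and the use of single frequent items instead of full frequent itemsets are simplifying assumptions.

# Rough base-R sketch: binary membership matrix over frequent items,
# clustered with hierarchical clustering.
baskets <- list(c("E","K","M","O","Y"), c("D","E","K","N","O","Y"),
                c("A","E","K","M"), c("C","K","M","U","Y"),
                c("C","E","I","K","O"))
min_sup <- 3
freq_items <- names(which(table(unlist(baskets)) >= min_sup))

bin <- t(sapply(baskets, function(b) as.integer(freq_items %in% b)))
colnames(bin) <- freq_items                 # rows = transactions, columns = frequent items

hc <- hclust(dist(bin, method = "binary"))  # Jaccard-style distance on 0/1 data
cutree(hc, k = 2)                           # assign each transaction to one of 2 clusters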
  • 15. A p r i l 1 2 , 2 0 2 4 / D r . R S 10 Clustering in non-Euclidean space Clustering in non-Euclidean space refers to the clustering of data points that are not represented in the Euclidean space, such as graphs, time series, or text data. Traditional clustering algorithms, such as K- Means and Hierarchical clustering, assume that the data points are represented in the Euclidean space and use distance metrics, such as Euclidean distance or cosine similarity, to measure the similarity between data points. However, in non-Euclidean spaces, the notion of distance is different, and distance-based clustering methods may not be suitable. Here are some approaches for clustering in non-Euclidean spaces: 1. Spectral clustering: Spectral clustering is a popular clustering algorithm that can be applied to data represented in non-Euclidean spaces, such as graphs or time series. It uses the eigenvalues and eigen- vectors of the Laplacian matrix of the data to identify clusters. Spectral clustering converts the data points into a graph representation and then computes the Laplacian matrix of the graph. The eigen- vectors of the Laplacian matrix are used to embed the data points into a lower-dimensional space, where clustering is performed using a standard clustering algorithm, such as K-Means or Hierarchical clustering. 2. Density-Based Spatial Clustering of Applications with Noise: is a density-based clustering algorithm that can be applied to data represented in non-Euclidean spaces. It does not rely on a distance metric and can cluster data points based on their density. DBSCAN identifies clusters by defining two parameters: the minimum number of points required to form a cluster and a radius that determines the neighborhood of a point. DBSCAN labels each point as either a core point, a border point, or a noise point, based on its neighborhood. The core points are used to form clusters. 3. Topic modeling: Topic modeling is a clustering method that can be applied to text data, which is typically represented in a non-Euclidean space. Topic modeling identifies latent topics in the text data by analyzing the co-occurrence of words. It represents each document as a distribution over topics, and each topic as a distribution over words. The resulting topic distribution of each document can be used to cluster the documents based on their similarity. Clustering in non-Euclidean spaces requires careful consideration of the appropriate algorithms and tech- niques that are suitable for the specific data type. Spectral clustering and DBSCAN are effective for clustering 15
  • 16. A p r i l 1 2 , 2 0 2 4 / D r . R S data represented as graphs or time series, while topic modeling is suitable for text data. Other approaches, such as manifold learning and kernel methods, can also be used for clustering in non-Euclidean spaces. 11 Clustering for streams and parallelism Clustering for streams and parallelism are two important considerations for clustering large datasets. Stream data refers to data that arrives continuously and in real-time, while parallelism refers to the ability to distribute the clustering task across multiple computing resources. Here are some approaches for clustering streams and parallelism: 1. Online clustering: Online clustering is a technique that can be applied to streaming data. It updates the clustering model continuously as new data arrives. Online clustering algorithms, such as BIRCH and CluStream, are designed to handle data streams and can scale to large datasets. These algo- rithms incrementally update the cluster model as new data arrives and discard outdated data points to maintain the cluster modelโ€™s accuracy and efficiency. 2. Parallel clustering: Parallel clustering refers to the use of multiple computing resources, such as multiple processors or computing clusters, to speed up the clustering process. Parallel clustering algorithms, such as K-Means Parallel, Hierarchical Parallel, and DBSCAN Parallel, distribute the clustering task across multiple computing resources. These algorithms partition the data into smaller subsets and assign each subset to a separate computing resource. The resulting clusters are then merged to produce the final clustering result. 3. Distributed clustering: Distributed clustering refers to the use of multiple computing resources that are distributed across different physical locations, such as different data centers or cloud resources. Distributed clustering algorithms, such as MapReduce and Hadoop, distribute the clustering task across multiple computing resources and handle data that is too large to fit into a single computing resourceโ€™s memory. These algorithms partition the data into smaller subsets and assign each subset to a separate computing resource. The resulting clusters are then merged to produce the final clustering result. Clustering for streams and parallelism requires careful consideration of the appropriate algorithms and techniques that are suitable for the specific clustering objectives and data types. Online clustering is effective 16
  • 17. for clustering streaming data, while parallel clustering and distributed clustering can speed up the clustering process for large datasets.
Q1: Write an R function to check whether a given number is prime or not.

# Program to check if the input number is prime or not
# take input from the user
num = as.integer(readline(prompt = "Enter a number: "))
flag = 0
# prime numbers are greater than 1
if (num > 1) {
  # check for factors
  flag = 1
  for (i in 2:(num - 1)) {
    if ((num %% i) == 0) {
      flag = 0
      break
    }
  }
}
# 2 is prime but has no candidate factors in 2:(num-1), so handle it explicitly
if (num == 2) flag = 1
if (flag == 1) {
  print(paste(num, "is a prime number"))
} else {
  print(paste(num, "is not a prime number"))
}
  • 18. Apriori algorithm: The Apriori algorithm solves the frequent itemsets problem. The algorithm analyzes a data set to determine which combinations of items occur together frequently. The Apriori algorithm is at the core of various algorithms for data mining problems. The best known problem is finding the association rules that hold in a basket–item relation.
Numerical (for the five transactions T100–T500 given in Q.6(b) of the question paper below):
Given: Support = 60%, i.e. (60/100) × 5 = 3 transactions; Confidence = 70%.
ITERATION 1
STEP 1 (C1 — candidate 1-itemsets with counts): A 1, C 2, D 1, E 4, I 1, K 5, M 3, N 2, O 3, U 1, Y 3
STEP 2 (L1 — frequent 1-itemsets, count ≥ 3): E 4, K 5, M 3, O 3, Y 3
ITERATION 2
STEP 3 (C2 — candidate 2-itemsets with counts): {E,K} 4, {E,M} 2, {E,O} 3, {E,Y} 2, {K,M} 3, {K,O} 3, {K,Y} 3, {M,O} 1, {M,Y} 2, {O,Y} 2
STEP 4 (L2 — frequent 2-itemsets): {E,K} 4, {E,O} 3, {K,M} 3, {K,O} 3, {K,Y} 3
ITERATION 3
STEP 5 (C3 — candidate 3-itemsets with counts): {E,K,O} 3, {K,M,O} 1, {K,M,Y} 2
STEP 6 (L3 — frequent 3-itemsets): {E,K,O} 3
Now stop, since no more combinations can be made from L3.
ASSOCIATION RULES (from {E, K, O}):
1. [E, K] → O = 3/4 = 75%
2. [K, O] → E = 3/3 = 100%
3. [E, O] → K = 3/3 = 100%
4. E → [K, O] = 3/4 = 75%
  • 19. 5. K → [E, O] = 3/5 = 60%
6. O → [E, K] = 3/3 = 100%
Therefore, Rule no. 5 is discarded because its confidence is below 70%. So, Rules 1, 2, 3, 4 and 6 are selected.
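For reference, the same result can be reproduced with the arules package (assumed to be installed; its apriori() function implements the algorithm), using the five transactions from the question paper that follows.

# Sketch using the 'arules' package (assumed installed) to reproduce the
# worked example with min support = 60% and min confidence = 70%.
library(arules)
baskets <- list(c("M","O","N","K","E","Y"),
                c("D","O","N","K","E","Y"),
                c("M","A","K","E"),
                c("M","U","C","K","Y"),
                c("C","O","K","I","E"))        # T500 with the duplicate 'O' removed
trans <- as(baskets, "transactions")
rules <- apriori(trans, parameter = list(supp = 0.6, conf = 0.7, minlen = 2))
inspect(sort(rules, by = "confidence"))        # includes {E,K} => {O}, {K,O} => {E}, ...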
  • 20. Printed Page: 1 of 2 Subject Code: KIT601 0Roll No: 0 0 0 0 0 0 0 0 0 0 0 0 0 BTECH (SEM VI) THEORY EXAMINATION 2021-22 DATA ANALYTICS Time: 3 Hours Total Marks: 100 Note: Attempt all Sections. If you require any missing data, then choose suitably. SECTION A 1. Attempt all questions in brief. 2*10 = 20 Qno Questions CO (a) Discuss the need of data analytics. 1 (b) Give the classification of data. 1 (c) Define neural network. 2 (d) What is multivariate analysis? 2 (e) Give the full form of RTAP and discuss its application. 3 (f) What is the role of sampling data in a stream? 3 (g) Discuss the use of limited pass algorithm. 4 (h) What is the principle behind hierarchical clustering technique? 4 (i) List five R functions used in descriptive statistics. 5 (j) List the names of any 2 visualization tools. 5 SECTION B 2. Attempt any three of the following: 10*3 = 30 Qno Questions CO (a) Explain the process model and computation model for Big data platform. 1 (b) Explain the use and advantages of decision trees. 2 (c) Explain the architecture of data stream model. 3 (d) Illustrate the K-means algorithm in detail with its advantages. 4 (e) Differentiate between NoSQL and RDBMS databases. 5 SECTION C 3. Attempt any one part of the following: 10*1 = 10 Qno Questions CO (a) Explain the various phases of data analytics life cycle. 1 (b) Explain modern data analytics tools in detail. 1 4. Attempt any one part of the following: 10 *1 = 10 Qno Questions CO (a) Compare various types of support vector and kernel methods of data analysis. 2 (b) Given data= {2,3,4,5,6,7;1,5,3,6,7,8}. Compute the principal component using PCA algorithm. 2 A p r i l 1 2 , 2 0 2 4 / D r . R S
  • 21. Printed Page: 2 of 2 Subject Code: KIT601 0Roll No: 0 0 0 0 0 0 0 0 0 0 0 0 0 BTECH (SEM VI) THEORY EXAMINATION 2021-22 DATA ANALYTICS 5. Attempt any one part of the following: 10*1 = 10 Qno Questions CO (a) Explain any one algorithm to count number of distinct elements in a data stream. 3 (b) Discuss the case study of stock market predictions in detail. 3 6. Attempt any one part of the following: 10*1 = 10 Qno Questions CO (a) Differentiate between CLIQUE and ProCLUS clustering. 4 (b) A database has 5 transactions. Let min_sup=60% and min_conf=80%. TID Items_Bought T100 {M, O, N, K, E, Y} T200 {D, O, N, K, E, Y} T300 {M, A, K, E} T400 {M, U, C, K, Y} T500 {C, O, O, K, I, E} i) Find all frequent itemsets using Apriori algorithm. ii) List all the strong association rules (with support s and confidence c). 4 7. Attempt any one part of the following: 10*1 = 10 Qno Questions CO (a) Explain the HIVE architecture with its features in detail. 5 (b) Write R function to check whether the given number is prime or not. 5 A p r i l 1 2 , 2 0 2 4 / D r . R S
  • 24. A p r i l 1 2 , 2 0 2 4 / D r . R S Appendix [17]: Additional Study Material For Numerical Perspectives 24
  • 25. DATABASE SYSTEMS GROUP What is Frequent Itemset Mining? Frequent Itemset Mining: Finding frequent patterns, associations, correlations, or causal structures among sets of items or objects in transaction databases, relational databases, and other information repositories. โ€ข Given: โ€“ A set of items ๐ผ = {๐‘–1, ๐‘–2, โ€ฆ , ๐‘–๐‘š} โ€“ A database of transactions ๐ท, where a transaction ๐‘‡ โŠ† ๐ผ is a set of items โ€ข Task 1: find all subsets of items that occur together in many transactions. โ€“ E.g.: 85% of transactions contain the itemset {milk, bread, butter} โ€ข Task 2: find all rules that correlate the presence of one set of items with that of another set of items in the transaction database. โ€“ E.g.: 98% of people buying tires and auto accessories also get automotive service done โ€ข Applications: Basket data analysis, cross-marketing, catalog design, loss-leader analysis, clustering, classification, recommendation systems, etc. Frequent Itemset Mining ๏ƒ  Introduction 3 DATABASE SYSTEMS GROUP Example: Basket Data Analysis โ€ข Transaction database D= {{butter, bread, milk, sugar}; {butter, flour, milk, sugar}; {butter, eggs, milk, salt}; {eggs}; {butter, flour, milk, salt, sugar}} โ€ข Question of interest: โ€“ Which items are bought together frequently? โ€ข Applications โ€“ Improved store layout โ€“ Cross marketing โ€“ Focused attached mailings / add-on sales โ€“ * ๏ƒž Maintenance Agreement (What the store should do to boost Maintenance Agreement sales) โ€“ Home Electronics ๏ƒž * (What other products should the store stock up?) Frequent Itemset Mining ๏ƒ  Introduction 4 items frequency {butter} 4 {milk} 4 {butter, milk} 4 {sugar} 3 {butter, sugar} 3 {milk, sugar} 3 {butter, milk, sugar} 3 {eggs} 2 โ€ฆ A p r i l 1 2 , 2 0 2 4 / D r . R S
  • 26. DATABASE SYSTEMS GROUP Chapter 3: Frequent Itemset Mining 1) Introduction โ€“ Transaction databases, market basket data analysis 2) Mining Frequent Itemsets โ€“ Apriori algorithm, hash trees, FP-tree 3) Simple Association Rules โ€“ Basic notions, rule generation, interestingness measures 4) Further Topics โ€“ Hierarchical Association Rules โ€ข Motivation, notions, algorithms, interestingness โ€“ Quantitative Association Rules โ€ข Motivation, basic idea, partitioning numerical attributes, adaptation of apriori algorithm, interestingness 5) Extensions and Summary Outline 5 DATABASE SYSTEMS GROUP Mining Frequent Itemsets: Basic Notions ๏‚ง Items ๐ผ = {๐‘–1, ๐‘–2, โ€ฆ , ๐‘–๐‘š} : a set of literals (denoting items) โ€ข Itemset ๐‘‹: Set of items ๐‘‹ โŠ† ๐ผ โ€ข Database ๐ท: Set of transactions ๐‘‡, each transaction is a set of items T โŠ† ๐ผ โ€ข Transaction ๐‘‡ contains an itemset ๐‘‹: ๐‘‹ โŠ† ๐‘‡ โ€ข The items in transactions and itemsets are sorted lexicographically: โ€“ itemset ๐‘‹ = (๐‘ฅ1, ๐‘ฅ2, โ€ฆ , ๐‘ฅ๐‘˜ ), where ๐‘ฅ1 ๏‚ฃ ๐‘ฅ2 ๏‚ฃ โ€ฆ ๏‚ฃ ๐‘ฅ๐‘˜ โ€ข Length of an itemset: number of elements in the itemset โ€ข k-itemset: itemset of length k โ€ข The support of an itemset X is defined as: ๐‘ ๐‘ข๐‘๐‘๐‘œ๐‘Ÿ๐‘ก ๐‘‹ = ๐‘‡ โˆˆ ๐ท|๐‘‹ โŠ† ๐‘‡ โ€ข Frequent itemset: an itemset X is called frequent for database ๐ท iff it is contained in more than ๐‘š๐‘–๐‘›๐‘†๐‘ข๐‘ many transactions: ๐‘ ๐‘ข๐‘๐‘๐‘œ๐‘Ÿ๐‘ก(๐‘‹) โ‰ฅ ๐‘š๐‘–๐‘›๐‘†๐‘ข๐‘ โ€ข Goal 1: Given a database ๐ทand a threshold ๐‘š๐‘–๐‘›๐‘†๐‘ข๐‘ , find all frequent itemsets X โˆˆ ๐‘ƒ๐‘œ๐‘ก(๐ผ). Frequent Itemset Mining ๏ƒ  Algorithms 6 A p r i l 1 2 , 2 0 2 4 / D r . R S
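A minimal base-R sketch of the support count just defined; the toy transaction database is the butter/bread/milk one from the basket-data example above, and minSup = 3 is illustrative.

# support(X) = number of transactions T in D with X a subset of T
D <- list(c("butter","bread","milk","sugar"),
          c("butter","flour","milk","sugar"),
          c("butter","eggs","milk","salt"),
          c("eggs"),
          c("butter","flour","milk","salt","sugar"))
support <- function(X) sum(sapply(D, function(T) all(X %in% T)))

support(c("butter","milk"))           # 4
support(c("butter","milk","sugar"))   # 3
minSup <- 3
support(c("milk","sugar")) >= minSup  # TRUE: {milk, sugar} is frequent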
  • 27. DATABASE SYSTEMS GROUP Mining Frequent Itemsets: Basic Idea โ€ข Naรฏve Algorithm โ€“ count the frequency of all possible subsets of ๐ผ in the database ๏ƒ  too expensive since there are 2m such itemsets for |๐ผ| = ๐‘š items โ€ข The Apriori principle (anti-monotonicity): Any non-empty subset of a frequent itemset is frequent, too! A โŠ† I with support A โ‰ฅ minSup โ‡’ โˆ€Aโ€ฒ โŠ‚ A โˆง Aโ€ฒ โ‰  โˆ…: support Aโ€ฒ โ‰ฅ minSup Any superset of a non-frequent itemset is non-frequent, too! A โŠ† I with support A < minSup โ‡’ โˆ€Aโ€ฒ โŠƒ A: support Aโ€ฒ < minSup โ€ข Method based on the apriori principle โ€“ First count the 1-itemsets, then the 2-itemsets, then the 3-itemsets, and so on โ€“ When counting (k+1)-itemsets, only consider those (k+1)-itemsets where all subsets of length k have been determined as frequent in the previous step Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  Apriori Algorithm 7 cardinality of power set ร˜ A B C D AB AC AD BC BD CD ABC ABD ACD BCD ABCD not frequent DATABASE SYSTEMS GROUP The Apriori Algorithm variable Ck: candidate itemsets of size k variable Lk: frequent itemsets of size k L1 = {frequent items} for (k = 1; Lk !=๏ƒ†; k++) do begin // JOIN STEP: join Lk with itself to produce Ck+1 // PRUNE STEP: discard (k+1)-itemsets from Ck+1 that contain non-frequent k-itemsets as subsets Ck+1 = candidates generated from Lk for each transaction t in database do Increment the count of all candidates in Ck+1 that are contained in t Lk+1 = candidates in Ck+1 with min_support return ๏ƒˆk Lk Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  Apriori Algorithm 8 produce candidates prove candidates A p r i l 1 2 , 2 0 2 4 / D r . R S
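A compact base-R sketch of this level-wise loop. It is deliberately simplified: candidates are formed from unions of pairs of frequent k-itemsets whose union has size k+1, and pruning is done directly by re-counting support, rather than via the prefix join and subset test detailed on the next slides. The helper names and the small database are illustrative.

# Compact, simplified base-R sketch of the Apriori loop above.
apriori_sketch <- function(D, min_sup) {
  support <- function(X) sum(sapply(D, function(T) all(X %in% T)))
  L <- Filter(function(X) support(X) >= min_sup,
              as.list(sort(unique(unlist(D)))))        # frequent 1-itemsets
  result <- L
  k <- 1
  while (length(L) > 1) {
    C <- list()
    for (i in seq_along(L)) for (j in seq_along(L)) if (i < j) {
      u <- sort(union(L[[i]], L[[j]]))
      if (length(u) == k + 1) C[[length(C) + 1]] <- u   # candidate (k+1)-itemset
    }
    L <- Filter(function(X) support(X) >= min_sup, unique(C))
    result <- c(result, L)
    k <- k + 1
  }
  result
}

D <- list(c(1,3,4,6), c(2,3,5), c(1,2,3,5), c(1,5,6))
apriori_sketch(D, min_sup = 2)   # reproduces L1, L2 and L3 = {2,3,5} of the full example below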
  • 28. DATABASE SYSTEMS GROUP Generating Candidates (Join Step) โ€ข Requirements for set of all candidate ๐‘˜ + 1 -itemsets ๐ถ๐‘˜+1 โ€“ Completeness: Must contain all frequent ๐‘˜ + 1 -itemsets (superset property ๐ถ๐‘˜+1 ๏ƒŠ ๐ฟ๐‘˜+1) โ€“ Selectiveness: Significantly smaller than the set of all ๐‘˜ + 1 -subsets โ€“ Suppose the items are sorted by any order (e.g., lexicograph.) โ€ข Step 1: Joining (๐ถ๐‘˜+1 = ๐ฟ๐‘˜ โ‹ˆ ๐ฟ๐‘˜) โ€“ Consider frequent ๐‘˜-itemsets ๐‘ and ๐‘ž โ€“ ๐‘ and ๐‘ž are joined if they share the same first ๐‘˜ โˆ’ 1 items insert into Ck+1 select p.i1, p.i2, โ€ฆ, p.ikโ€“1, p.ik, q.ik from Lk : p, Lk : q where p.i1=q.i1, โ€ฆ, p.ik โ€“1 =q.ikโ€“1, p.ik < q.ik Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  Apriori Algorithm 9 p ๏ƒŽ Lk=3 (A, C, F) (A, C, F, G) ๏ƒŽ Ck+1=4 q ๏ƒŽ Lk=3 (A, C, G) DATABASE SYSTEMS GROUP Generating Candidates (Prune Step) โ€ข Step 2: Pruning (๐ฟ๐‘˜+1 = {X โˆˆ ๐ถ๐‘˜+1|๐‘ ๐‘ข๐‘๐‘๐‘œ๐‘Ÿ๐‘ก ๐‘‹ โ‰ฅ ๐‘š๐‘–๐‘›๐‘†๐‘ข๐‘} ) โ€“ Naรฏve: Check support of every itemset in ๐ถ๐‘˜+1 ๏ƒŸ inefficient for huge ๐ถ๐‘˜+1 โ€“ Instead, apply Apriori principle first: Remove candidate (k+1) -itemsets which contain a non-frequent k -subset s, i.e., s ๏ƒ Lk forall itemsets c in Ck+1 do forall k-subsets s of c do if (s is not in Lk) then delete c from Ck+1 โ€ข Example 1 โ€“ L3 = {(ACF), (ACG), (AFG), (AFH), (CFG)} โ€“ Candidates after the join step: {(ACFG), (AFGH)} โ€“ In the pruning step: delete (AFGH) because (FGH) ๏ƒ L3, i.e., (FGH) is not a frequent 3-itemset; also (AGH) ๏ƒ L3 ๏ƒ  C4 = {(ACFG)} ๏ƒ  check the support to generate L4 Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  Apriori Algorithm 10 A p r i l 1 2 , 2 0 2 4 / D r . R S
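The join and prune steps of Example 1 can be traced in base R as follows; itemsets are represented as sorted character vectors, and the helper name is illustrative.

# Join and prune for Example 1: L3 = {(ACF), (ACG), (AFG), (AFH), (CFG)}.
L3 <- list(c("A","C","F"), c("A","C","G"), c("A","F","G"),
           c("A","F","H"), c("C","F","G"))
k <- 3

# Join: combine itemsets sharing the same first k-1 items
C4 <- list()
for (i in seq_along(L3)) for (j in seq_along(L3)) {
  p <- L3[[i]]; q <- L3[[j]]
  if (identical(p[1:(k-1)], q[1:(k-1)]) && p[k] < q[k])
    C4[[length(C4) + 1]] <- c(p, q[k])
}
# C4 is now {(A C F G), (A F G H)}

# Prune: drop candidates with a non-frequent k-subset
has_all_subsets <- function(cand)
  all(sapply(seq_along(cand), function(d)
    any(sapply(L3, identical, cand[-d]))))
Filter(has_all_subsets, C4)   # keeps only (A C F G), i.e. C4 = {(ACFG)}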
  • 29. DATABASE SYSTEMS GROUP Apriori Algorithm โ€“ Full Example TID items 100 1 3 4 6 200 2 3 5 300 1 2 3 5 400 1 5 6 Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  Apriori Algorithm 11 itemsetcount {1} 3 {2} 2 {3} 3 {4} 1 {5} 3 {6} 2 database D scan D minSup=0.5 C1 itemsetcount {1} 3 {2} 2 {3} 3 {5} 3 {6} 2 L1 ๐ฟ1 โ‹ˆ ๐ฟ1 itemset {1 2} {1 3} {1 5} {1 6} {2 3} {2 5} {2 6} {3 5} {3 6} {5 6} C2 prune C1 scan D C2 C2 itemsetcount {1 3} 2 {1 5} 2 {1 6} 2 {2 3} 2 {2 5} 2 {3 5} 2 L2 itemset {1 2} {1 3} {1 5} {1 6} {2 3} {2 5} {2 6} {3 5} {3 6} {5 6} itemsetcount {1 2} 1 {1 3} 2 {1 5} 2 {1 6} 2 {2 3} 2 {2 5} 2 {2 6} 0 {3 5} 2 {3 6} 1 {5 6} 1 ๐ฟ2 โ‹ˆ ๐ฟ2 itemset {1 3 5} {1 3 6} {1 5 6} {2 3 5} C3 prune C2 itemset {1 3 5} {1 3 6} โœ— {1 5 6} โœ— {2 3 5} C3 scan D itemsetcount {1 3 5} 1 {2 3 5} 2 C3 itemsetcount {2 3 5} 2 L3 ๐ฟ3 โ‹ˆ ๐ฟ3 C4 is empty DATABASE SYSTEMS GROUP How to Count Supports of Candidates? โ€ข Why is counting supports of candidates a problem? โ€“ The total number of candidates can be very huge โ€“ One transaction may contain many candidates โ€ข Method: Hash-Tree โ€“ Candidate itemsets are stored in a hash-tree โ€“ Leaf nodes of hash-tree contain lists of itemsets and their support (i.e., counts) โ€“ Interior nodes contain hash tables โ€“ Subset function finds all the candidates contained in a transaction Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  Apriori Algorithm 12 h(K) = K mod 3 e.g. for 3-Itemsets 0 1 2 0 1 2 0 1 2 0 1 2 (3 6 7) 0 1 2 (3 5 7) (3 5 11) (7 9 12) (1 6 11) (1 4 11) (1 7 9) (7 8 9) (1 11 12) (2 3 8) (5 6 7) 0 1 2 (2 5 6) (2 5 7) (5 8 11) (3 4 15) (3 7 11) (3 4 11) (3 4 8) (2 4 6) (2 7 9) (2 4 7) (5 7 10) A p r i l 1 2 , 2 0 2 4 / D r . R S
  • 30. DATABASE SYSTEMS GROUP Hash-Tree โ€“ Construction โ€ข Searching for an itemset โ€“ Start at the root (level 1) โ€“ At level d: apply the hash function h to the d-th item in the itemset โ€ข Insertion of an itemset โ€“ search for the corresponding leaf node, and insert the itemset into that leaf โ€“ if an overflow occurs: โ€ข Transform the leaf node into an internal node โ€ข Distribute the entries to the new leaf nodes according to the hash function Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  Apriori Algorithm 13 h(K) = K mod 3 for 3-Itemsets 0 1 2 0 1 2 0 1 2 0 1 2 (3 6 7) 0 1 2 (3 5 7) (3 5 11) (7 9 12) (1 6 11) (1 4 11) (1 7 9) (7 8 9) (1 11 12) (2 3 8) (5 6 7) 0 1 2 (2 5 6) (2 5 7) (5 8 11) (3 4 15) (3 7 11) (3 4 11) (3 4 8) (2 4 6) (2 7 9) (2 4 7) (5 7 10) DATABASE SYSTEMS GROUP Hash-Tree โ€“ Counting โ€ข Search all candidate itemsets contained in a transaction T = (t1 t2 ... tn) for a current itemset length of k โ€ข At the root โ€“ Determine the hash values for each item t1 t2 ... tn-k+1 in T โ€“ Continue the search in the resulting child nodes โ€ข At an internal node at level d (reached after hashing of item ๐‘ก๐‘–) โ€“ Determine the hash values and continue the search for each item ๐‘ก๐‘— with ๐‘– < ๐‘— โ‰ค ๐‘› โˆ’ ๐‘˜ + ๐‘‘ โ€ข At a leaf node โ€“ Check whether the itemsets in the leaf node are contained in transaction T Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  Apriori Algorithm 14 0 1 2 0 1 2 0 1 2 0 1 2 (3 6 7) 0 1 2 (3 5 7) (3 5 11) (7 9 12) (1 6 11) (1 4 11) (1 7 9) (7 8 9) (1 11 12) (2 3 8) (5 6 7) 0 1 2 (2 5 6) (2 5 7) (5 8 11) (3 4 15) (3 7 11) (3 4 11) (3 4 8) (2 4 6) (2 7 9) (2 4 7) (5 7 10) 3 9 7 3,9 7 1,7 9,12 Pruned subtrees Tested leaf nodes Transaction (1, 3, 7, 9, 12) h(K) = K mod 3 in our example n=5 and k=3 A p r i l 1 2 , 2 0 2 4 / D r . R S
  • 31. DATABASE SYSTEMS GROUP Is Apriori Fast Enough? โ€” Performance Bottlenecks โ€ข The core of the Apriori algorithm: โ€“ Use frequent (k โ€“ 1)-itemsets to generate candidate frequent k-itemsets โ€“ Use database scan and pattern matching to collect counts for the candidate itemsets โ€ข The bottleneck of Apriori: candidate generation โ€“ Huge candidate sets: โ€ข 104 frequent 1-itemsets will generate 107 candidate 2-itemsets โ€ข To discover a frequent pattern of size 100, e.g., {a1, a2, โ€ฆ, a100}, one needs to generate 2100 ๏‚ป 1030 candidates. โ€“ Multiple scans of database: โ€ข Needs n or n+1 scans, n is the length of the longest pattern ๏ƒ  Is it possible to mine the complete set of frequent itemsets without candidate generation? Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  Apriori Algorithm 15 DATABASE SYSTEMS GROUP Mining Frequent Patterns Without Candidate Generation โ€ข Compress a large database into a compact, Frequent-Pattern tree (FP- tree) structure โ€“ highly condensed, but complete for frequent pattern mining โ€“ avoid costly database scans โ€ข Develop an efficient, FP-tree-based frequent pattern mining method โ€“ A divide-and-conquer methodology: decompose mining tasks into smaller ones โ€“ Avoid candidate generation: sub-database test only! โ€ข Idea: โ€“ Compress database into FP-tree, retaining the itemset association information โ€“ Divide the compressed database into conditional databases, each associated with one frequent item and mine each such database separately. Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  FP-Tree 16 A p r i l 1 2 , 2 0 2 4 / D r . R S
  • 32. DATABASE SYSTEMS GROUP Construct FP-tree from a Transaction DB Steps for compressing the database into a FP-tree: 1. Scan DB once, find frequent 1-itemsets (single items) 2. Order frequent items in frequency descending order Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  FP-Tree 17 item frequency f 4 c 4 a 3 b 3 m 3 p 3 1&2 header table: TID items bought 100 {f, a, c, d, g, i, m, p} 200 {a, b, c, f, l, m, o} 300 {b, f, h, j, o} 400 {b, c, k, s, p} 500 {a, f, c, e, l, p, m, n} sort items in the order of descending support minSup=0.5 DATABASE SYSTEMS GROUP Construct FP-tree from a Transaction DB Steps for compressing the database into a FP-tree: 1. Scan DB once, find frequent 1-itemsets (single items) 2. Order frequent items in frequency descending order 3. Scan DB again, construct FP-tree starting with most frequent item per transaction Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  FP-Tree 18 item frequency f 4 c 4 a 3 b 3 m 3 p 3 header table: TID items bought (ordered) frequent items 100 {f, a, c, d, g, i, m, p} {f, c, a, m, p} 200 {a, b, c, f, l, m, o} {f, c, a, b, m} 300 {b, f, h, j, o} {f, b} 400 {b, c, k, s, p} {c, b, p} 500 {a, f, c, e, l, p, m, n} {f, c, a, m, p} for each transaction only keep its frequent items sorted in descending order of their frequencies 1&2 3a for each transaction build a path in the FP-tree: - If a path with common prefix exists: increment frequency of nodes on this path and append suffix - Otherwise: create a new branch A p r i l 1 2 , 2 0 2 4 / D r . R S
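Steps 1–2 (and the per-transaction reordering used in step 3) can be sketched in base R as follows, using the same five transactions and minSup = 0.5 (support count 3). The variable names are illustrative; the FP-tree itself is not built here.

# FP-tree preprocessing: count item supports, drop infrequent items, and
# re-order every transaction by descending global support.
db <- list(c("f","a","c","d","g","i","m","p"),
           c("a","b","c","f","l","m","o"),
           c("b","f","h","j","o"),
           c("b","c","k","s","p"),
           c("a","f","c","e","l","p","m","n"))
min_sup <- 3

freq <- sort(table(unlist(db)), decreasing = TRUE)
freq <- freq[freq >= min_sup]            # frequent 1-items: f and c (4), a, b, m, p (3)
# note: f and c tie on support 4; the slide's header table lists f first,
# while sorting here may break the tie alphabetically

ordered_db <- lapply(db, function(t) {
  t <- t[t %in% names(freq)]
  t[order(match(t, names(freq)))]        # sort by descending global frequency
})
ordered_db   # these ordered item lists are inserted as paths into the FP-tree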
  • 33. DATABASE SYSTEMS GROUP Construct FP-tree from a Transaction DB Steps for compressing the database into a FP-tree: 1. Scan DB once, find frequent 1-itemsets (single items) 2. Order frequent items in frequency descending order 3. Scan DB again, construct FP-tree starting with most frequent item per transaction Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  FP-Tree 19 item frequency head f 4 c 4 a 3 b 3 m 3 p 3 {} f:4 c:1 b:1 p:1 b:1 c:3 a:3 b:1 m:2 p:2 m:1 header table: TID items bought (ordered) frequent items 100 {f, a, c, d, g, i, m, p} {f, c, a, m, p} 200 {a, b, c, f, l, m, o} {f, c, a, b, m} 300 {b, f, h, j, o} {f, b} 400 {b, c, k, s, p} {c, b, p} 500 {a, f, c, e, l, p, m, n} {f, c, a, m, p} 1&2 3a 3b header table references the occurrences of the frequent items in the FP-tree DATABASE SYSTEMS GROUP Benefits of the FP-tree Structure โ€ข Completeness: โ€“ never breaks a long pattern of any transaction โ€“ preserves complete information for frequent pattern mining โ€ข Compactness โ€“ reduce irrelevant informationโ€”infrequent items are gone โ€“ frequency descending ordering: more frequent items are more likely to be shared โ€“ never be larger than the original database (if not count node-links and counts) โ€“ Experiments demonstrate compression ratios over 100 Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  FP-Tree 20 A p r i l 1 2 , 2 0 2 4 / D r . R S
  • 34. DATABASE SYSTEMS GROUP Mining Frequent Patterns Using FP-tree โ€ข General idea (divide-and-conquer) โ€“ Recursively grow frequent pattern path using the FP-tree โ€ข Method โ€“ For each item, construct its conditional pattern-base (prefix paths), and then its conditional FP-tree โ€“ Repeat the process on each newly created conditional FP-tree โ€ฆ โ€“ โ€ฆuntil the resulting FP-tree is empty, or it contains only one path (single path will generate all the combinations of its sub-paths, each of which is a frequent pattern) Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  FP-Tree 21 DATABASE SYSTEMS GROUP Major Steps to Mine FP-tree 1) Construct conditional pattern base for each node in the FP-tree 2) Construct conditional FP-tree from each conditional pattern-base 3) Recursively mine conditional FP-trees and grow frequent patterns obtained so far โ€“ If the conditional FP-tree contains a single path, simply enumerate all the patterns Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  FP-Tree 22 A p r i l 1 2 , 2 0 2 4 / D r . R S
  • 35. DATABASE SYSTEMS GROUP Major Steps to Mine FP-tree: Conditional Pattern Base 1) Construct conditional pattern base for each node in the FP-tree โ€“ Starting at the frequent header table in the FP-tree โ€“ Traverse FP-tree by following the link of each frequent item (dashed lines) โ€“ Accumulate all of transformed prefix paths of that item to form a conditional pattern base โ€ข For each item its prefixes are regarded as condition for it being a suffix. These prefixes form the conditional pattern base. The frequency of the prefixes can be read in the node of the item. Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  FP-Tree 23 {} f:4 c:1 b:1 p:1 b:1 c:3 a:3 b:1 m:2 p:2 m:1 item frequency head f 4 c 4 a 3 b 3 m 3 p 3 header table: item cond. pattern base f {} c f:3, {} a fc:3 b fca:1, f:1, c:1 m fca:2, fcab:1 p fcam:2, cb:1 conditional pattern base: DATABASE SYSTEMS GROUP Properties of FP-tree for Conditional Pattern Bases โ€ข Node-link property โ€“ For any frequent item ai, all the possible frequent patterns that contain ai can be obtained by following ai's node-links, starting from ai's head in the FP-tree header โ€ข Prefix path property โ€“ To calculate the frequent patterns for a node ai in a path P, only the prefix sub-path of ai in P needs to be accumulated, and its frequency count should carry the same count as node ai. Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  FP-Tree 24 A p r i l 1 2 , 2 0 2 4 / D r . R S
  • 36. DATABASE SYSTEMS GROUP Major Steps to Mine FP-tree: Conditional FP-tree 1) Construct conditional pattern base for each node in the FP-tree โœ” 2) Construct conditional FP-tree from each conditional pattern-base โ€“ The prefix paths of a suffix represent the conditional basis. ๏ƒ They can be regarded as transactions of a database. โ€“ Those prefix paths whose support โ‰ฅ minSup, induce a conditional FP-tree โ€“ For each pattern-base โ€ข Accumulate the count for each item in the base โ€ข Construct the FP-tree for the frequent items of the pattern base Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  FP-Tree 25 conditional pattern base: m-conditional FP-tree {}|m f:3 c:3 a:3 item frequency f 3 .. c 3 .. a 3 .. b 1โœ— item cond. pattern base f {} c f:3 a fc:3 b fca:1, f:1, c:1 m fca:2, fcab:1 p fcam:2, cb:1 DATABASE SYSTEMS GROUP Major Steps to Mine FP-tree: Conditional FP-tree 1) Construct conditional pattern base for each node in the FP-tree โœ” 2) Construct conditional FP-tree from each conditional pattern-base Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  FP-Tree 26 conditional pattern base: {}|m f:3 c:3 a:3 item cond. pattern base f {} c f:3 a fc:3 b fca:1, f:1, c:1 m fca:2, fcab:1 p fcam:2, cb:1 {}|f = {} {}|c f:3 {}|a f:3 c:3 {}|b = {} {}|p c:3 A p r i l 1 2 , 2 0 2 4 / D r . R S
  • 37. DATABASE SYSTEMS GROUP Major Steps to Mine FP-tree 1) Construct conditional pattern base for each node in the FP-tree โœ” 2) Construct conditional FP-tree from each conditional pattern-base โœ” 3) Recursively mine conditional FP-trees and grow frequent patterns obtained so far โ€“ If the conditional FP-tree contains a single path, simply enumerate all the patterns (enumerate all combinations of sub-paths) Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  FP-Tree 27 example: m-conditional FP-tree {}|m f:3 c:3 a:3 All frequent patterns concerning m m, fm, cm, am, fcm, fam, cam, fcam just a single path DATABASE SYSTEMS GROUP FP-tree: Full Example Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  FP-Tree 28 item frequency head f 4 b 3 c 3 {} b:1 c:1 header table: TID items bought (ordered) frequent items 100 {b, c, f} {f, b, c} 200 {a, b, c} {b, c} 300 {d, f} {f} 400 {b, c, e, f} {f, b, c} 500 {f, g} {f} minSup=0.4 f:4 b:2 c:2 database: item cond. pattern base f {} b f:2, {} c fb:2, b:1 conditional pattern base: A p r i l 1 2 , 2 0 2 4 / D r . R S
  • 38. DATABASE SYSTEMS GROUP FP-tree: Full Example Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  FP-Tree 29 {} b:1 c:1 f:4 b:2 c:2 item cond. pattern base f {} b f:2 c fb:2, b:1 conditional pattern base 1: {}|f = {} {}|b f:2 {}|c b:1 f:2 b:2 item cond. pattern base b f:2 f {} conditional pattern base 2: {}|fc = {} {}|bc f:2 {{f}} {{b},{fb}} {{fc}} {{bc},{fbc}} DATABASE SYSTEMS GROUP Principles of Frequent Pattern Growth โ€ข Pattern growth property โ€“ Let ๏ก be a frequent itemset in DB, B be ๏ก's conditional pattern base, and ๏ข be an itemset in B. Then ๏ก ๏ƒˆ ๏ข is a frequent itemset in DB iff ๏ข is frequent in B. โ€ข โ€œabcdef โ€ is a frequent pattern, if and only if โ€“ โ€œabcde โ€ is a frequent pattern, and โ€“ โ€œf โ€ is frequent in the set of transactions containing โ€œabcde โ€ Frequent Itemset Mining ๏ƒ  Algorithms ๏ƒ  FP-Tree 30 A p r i l 1 2 , 2 0 2 4 / D r . R S
  • 39. Why Is Frequent Pattern Growth Fast?
[Figure: run time (sec.) vs. support threshold (%) on data set D1, comparing FP-growth and Apriori run times]
• The performance study in [Han, Pei & Yin '00] shows that FP-growth is an order of magnitude faster than Apriori, and also faster than tree-projection.
• Reasoning:
– No candidate generation and no candidate test (the Apriori algorithm has to proceed breadth-first).
– Use of a compact data structure.
– No repeated database scans.
– The basic operations are counting and FP-tree building.
Data set T25I20D10K: T25 = avg. transaction length 25, I20 = avg. length of frequent itemsets 20, D10K = database size of 10,000 transactions.
Maximal or Closed Frequent Itemsets
• Big challenge: a database potentially contains a huge number of frequent itemsets (especially if minSup is set too low); a frequent itemset of length 100 has 2^100 − 1 frequent subsets.
• Closed frequent itemset: an itemset X is closed in a data set D if there exists no proper super-itemset Y such that support(X) = support(Y) in D. The set of closed frequent itemsets carries the complete support information of all frequent itemsets.
• Maximal frequent itemset: an itemset X is maximal in a data set D if there exists no proper super-itemset Y such that support(Y) ≥ minSup in D. The set of maximal itemsets does not carry the complete support information, but it is a more compact representation.
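The two definitions can be checked mechanically once all frequent itemsets and their supports are known. The following is a minimal sketch (quadratic in the number of frequent itemsets, so only for illustration); the itemset values used in the example are the butter/milk/sugar supports from the running example of this chapter.

```python
def closed_and_maximal(frequent):
    """Derive the closed and the maximal itemsets from a dict
    {frozenset(itemset): support} holding all frequent itemsets.

    X is closed if no proper superset has the same support;
    X is maximal if no proper superset is frequent at all.
    """
    closed, maximal = {}, {}
    for X, sup in frequent.items():
        supersets = [Y for Y in frequent if X < Y]
        if not any(frequent[Y] == sup for Y in supersets):
            closed[X] = sup
        if not supersets:
            maximal[X] = sup
    return closed, maximal

freq = {frozenset({'butter'}): 4, frozenset({'milk'}): 4,
        frozenset({'butter', 'milk'}): 4, frozenset({'sugar'}): 3,
        frozenset({'butter', 'sugar'}): 3, frozenset({'milk', 'sugar'}): 3,
        frozenset({'butter', 'milk', 'sugar'}): 3}
closed, maximal = closed_and_maximal(freq)
# closed:  {butter, milk} (support 4) and {butter, milk, sugar} (support 3)
# maximal: only {butter, milk, sugar}
```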
  • 40. Chapter 3: Frequent Itemset Mining — Outline
1) Introduction: transaction databases, market basket data analysis
2) Mining Frequent Itemsets: Apriori algorithm, hash trees, FP-tree
3) Simple Association Rules: basic notions, rule generation, interestingness measures
4) Further Topics: Hierarchical Association Rules (motivation, notions, algorithms, interestingness); Quantitative Association Rules (motivation, basic idea, partitioning numerical attributes, adaptation of the Apriori algorithm, interestingness)
5) Extensions and Summary
Simple Association Rules: Introduction
• Transaction database: D = {{butter, bread, milk, sugar}; {butter, flour, milk, sugar}; {butter, eggs, milk, salt}; {eggs}; {butter, flour, milk, salt, sugar}}
• Frequent itemsets (with support counts): {butter} 4; {milk} 4; {butter, milk} 4; {sugar} 3; {butter, sugar} 3; {milk, sugar} 3; {butter, milk, sugar} 3
• Question of interest: if milk and sugar are bought, will the customer always buy butter as well ({milk, sugar} ⇒ butter)? In this case, what would be the probability of buying butter?
  • 41. Simple Association Rules: Basic Notions
– Items I = {i1, ..., im}: a set of literals (denoting items)
– Itemset X: a set of items X ⊆ I
– Database D: a set of transactions T, each transaction a set of items T ⊆ I
– A transaction T contains an itemset X iff X ⊆ T
– The items in transactions and itemsets are sorted lexicographically: itemset X = (x1, x2, ..., xk) with x1 ≤ x2 ≤ ... ≤ xk
– Length of an itemset: the cardinality of the itemset (k-itemset: itemset of length k)
– Support of an itemset X: support(X) = |{T ∈ D | X ⊆ T}|
– Frequent itemset: an itemset X is called frequent iff support(X) ≥ minSup
– Association rule: an implication of the form X ⇒ Y, where X, Y ⊆ I are two itemsets with X ∩ Y = ∅
– Note: simply enumerating all possible association rules is not reasonable → which association rules are interesting w.r.t. D?
Interestingness of Association Rules (with respect to a transaction database D)
– Support: the frequency (probability) of the entire rule with respect to D:
support(X ⇒ Y) = P(X ∪ Y) = |{T ∈ D | X ∪ Y ⊆ T}| / |D| = support(X ∪ Y)
("probability that a transaction in D contains the itemset X ∪ Y")
– Confidence: indicates the strength of the implication in the rule:
confidence(X ⇒ Y) = P(Y | X) = |{T ∈ D | X ∪ Y ⊆ T}| / |{T ∈ D | X ⊆ T}| = support(X ∪ Y) / support(X)
("conditional probability that a transaction in D containing the itemset X also contains the itemset Y")
– Rule form: "Body ⇒ Head [support, confidence]"
• Association rule examples: buys diapers ⇒ buys beers [0.5%, 60%]; major in CS ∧ takes DB ⇒ avg. grade A [1%, 75%]
[Figure: Venn diagram of transactions buying beer, buying diapers, and buying both]
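These two definitions translate directly into code. The sketch below is a straightforward (not optimized) implementation and uses the butter/milk/sugar transaction database from the previous slide to answer the question posed there.

```python
def support(itemset, db):
    """Fraction of transactions in db that contain the itemset."""
    itemset = frozenset(itemset)
    return sum(itemset <= t for t in db) / len(db)

def confidence(body, head, db):
    """confidence(body => head) = support(body ∪ head) / support(body)."""
    return support(set(body) | set(head), db) / support(body, db)

db = [frozenset(t) for t in [
    {"butter", "bread", "milk", "sugar"},
    {"butter", "flour", "milk", "sugar"},
    {"butter", "eggs", "milk", "salt"},
    {"eggs"},
    {"butter", "flour", "milk", "salt", "sugar"},
]]

print(support({"milk", "sugar"}, db))                 # 0.6
print(confidence({"milk", "sugar"}, {"butter"}, db))  # 1.0
```

So {milk, sugar} ⇒ butter holds with support 60% and confidence 100% in this toy database: every transaction containing milk and sugar also contains butter.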
  • 42. Mining of Association Rules
• Task: given a database D, determine all association rules having support ≥ minSup and confidence ≥ minConf (so-called strong association rules).
• Key steps of mining association rules:
1) Find the frequent itemsets, i.e., the itemsets with support ≥ minSup.
2) Use the frequent itemsets to generate association rules: for each itemset X and every nonempty subset Y ⊂ X, generate the rule Y ⇒ (X − Y) if minSup and minConf are fulfilled; there are 2^|X| − 2 association-rule candidates for each itemset X.
• Example — frequent itemsets: 1-itemsets {A}: 3, {B}: 4, {C}: 5; 2-itemsets {A,B}: 3, {A,C}: 2, {B,C}: 4; 3-itemset {A,B,C}: 2.
Rule candidates: A ⇒ B; B ⇒ A; A ⇒ C; C ⇒ A; B ⇒ C; C ⇒ B; A,B ⇒ C; A,C ⇒ B; B,C ⇒ A; A ⇒ B,C; B ⇒ A,C; C ⇒ A,B
Generating Rules from Frequent Itemsets
• For each frequent itemset X: for each nonempty subset Y of X, form the rule Y ⇒ (X − Y) and delete those rules that do not have minimum confidence.
Note: 1) the support always exceeds minSup; 2) the support values of the frequent itemsets suffice to calculate the confidence.
• Example: X = {A, B, C}, minConf = 60%:
conf(A ⇒ B) = 3/3 ✓; conf(B ⇒ A) = 3/4 ✓; conf(A ⇒ C) = 2/3 ✓; conf(C ⇒ A) = 2/5 ✗; conf(B ⇒ C) = 4/4 ✓; conf(C ⇒ B) = 4/5 ✓;
conf(A ⇒ B,C) = 2/3 ✓; conf(B,C ⇒ A) = 1/2 ✗; conf(B ⇒ A,C) = 2/4 ✗; conf(A,C ⇒ B) = 1 ✓; conf(C ⇒ A,B) = 2/5 ✗; conf(A,B ⇒ C) = 2/3 ✓
• Exploit anti-monotonicity for generating candidates for strong association rules!
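Because the supports of all frequent itemsets are already known, rule generation needs no further database scan. The following sketch reproduces the slide's example; the function name and dictionary representation are illustrative choices, not part of the original material.

```python
from itertools import combinations

def generate_rules(supports, min_conf):
    """Generate strong rules Y => (X - Y) from frequent-itemset supports.

    `supports` maps frozenset itemsets to their support counts; the
    confidence is computed from stored supports only.
    """
    rules = []
    for X, sup_x in supports.items():
        if len(X) < 2:
            continue
        for r in range(1, len(X)):
            for body in map(frozenset, combinations(X, r)):
                conf = sup_x / supports[body]
                if conf >= min_conf:
                    rules.append((set(body), set(X - body), conf))
    return rules

# Counts from the slide's example, minConf = 60%.
supports = {
    frozenset("A"): 3, frozenset("B"): 4, frozenset("C"): 5,
    frozenset("AB"): 3, frozenset("AC"): 2, frozenset("BC"): 4,
    frozenset("ABC"): 2,
}
for body, head, conf in generate_rules(supports, 0.6):
    print(body, "=>", head, round(conf, 2))
# keeps exactly the rules marked ✓ above (e.g. A=>B, B=>A, A,C=>B, A,B=>C)
```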
  • 43. Interestingness Measurements
• Objective measures — two popular measurements: support and confidence.
• Subjective measures [Silberschatz & Tuzhilin, KDD95] — a rule (pattern) is interesting if it is unexpected (surprising to the user) and/or actionable (the user can do something with it).
Criticism of Support and Confidence — Example 1 [Aggarwal & Yu, PODS98]
• Among 5000 students: 3000 play basketball (60%), 3750 eat cereal (75%), and 2000 both play basketball and eat cereal (40%).
• The rule "play basketball ⇒ eat cereal [40%, 66.7%]" is misleading, because the overall percentage of students eating cereal is 75%, which is higher than 66.7%.
• The rule "play basketball ⇒ not eat cereal [20%, 33.3%]" is far more accurate, although it has lower support and confidence.
• Observation: playing basketball and eating cereal are negatively correlated.
➢ Not all strong association rules are interesting, and some can be misleading → augment the support and confidence values with interestingness measures such as the correlation: A ⇒ B [supp, conf, corr].
  • 44. Other Interestingness Measures: Correlation (Lift)
• Lift is a simple correlation measure between two items A and B:
corr(A,B) = P(A ∪ B) / (P(A) · P(B)) = P(B | A) / P(B) = conf(A ⇒ B) / supp(B)
• Note: the two rules A ⇒ B and B ⇒ A have the same correlation coefficient.
• It takes both P(A) and P(B) into consideration:
– corr(A,B) > 1: the two items A and B are positively correlated
– corr(A,B) = 1: there is no correlation between the two items A and B
– corr(A,B) < 1: the two items A and B are negatively correlated
• Example 2 (8 transactions):
X: 1 1 1 1 0 0 0 0; Y: 1 1 0 0 0 0 0 0; Z: 0 1 1 1 1 1 1 1
rule | support | confidence | correlation
X ⇒ Y | 25% | 50% | 2
X ⇒ Z | 37.5% | 75% | 0.86
Y ⇒ Z | 12.5% | 50% | 0.57
• X and Y are positively correlated; X and Z are negatively correlated: the support and confidence of X ⇒ Z dominate, but the items X and Z are negatively correlated.
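A quick numerical check of the lift formula, using the basketball/cereal numbers from Example 1 and the X/Y/Z table from Example 2, makes the "misleading rule" point concrete. The helper below is only an illustrative one-liner for the formula.

```python
def lift_from_probs(p_both, p_a, p_b):
    """corr(A,B) = P(A ∪ B) / (P(A) · P(B))."""
    return p_both / (p_a * p_b)

print(lift_from_probs(0.40, 0.60, 0.75))    # 0.888... < 1: basketball and cereal
                                            # are negatively correlated, despite
                                            # the rule's 66.7% confidence
print(lift_from_probs(0.25, 0.50, 0.25))    # X => Y: 2.0 (positively correlated)
print(lift_from_probs(0.375, 0.50, 0.875))  # X => Z: 0.857... ~ 0.86
print(lift_from_probs(0.125, 0.25, 0.875))  # Y => Z: 0.571... ~ 0.57
```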
  • 45. Chapter 3 outline recap — continuing with 4) Further Topics: Hierarchical Association Rules
Hierarchical Association Rules: Motivation
• Problem of association rules over plain itemsets:
– High minSup: Apriori finds only few rules.
– Low minSup: Apriori finds unmanageably many rules.
• Exploit item taxonomies (generalizations, is-a hierarchies), which exist in many applications.
• New task: find all generalized association rules between generalized items → Body and Head of a rule may contain items of any level of the hierarchy.
• Generalized association rule: X ⇒ Y with X, Y ⊂ I, X ∩ Y = ∅, and no item in Y is an ancestor of any item in X (a rule such as jackets ⇒ clothes is essentially trivially true and therefore excluded).
[Figure: item taxonomy — clothes → {outerwear → {jackets, jeans}, shirts}; shoes → {sports shoes, boots}]
  • 46. Hierarchical Association Rules: Motivating Example
• Examples: jeans ⇒ boots and jackets ⇒ boots each have support < minSup, whereas outerwear ⇒ boots has support > minSup.
• Characteristics:
– support("outerwear ⇒ boots") is not necessarily equal to support("jackets ⇒ boots") + support("jeans ⇒ boots"), e.g., if a transaction containing jackets, jeans and boots exists.
– The support for sets of generalizations (e.g., product groups) is higher than the support for sets of individual items: if the support of the rule "outerwear ⇒ boots" exceeds minSup, then the support of the rule "clothes ⇒ boots" does, too.
Mining Multi-Level Associations
• A top-down, progressive-deepening approach:
– First find high-level strong rules, e.g., milk ⇒ bread [20%, 60%].
– Then find their lower-level, "weaker" rules, e.g., 1.5% milk ⇒ wheat bread [6%, 50%].
• Different min_support thresholds across the levels lead to different algorithms:
– adopting the same min_support across all levels
– adopting a reduced min_support at lower levels
[Figure: taxonomy — food → milk (3.5%, 1.5%; brands Sunset, Fraser) and bread (white, wheat; brand Wonder)]
  • 47. Minimum Support for Multiple Levels
• Uniform support (e.g., minSup = 5% at both levels; milk support = 10%, 3.5% milk support = 6%, 1.5% milk support = 4%):
+ the search procedure is simplified (monotonicity)
+ the user is required to specify only one support threshold
• Reduced support / variable support (e.g., minSup = 5% at the upper level and 3% at the lower level, for the same item supports):
+ takes the lower frequency of items at lower levels into consideration
Multilevel Association Mining Using Reduced Support
• A top-down, progressive-deepening approach, processed level-wise (breadth first): first find high-level strong rules (milk ⇒ bread [20%, 60%]), then their lower-level "weaker" rules (1.5% milk ⇒ wheat bread [6%, 50%]).
Three approaches using reduced support (see the sketch below):
• Level-by-level independent method: examine each node in the hierarchy, regardless of whether or not its parent node is found to be frequent.
• Level-cross filtering by single item: examine a node only if its parent node at the preceding level is frequent.
• Level-cross filtering by k-itemset: examine a k-itemset at a given level only if its parent k-itemset at the preceding level is frequent.
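The second approach (level-cross filtering by single item) can be sketched in a few lines. This is a simplified, hypothetical representation of the taxonomy (one dict of item → parent per level) and only illustrates the pruning idea; it is not the full multilevel mining algorithm.

```python
def level_cross_filter(taxonomy_levels, item_support, min_sups):
    """An item at level k is examined only if its parent at level k-1 was
    found frequent; each level may use its own (reduced) minSup.

    `taxonomy_levels` – list of levels, each a dict {item: parent or None}
    `item_support`    – dict {item: support as a fraction}
    `min_sups`        – one minSup per level
    """
    frequent = set()
    for level, min_sup in zip(taxonomy_levels, min_sups):
        for item, parent in level.items():
            if parent is not None and parent not in frequent:
                continue                    # parent infrequent: item is skipped
            if item_support.get(item, 0.0) >= min_sup:
                frequent.add(item)
    return frequent

# Numbers from the slide: milk 10%, "3.5% milk" 6%, "1.5% milk" 4%,
# reduced minSup of 5% at level 1 and 3% at level 2.
levels = [{"milk": None}, {"3.5% milk": "milk", "1.5% milk": "milk"}]
support = {"milk": 0.10, "3.5% milk": 0.06, "1.5% milk": 0.04}
print(level_cross_filter(levels, support, [0.05, 0.03]))
# contains milk, 3.5% milk and 1.5% milk; with a uniform minSup of 5%,
# the 1.5% milk node would be pruned instead.
```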
  • 48. Multilevel Associations: Variants
• Top-down, progressive deepening as before: high-level strong rules first (milk ⇒ bread [20%, 60%]), then lower-level "weaker" rules (1.5% milk ⇒ wheat bread [6%, 50%]).
• Variations for mining multiple-level association rules:
– Level-crossed association rules, e.g., 1.5% milk ⇒ Wonder wheat bread
– Association rules with multiple, alternative hierarchies, e.g., 1.5% milk ⇒ Wonder bread
Multi-level Association: Redundancy Filtering
• Some rules may be redundant due to "ancestor" relationships between items.
• Example:
– R1: milk ⇒ wheat bread [support = 8%, confidence = 70%]
– R2: 1.5% milk ⇒ wheat bread [support = 2%, confidence = 72%]
• We say that rule R1 is an ancestor of rule R2.
• Redundancy: a rule is redundant if its support is close to the "expected" value based on the rule's ancestor.
  • 49. Interestingness of Hierarchical Association Rules: Notions
Let X, X′, Y, Y′ ⊆ I be itemsets.
• An itemset X′ is an ancestor of X iff there exist ancestors x1′, ..., xk′ of x1, ..., xk ∈ X and items xk+1, ..., xn with n = |X| such that X′ = {x1′, ..., xk′, xk+1, ..., xn}.
• Let X′ and Y′ be ancestors of X and Y. Then the rules X′ ⇒ Y′, X ⇒ Y′, and X′ ⇒ Y are called ancestors of the rule X ⇒ Y.
• The rule X′ ⇒ Y′ is a direct ancestor of the rule X ⇒ Y in a set of rules if:
– X′ ⇒ Y′ is an ancestor of X ⇒ Y, and
– there is no rule X″ ⇒ Y″ such that X″ ⇒ Y″ is an ancestor of X ⇒ Y and X′ ⇒ Y′ is an ancestor of X″ ⇒ Y″.
• A hierarchical association rule X ⇒ Y is called R-interesting if:
– there are no direct ancestors of X ⇒ Y, or
– the actual support is larger than R times the expected support, or
– the actual confidence is larger than R times the expected confidence.
Expected Support
• Given the rule X ⇒ Y and its ancestor rule X′ ⇒ Y′, the expected support of X ⇒ Y is defined as
E_{Z′}[P(Z)] = (P(z1)/P(z1′)) × ⋯ × (P(zj)/P(zj′)) × P(Z′),
where Z = X ∪ Y = {z1, ..., zn}, Z′ = X′ ∪ Y′ = {z1′, ..., zj′, zj+1, ..., zn}, and each zi′ ∈ Z′ is an ancestor of zi ∈ Z.
[SA'95] R. Srikant, R. Agrawal: Mining Generalized Association Rules. In VLDB, 1995.
  • 50. Expected Confidence
• Given the rule X ⇒ Y and its ancestor rule X′ ⇒ Y′, the expected confidence of X ⇒ Y is defined as
E_{X′⇒Y′}[P(Y | X)] = (P(y1)/P(y1′)) × ⋯ × (P(yj)/P(yj′)) × P(Y′ | X′),
where Y = {y1, ..., yn}, Y′ = {y1′, ..., yj′, yj+1, ..., yn}, and each yi′ ∈ Y′ is an ancestor of yi ∈ Y.
[SA'95] R. Srikant, R. Agrawal: Mining Generalized Association Rules. In VLDB, 1995.
Interestingness of Hierarchical Association Rules: Example
• Let R = 1.6 and the item supports be: clothes 20, outerwear 10, jackets 4.
No | rule | support | R-interesting?
1 | clothes ⇒ shoes | 10 | yes: it has no ancestors
2 | outerwear ⇒ shoes | 9 | yes: support > R × expected support (w.r.t. rule 1) = 1.6 · (10/20 · 10) = 8
3 | jackets ⇒ shoes | 4 | not w.r.t. support: support 4 > R × expected support (w.r.t. rule 1) = 1.6 · (4/20 · 10) = 3.2, but support 4 < R × expected support (w.r.t. rule 2) = 1.6 · (4/10 · 9) = 5.76 → still need to check the confidence!
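The expected-support formula and the R-interestingness test translate directly into a short sketch. The function names and the list-based passing of item probabilities are illustrative conventions; the numbers reproduce the clothes/outerwear/jackets example above.

```python
def expected_support(child_probs, ancestor_probs, ancestor_rule_support):
    """Expected support of X => Y given its ancestor rule X' => Y':
    the product of P(z_i)/P(z_i') over the generalized items, multiplied
    by the support of the ancestor rule (Srikant & Agrawal, VLDB'95)."""
    expected = ancestor_rule_support
    for p_child, p_ancestor in zip(child_probs, ancestor_probs):
        expected *= p_child / p_ancestor
    return expected

def r_interesting(actual_support, exp_support, R):
    """Support-based part of the R-interestingness test."""
    return actual_support > R * exp_support

# Rule 2 (outerwear => shoes, support 9) vs. its ancestor rule 1
# (clothes => shoes, support 10); item supports: clothes 20, outerwear 10.
exp2 = expected_support([10], [20], 10)   # (10/20) * 10 = 5
print(r_interesting(9, exp2, 1.6))        # 9 > 8   -> True

# Rule 3 (jackets => shoes, support 4) vs. rule 2; jackets has support 4.
exp3 = expected_support([4], [10], 9)     # (4/10) * 9 = 3.6
print(r_interesting(4, exp3, 1.6))        # 4 > 5.76 -> False
```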
  • 51. Chapter 3 outline recap — continuing with 4) Further Topics: Multidimensional and Quantitative Association Rules
Multi-Dimensional Association: Concepts
• Single-dimensional rules: buys milk ⇒ buys bread
• Multi-dimensional rules (≥ 2 dimensions or predicates):
– Inter-dimension association rules (no repeated dimensions): age between 19–25 ∧ status is student ⇒ buys coke
– Hybrid-dimension association rules (repeated dimensions): age between 19–25 ∧ buys popcorn ⇒ buys coke
  • 52. Techniques for Mining Multi-Dimensional Associations
• Search for frequent k-predicate sets (e.g., {age, occupation, buys} is a 3-predicate set). The techniques can be categorized by how the quantitative attribute (here: age) is treated:
1. Static discretization of quantitative attributes: quantitative attributes are statically discretized using predefined concept hierarchies.
2. Quantitative association rules: quantitative attributes are dynamically discretized into "bins" based on the distribution of the data.
3. Distance-based association rules: a dynamic discretization process that considers the distance between data points.
Quantitative Association Rules
• Up to now: associations between Boolean attributes only; now: numerical attributes, too.
• Example — original database:
ID | age | marital status | # cars
1 | 23 | single | 0
2 | 38 | married | 2
Boolean database:
ID | age: 20..29 | age: 30..39 | m-status: single | m-status: married | ...
1 | 1 | 0 | 1 | 0 | ...
2 | 0 | 1 | 0 | 1 | ...
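The conversion from the original table to the Boolean table can be sketched as follows. This is a minimal illustration with a hypothetical booleanize helper and hand-chosen age intervals; it simply reproduces the two-row example above.

```python
def booleanize(records, bins):
    """Statically discretize attributes into Boolean (0/1) columns.

    `bins` maps an attribute name either to a list of (lo, hi) intervals
    (numeric attribute) or to None (categorical: one column per value).
    """
    rows = []
    for rec in records:
        row = {"ID": rec["ID"]}
        for attr, intervals in bins.items():
            value = rec[attr]
            if intervals is None:                      # categorical attribute
                row[f"{attr}: {value}"] = 1
            else:                                      # numeric attribute
                for lo, hi in intervals:
                    row[f"{attr}: {lo}..{hi}"] = int(lo <= value <= hi)
        rows.append(row)
    return rows

records = [{"ID": 1, "age": 23, "m-status": "single"},
           {"ID": 2, "age": 38, "m-status": "married"}]
bins = {"age": [(20, 29), (30, 39)], "m-status": None}
for row in booleanize(records, bins):
    print(row)
# {'ID': 1, 'age: 20..29': 1, 'age: 30..39': 0, 'm-status: single': 1}
# {'ID': 2, 'age: 20..29': 0, 'age: 30..39': 1, 'm-status: married': 1}
```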
  • 53. Quantitative Association Rules: Ideas
• Static discretization:
– Discretize all attributes before mining the association rules, e.g., by using a generalization hierarchy for each attribute.
– Substitute numerical attribute values by ranges or intervals.
• Dynamic discretization:
– Discretize the attributes during association rule mining.
– Goal (e.g.): maximization of confidence.
– Unify neighboring association rules into a generalized rule.
Partitioning of Numerical Attributes
• Problem: minimum support
– Too many intervals → too small a support for each individual interval.
– Too few intervals → too small a confidence of the rules.
• Solution: first partition the domain into many intervals, then create new intervals by merging adjacent intervals (see the sketch below).
• Numeric attributes are dynamically discretized such that the confidence or compactness of the mined rules is maximized.
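The partition-then-merge idea can be illustrated with a simplified greedy sketch: start from many equi-width intervals and merge adjacent ones until each merged interval reaches the minimum support. This is only an assumption-laden toy version of the idea; actual quantitative-rule miners (e.g., Srikant & Agrawal) use a partial-completeness criterion rather than this greedy scheme.

```python
def partition_and_merge(values, n_initial, min_support):
    """Partition a numeric domain into n_initial equi-width intervals,
    then greedily merge adjacent intervals left to right until each
    merged interval covers at least min_support of the values."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_initial or 1.0
    counts = [0] * n_initial
    for v in values:                       # count values per fine interval
        idx = min(int((v - lo) / width), n_initial - 1)
        counts[idx] += 1
    min_count = min_support * len(values)
    merged, start, acc = [], 0, 0
    for i, c in enumerate(counts):
        acc += c
        if acc >= min_count or i == n_initial - 1:
            merged.append(((lo + start * width, lo + (i + 1) * width), acc))
            start, acc = i + 1, 0
    return merged                          # list of ((lo, hi), count) pairs

ages = [21, 22, 23, 25, 31, 33, 34, 35, 38, 44, 51, 62]
print(partition_and_merge(ages, n_initial=8, min_support=0.25))
# few, wider intervals survive where the data is sparse; dense regions
# keep narrower intervals, preserving support for the rules mined on them
```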
  • 54. Quantitative Association Rules (2-D)
• 2-D quantitative association rules: A_quan1 ∧ A_quan2 ⇒ A_cat
• Cluster "adjacent" association rules to form general rules using a 2-D grid.
• Example: age(X, "30–34") ∧ income(X, "24K–48K") ⇒ buys(X, "high-resolution TV")
Chapter 3 outline recap: 1) Introduction; 2) Mining Frequent Itemsets; 3) Simple Association Rules; 4) Further Topics (Hierarchical and Quantitative Association Rules); 5) Summary