4. Clustering
• Clustering is the process of finding meaningful groups in data.
• For example, the customers of a company can be grouped based on purchase behavior.
• The task of clustering is used in two different classes of applications:
• to describe a given dataset, and
• as a preprocessing step for other data science algorithms.
5. Clustering to describe the data
• The most common application of clustering is to explore the data and find all the possible meaningful groups in it.
• Clustering a company's customer records can yield groups such that customers within a group are more like each other than customers belonging to other groups.
• Applications:
• Marketing: finding the common groups of customers
• Document clustering: automatically grouping documents, a common text mining task
• Session grouping: in web analytics, clustering helps in understanding common groups of clickstream patterns
7. Types of clustering techniques
• The clustering process seeks to find groupings in data such that data points within a cluster are more similar to each other than to data points in other clusters.
• One common way of measuring similarity is the Euclidean distance in n-dimensional space, as in the sketch below.
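As a quick illustration of the distance measure, here is a minimal NumPy sketch (the two points are made-up values for the example):

```python
import numpy as np

# Two example points in 3-dimensional space (arbitrary values).
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 6.0, 3.0])

# Euclidean distance: the square root of the sum of squared
# coordinate differences.
distance = np.sqrt(np.sum((a - b) ** 2))
print(distance)  # 5.0
```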
8. Taxonomy based on data point's membership
• Exclusive or strict partitioning clusters: each data object belongs to exactly one cluster.
• Overlapping clusters: the cluster groups are not exclusive, and each data object may belong to more than one cluster.
• Hierarchical clusters: each child cluster can be merged to form a parent cluster.
• Fuzzy or probabilistic clusters: each data point belongs to all cluster groups with varying degrees of membership, from 0 to 1.
9. Taxonomy by algorithmic approach
• Prototype-based clustering: each cluster is represented by a central data object, also called a prototype (centroid clustering or center-based clustering).
• Density clustering: each dense area can be assigned a cluster, and low-density areas can be discarded as noise.
• Hierarchical clustering: a cluster hierarchy is created based on the distance between data points. The output of hierarchical clustering is a dendrogram: a tree diagram that shows the different clusters at any level of precision specified by the user.
• Model-based clustering: builds on statistics and probability distribution models; this technique is also called distribution-based clustering. A mixture of Gaussians is one of the model-based clustering techniques.
11. k-Means
• k-Means clustering is a prototype-based clustering method in which the dataset is divided into k clusters.
• The user specifies the number of clusters (k) into which the dataset should be grouped.
• The objective of k-means clustering is to find a prototype data point for each cluster; all the data points are then assigned to the nearest prototype, which then forms a cluster.
• The prototype is called the centroid: the center of the cluster.
• The center of the cluster can be the mean of all data objects in the cluster, as in k-means, or the most representative data object, as in k-medoid clustering.
• The cluster centroid or mean data object does not have to be a real data point in the dataset; it can be an imaginary data point that represents the characteristics of all the data points within the cluster.
12. k-Means
• The data objects inside a partition belong to the cluster.
• These partitions are also called Voronoi partitions, and each prototype is a seed in a Voronoi partition.
13. Algorithm
• The logic of finding k clusters within a given dataset is rather simple, and the algorithm always converges to a solution.
• However, in most cases the final result is only locally optimal: the solution does not converge to the best global solution.
• Step 1: Initialize centroids
• The first step in the k-means algorithm is to initialize k random centroids.
• Step 2: Assign data points
• All the data points are then assigned to the nearest centroid, forming clusters.
• Step 3: Calculate new centroids
• The new centroids are the means of the respective clusters.
14. Algorithm
• Step 4: Repeat assignment and calculate new centroids
• The assignment of data points to the nearest (recalculated) centroid is repeated.
• Step 5: Termination
• Step 3 (calculating new centroids) and Step 4 (assigning data points to the new centroids) are repeated until no further change in the assignment of data points occurs.
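To make Steps 1-5 concrete, here is a minimal NumPy sketch of the loop described above; the toy dataset, k = 3, and the iteration cap are assumptions for the example, not part of the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))  # toy dataset (assumption for the example)
k = 3

# Step 1: initialize k random centroids by sampling data points.
centroids = X[rng.choice(len(X), size=k, replace=False)]

for _ in range(100):  # iteration cap as a safety net
    # Steps 2 and 4: assign every data point to its nearest centroid.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)

    # Step 3: the new centroid is the mean of each cluster; keep the old
    # centroid if a cluster ends up empty (the issue noted on slide 15).
    new_centroids = np.array([
        X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
        for j in range(k)
    ])

    # Step 5: terminate when the centroids, and hence the assignments,
    # stop changing.
    if np.allclose(new_centroids, centroids):
        break
    centroids = new_centroids
```

In practice a library routine such as sklearn.cluster.KMeans would be used instead; it runs several random restarts by default, which softens the local-optimum issue noted above.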
15. Some key issues
• Initialization: the final cluster grouping depends on the random initialization and the nature of the dataset.
• Empty clusters: one possibility in k-means clustering is the formation of empty clusters with which no data objects are associated.
• Outliers: since the SSE (sum of squared errors) is used as the objective function, k-means clustering is susceptible to outliers.
• Post-processing: since k-means clustering seeks a locally optimal solution, a few post-processing techniques can be introduced to force a new solution with a lower SSE.
16. Evaluation of clusters
• Evaluation of clustering can be as simple as computing the total SSE.
• Good models have a low SSE within each cluster and a low overall SSE among all clusters.
• SSE can also be expressed as the average within-cluster distance, calculated for each cluster and then averaged over all clusters.
• The Davies-Bouldin index is a measure of the uniqueness of the clusters; it takes into consideration both the cohesiveness of a cluster (the distance between the data points and the center of the cluster) and the separation between clusters.
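A short, self-contained sketch of both measures using scikit-learn; the toy data and k = 3 are assumptions for the example, and KMeans exposes the total SSE directly as inertia_:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))  # toy dataset (assumption)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Total SSE: the squared distance of every point to its assigned centroid.
sse = km.inertia_

# Davies-Bouldin index: lower values indicate cohesive, well-separated
# clusters.
dbi = davies_bouldin_score(X, km.labels_)

print(f"total SSE = {sse:.2f}, Davies-Bouldin index = {dbi:.3f}")
```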
19. DBSCAN
• A cluster can also be defined as an area of high concentration (or density) of data objects surrounded by areas of low concentration (or density) of data objects.
• A density-clustering algorithm identifies clusters in the data based on the measurement of the density distribution in n-dimensional space.
• Specifying the number of clusters (k) is not necessary for density-based algorithms.
• Thus, density-based clustering can serve as an important data exploration technique.
• Density can be defined as the number of data points in a unit of n-dimensional space.
20. Algorithm
• Step 1: Define epsilon and MinPoints
• A density is calculated for all data points in the dataset, within a given fixed radius ε (epsilon).
• To determine whether a neighborhood is high-density or low-density, a threshold number of data points (MinPoints) has to be defined, above which the neighborhood is considered high-density.
• Both ε and MinPoints are user-defined parameters.
21. Algorithm
• Step 2: Classify data points
• Core points: all the data points inside the high-density region of at least one data point are considered core points. A high-density region is a space where there are at least MinPoints data points within a radius of ε of some data point.
• Border points: border points sit on the circumference of radius ε from a data point. A border point is the boundary between high-density and low-density space. Border points are counted within the high-density space calculation.
• Noise points: any point that is neither a core point nor a border point is called a noise point. Noise points form a low-density region around the high-density region.
22. Algorithm
• Step 3: Clustering
• Groups of core points form distinct clusters: if two core points are within ε of each other, then both core points belong to the same cluster.
• All these clustered core points form a cluster, which is surrounded by low-density noise points.
• A few data points are left unlabeled or assigned to a default noise cluster.
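A minimal sketch of the three steps using scikit-learn's DBSCAN; the eps and min_samples values, and the toy dataset, are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))  # toy dataset (assumption)

# Step 1: eps (ε) and min_samples (MinPoints) are the user-defined
# parameters.
db = DBSCAN(eps=0.3, min_samples=5).fit(X)

# Step 2: core_sample_indices_ lists the core points.
# Step 3: labels_ holds the cluster ids; -1 marks noise points.
labels = db.labels_
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
n_noise = int((labels == -1).sum())
print(f"core points: {len(db.core_sample_indices_)}, "
      f"clusters: {n_clusters}, noise points: {n_noise}")
```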
23. Special cases: varying densities
• Suppose a dataset has four distinct regions, numbered 1-4. Region 1 is the high-density area A, regions 2 and 4 are of medium density B, and between them lies region 3, which is of extremely low density C.
• If the density threshold parameters are tuned to partition and identify region 1, then regions 2 and 4 (with density B) will be considered noise, along with region 3.
26. SOM
• A self-organizing map (SOM) is a powerful visual clustering technique that evolved from a combination of neural networks and prototype-based clustering.
• A key distinction of this neural network is the absence of an output target function to optimize or predict; hence, it is an unsupervised learning algorithm.
• The SOM methodology is used to project data objects from data space, usually of n dimensions, to grid space, usually of two dimensions.
27. Algorithm
• Step 1: Specify the topology
• Two-dimensional grids of rows and columns, with either a rectangular or a hexagonal lattice, are commonly used in SOMs.
• The number of centroids is the product of the number of rows and columns in the grid.
28. Algorithm
• Step 2: Initialize centroids
• A SOM starts the process by initializing the centroids. The initial centroids are the values of random data objects from the dataset. This is similar to initializing centroids in k-means clustering.
• Step 3: Assign data objects
• After the centroids are selected and placed on the grid at the intersections of rows and columns, data objects are selected one by one and assigned to the nearest centroid.
• Step 4: Update centroids
• The nearest centroid and, to a lesser extent, its neighbors on the grid are moved toward the assigned data object.
29. Algorithm
• Step 5: Termination
• The algorithm continues until no significant centroid updates take place in a run or until the specified run count is reached.
• A SOM tends to converge to a solution in most cases but doesn't guarantee an optimal solution.
• Step 6: Map a new data object
• Any new data object can quickly be given a location on the grid space, based on its proximity to the centroids.
• The characteristics of the new data object can be further understood by studying its neighbors.
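A compact NumPy sketch of Steps 1-6 on a small rectangular grid; the toy dataset, grid size, learning rate, and neighborhood width are all illustrative assumptions (libraries such as MiniSom provide full-featured implementations):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))  # toy dataset (assumption)
rows, cols = 5, 5              # Step 1: rectangular 5x5 grid -> 25 centroids

# Step 2: initialize the centroids with random data objects.
centroids = X[rng.choice(len(X), size=rows * cols, replace=False)].copy()
# Grid coordinates of each centroid, used by the neighborhood function.
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)

lr, sigma = 0.5, 1.5
for epoch in range(20):  # Step 5: fixed run count as the termination rule
    for x in X[rng.permutation(len(X))]:
        # Step 3: assign the data object to the nearest centroid ("winner").
        winner = np.argmin(np.linalg.norm(centroids - x, axis=1))
        # Step 4: move the winner and its grid neighbors toward the object,
        # weighted by a Gaussian over grid distance.
        g = np.exp(-np.sum((grid - grid[winner]) ** 2, axis=1) / (2 * sigma**2))
        centroids += lr * g[:, None] * (x - centroids)
    lr *= 0.9
    sigma *= 0.9  # shrink the learning rate and neighborhood over time

# Step 6: map a new data object to its location on the grid.
new_point = rng.normal(size=3)
r, c = grid[np.argmin(np.linalg.norm(centroids - new_point, axis=1))]
print(f"new object maps to grid cell ({int(r)}, {int(c)})")
```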