Cluster analysis is a technique for grouping objects into clusters based on their similarities. The major approaches include partitioning methods, hierarchical methods, density-based methods, and grid-based methods. Partitioning methods divide the data objects into a set number of clusters by optimizing a chosen criterion; the k-means and k-medoids clustering algorithms are typical examples.
Cluster analysis is used to group similar objects together and separate dissimilar objects. It has applications in understanding data patterns and reducing large datasets. The main types are partitional clustering, which divides data into non-overlapping subsets, and hierarchical clustering, which arranges clusters in a tree structure. Popular clustering algorithms include k-means, hierarchical clustering, and graph-based clustering. K-means partitions data into k clusters by minimizing distances between points and cluster centroids, but it requires specifying k and is sensitive to the initial centroid positions. Hierarchical clustering creates nested clusters without needing the number of clusters in advance, but has higher computational costs.
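The k-means loop described above (assign each point to the nearest centroid, then move each centroid to the mean of its cluster) can be sketched in a few lines of Python. The data points, k, seed, and iteration count below are invented for illustration:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each point to the nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # sensitivity to this choice is the known weakness
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centroids[i]          # keep an empty cluster's old centroid
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
cents, cls = kmeans(pts, 2)
```

With two well-separated groups, the centroids settle near the center of each group regardless of which points the random initialization picks.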
This document discusses machine learning concepts including supervised vs. unsupervised learning, clustering algorithms, and specific methods like k-means and k-nearest neighbors. It provides examples of how clustering can be used for applications such as market segmentation and astronomical data analysis. Key algorithms covered are hierarchical methods, partitioning methods, k-means, which groups data by assigning objects to the closest cluster center, and k-nearest neighbors, a classification method that labels new data based on its closest training examples.
This document provides an overview of clustering and classification techniques. It defines clustering as organizing objects into groups of similar objects and discusses common clustering algorithms like k-means and hierarchical clustering. It also provides examples of how k-means works and references for further information.
This document discusses Classification and Regression Trees (CART), a data mining technique for classification and regression. CART builds decision trees by recursively splitting data into purer child nodes based on a split criterion, with the goal of minimizing heterogeneity. It describes the eight-step CART generation process: 1) testing all possible splits of each variable, 2) evaluating splits by their reduction in impurity, 3) selecting the best split per variable, 4) repeating for all variables, 5) selecting the split with the greatest reduction in impurity, 6) assigning classes to nodes, 7) repeating on the child nodes, and 8) pruning the tree to avoid overfitting.
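Steps 1–5 of that process, for a single numeric variable, can be sketched as follows. This is a minimal illustration using Gini impurity as the split criterion; the toy data is invented:

```python
def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(xs, ys):
    """Try every threshold on one numeric variable, score each split by
    its reduction in impurity, and keep the split with the largest reduction."""
    parent = gini(ys)
    best = (None, 0.0)
    for t in sorted(set(xs))[:-1]:           # candidate thresholds
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        n = len(ys)
        weighted = len(left) / n * gini(left) + len(right) / n * gini(right)
        gain = parent - weighted             # reduction in impurity
        if gain > best[1]:
            best = (t, gain)
    return best

xs = [1, 2, 3, 10, 11, 12]
ys = ['a', 'a', 'a', 'b', 'b', 'b']
t, gain = best_split(xs, ys)
```

On this perfectly separable data the best threshold lands between the two groups and the split removes all impurity, so the gain equals the parent's Gini of 0.5.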
The document provides an overview of topics to be covered in a data analysis course, including cluster analysis and decision trees. The course will cover descriptive statistics, probability distributions, correlation, regression, hypothesis testing, clustering methods like k-means, and decision tree techniques like CHAID. Clustering involves grouping similar objects together to identify homogeneous clusters that are heterogeneous from each other. Applications of clustering include market segmentation, credit risk analysis, and operations. The document gives an example of clustering students based on their exam scores.
Clustering is a data mining technique used to place data elements into related groups. It is the process of partitioning the data (or objects) into classes such that the data in one class are more similar to each other than to those in other clusters.
This presentation covers Classification and Regression Trees (CART): the CART decision tree methodology, Classification Trees, Regression Trees, Differences in CART, When to use CART, Advantages of CART, Limitations of CART, and What is a CART in Machine Learning.
Cluster analysis is a descriptive technique that groups similar objects into clusters. It finds natural groupings within data according to characteristics in the data. Cluster analysis is used for taxonomy development, data simplification, and relationship identification. Some applications of cluster analysis include market segmentation in marketing, grouping users on social networks, and reducing markers on maps. It requires representative data and assumes groups will be sufficiently sized and not distorted by outliers.
Data mining, Knowledge Discovery Process, Classification (Dr. Abdul Ahad Abro)
The document provides an overview of data mining techniques and processes. It discusses data mining as the process of extracting knowledge from large amounts of data. It describes common data mining tasks like classification, regression, clustering, and association rule learning. It also outlines popular data mining processes like CRISP-DM and SEMMA that involve steps of business understanding, data preparation, modeling, evaluation and deployment. Decision trees are presented as a popular classification technique that uses a tree structure to split data into nodes and leaves to classify examples.
This document discusses various clustering analysis methods including k-means, k-medoids (PAM), and CLARA. It explains that clustering involves grouping similar objects together without predefined classes. Partitioning methods like k-means and k-medoids (PAM) assign objects to clusters to optimize a criterion function. K-means uses cluster centroids while k-medoids uses actual data points as cluster representatives. PAM is more robust to outliers than k-means but does not scale well to large datasets, so CLARA applies PAM to samples of the data. Examples of clustering applications include market segmentation, land use analysis, and earthquake studies.
This document provides an overview of data mining techniques and concepts. It defines data mining as the process of discovering interesting patterns and knowledge from large amounts of data. The key steps involved are data cleaning, integration, selection, transformation, mining, evaluation, and presentation. Common data mining techniques include classification, clustering, association rule mining, and anomaly detection. The document also discusses data sources, major applications of data mining, and challenges.
Supervised learning and Unsupervised learning (Usama Fayyaz)
This document discusses supervised and unsupervised machine learning. Supervised learning uses labeled training data to learn a function that maps inputs to outputs. Unsupervised learning is used when only input data is available, with the goal of modeling underlying structures or distributions in the data. Common supervised algorithms include decision trees and logistic regression, while common unsupervised algorithms include k-means clustering and dimensionality reduction.
This document discusses spatial data mining and its applications. Spatial data mining involves extracting knowledge and relationships from large spatial databases. It can be used for applications like GIS, remote sensing, medical imaging, and more. Some challenges include the complexity of spatial data types and large data volumes. The document also covers topics like spatial data warehouses, dimensions and measures in spatial analysis, spatial association rule mining, and applications in fields such as earth science, crime mapping, and commerce.
k-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, serving as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells.
This document discusses clustering, which is the task of grouping data points into clusters so that points within the same cluster are more similar to each other than points in other clusters. It describes different types of clustering methods, including density-based, hierarchical, partitioning, and grid-based methods. It provides examples of specific clustering algorithms like K-means, DBSCAN, and discusses applications of clustering in fields like marketing, biology, libraries, insurance, city planning, and earthquake studies.
The document discusses various model-based clustering techniques for handling high-dimensional data, including expectation-maximization, conceptual clustering using COBWEB, self-organizing maps, subspace clustering with CLIQUE and PROCLUS, and frequent pattern-based clustering. It provides details on the methodology and assumptions of each technique.
Clustering is the process of grouping similar objects together. It allows data to be analyzed and summarized. There are several methods of clustering including partitioning, hierarchical, density-based, grid-based, and model-based. Hierarchical clustering methods are either agglomerative (bottom-up) or divisive (top-down). Density-based methods like DBSCAN and OPTICS identify clusters based on density. Grid-based methods impose grids on data to find dense regions. Model-based clustering uses models like expectation-maximization. High-dimensional data can be clustered using subspace or dimension-reduction methods. Constraint-based clustering allows users to specify preferences.
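Of the density-based methods mentioned above, DBSCAN is compact enough to sketch directly: grow a cluster from any core point (one with at least `min_pts` neighbours within `eps`) and mark everything unreachable as noise. The data and parameter values here are invented for illustration:

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: clusters grow from core points; points reachable
    from no core point are labelled noise (-1)."""
    def neighbours(i):
        return [j for j, q in enumerate(points)
                if (points[i][0] - q[0]) ** 2 + (points[i][1] - q[1]) ** 2 <= eps * eps]
    labels = [None] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbours(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                  # provisionally noise
            continue
        labels[i] = cid                     # i is a core point: start a cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid             # border point reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = neighbours(j)
            if len(jn) >= min_pts:          # j is itself core: expand further
                queue.extend(jn)
        cid += 1
    return labels

pts = [(0, 0), (0, 1), (1, 0), (1, 1),
       (10, 10), (10, 11), (11, 10), (11, 11), (5, 50)]
labels = dbscan(pts, eps=1.5, min_pts=3)
```

The two tight squares form two clusters and the isolated point is flagged as noise, which is the behaviour that distinguishes density-based methods from k-means.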
This document provides an overview of outlier detection. It defines outliers as observations that deviate significantly from other observations. There are two types of outliers: univariate outliers found in a single feature and multivariate outliers found in multiple features. Common causes of outliers include data entry errors, measurement errors, experimental errors, intentional outliers, data processing errors, sampling errors, and natural outliers. Methods for detecting outliers include z-score analysis, statistical modeling, linear regression models, proximity based models, information theory models, and high dimensional detection methods.
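The z-score analysis mentioned above reduces to a one-liner: flag values whose standardized distance from the mean exceeds a cutoff. The data and the 2.5 threshold below are illustrative choices, not prescriptions:

```python
from statistics import mean, stdev

def zscore_outliers(values, threshold=2.5):
    """Flag univariate outliers whose |z-score| exceeds a threshold
    (2.5 to 3.0 is a common convention; the cutoff is a modelling choice)."""
    m, s = mean(values), stdev(values)
    return [v for v in values if abs((v - m) / s) > threshold]

data = [10, 11, 9, 10, 12, 11, 10, 9, 11, 10, 45]
outliers = zscore_outliers(data)
```

On this sample the lone extreme value is the only point whose z-score clears the threshold; the inliers sit well under one standard deviation from the mean.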
Linear Regression vs Logistic Regression (Edureka)
YouTube: https://youtu.be/OCwZyYH14uw
** Data Science Certification using R: https://www.edureka.co/data-science **
This Edureka PPT on Linear Regression Vs Logistic Regression covers the basic concepts of linear and logistic models. The following topics are covered in this session:
Types of Machine Learning
Regression Vs Classification
What is Linear Regression?
What is Logistic Regression?
Linear Regression Use Case
Logistic Regression Use Case
Linear Regression Vs Logistic Regression
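The core contrast in the topic list above can be shown in a few lines: linear regression fits a line to predict a continuous value, while logistic regression passes the same linear score through a sigmoid to get a classification probability. This sketch uses invented data and plain least squares for the 1-D fit:

```python
import math

def fit_linear(xs, ys):
    """1-D least-squares fit: returns (slope, intercept) of the best line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def sigmoid(z):
    """Squashes a linear score into (0, 1): the logistic-regression output."""
    return 1.0 / (1.0 + math.exp(-z))

# Linear regression predicts a number: this data lies exactly on y = 2x.
slope, intercept = fit_linear([1, 2, 3, 4], [2, 4, 6, 8])

# Logistic regression predicts a probability: a score of 0 sits on the
# decision boundary, positive scores lean toward the positive class.
p = sigmoid(0.0)
```

The sigmoid is monotone, so the sign of the linear score alone decides the predicted class; that is why logistic regression is a classifier despite its regression-style linear core.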
Blog Series: http://bit.ly/data-science-blogs
Data Science Training Playlist: http://bit.ly/data-science-playlist
Keynote address delivered on 23rd March 2011 at the Workshop on Data Mining and Computational Biology in Bioinformatics, sponsored by DBT India and organised by the Unit of Simulation and Informatics, IARI, New Delhi.
I do not claim any originality for the slides or their content, and I acknowledge various web sources.
PCA transforms correlated variables into uncorrelated variables called principal components. It finds the directions of maximum variance in high-dimensional data by computing the eigenvectors of the covariance matrix. The first principal component accounts for as much of the variability in the data as possible, and each succeeding component accounts for as much of the remaining variability as possible. Dimensionality reduction is achieved by ignoring components with small eigenvalues, retaining only the most significant components.
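For 2-D data the eigenvalues of the covariance matrix have a closed form, so the idea above can be demonstrated without a linear-algebra library. The sample points are invented and lie close to a line, so the first component should capture nearly all the variance:

```python
import math

def pca_2d(points):
    """PCA for 2-D data: build the sample covariance matrix and take its
    eigenvalues in closed form (symmetric 2x2 case), largest first."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    a = sum((p[0] - mx) ** 2 for p in points) / (n - 1)          # var(x)
    c = sum((p[1] - my) ** 2 for p in points) / (n - 1)          # var(y)
    b = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1) # cov(x, y)
    mid = (a + c) / 2
    d = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    return mid + d, mid - d   # eigenvalues of [[a, b], [b, c]]

pts = [(1, 1), (2, 2.1), (3, 2.9), (4, 4.2), (5, 4.8)]
l1, l2 = pca_2d(pts)
share = l1 / (l1 + l2)        # fraction of variance explained by component 1
```

Dropping the second component here discards under 1% of the variance, which is the dimensionality-reduction step described above in miniature.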
This document outlines topics to be covered in a presentation on K-means clustering. It will discuss the introduction of K-means clustering, how the algorithm works, provide an example, and applications. The key aspects are that K-means clustering partitions data into K clusters based on similarity, assigns data points to the closest centroid, and recalculates centroids until clusters are stable. It is commonly used for market segmentation, computer vision, astronomy, and agriculture.
CLIQUE is a grid-based clustering algorithm that identifies dense units in subspaces of high-dimensional data to provide efficient clustering. It works by first partitioning each attribute dimension into equal intervals and thus the data space into rectangular grid cells. It finds dense units in low-dimensional subspaces and intersects them to identify dense units in higher dimensions. These dense units are then grouped into clusters. CLIQUE scales linearly with the size of the data and the number of dimensions, and it automatically identifies the relevant subspaces for clustering. However, clustering accuracy may be reduced in exchange for the method's simplicity.
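The first pass of that grid-based idea, partitioning the space into equal cells and keeping the dense ones, can be sketched as below. The interval width, density threshold, and points are invented for illustration; full CLIQUE would go on to intersect dense units across subspaces:

```python
from collections import Counter

def dense_cells(points, interval=1.0, density_threshold=3):
    """Grid-based first pass in the style of CLIQUE: partition each
    dimension into equal intervals, count points per grid cell, and
    keep the cells whose count reaches the density threshold."""
    counts = Counter(
        (int(x // interval), int(y // interval)) for x, y in points
    )
    return {cell for cell, n in counts.items() if n >= density_threshold}

pts = [(0.1, 0.2), (0.5, 0.9), (0.8, 0.4),
       (5.1, 5.2), (5.5, 5.9), (5.8, 5.4),
       (9.9, 0.1)]
dense = dense_cells(pts)
```

Only the two cells holding three points each survive; the singleton near (9.9, 0.1) falls below the threshold, which is how grid methods discard sparse regions cheaply.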
This document summarizes a machine learning workshop on feature selection. It discusses typical feature selection methods like single feature evaluation using metrics like mutual information and Gini indexing. It also covers subset selection techniques like sequential forward selection and sequential backward selection. Examples are provided showing how feature selection improves performance for logistic regression on large datasets with more features than samples. The document outlines the workshop agenda and provides details on when and why feature selection is important for machine learning models.
C and Data Structures first unit notes (JNTUH syllabus) (Acad)
This document provides an overview of computer systems and components. It discusses the hardware and software aspects of computers, including input/output devices, the central processing unit, primary and auxiliary storage, and system and application software. It also describes different computing environments like personal, time-sharing, client-server, and distributed computing. The document outlines the evolution of computer languages from machine language to high-level languages. It discusses the steps to create and run computer programs, including writing, compiling, linking, and executing programs. Finally, it introduces the C programming language and provides a brief history of its development.
A structure is a collection of variables of different data types grouped together under a single name. A structure declaration defines the format of the structure, while a structure variable allocates memory for it. Structures allow grouping of related data and can be used within other structures or as elements of an array. Pointers to structures can be used to access member variables using the -> operator. Structures can be passed as arguments to functions to organize related data.
The document provides an overview of TinyOS, an open source operating system designed for wireless sensor networks. It discusses TinyOS' architecture, component model, programming using NesC, and key characteristics. TinyOS uses an event-driven model with non-blocking calls and no process scheduling. It has a small memory footprint and aims to minimize power consumption. The document also provides examples of TinyOS applications and components.
This document discusses parallel processing techniques such as pipelining and vector processing to increase computational speed. It covers Flynn's classification of computer architectures, arithmetic pipelining using a floating-point adder as an example, instruction pipelining with a four-segment model, resolving data dependencies and branch difficulties in pipelines, and RISC pipeline examples addressing delayed load and branch issues. The key techniques discussed are decomposing operations into parallel suboperations, hardware interlocks, operand forwarding, and compiler assistance.
The document discusses input/output organization in computer systems. It describes peripheral devices like monitors, keyboards, printers, and storage devices that are connected to computers. It then explains the need for input/output interfaces to handle differences in signal values, timing, data formats, and operating modes between the CPU and peripherals. Common interface types include serial and parallel interfaces. The document outlines techniques for synchronous and asynchronous data transfer, including the use of handshaking protocols to ensure reliable communication between devices. It provides examples of specific interface chips like the 8251 serial interface adapter.
The document discusses different levels of computer memory hierarchy including main memory, cache memory, auxiliary memory, and virtual memory. Main memory uses RAM and ROM chips that are connected to the CPU through address and data buses. The address lines select the specific memory chip and byte location within that chip. Main memory is the highest level of memory that can be accessed directly by the CPU for storage of data and instructions currently in use.
The document discusses minimum spanning trees and two algorithms for finding them: Prim's algorithm and Kruskal's algorithm. Prim's algorithm works by growing a spanning tree from an initial node, always adding the edge with the lowest weight that connects to a node not yet in the tree. Kruskal's algorithm sorts the edges by weight and builds up a spanning tree by adding edges in order as long as they do not form cycles. Both algorithms run on undirected, weighted graphs and produce optimal minimum spanning trees.
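Kruskal's algorithm as described above fits in a short function once cycle detection is delegated to a union-find structure. The graph below is a small invented example:

```python
def kruskal(n, edges):
    """Kruskal's algorithm: sort edges by weight and add each edge
    unless it would close a cycle (checked with union-find)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving keeps trees shallow
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):           # ascending weight order
        ru, rv = find(u), find(v)
        if ru != rv:                        # different components: safe to add
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# Edges as (weight, u, v) on 4 nodes.
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
mst, total = kruskal(4, edges)
```

A spanning tree on 4 nodes always has 3 edges; here the cheapest acyclic choice takes weights 1, 2, and 3 for a total of 6.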
Backtracking and branch and bound are algorithms for solving problems systematically by trying options in an orderly manner. Backtracking uses depth-first search and prunes subtrees that don't lead to solutions. Branch and bound uses breadth-first search and pruning, maintaining upper and lower bounds to eliminate options. Both aim to avoid exhaustive search by eliminating non-promising options early. Examples that can use these techniques include maze navigation, the eight queens problem, and Sudoku puzzles.
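The eight queens problem mentioned above is the classic backtracking demonstration: place one queen per row and prune any partial placement that attacks an earlier queen, so whole subtrees are never explored:

```python
def n_queens(n):
    """Backtracking on the n-queens problem: depth-first search over
    rows, pruning placements that conflict with an earlier queen."""
    solutions = []

    def place(cols):                         # cols[r] = column of queen in row r
        row = len(cols)
        if row == n:
            solutions.append(tuple(cols))
            return
        for col in range(n):
            # prune: same column or same diagonal as any earlier queen
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                cols.append(col)
                place(cols)                  # explore deeper
                cols.pop()                   # backtrack

    place([])
    return solutions
```

The pruning test is what separates this from exhaustive search: of the 8^8 raw placements for n = 8, only a tiny fraction of partial boards are ever visited, and exactly 92 full solutions come out.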
This document discusses classification and prediction. Classification predicts categorical class labels by classifying data based on a training set and class labels. Prediction models continuous values and predicts unknown values. Some applications are credit approval, marketing, medical diagnosis, and treatment analysis. Classification involves a learning step to describe classes and a classification step to classify new data. Prediction involves estimating accuracy by comparing test results to known labels. Issues with classification and prediction include data preparation, comparing methods, and decision tree induction algorithms.
This document discusses association rule mining. Association rule mining finds frequent patterns, associations, correlations, or causal structures among items in transaction databases. The Apriori algorithm is commonly used to find frequent itemsets and generate association rules. It works by iteratively joining frequent itemsets from the previous pass to generate candidates, and then pruning the candidates that have infrequent subsets. Various techniques can improve the efficiency of Apriori, such as hashing to count itemsets and pruning transactions that don't contain frequent itemsets. Alternative approaches like FP-growth compress the database into a tree structure to avoid costly scans and candidate generation. The document also discusses mining multilevel, multidimensional, and quantitative association rules.
This document provides an overview of cluster analysis techniques. It begins by defining cluster analysis and its applications. It then categorizes major clustering methods into partitioning methods (like k-means and k-medoids), hierarchical methods, density-based methods, grid-based methods, and model-based methods. The document discusses different data types that can be clustered and measures for determining cluster quality. It also outlines requirements for effective clustering in data mining.
This document provides an overview of clustering techniques. It discusses what clustering is, different types of attributes that can be clustered, and major clustering approaches. The major approaches covered are partitioning algorithms, which construct partitions and evaluate them; hierarchical algorithms, which create a hierarchical decomposition; and density-based algorithms, which are based on connectivity and density. Examples of applications are also provided.
This document provides an overview of machine learning techniques that can be applied in finance, including exploratory data analysis, clustering, classification, and regression methods. It discusses statistical learning approaches like data mining and modeling. For clustering, it describes techniques like k-means clustering, hierarchical clustering, Gaussian mixture models, and self-organizing maps. For classification, it mentions discriminant analysis, decision trees, neural networks, and support vector machines. It also provides summaries of regression, ensemble methods, and working with big data and distributed learning.
Cluster analysis is an unsupervised machine learning technique that groups similar data objects into clusters. It finds internal structures within unlabeled data by partitioning it into groups based on similarity. Some key applications of cluster analysis include market segmentation, document classification, and identifying subtypes of diseases. The quality of clusters depends on both the similarity measure used and how well objects are grouped within each cluster versus across clusters.
Presentation on Machine Learning and Data Mining
The document discusses the differences between automatic learning/machine learning and data mining. It provides definitions for supervised vs unsupervised learning, what automated induction is, and the base components of data mining. Additionally, it outlines differences in the scientific approach between automatic learning and data mining, as well as differences from an industry perspective, including common data mining techniques used and tips for successful data mining projects.
Very useful for cluster analysis and supportive for engineering and IT students. It also provides an example for every topic, which helps with numerical problems. Good material for reading.
This document provides a short review of clustering techniques for students. It defines clustering and different types of grouping methods such as hard vs soft clustering. It discusses popular clustering algorithms like hierarchical clustering, k-means clustering, and density-based clustering. It also covers cluster validity, usability, preprocessing techniques, meta methods, and visual clustering. Open problems in clustering mentioned include how to identify outlier objects and accelerate classification.
Neo4j MeetUp - Graph Exploration with MetaExp (Adrian Ziegler)
This document discusses graph exploration using Neo4j and describes:
1. Computing meta-paths from graph schemas to efficiently represent knowledge in graphs.
2. Embedding meta-paths to learn vector representations for active learning and preference prediction.
3. An active learning strategy to label informative meta-paths and explore the space of all meta-paths.
Machine Learning and Artificial Neural Networks.ppt
Machine learning and neural networks are discussed. Machine learning investigates how knowledge is acquired through experience. A machine learning model includes what is learned (the domain), who is learning (the computer program), and the information source. Techniques discussed include k-nearest neighbors algorithm, Winnow algorithm, naive Bayes classifier, decision trees, and reinforcement learning. Reinforcement learning involves an agent interacting with an environment to optimize outcomes through trial and error.
This document discusses different clustering methods in data mining. It begins by defining cluster analysis and its applications. It then categorizes major clustering methods into partitioning methods, hierarchical methods, density-based methods, grid-based methods, and model-based clustering methods. Finally, it provides details on partitioning methods like k-means and k-medoids clustering algorithms.
This document discusses data mining techniques, including decision trees. It describes the basic steps in data mining as exploration, model building and validation, and deployment. It then discusses some common techniques used in data mining like association analysis, decision trees, neural networks, and statistical methods. It focuses on decision trees, describing how they take an input object and output a yes/no decision. Decision trees can represent both classification and regression problems depending on whether the target variable is categorical or continuous. The document discusses how decision trees examine predictor variables one at a time to determine the best splits to minimize misclassification.
Function of Rival Similarity in a Cognitive Data Analysis (Maxim Kazantsev)
The document discusses the use of a rival similarity function (FRiS) in cognitive data analysis and machine learning algorithms. FRiS measures the similarity of an object to one object over another, and accounts for locality, normality, invariance and other properties. The authors describe how FRiS can be used to improve algorithms for tasks like classification, feature selection, filling in missing data, and ordering objects. They provide examples of algorithms like FRiS-Class that apply FRiS to problems involving clustering and taxonomy. Evaluation on real datasets shows these FRiS-based algorithms outperform other common methods.
Chapter 7 - Data Mining: Concepts and Techniques, 2nd Ed. slides (Han & Kamber)
The document describes chapter 7 of the book "Data Mining: Concepts and Techniques" which covers cluster analysis. The chapter discusses what cluster analysis is, different types of data that can be analyzed, major clustering methods like partitioning, hierarchical, and density-based methods. It also covers measuring cluster quality, requirements for clustering in data mining, and how to calculate similarity and dissimilarity between data objects.
The document discusses cluster analysis and outlier analysis techniques for data mining. It covers key topics such as defining clusters and the goal of cluster analysis, different types of data that can be analyzed via clustering, major categories of clustering methods like partitioning, hierarchical, density-based, and model-based approaches. Specific clustering algorithms discussed include k-means, k-medoids, hierarchical clustering, DBSCAN, and EM. The document provides examples of clustering applications and discusses evaluating clustering quality and requirements for clustering in data mining.
This document provides an overview of various machine learning algorithms and concepts, including supervised learning techniques like linear regression, logistic regression, decision trees, random forests, and support vector machines. It also discusses unsupervised learning methods like principal component analysis and kernel-based PCA. Key aspects of linear regression, logistic regression, and random forests are summarized, such as cost functions, gradient descent, sigmoid functions, and bagging. Kernel methods are also introduced, explaining how the kernel trick can allow solving non-linear problems by mapping data to a higher-dimensional feature space.
This document provides an overview of machine learning and neural network techniques. It defines machine learning as the field that focuses on algorithms that can learn. The document discusses several key components of a machine learning model, including what is being learned (the domain) and from what information the learner is learning. It then summarizes several common machine learning algorithms like k-NN, Naive Bayes classifiers, decision trees, reinforcement learning, and the Rocchio algorithm for relevance feedback in information retrieval. For each technique, it provides a brief definition and examples of applications.
A brief lesson on what constitutes computational decision making, from simple regression via various classification methods to deep learning. No maths, only basic concepts to teach the lingo of machine learning to a lay audience.
This document discusses content-based image retrieval using singular value decomposition (SVD) and support vector machines (SVM). It begins by explaining the need for automated image indexing and describes content-based image retrieval (CBIR) which searches image collections based on automatically extracted visual features. It then covers SVD for feature extraction and SVM for classification of image classes. The document concludes with experimental results demonstrating 64.985% accuracy on a database using this approach.
2. General Applications of Clustering
Pattern Recognition
Spatial Data Analysis
  create thematic maps in GIS by clustering feature spaces
  detect spatial clusters and explain them in spatial data mining
Image Processing
Economic Science (market research)
WWW
  Document classification
  Cluster Weblog data to discover groups of similar access patterns
Lecture-41 - What is Cluster Analysis?
3. Examples of Clustering Applications
Marketing: Help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs
Land use: Identification of areas of similar land use in an earth observation database
Insurance: Identifying groups of motor insurance policy holders with a high average claim cost
City-planning: Identifying groups of houses according to their house type, value, and geographical location
Earthquake studies: Observed earthquake epicenters should be clustered along continent faults
4. What Is Good Clustering?
A good clustering method will produce high quality clusters with
  high intra-class similarity
  low inter-class similarity
The quality of a clustering result depends on both the similarity measure used by the method and its implementation.
The quality of a clustering method is also measured by its ability to discover some or all of the hidden patterns.
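The intra-class/inter-class criterion above can be made concrete with a small sketch. The helper name `avg_pairwise` and the toy points are invented for illustration:

```python
def avg_pairwise(points_a, points_b=None):
    """Average Euclidean distance within one set of points (intra-class)
    or, if a second set is given, between the two sets (inter-class)."""
    def dist(x, y):
        return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5
    if points_b is None:
        pairs = [(x, y) for i, x in enumerate(points_a) for y in points_a[i + 1:]]
    else:
        pairs = [(x, y) for x in points_a for y in points_b]
    return sum(dist(x, y) for x, y in pairs) / len(pairs)

# Two well-separated toy clusters: a good clustering keeps
# intra-class distances small and inter-class distances large.
c1 = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
c2 = [(10.0, 10.0), (11.0, 10.0), (10.0, 11.0)]
intra = avg_pairwise(c1)      # small: points within one cluster
inter = avg_pairwise(c1, c2)  # large: points across clusters
```

Comparing `intra` against `inter` is the simplest numeric reading of "high intra-class, low inter-class similarity".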
5. Requirements of Clustering in Data Mining
Scalability
Ability to deal with different types of attributes
Discovery of clusters with arbitrary shape
Minimal requirements for domain knowledge to determine input parameters
Able to deal with noise and outliers
Insensitive to order of input records
High dimensionality
Incorporation of user-specified constraints
Interpretability and usability
7. Data Structures
Data matrix (two modes): n objects described by p variables,

$$\begin{bmatrix}
x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\
\vdots & & \vdots & & \vdots \\
x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\
\vdots & & \vdots & & \vdots \\
x_{n1} & \cdots & x_{nf} & \cdots & x_{np}
\end{bmatrix}$$

Dissimilarity matrix (one mode): pairwise distances d(i, j),

$$\begin{bmatrix}
0 & & & & \\
d(2,1) & 0 & & & \\
d(3,1) & d(3,2) & 0 & & \\
\vdots & \vdots & \vdots & \ddots & \\
d(n,1) & d(n,2) & \cdots & \cdots & 0
\end{bmatrix}$$
Lecture-42 - Types of Data in Cluster Analysis
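The two structures above are related: the one-mode dissimilarity matrix can be computed from the two-mode data matrix. A sketch using Euclidean distance (the function names and sample rows are invented for illustration):

```python
def euclidean(x, y):
    """Euclidean distance between two p-dimensional objects."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

def dissimilarity_matrix(data):
    """Lower-triangular dissimilarity matrix: row i holds
    d(i, 0), ..., d(i, i), so every diagonal entry is 0."""
    return [[euclidean(data[i], data[j]) for j in range(i + 1)]
            for i in range(len(data))]

data = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]  # 3 objects, 2 variables
D = dissimilarity_matrix(data)
# D == [[0.0], [5.0, 0.0], [10.0, 5.0, 0.0]]
```

Only the lower triangle is stored because d(i, j) = d(j, i) and d(i, i) = 0.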
8. Measure the Quality of Clustering
Dissimilarity/Similarity metric: similarity is expressed in terms of a distance function, which is typically metric: d(i, j)
There is a separate "quality" function that measures the "goodness" of a cluster.
The definitions of distance functions are usually very different for interval-scaled, boolean, categorical, ordinal and ratio variables.
Weights should be associated with different variables based on applications and data semantics.
It is hard to define "similar enough" or "good enough"; the answer is typically highly subjective.
9. Type of data in clustering analysis
Interval-scaled variables
Binary variables
Categorical, Ordinal, and Ratio-Scaled variables
Variables of mixed types
10. Interval-valued variables
Standardize data
Calculate the mean absolute deviation:

$$s_f = \frac{1}{n}\left(|x_{1f} - m_f| + |x_{2f} - m_f| + \cdots + |x_{nf} - m_f|\right)$$

where

$$m_f = \frac{1}{n}\left(x_{1f} + x_{2f} + \cdots + x_{nf}\right)$$

Calculate the standardized measurement (z-score):

$$z_{if} = \frac{x_{if} - m_f}{s_f}$$

Using mean absolute deviation is more robust than using standard deviation.
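A minimal sketch of this standardization step; the function name and sample values are made up for illustration:

```python
def standardize(values):
    """Standardize one variable f: compute the mean m_f and the mean
    absolute deviation s_f, then return the z-scores (x_if - m_f) / s_f."""
    n = len(values)
    m = sum(values) / n                      # m_f
    s = sum(abs(x - m) for x in values) / n  # mean absolute deviation s_f
    return [(x - m) / s for x in values]

z = standardize([2.0, 4.0, 6.0, 8.0])
# m_f = 5.0, s_f = (3 + 1 + 1 + 3) / 4 = 2.0, so z == [-1.5, -0.5, 0.5, 1.5]
```

Because s_f divides by absolute rather than squared deviations, a single outlier inflates it less than it would inflate the standard deviation, which is the robustness the slide refers to.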
11. Similarity and Dissimilarity Between Objects
Distances are normally used to measure the similarity or
dissimilarity between two data objects.
Some popular ones include the Minkowski distance:
    d(i,j) = (|x_i1 - x_j1|^q + |x_i2 - x_j2|^q + ... + |x_ip - x_jp|^q)^(1/q)
where i = (x_i1, x_i2, ..., x_ip) and j = (x_j1, x_j2, ..., x_jp) are two p-dimensional
data objects, and q is a positive integer.
If q = 1, d is the Manhattan distance:
    d(i,j) = |x_i1 - x_j1| + |x_i2 - x_j2| + ... + |x_ip - x_jp|
12. Similarity and Dissimilarity Between Objects
If q = 2, d is the Euclidean distance:
    d(i,j) = sqrt(|x_i1 - x_j1|^2 + |x_i2 - x_j2|^2 + ... + |x_ip - x_jp|^2)
Properties
d(i,j) ≥ 0
d(i,i) = 0
d(i,j) = d(j,i)
d(i,j) ≤ d(i,k) + d(k,j)
One can also use a weighted distance, the parametric Pearson
product-moment correlation, or other dissimilarity measures.
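A small Python sketch of the Minkowski family (the points are illustrative):

```python
def minkowski(i, j, q):
    """Minkowski distance between two p-dimensional objects i and j."""
    return sum(abs(a - b) ** q for a, b in zip(i, j)) ** (1 / q)

x, y = (0, 0), (3, 4)
print(minkowski(x, y, 1))  # q = 1, Manhattan distance: 7.0
print(minkowski(x, y, 2))  # q = 2, Euclidean distance: 5.0
```

The same function covers both special cases above; only the exponent q changes.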
13. Binary Variables
A contingency table for binary data:

                 Object j
                  1     0    sum
    Object i  1   a     b    a+b
              0   c     d    c+d
            sum  a+c   b+d    p

Simple matching coefficient (invariant if the binary variable is symmetric):
    d(i,j) = (b + c) / (a + b + c + d)
Jaccard coefficient (noninvariant if the binary variable is asymmetric):
    d(i,j) = (b + c) / (a + b + c)
14. Dissimilarity between Binary Variables
Example
gender is a symmetric attribute
the remaining attributes are asymmetric binary
let the values Y and P be set to 1, and the value N be set to 0

Name Gender Fever Cough Test-1 Test-2 Test-3 Test-4
Jack   M     Y     N     P      N      N      N
Mary   F     Y     N     P      N      P      N
Jim    M     Y     P     N      N      N      N

    d(jack, mary) = (0 + 1) / (2 + 0 + 1) = 0.33
    d(jack, jim)  = (1 + 1) / (1 + 1 + 1) = 0.67
    d(jim, mary)  = (1 + 2) / (1 + 1 + 2) = 0.75
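The worked example can be checked with a short Python sketch of the Jaccard coefficient (the 0/1 encodings follow the table above):

```python
def jaccard_dissimilarity(i, j):
    """d(i,j) = (b + c) / (a + b + c) for asymmetric binary vectors."""
    a = sum(1 for u, v in zip(i, j) if u == 1 and v == 1)  # both 1
    b = sum(1 for u, v in zip(i, j) if u == 1 and v == 0)  # i only
    c = sum(1 for u, v in zip(i, j) if u == 0 and v == 1)  # j only
    return (b + c) / (a + b + c)

# Y/P -> 1, N -> 0 for Fever, Cough, Test-1 .. Test-4 (gender is symmetric
# and therefore excluded)
jack = (1, 0, 1, 0, 0, 0)
mary = (1, 0, 1, 0, 1, 0)
jim  = (1, 1, 0, 0, 0, 0)
print(round(jaccard_dissimilarity(jack, mary), 2))  # 0.33
print(round(jaccard_dissimilarity(jack, jim), 2))   # 0.67
print(round(jaccard_dissimilarity(jim, mary), 2))   # 0.75
```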
15. Categorical Variables
A generalization of the binary variable in that it
can take more than two states, e.g., red, yellow, blue, green
Method 1: simple matching
    d(i,j) = (p - m) / p
where m is the number of matches and p is the total number of variables
Method 2: use a large number of binary variables,
creating a new binary variable for each of the M nominal states
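Method 1 is a one-liner in Python (the attribute values are invented for illustration):

```python
def categorical_dissimilarity(i, j):
    """Simple matching: d(i,j) = (p - m) / p, with m = number of matches."""
    p = len(i)
    m = sum(1 for u, v in zip(i, j) if u == v)
    return (p - m) / p

# two objects described by three nominal attributes; one attribute differs
print(categorical_dissimilarity(("red", "small", "round"),
                                ("red", "large", "round")))  # 1/3
```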
16. Ordinal Variables
An ordinal variable can be discrete or continuous;
order is important, e.g., rank
Can be treated like interval-scaled variables:
replace x_if by its rank r_if ∈ {1, ..., M_f}
map the range of each variable onto [0, 1] by replacing the
i-th object in the f-th variable by
    z_if = (r_if - 1) / (M_f - 1)
compute the dissimilarity using methods for interval-scaled variables
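The rank mapping can be sketched as follows (the grade scale is an invented example):

```python
def ordinal_to_interval(ranks, M_f):
    """Map ranks r_if in {1, ..., M_f} onto [0, 1] via z_if = (r_if - 1)/(M_f - 1)."""
    return [(r - 1) / (M_f - 1) for r in ranks]

# e.g. grades encoded as ranks: fair = 1, good = 2, excellent = 3
print(ordinal_to_interval([1, 2, 3], M_f=3))  # [0.0, 0.5, 1.0]
```

After this step the z_if values can be fed directly into any of the interval-scaled distances above.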
17. Ratio-Scaled Variables
Ratio-scaled variable: a positive measurement on
a nonlinear scale, approximately at exponential
scale, such as Ae^(Bt) or Ae^(-Bt)
Methods:
treat them like interval-scaled variables
apply a logarithmic transformation
    y_if = log(x_if)
treat them as continuous ordinal data and treat their ranks as
interval-scaled.
18. Variables of Mixed Types
A database may contain all six types of variables:
symmetric binary, asymmetric binary, nominal, ordinal,
interval, and ratio.
One may use a weighted formula to combine their effects:
    d(i,j) = ( Σ_f δ_ij^(f) · d_ij^(f) ) / ( Σ_f δ_ij^(f) ),  f = 1, ..., p
f is binary or nominal:
    d_ij^(f) = 0 if x_if = x_jf, otherwise d_ij^(f) = 1
f is interval-based: use the normalized distance
f is ordinal or ratio-scaled:
compute ranks r_if and
    z_if = (r_if - 1) / (M_f - 1)
and treat z_if as interval-scaled
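The weighted combination reduces to a single weighted average once each per-variable dissimilarity d_ij^(f) and indicator δ_ij^(f) has been computed; a minimal sketch with invented values:

```python
def mixed_dissimilarity(d, delta):
    """d(i,j) = sum_f delta_ij^(f) * d_ij^(f) / sum_f delta_ij^(f)."""
    num = sum(df * w for df, w in zip(d, delta))
    den = sum(delta)
    return num / den

# per-variable dissimilarities and indicators; delta = 0 when a value is
# missing or the comparison is not meaningful for that variable type
print(mixed_dissimilarity([1.0, 0.0, 0.5], [1, 1, 1]))  # 0.5
print(mixed_dissimilarity([1.0, 0.0, 0.5], [1, 0, 1]))  # 0.75
```

Dropping a variable (δ = 0) simply removes it from both sums, which is how missing values are usually handled with this formula.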
20. Major Clustering Approaches
Partitioning algorithms
Construct various partitions and then evaluate them by some criterion
Hierarchy algorithms
Create a hierarchical decomposition of the set of data (or objects)
using some criterion
Density-based
Based on connectivity and density functions
Grid-based
Based on a multiple-level granularity structure
Model-based
A model is hypothesized for each of the clusters, and the idea is to
find the best fit of that model to the data
Lecture-43 - A Categorization of Major Clustering Methods
22. Partitioning Algorithms: Basic Concept
Partitioning method: construct a partition of a database D
of n objects into a set of k clusters
Given a k, find a partition of k clusters that optimizes the
chosen partitioning criterion
Global optimum: exhaustively enumerate all partitions
Heuristic methods: the k-means and k-medoids algorithms
k-means - each cluster is represented by the center of the cluster
k-medoids or PAM (Partitioning Around Medoids) - each
cluster is represented by one of the objects in the cluster
Lecture-44 - Partitioning Methods
23. The K-Means Clustering Method
Given k, the k-means algorithm is implemented in four steps:
Partition objects into k nonempty subsets
Compute seed points as the centroids of the
clusters of the current partition. The centroid is the
center (mean point) of the cluster.
Assign each object to the cluster with the nearest seed point.
Go back to Step 2; stop when there are no more new assignments.
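The four steps above can be sketched in plain Python (a minimal 2-D illustration; the sample points and the random initialization are ours):

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means following the four steps above."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)            # step 1: initial seed points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                         # step 3: nearest seed point
            c = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[c].append(p)
        new = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl
               else centroids[i]                 # keep centroid if cluster empty
               for i, cl in enumerate(clusters)]
        if new == centroids:                     # step 4: stop on no change
            break
        centroids = new                          # step 2: recompute centroids
    return centroids, clusters

pts = [(1, 1), (1.5, 2), (1, 0), (8, 8), (9, 8), (8, 9)]
centroids, clusters = kmeans(pts, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```

On this toy data the two well-separated groups of three points each are recovered regardless of which initial seeds are drawn; on harder data the result depends on the initialization, which is one of the weaknesses discussed below.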
25. The K-Means Method
Strength
Relatively efficient: O(tkn), where n is the number of objects, k is
the number of clusters, and t is the number of iterations. Normally k, t << n.
Often terminates at a local optimum. The global optimum
may be found using techniques such as deterministic
annealing and genetic algorithms
Weakness
Applicable only when the mean is defined; what about categorical data?
Need to specify k, the number of clusters, in advance
Unable to handle noisy data and outliers
Not suitable for discovering clusters with non-convex shapes
26. Variations of the K-Means Method
A few variants of k-means differ in:
Selection of the initial k means
Dissimilarity calculations
Strategies to calculate cluster means
Handling categorical data: k-modes
Replacing means of clusters with modes
Using new dissimilarity measures to deal with categorical objects
Using a frequency-based method to update modes of clusters
A mixture of categorical and numerical data: the k-prototype method
27. The K-Medoids Clustering Method
Find representative objects, called medoids, in clusters
PAM (Partitioning Around Medoids, 1987)
starts from an initial set of medoids and iteratively
replaces one of the medoids by one of the non-medoids
if it improves the total distance of the resulting clustering
PAM works effectively for small data sets, but does not
scale well to large data sets
CLARA (Kaufmann & Rousseeuw, 1990)
CLARANS (Ng & Han, 1994): randomized sampling
Focusing + spatial data structure (Ester et al., 1995)
28. PAM (Partitioning Around Medoids) (1987)
PAM (Kaufman and Rousseeuw, 1987), built into S-Plus
Uses real objects to represent the clusters:
Select k representative objects arbitrarily
For each pair of a non-selected object h and a selected
object i, calculate the total swapping cost TC_ih
For each pair of i and h,
if TC_ih < 0, i is replaced by h;
then assign each non-selected object to the most
similar representative object
Repeat steps 2-3 until there is no change
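The swap loop can be sketched naively in Python; this is an illustrative sketch (recomputing the full cost for every candidate swap, so TC_ih < 0 becomes "the trial cost is lower"), not the incremental cost bookkeeping of the real PAM:

```python
def total_cost(points, medoids, dist):
    """Sum of distances from each object to its nearest medoid."""
    return sum(min(dist(p, m) for m in medoids) for p in points)

def pam(points, k, dist):
    """Naive PAM: keep swapping a medoid i with a non-medoid h
    while the swap lowers the total cost (i.e. TC_ih < 0)."""
    medoids = list(points[:k])                   # arbitrary initial medoids
    improved = True
    while improved:
        improved = False
        for i in list(medoids):
            for h in points:
                if h in medoids:
                    continue
                trial = [h if m == i else m for m in medoids]
                if total_cost(points, trial, dist) < total_cost(points, medoids, dist):
                    medoids = trial              # accept the improving swap
                    improved = True
    return medoids

print(sorted(pam([1, 2, 3, 20, 21, 22], 2, lambda a, b: abs(a - b))))  # [2, 21]
```

Note that the medoids returned are actual data objects, which is what makes k-medoids usable when a mean is undefined and less sensitive to outliers than k-means.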
30. CLARA (Clustering Large Applications) (1990)
CLARA (Kaufmann and Rousseeuw, 1990)
Built into statistical analysis packages, such as S+
It draws multiple samples of the data set, applies PAM to
each sample, and gives the best clustering as the output
Strength: deals with larger data sets than PAM
Weakness:
Efficiency depends on the sample size
A good clustering based on samples will not
necessarily represent a good clustering of the whole
data set if the sample is biased
31. CLARANS (“Randomized” CLARA) (1994)
CLARANS (A Clustering Algorithm based on Randomized
Search) draws a sample of neighbors dynamically
The clustering process can be presented as searching a
graph where every node is a potential solution, that is, a
set of k medoids
If a local optimum is found, CLARANS starts from a new
randomly selected node in search of a new local optimum
It is more efficient and scalable than both PAM and CLARA
Focusing techniques and spatial access structures may
further improve its performance
33. Hierarchical Clustering
Uses a distance matrix as the clustering criterion. This
method does not require the number of clusters k as an
input, but needs a termination condition.

[Figure: five objects a, b, c, d, e. Agglomerative clustering (AGNES)
proceeds from Step 0 to Step 4, merging {a,b}, {d,e}, {c,d,e}, and finally
{a,b,c,d,e}; divisive clustering (DIANA) runs the same steps in reverse,
from Step 4 back to Step 0.]
Lecture-45 - Hierarchical Methods
34. AGNES (Agglomerative Nesting)
Introduced in Kaufmann and Rousseeuw (1990)
Implemented in statistical analysis packages, e.g., S-Plus
Uses the single-link method and the dissimilarity matrix
Merges nodes that have the least dissimilarity
Goes on in a non-descending fashion
Eventually all nodes belong to the same cluster

[Figure: three scatter plots on a 0-10 grid showing the data points
being progressively merged into larger clusters.]
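The single-link merge loop can be sketched in Python on a tiny 1-D example (the points are invented; a real AGNES would also record merge heights for the dendrogram):

```python
def single_link_agnes(points, dist):
    """Agglomerative single-link: repeatedly merge the two closest clusters."""
    clusters = [[p] for p in points]    # every object starts as its own cluster
    merges = []
    while len(clusters) > 1:
        # single-link distance = least dissimilarity between any two members
        i, j = min(((i, j) for i in range(len(clusters))
                           for j in range(i + 1, len(clusters))),
                   key=lambda ij: min(dist(a, b)
                                      for a in clusters[ij[0]]
                                      for b in clusters[ij[1]]))
        merges.append((clusters[i], clusters[j]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return merges

for left, right in single_link_agnes([1, 2, 6, 7, 20], lambda a, b: abs(a - b)):
    print(left, "+", right)
```

The merge sequence ({1}+{2}, {6}+{7}, then the two pairs, then the outlier 20 last) is exactly the bottom-up order a dendrogram of this data would show.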
35. A Dendrogram Shows How the
Clusters are Merged Hierarchically
Decompose data objects into several levels of nested
partitioning (a tree of clusters), called a dendrogram.
A clustering of the data objects is obtained by cutting the
dendrogram at the desired level; each connected
component then forms a cluster.
36. DIANA (Divisive Analysis)
Introduced in Kaufmann and Rousseeuw (1990)
Implemented in statistical analysis packages, e.g., S-Plus
Inverse order of AGNES
Eventually each node forms a cluster on its own

[Figure: three scatter plots on a 0-10 grid showing one cluster
being progressively split until each point stands alone.]
37. More on Hierarchical Clustering Methods
Major weaknesses of agglomerative clustering methods:
do not scale well: time complexity of at least O(n^2),
where n is the total number of objects
can never undo what was done previously
Integration of hierarchical with distance-based clustering:
BIRCH (1996): uses a CF-tree and incrementally adjusts
the quality of sub-clusters
CURE (1998): selects well-scattered points from the
cluster and then shrinks them towards the center of the
cluster by a specified fraction
CHAMELEON (1999): hierarchical clustering using
dynamic modeling
38. BIRCH (1996)
BIRCH: Balanced Iterative Reducing and Clustering using
Hierarchies, by Zhang, Ramakrishnan, and Livny (SIGMOD’96)
Incrementally constructs a CF (Clustering Feature) tree, a
hierarchical data structure for multiphase clustering
Phase 1: scan the DB to build an initial in-memory CF tree
(a multi-level compression of the data that tries to
preserve the inherent clustering structure of the data)
Phase 2: use an arbitrary clustering algorithm to cluster
the leaf nodes of the CF-tree
Scales linearly: finds a good clustering with a single scan
and improves the quality with a few additional scans
Weakness: handles only numeric data, and is sensitive to the
order of the data records.
41. CURE (Clustering Using REpresentatives)
CURE: proposed by Guha, Rastogi & Shim, 1998
Stops the creation of a cluster hierarchy if a level
consists of k clusters
Uses multiple representative points to evaluate the
distance between clusters; adjusts well to arbitrarily
shaped clusters and avoids the single-link effect
42. Drawbacks of Distance-Based Methods
Drawbacks of square-error-based clustering methods:
Consider only one point as representative of a cluster
Good only for clusters that are convex-shaped and of similar size and
density, and only if k can be reasonably estimated
43. CURE: The Algorithm
Draw a random sample s.
Partition the sample into p partitions, each of size s/p.
Partially cluster each partition into s/pq clusters.
Eliminate outliers:
by random sampling
if a cluster grows too slowly, eliminate it.
Cluster the partial clusters.
Label the data on disk.
44. Data Partitioning and Clustering

[Figure: example with s = 50, p = 2, s/p = 25, and s/pq = 5: a sample of
50 points is split into 2 partitions of 25 points each, and each partition
is partially clustered into 5 partial clusters.]
45. CURE: Shrinking Representative Points
Shrink the multiple representative points towards
the gravity center by a fraction α.
Multiple representatives capture the shape of the cluster.

[Figure: two scatter plots showing the representative points before and
after shrinking towards the cluster center.]
46. Clustering Categorical Data: ROCK
ROCK: RObust Clustering using linKs,
by S. Guha, R. Rastogi, and K. Shim (ICDE’99).
Uses links to measure similarity/proximity
Not distance-based
Computational complexity: O(n^2 + n·m_m·m_a + n^2 log n)
Basic ideas:
Similarity function and neighbors:
    Sim(T1, T2) = |T1 ∩ T2| / |T1 ∪ T2|
Let T1 = {1,2,3} and T2 = {3,4,5}:
    Sim(T1, T2) = |{3}| / |{1,2,3,4,5}| = 1/5 = 0.2
47. ROCK: Algorithm
Links: the number of common neighbours of the two points.
Example: over the ten 3-subsets of {1, ..., 5},
{1,2,3}, {1,2,4}, {1,2,5}, {1,3,4}, {1,3,5},
{1,4,5}, {2,3,4}, {2,3,5}, {2,4,5}, {3,4,5},
link({1,2,3}, {1,2,4}) = 3
Algorithm:
Draw a random sample
Cluster with links
Label the data on disk
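The similarity and link counts from the slide can be reproduced in Python; we assume here a similarity threshold θ = 0.5, under which two distinct 3-subsets are neighbours exactly when they share two items (the threshold is our assumption, not stated on the slide):

```python
from itertools import combinations

def sim(t1, t2):
    """Sim(T1, T2) = |T1 ∩ T2| / |T1 ∪ T2|."""
    return len(t1 & t2) / len(t1 | t2)

def links(points, theta):
    """link(A, B) = number of common neighbours, where a neighbour of a
    point is any *other* point whose similarity to it is at least theta."""
    nbrs = {p: {q for q in points if q != p and sim(set(p), set(q)) >= theta}
            for p in points}
    return {(a, b): len(nbrs[a] & nbrs[b]) for a, b in combinations(points, 2)}

# all ten 3-subsets of {1, ..., 5}, as in the slide
data = [tuple(s) for s in combinations(range(1, 6), 3)]
print(sim({1, 2, 3}, {3, 4, 5}))                 # 0.2
print(links(data, 0.5)[((1, 2, 3), (1, 2, 4))])  # 3
```

The three common neighbours of {1,2,3} and {1,2,4} are {1,2,5}, {1,3,4}, and {2,3,4}; counting shared neighbours rather than using a raw distance is what makes ROCK robust for categorical data.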
48. CHAMELEON
CHAMELEON: hierarchical clustering using dynamic
modeling, by G. Karypis, E.H. Han, and V. Kumar ’99
Measures the similarity based on a dynamic model
Two clusters are merged only if the interconnectivity
and closeness (proximity) between the two clusters are
high relative to the internal interconnectivity of the
clusters and the closeness of items within the clusters
A two-phase algorithm:
1. Use a graph-partitioning algorithm: cluster objects
into a large number of relatively small sub-clusters
2. Use an agglomerative hierarchical clustering
algorithm: find the genuine clusters by repeatedly
combining these sub-clusters
49. Overall Framework of CHAMELEON

[Figure: Data Set → Construct Sparse Graph → Partition the Graph →
Merge Partitions → Final Clusters]
51. Density-Based Clustering Methods
Clustering based on density (a local cluster
criterion), such as density-connected points
Major features:
Discover clusters of arbitrary shape
Handle noise
One scan
Need density parameters as a termination condition
Several methods:
DBSCAN
OPTICS
DENCLUE
CLIQUE
Lecture-46 - Density-Based Methods
52. Density-Based Clustering
Two parameters:
Eps: maximum radius of the neighbourhood
MinPts: minimum number of points in an Eps-neighbourhood of that point
N_Eps(p) = {q belongs to D | dist(p,q) <= Eps}
Directly density-reachable: a point p is directly
density-reachable from a point q wrt. Eps, MinPts if
1) p belongs to N_Eps(q)
2) core point condition: |N_Eps(q)| >= MinPts

[Figure: p lies within the Eps-neighbourhood of core point q,
with MinPts = 5 and Eps = 1 cm.]
53. Density-Based Clustering
Density-reachable:
A point p is density-reachable from a point q wrt. Eps, MinPts
if there is a chain of points p1, ..., pn, with p1 = q and pn = p,
such that p_{i+1} is directly density-reachable from p_i
Density-connected:
A point p is density-connected to a point q wrt. Eps, MinPts
if there is a point o such that both p and q are
density-reachable from o wrt. Eps and MinPts.

[Figure: a chain of points from q to p illustrating density-reachability,
and points p and q both density-reachable from o illustrating
density-connectivity.]
54. DBSCAN: Density-Based Spatial
Clustering of Applications with Noise
Relies on a density-based notion of cluster: a cluster
is defined as a maximal set of density-connected points
Discovers clusters of arbitrary shape in spatial
databases with noise

[Figure: core, border, and outlier points for Eps = 1 cm, MinPts = 5.]
55. DBSCAN: The Algorithm
Arbitrarily select a point p.
Retrieve all points density-reachable from p wrt Eps and MinPts.
If p is a core point, a cluster is formed.
If p is a border point, no points are density-reachable
from p, and DBSCAN visits the next point of the database.
Continue the process until all of the points have been processed.
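The algorithm above can be sketched in Python on 1-D data (a minimal illustration; the points, eps, and min_pts values are invented, and the region query is a naive linear scan rather than a spatial index):

```python
def dbscan(points, eps, min_pts, dist):
    """Textbook DBSCAN: grow a cluster from each unvisited core point."""
    UNSEEN, NOISE = -2, -1
    label = {p: UNSEEN for p in points}
    region = lambda p: [q for q in points if dist(p, q) <= eps]
    cid = 0
    for p in points:
        if label[p] != UNSEEN:
            continue
        seeds = region(p)
        if len(seeds) < min_pts:         # not a core point: mark as noise for now
            label[p] = NOISE
            continue
        label[p] = cid                   # p is core: a cluster is formed
        queue = [q for q in seeds if q != p]
        while queue:
            q = queue.pop()
            if label[q] == NOISE:        # border point: claim it for this cluster
                label[q] = cid
            if label[q] != UNSEEN:
                continue
            label[q] = cid
            q_seeds = region(q)
            if len(q_seeds) >= min_pts:  # q is also core: keep expanding
                queue.extend(q_seeds)
        cid += 1
    return label

pts = [1, 2, 3, 4, 50, 51, 52, 53, 100]
labels = dbscan(pts, eps=1.5, min_pts=3, dist=lambda a, b: abs(a - b))
print(labels[100] == -1)  # the isolated point stays labelled as noise: True
```

A point first marked as noise can later be reclaimed as a border point of a cluster, which is exactly the "DBSCAN visits the next point" behaviour described above.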
56. OPTICS: A Cluster-Ordering Method
OPTICS: Ordering Points To Identify the Clustering Structure
Ankerst, Breunig, Kriegel, and Sander (SIGMOD’99)
Produces a special order of the database wrt its
density-based clustering structure
This cluster ordering contains information equivalent to the
density-based clusterings corresponding to a broad
range of parameter settings
Good for both automatic and interactive cluster
analysis, including finding the intrinsic clustering structure
Can be represented graphically or using visualization techniques
57. OPTICS: Some Extensions from DBSCAN
Index-based:
k = number of dimensions
N = 20
p = 75%
M = N(1 - p) = 5
Complexity: O(kN^2)
Core distance
Reachability distance:
    reachability-distance(p, o) = max(core-distance(o), d(o, p))

[Figure: for MinPts = 5 and ε = 3 cm, point o gives reachability
distances r(p1, o) = 2.8 cm and r(p2, o) = 4 cm.]
59. DENCLUE: Using Density Functions
DENsity-based CLUstEring by Hinneburg & Keim (KDD’98)
Major features:
Solid mathematical foundation
Good for data sets with large amounts of noise
Allows a compact mathematical description of arbitrarily
shaped clusters in high-dimensional data sets
Significantly faster than existing algorithms (faster than
DBSCAN by a factor of up to 45)
But needs a large number of parameters
60. DENCLUE: Technical Essence
Uses grid cells, but only keeps information about grid
cells that actually contain data points, and manages
these cells in a tree-based access structure.
Influence function: describes the impact of a data point
within its neighborhood.
The overall density of the data space can be calculated as
the sum of the influence functions of all data points.
Clusters can be determined mathematically by
identifying density attractors.
Density attractors are local maxima of the overall
density function.
61. Gradient: The Steepness of a Slope
Example: Gaussian influence function
    f_Gaussian(x, y) = e^( -d(x,y)^2 / (2σ^2) )
Overall density function:
    f^D_Gaussian(x) = Σ_{i=1..N} e^( -d(x, x_i)^2 / (2σ^2) )
Gradient:
    ∇f^D_Gaussian(x, x_i) = Σ_{i=1..N} (x_i - x) · e^( -d(x, x_i)^2 / (2σ^2) )
Lecture-46 - Density-Based MethodsLecture-46 - Density-Based Methods
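As an illustrative sketch (function names, step size, and tolerances are my own choices, not from the slides), the Gaussian density function and a gradient hill-climb toward a density attractor can be written as:

```python
import numpy as np

def density(x, points, sigma):
    """Overall density at x: sum of the Gaussian influence of every data point."""
    d2 = np.sum((points - x) ** 2, axis=1)
    return float(np.sum(np.exp(-d2 / (2 * sigma ** 2))))

def gradient(x, points, sigma):
    """Gradient of the density function: each point x_i pulls x toward itself,
    weighted by its Gaussian influence."""
    d2 = np.sum((points - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return np.sum((points - x) * w[:, None], axis=0)

def find_attractor(x, points, sigma, step=0.1, tol=1e-3, max_iter=300):
    """Hill-climb from x along the density gradient toward a density attractor."""
    x = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        g = gradient(x, points, sigma)
        norm = np.linalg.norm(g)
        if norm < tol:
            break
        x = x + step * g / norm  # fixed-size step in the uphill direction
    return x
```

Points started near the same cluster climb to (approximately) the same attractor, which is how DENCLUE assigns them to one cluster.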
65. Grid-Based Clustering Method
Uses a multi-resolution grid data structure.
Several interesting methods:
STING (a STatistical INformation Grid approach)
WaveCluster: a multi-resolution clustering approach using the wavelet method
CLIQUE
Lecture-47 - Grid-Based MethodsLecture-47 - Grid-Based Methods
66. STING: A Statistical Information Grid Approach
Wang, Yang and Muntz (VLDB'97)
The spatial area is divided into rectangular cells.
There are several levels of cells corresponding to different levels of resolution.
67. STING: A Statistical Information Grid Approach
Each cell at a high level is partitioned into a number of smaller cells at the next lower level.
Statistical information for each cell is calculated and stored beforehand and is used to answer queries.
Parameters of higher-level cells can be easily calculated from the parameters of lower-level cells:
count, mean, s (standard deviation), min, max
type of distribution (normal, uniform, etc.)
Uses a top-down approach to answer spatial data queries:
Start from a pre-selected layer, typically one with a small number of cells.
For each cell in the current level, compute the confidence interval.
68. STING: A Statistical Information Grid Approach
Remove the irrelevant cells from further consideration.
When finished examining the current layer, proceed to the next lower level.
Repeat this process until the bottom layer is reached.
Advantages:
Query-independent, easy to parallelize, supports incremental update
O(K), where K is the number of grid cells at the lowest level
Disadvantages:
All cluster boundaries are either horizontal or vertical; no diagonal boundary is detected.
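The precompute-then-prune idea above can be sketched as follows (a 1-D toy, with my own function names; real STING uses a multi-level 2-D grid and confidence intervals rather than this simple min/max test):

```python
import numpy as np

def build_level(values, n_cells):
    """Precompute per-cell statistics (count, mean, min, max) for one grid level."""
    cells = np.array_split(np.asarray(values, dtype=float), n_cells)
    return [{"count": len(c), "mean": float(c.mean()),
             "min": float(c.min()), "max": float(c.max()), "data": c}
            for c in cells]

def query_at_least(level, threshold):
    """Top-down query: a cell whose precomputed max is below the threshold is
    irrelevant and is pruned without ever touching its points."""
    hits = []
    for cell in level:
        if cell["max"] < threshold:
            continue  # prune irrelevant cell using stored statistics only
        hits.extend(float(v) for v in cell["data"] if v >= threshold)
    return hits
```

Because the statistics are stored beforehand, answering a query touches only the relevant cells, which is what makes STING query-independent and easy to update incrementally.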
69. WaveCluster
Sheikholeslami, Chatterjee, and Zhang (VLDB'98)
A multi-resolution clustering approach that applies a wavelet transform to the feature space.
A wavelet transform is a signal-processing technique that decomposes a signal into different frequency sub-bands.
Both grid-based and density-based.
Input parameters:
Number of grid cells for each dimension
The wavelet, and the number of applications of the wavelet transform
70. WaveCluster
How to apply a wavelet transform to find clusters:
Summarize the data by imposing a multidimensional grid structure onto the data space.
These multidimensional spatial data objects are represented in an n-dimensional feature space.
Apply a wavelet transform on the feature space to find the dense regions in the feature space.
Apply the wavelet transform multiple times, which results in clusters at different scales, from fine to coarse.
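The grid-then-transform steps above can be sketched as below. This is a toy: the "low-pass" step averages 2x2 cell blocks, a normalized variant of one level of the 2-D Haar transform's approximation band, and the names are my own (WaveCluster itself supports other wavelets and also uses the detail bands):

```python
import numpy as np

def grid_counts(points, n_cells):
    """Summarize data in [0,1)^2 by counting points per grid cell."""
    idx = np.clip((np.asarray(points) * n_cells).astype(int), 0, n_cells - 1)
    counts = np.zeros((n_cells, n_cells))
    for i, j in idx:
        counts[i, j] += 1
    return counts

def haar_lowpass(counts):
    """Approximation band of one 2-D Haar level: average each 2x2 block of
    cells. Dense regions survive the averaging; isolated noisy cells are
    attenuated, and the grid resolution drops from fine to coarse."""
    h, w = counts.shape
    return counts.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

Applying `haar_lowpass` repeatedly yields the count grid at successively coarser scales, which is where the "clusters at different scales" come from.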
71. What Is a Wavelet?
74. WaveCluster
Why is a wavelet transform useful for clustering?
Unsupervised clustering
It uses hat-shaped filters to emphasize regions where points cluster, while simultaneously suppressing weaker information at their boundaries.
Effective removal of outliers
Multi-resolution
Cost efficiency
Major features:
Complexity O(N)
Detects arbitrarily shaped clusters at different scales
Not sensitive to noise, not sensitive to input order
Only applicable to low-dimensional data
75. CLIQUE (Clustering In QUEst)
Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD'98)
Automatically identifies subspaces of a high-dimensional data space that allow better clustering than the original space.
CLIQUE can be considered both density-based and grid-based:
It partitions each dimension into the same number of equal-length intervals.
It partitions an m-dimensional data space into non-overlapping rectangular units.
A unit is dense if the fraction of total data points contained in the unit exceeds an input model parameter.
A cluster is a maximal set of connected dense units within a subspace.
76. CLIQUE: The Major Steps
Partition the data space and find the number of points that lie inside each cell of the partition.
Identify the subspaces that contain clusters using the Apriori principle.
Identify clusters:
Determine dense units in all subspaces of interest.
Determine connected dense units in all subspaces of interest.
Generate a minimal description for the clusters:
Determine maximal regions that cover a cluster of connected dense units, for each cluster.
Determine a minimal cover for each cluster.
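The first two steps can be sketched as below for a 2-D space (function names are my own, and the Apriori pruning is shown only for the 1-D-to-2-D step; CLIQUE continues the same candidate generation to higher dimensionalities):

```python
import numpy as np
from itertools import product

def dense_1d_units(points, xi, tau):
    """Pass 1: for each dimension, find the intervals ("units") whose fraction
    of points exceeds the density threshold tau. Data is assumed in [0,1)."""
    pts = np.asarray(points)
    idx = np.clip((pts * xi).astype(int), 0, xi - 1)
    return {(dim, u)
            for dim in range(pts.shape[1])
            for u in range(xi)
            if np.mean(idx[:, dim] == u) > tau}

def dense_2d_units(points, xi, tau, dense1):
    """Pass 2 (Apriori principle): a 2-D unit can only be dense if both of its
    1-D projections are dense, so only those candidates are counted."""
    pts = np.asarray(points)
    idx = np.clip((pts * xi).astype(int), 0, xi - 1)
    cand0 = [u for d, u in dense1 if d == 0]
    cand1 = [u for d, u in dense1 if d == 1]
    return [(u, v) for u, v in product(cand0, cand1)
            if np.mean((idx[:, 0] == u) & (idx[:, 1] == v)) > tau]
```

The Apriori pruning is what keeps the search over subspaces tractable: most candidate units are eliminated by their lower-dimensional projections before any counting is done.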
78. Strengths and Weaknesses of CLIQUE
Strengths:
It automatically finds subspaces of the highest dimensionality such that high-density clusters exist in those subspaces.
It is insensitive to the order of records in the input and does not presume any canonical data distribution.
It scales linearly with the size of the input and has good scalability as the number of dimensions in the data increases.
Weakness:
The accuracy of the clustering result may be degraded at the expense of the simplicity of the method.
80. Model-Based Clustering Methods
Attempt to optimize the fit between the data and some mathematical model.
Statistical and AI approaches.
Conceptual clustering:
A form of clustering in machine learning
Produces a classification scheme for a set of unlabeled objects
Finds a characteristic description for each concept (class)
COBWEB:
A popular and simple method of incremental conceptual learning
Creates a hierarchical clustering in the form of a classification tree
Each node refers to a concept and contains a probabilistic description of that concept
Lecture-48 - Model-Based Clustering MethodsLecture-48 - Model-Based Clustering Methods
81. COBWEB Clustering Method
A classification tree
82. More on Statistical-Based Clustering
Limitations of COBWEB:
The assumption that the attributes are independent of each other is often too strong, because correlations may exist.
Not suitable for clustering large database data: skewed tree and expensive probability distributions.
CLASSIT:
An extension of COBWEB for incremental clustering of continuous data
Suffers from problems similar to COBWEB's
AutoClass (Cheeseman and Stutz, 1996):
Uses Bayesian statistical analysis to estimate the number of clusters
Popular in industry
83. Other Model-Based Clustering Methods
Neural network approaches:
Represent each cluster as an exemplar, acting as a "prototype" of the cluster
New objects are assigned to the cluster whose exemplar is the most similar, according to some distance measure
Competitive learning:
Involves a hierarchical architecture of several units (neurons)
Neurons compete in a "winner-takes-all" fashion for the object currently being presented
85. Self-Organizing Feature Maps (SOMs)
Clustering is also performed by having several units compete for the current object.
The unit whose weight vector is closest to the current object wins.
The winner and its neighbors learn by having their weights adjusted.
SOMs are believed to resemble processing that can occur in the brain.
Useful for visualizing high-dimensional data in 2- or 3-D space.
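A minimal SOM training loop illustrating the compete/adjust steps above (the grid size, learning rate, and Gaussian neighborhood are illustrative choices; production SOMs usually decay both over time):

```python
import numpy as np

def train_som(data, grid=(4, 4), dim=2, epochs=50, lr=0.5, radius=1.0, seed=0):
    """Minimal SOM: units on a 2-D grid compete for each object; the winner
    (closest weight vector) and its grid neighbors move toward the object."""
    rng = np.random.default_rng(seed)
    weights = rng.uniform(size=grid + (dim,))
    # grid coordinates of each unit, used to measure neighborhood distance
    coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                                  indexing="ij"), axis=-1)
    for _ in range(epochs):
        for x in data:
            d = np.sum((weights - x) ** 2, axis=-1)
            win = np.unravel_index(np.argmin(d), grid)  # winner-takes-all
            # neighborhood: Gaussian falloff with grid distance from the winner
            gdist2 = np.sum((coords - np.array(win)) ** 2, axis=-1)
            h = np.exp(-gdist2 / (2 * radius ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights
```

Because neighbors on the grid are pulled together, nearby units end up with similar weight vectors, which is what makes the trained map useful for 2-D visualization of high-dimensional data.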
87. What Is Outlier Discovery?
What are outliers?
Objects that are considerably dissimilar from the remainder of the data
Example (sports): Michael Jordan, Wayne Gretzky, ...
Problem: find the top n outlier points
Applications:
Credit card fraud detection
Telecom fraud detection
Customer segmentation
Medical analysis
Lecture-49 - Outlier AnalysisLecture-49 - Outlier Analysis
88. Outlier Discovery: Statistical Approaches
Assume a model of the underlying distribution that generates the data set (e.g., a normal distribution).
Use discordancy tests, which depend on:
the data distribution
the distribution parameters (e.g., mean, variance)
the number of expected outliers
Drawbacks:
Most tests are for a single attribute
In many cases, the data distribution may not be known
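The simplest discordancy test of this kind, shown here as a sketch (the 3-sigma cutoff is a common convention, not something fixed by the slides), assumes a normal model and flags values far from the mean:

```python
import numpy as np

def discordant(values, k=3.0):
    """Discordancy test under a normal model: flag values more than k
    standard deviations from the sample mean."""
    v = np.asarray(values, dtype=float)
    mu, sd = v.mean(), v.std()
    return [float(x) for x, z in zip(v, np.abs(v - mu) / sd) if z > k]
```

Note the drawbacks listed above apply directly: this tests a single attribute, and the result is only meaningful if the normality assumption roughly holds.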
89. Outlier Discovery: Distance-Based Approach
Introduced to counter the main limitations imposed by statistical methods:
We need multi-dimensional analysis without knowing the data distribution.
Distance-based outlier: a DB(p, D)-outlier is an object O in a dataset T such that at least a fraction p of the objects in T lie at a distance greater than D from O.
Algorithms for mining distance-based outliers:
Index-based algorithm
Nested-loop algorithm
Cell-based algorithm
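A direct sketch of the nested-loop algorithm for the DB(p, D) definition above (a straightforward O(n^2) implementation with an illustrative early exit; the index-based and cell-based variants exist precisely to avoid this quadratic cost):

```python
import numpy as np

def db_outliers(points, p, D):
    """Nested-loop DB(p, D)-outlier detection: object i is an outlier if at
    least a fraction p of all objects lie at distance greater than D from it.
    Returns the indices of the outliers."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    outliers = []
    for i in range(n):
        far = 0
        for j in range(n):
            if np.linalg.norm(pts[i] - pts[j]) > D:
                far += 1
                if far >= p * n:  # early exit once the fraction p is reached
                    outliers.append(i)
                    break
    return outliers
```

Note that no distributional assumption is made: only pairwise distances matter, which is exactly the advantage over the statistical approaches.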
90. Outlier Discovery: Deviation-Based Approach
Identifies outliers by examining the main characteristics of the objects in a group.
Objects that "deviate" from this description are considered outliers.
Sequential exception technique:
Simulates the way in which humans can distinguish unusual objects from among a series of supposedly similar objects
OLAP data cube technique:
Uses data cubes to identify regions of anomalies in large multidimensional data