This document outlines an agenda for a university course on classification methods taught by Dr. S. Shivendu. The objectives of the course are to understand statistical concepts of principal component analysis and factor analysis, interpret results, and use SAS software. The agenda includes overviews of statistical concepts, principal component analysis, factor analysis, and textbooks. The document provides background information on these topics, including why they are used, basic concepts, properties, assumptions, and how the analyses work.
This document discusses methods for processing and displaying data in quantitative and qualitative research studies. It describes editing, coding, and verifying data in quantitative studies. For qualitative studies, it outlines identifying themes in responses and integrating them into a report. Various methods for communicating analyzed data are presented, including tables, graphs, and text. Specific steps in coding data, developing a codebook, and structuring tables are explained. The document aims to help students understand proper procedures for handling and presenting research information.
Cluster analysis is an unsupervised machine learning technique that groups unlabeled data points into clusters based on similarities. It aims to divide data into meaningful groups (clusters) where items in the same cluster are more similar to each other than items in different clusters. The document discusses different types of cluster analysis methods including hierarchical agglomerative clustering which starts with each data point as its own cluster and merges them together based on similarity, and k-means clustering which partitions data into k mutually exclusive clusters where each observation belongs to the cluster with the nearest mean. It also covers topics like distance measures, selecting the optimal number of clusters, and limitations of cluster analysis techniques.
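The k-means procedure this summary describes (assign each observation to the cluster with the nearest mean, then recompute the means) can be sketched in a few lines of plain Python. This is an illustrative toy, not code from the document; it uses a naive first-k initialization, whereas real libraries use random restarts or k-means++:

```python
def kmeans(points, k, iters=20):
    """Toy k-means: assign each point to the nearest centroid, then
    recompute each centroid as the mean of its members. Naive first-k
    initialization; production code uses k-means++ or random restarts."""
    centroids = [points[i] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Squared Euclidean distance from p to each centroid.
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties
                centroids[i] = tuple(sum(x) / len(members) for x in zip(*members))
    return centroids, clusters

# Two obvious groups of three points each (made-up data).
pts = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (8.0, 8.0), (8.2, 7.9), (7.8, 8.1)]
centroids, clusters = kmeans(pts, k=2)
```

With these six points the loop converges in a couple of iterations to one centroid near (1.03, 0.97) and one near (8.0, 8.0), recovering the two groups.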
This document provides an overview of unsupervised machine learning and k-means clustering. It begins with an introduction to clustering and then discusses key aspects of k-means clustering such as how it works, choosing the optimal number of clusters, and issues with random initialization. It also covers hierarchical clustering methods including agglomerative and divisive approaches. Overall, the document serves as a tutorial on unsupervised learning techniques for grouping unlabeled data.
This document discusses research methodology and selecting study designs. It begins by outlining the learning outcomes, which include defining research design, identifying the functions of a design, and explaining the theory of causality and the designs that address it. It then defines research design and lists its functions. The document contrasts quantitative and qualitative designs. It explains how to minimize the effects of extraneous variables and discusses common quantitative and qualitative designs, highlighting their strengths and weaknesses.
This document discusses hierarchical clustering, an unsupervised machine learning technique that produces nested clusters organized as a hierarchical tree. There are two main types of hierarchical clustering: agglomerative, which starts with each point as an individual cluster and merges them; and divisive, which starts with everything in one cluster and splits them. Different linkage methods like single, complete, average, and Ward's linkage define how the distance between clusters is calculated during the merging or splitting process. Hierarchical clustering has strengths, such as not requiring the number of clusters to be specified in advance, but also weaknesses, such as the high computational complexity of most algorithms, O(n³) time.
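To make the agglomerative procedure concrete, here is a deliberately naive single-linkage sketch in Python (illustrative only, not from the document; the points are made up). Its repeated full scan over all cluster pairs is exactly why the cubic time complexity noted above bites on large datasets:

```python
def single_linkage(a, b):
    # Single linkage: distance between the closest pair across two clusters.
    return min(sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
               for p in a for q in b)

def agglomerative(points, k):
    """Naive agglomerative clustering: start with singleton clusters and
    repeatedly merge the closest pair until only k clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = single_linkage(clusters[i], clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge the closest pair
    return clusters

groups = agglomerative([(0.0,), (0.1,), (0.2,), (5.0,), (5.1,)], k=2)
```

Swapping `min` for `max` in `single_linkage` gives complete linkage; averaging gives average linkage.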
This document provides an overview of unsupervised learning and clustering algorithms. It discusses key concepts in unsupervised learning and clustering such as k-means clustering and hierarchical clustering. K-means clustering partitions data into k clusters by minimizing distances between data points and cluster centroids. Hierarchical clustering builds nested clusters by successively merging or dividing clusters based on distance measures. Representing clusters and evaluating clustering results are also covered.
This document discusses formulating the research problem in research methodology. It defines a research problem as a perceived gap between what is and what should be. The key points covered include:
- Identifying sources of research problems such as people, problems, programs, and phenomena.
- Considering factors like relevance, expertise, and ethics when selecting a research problem.
- Outlining the steps to formulate a research problem such as identifying the broad field and raising questions.
- The importance of formulating clear research objectives and operational definitions to focus the study.
This document summarizes an R boot camp focusing on statistics. It includes an agenda that covers introducing the lab component, R basics, descriptive statistics in R, revisiting installation instructions, and measures of variability in R. Descriptive statistics are presented as ways to characterize data through measures of central tendency, shape, and variability. Examples are provided in R for calculating the mean, median, mode, range, percentiles, variance, standard deviation, and coefficient of variation. The central limit theorem and standardizing scores are also discussed. Real-world applications of R for clean and messy data are mentioned.
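The measures listed here are one-liners in most environments. As a hedged illustration, using Python's standard `statistics` module rather than the R shown in the boot camp, and an assumed toy dataset:

```python
import statistics as st

data = [2, 4, 4, 4, 5, 5, 7, 9]  # assumed toy dataset, not from the boot camp

mean = st.mean(data)             # arithmetic mean: 5.0
median = st.median(data)         # middle value(s): 4.5
mode = st.mode(data)             # most frequent value: 4
rng = max(data) - min(data)      # range: 7
var = st.pvariance(data)         # population variance: 4.0
sd = st.pstdev(data)             # population standard deviation: 2.0
cv = sd / mean                   # coefficient of variation: 0.4
z = [(x - mean) / sd for x in data]  # standardized (z-) scores
```

Note the population/sample distinction: `st.variance(data)` divides by n-1 instead of n, matching R's default `var()`.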
Read first few slides: cluster analysis (Kritika Jain)
Cluster analysis is a technique used to group similar objects together based on their characteristics. It involves calculating distances between objects to determine which ones are most similar and should be clustered together. There are hierarchical and non-hierarchical approaches to cluster analysis. Hierarchical clustering starts with each object in its own cluster and then combines the most similar clusters together in each step, while non-hierarchical clustering assigns objects directly to pre-specified clusters. A combination of the two methods is often best to determine the optimal number of clusters.
This document outlines a lecture on non-parametric statistics. It begins by defining parametric and non-parametric tests, noting that non-parametric tests do not require assumptions of normality and can be used when data is not of sufficient quality for parametric tests. It then reviews the assumptions of common parametric t-tests and how to determine if the data violates assumptions. The document introduces several non-parametric tests: the Mann-Whitney U test for comparing two independent groups, the Wilcoxon test for comparing two related groups, and the Kruskal-Wallis H test for comparing three or more independent groups. It provides examples and outlines how to perform each test in SPSS.
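The Mann-Whitney U statistic itself is simple to compute once the data are ranked. A minimal Python sketch (an illustration of the statistic, not the SPSS procedure from the lecture; it uses midranks for tied values):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U for two independent samples. Returns min(U1, U2);
    the p-value would then come from tables or a normal approximation."""
    pooled = sorted(x + y)
    def rank(v):
        # Midrank: average 1-based position of v among the pooled values.
        first = pooled.index(v) + 1
        last = first + pooled.count(v) - 1
        return (first + last) / 2
    r1 = sum(rank(v) for v in x)          # rank sum of the first sample
    n1, n2 = len(x), len(y)
    u1 = r1 - n1 * (n1 + 1) / 2
    u2 = n1 * n2 - u1
    return min(u1, u2)

# Made-up samples: the first group is clearly shifted below the second.
u = mann_whitney_u([3, 4, 2, 6], [9, 7, 5, 10])
```

For these samples U = 1, a near-complete separation of the two groups (U = 0 would mean no overlap at all).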
Hub AI&BigData meetup / Vadim Kuzmenko: How machine learning helps reduce... (Hub-IT-School)
The document discusses machine learning and how it can be applied by a utility company. It explains how machine learning involves using algorithms to find patterns in data to help estimate unknown values or predict future usage. Specific applications discussed include estimating a home's heating type from usage data, creating customer segmentation profiles based on load curve patterns, and using disaggregated load data to estimate customer setpoints for heating and cooling. The goal is to apply these techniques to improve customer personalization, targeting, and participation in utility programs.
The document provides an introduction to data analytics using R given by Wei Zhong from NUS. It begins with an overview of Wei Zhong and his background in computational biology. The agenda is then outlined, covering key concepts in data analytics like logistic regression, decision trees, random forests, and evaluation metrics. An overview of data analytics is presented, distinguishing descriptive, predictive, and prescriptive analytics. Statistical learning techniques like linear models, tree-based models, and clustering are introduced. Key concepts like cross-validation that will be used in the hands-on session are then defined.
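Of the concepts listed, cross-validation is the most mechanical. A small Python sketch of k-fold index generation (an illustration of the idea, not the session's R code; fold sizing follows the common convention of giving the first n mod k folds one extra row):

```python
def kfold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation.
    Each row appears in exactly one test fold."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

folds = list(kfold_indices(10, 3))  # 10 rows, 3 folds of sizes 4, 3, 3
```

A model is fit on each `train` set and scored on the matching `test` set; averaging the k scores estimates out-of-sample performance. In practice rows are shuffled (or stratified by class) before splitting.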
The document describes decision tree induction and classification. It provides examples of datasets that can be used to build decision trees, including examples on loan approvals, playing sailboats, and shapes. It explains the basic TDIDT (Top Down Induction of Decision Trees) algorithm for building decision trees from datasets in a recursive partitioning manner. Key aspects of decision tree learning covered include attribute selection criteria, information gain, entropy, and residual information.
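The attribute-selection step it mentions reduces to two small formulas: entropy and information gain. A hedged Python sketch (the label counts are the classic 9-yes/5-no weather example, chosen for illustration, not data taken from the document):

```python
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    counts = {}
    for l in labels:
        counts[l] = counts.get(l, 0) + 1
    return -sum(c / n * log2(c / n) for c in counts.values())

def information_gain(parent, subsets):
    """Parent entropy minus the size-weighted entropy of the subsets a
    candidate split produces: TDIDT picks the attribute maximizing this."""
    n = len(parent)
    return entropy(parent) - sum(len(s) / n * entropy(s) for s in subsets)

labels = ['yes'] * 9 + ['no'] * 5                       # parent node
split = [['yes'] * 6 + ['no'] * 2,                      # candidate split:
         ['yes'] * 3 + ['no'] * 3]                      # two child nodes
gain = information_gain(labels, split)                  # about 0.048 bits
```

A gain near zero, as here, means the split barely reduces class uncertainty; the algorithm would prefer an attribute with higher gain.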
The document provides an introduction and overview of CART (Classification and Regression Trees) machine learning algorithm. It discusses how CART gained recognition over time, from its introduction in 1984 to widespread use today due to advances in data mining. It summarizes the key steps of building a CART model, including binary splitting, competitors and surrogates, and interpreting the resulting decision tree to extract rules.
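The binary-splitting step can be illustrated with Gini impurity, CART's usual criterion. A toy Python sketch (illustrative only; the feature values and labels are made up):

```python
def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_binary_split(xs, ys):
    """CART-style search over thresholds on one numeric feature: pick the
    binary split minimizing the weighted Gini of the two children."""
    best = None
    for t in sorted(set(xs))[:-1]:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if best is None or score < best[0]:
            best = (score, t)
    return best  # (weighted impurity, threshold)

score, threshold = best_binary_split([1, 2, 3, 10, 11, 12],
                                     ['a', 'a', 'a', 'b', 'b', 'b'])
```

Here the threshold x <= 3 separates the classes perfectly, so the weighted impurity is 0; the resulting rule ("if x <= 3 then a, else b") is exactly the kind of rule extraction the document describes.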
This document provides an introduction to unsupervised learning and clustering algorithms. It discusses how unsupervised learning is used to find patterns in unlabeled data. Clustering algorithms are introduced as a common unsupervised learning technique that groups similar data points together. Specific clustering algorithms covered include k-means, k-medoids, hierarchical clustering, density-based clustering, and grid-based clustering. The document also compares the k-means and k-medoids partitioning clustering algorithms.
The document presents an unsupervised and self-supervised method for sentence summarization called BottleSum that is based on the Information Bottleneck principle. BottleSum includes an extractive model called BottleSum Ex and an abstractive model called BottleSum Self. BottleSum Ex achieves state-of-the-art results on automatic metrics for unsupervised models and BottleSum Self performs better than BottleSum Ex in human evaluations, demonstrating the effectiveness of the Information Bottleneck approach for summarization.
The document discusses cluster analysis, an unsupervised machine learning technique that groups observations into clusters such that observations within each cluster are similar to each other and dissimilar to observations in other clusters. It describes two main approaches to cluster analysis - hierarchical clustering and k-means clustering. Hierarchical clustering groups observations together successively into tree-structured clusters while k-means clustering partitions observations into k mutually exclusive clusters where each observation belongs to the cluster with the nearest mean. The document provides examples of each approach applied to consumer data to group consumers based on income and education characteristics.
Decision Tree Machine Learning Detailed Explanation (DrezzingGaming)
Decision Tree is a machine learning algorithm that can be used for both classification and regression problems. It creates a flowchart-like structure starting with an initial node that branches out further into sub-nodes. The document discusses decision tree structure, splitting criteria, feature selection, and real-world applications. Code in Python is provided to demonstrate building a basic decision tree classifier on the iris dataset.
R User Group Singapore, Data Mining with R (Workshop II) - Random forests (Wei Zhong Toh)
The document provides an overview of random forests, including:
- Bagging and random subspace sampling which are used to build decision trees on different subsets of data to reduce correlation between trees.
- Out-of-bag assessment which evaluates performance of each tree on data not used in its construction.
- Variable importance measures like mean decrease in accuracy and Gini which assess importance of variables.
- Multidimensional scaling plots on proximity matrices which visualize relationships between samples.
- Tuning of hyperparameters like mtry and number of trees.
- Comparison to other ensemble methods is discussed and code demonstration is planned.
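The first two bullets, bagging and out-of-bag assessment, both come down to bootstrap index bookkeeping. A minimal Python sketch of that mechanism (an illustration, not the workshop's R code):

```python
import random

def bootstrap_oob(n, seed):
    """Draw one bootstrap sample (n draws with replacement) and return
    (in_bag, out_of_bag) index lists. Each tree in a random forest is fit
    on its in-bag rows; its out-of-bag rows give an honest error estimate."""
    rng = random.Random(seed)
    in_bag = [rng.randrange(n) for _ in range(n)]
    oob = [i for i in range(n) if i not in set(in_bag)]
    return in_bag, oob

in_bag, oob = bootstrap_oob(100, seed=42)
# On average about 36.8% of rows are out of bag: (1 - 1/n)^n -> e^-1.
```

Random subspace sampling is the analogous trick on columns: each split considers only `mtry` randomly chosen features, which decorrelates the trees.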
Similar to Segmentation: Clustering and Classification
Categorical Data and Statistical Analysis (Michael770443)
In this presentation, we will introduce two tests and hypothesis testing based on them, as well as different non-parametric methods such as the Kolmogorov-Smirnov test, Wilcoxon's signed-rank test, the Mann-Whitney U test, and the Kruskal-Wallis test.
In this presentation, you will differentiate the ANOVA and ANCOVA statistical methods, and identify real-world situations where the ANOVA and ANCOVA methods for statistical inference are applied.
In this presentation, we will analyze multicollinearity and inference with interaction terms in regression analysis. We will also analyze partial correlation and interpretation procedures of statistical data.
In this presentation, we will discuss the mathematical basis of linear regression and analyze the concepts of p-value, hypothesis testing, and confidence intervals, and their interpretation.
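The mathematical basis referred to is the least-squares solution, which for one predictor has a closed form. As a small hedged illustration in Python (toy numbers, not the presentation's own example):

```python
def ols_fit(xs, ys):
    """Least-squares fit for simple linear regression:
    slope = cov(x, y) / var(x), intercept = ybar - slope * xbar."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, ybar - slope * xbar

# Toy data lying exactly on the line y = 2x + 1.
slope, intercept = ols_fit([1, 2, 3, 4], [3, 5, 7, 9])
```

The p-value and confidence interval for the slope then come from its standard error, slope / se following a t distribution with n - 2 degrees of freedom under the null of zero slope.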
This document provides an overview of an introductory course on statistical concepts at the University of South Florida. It outlines the course objectives, which are to identify the course structure, recap foundational statistics concepts, and identify the programming structure in SAS. The agenda covers topics like data analytics, probability, statistical inference, distributions, and SAS basics. It also discusses key statistical thinking concepts like variation, inference from data, and the relationship between data, information, knowledge and wisdom. Hypothesis testing and its errors and power are explained. Issues with correlated data are also covered.
Segmentation: Clustering and Classification
Slide 1: Segmentation: Clustering and Classification
Dr. S. Shivendu
University of South Florida
Slide 2: Objectives
Overview of Segmentation
01. Perform clustering and classification.
02. Perform structural equation modeling.
03. Perform programming in SAS to do factor analysis.
Slide 3: Agenda
• Overview of Statistical Concepts
• Segmentation: Clustering and Classification
• Structural Equation Modeling
• Factor Analysis in SAS
Slide 4: Course Textbooks
Slide 5: Segmentation
• Classifying or dividing a set of objects, consumers, or people into different groups.
• Market segmentation refers to dividing consumers into distinct groups based on some shared characteristics.
• Statistical classification refers to using statistical tools to divide the population into subgroups.
• Machine learning methods are also used to divide the population into subgroups.
• We will cover two approaches today:
  • Clustering
  • A machine learning method for classification, decision trees (very briefly)
Slide 6: Unsupervised learning: Clustering
• The organization of unlabeled data into similarity groups, called clusters, is known as clustering.
• A cluster is therefore a collection of data items that are "similar" to each other and "dissimilar" to data items in other clusters.
Slide 7: Initial applications of clustering
Slide 8: More recent example: computer vision
Slide 9: Clustering: Basic idea
Group together similar instances: but which grouping, (a), (b), or (c)?
Slide 10: Requirements for clustering
Slide 11: Measure of distance or 'dissimilarity'
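The slide's figures are not reproduced here; two distance measures commonly used in clustering are Euclidean and Manhattan distance. Below is a minimal Python sketch (the function names are illustrative, not from the course materials):

```python
import math

def euclidean(p, q):
    # Straight-line distance: square root of the summed squared differences.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def manhattan(p, q):
    # City-block distance: sum of the absolute coordinate differences.
    return sum(abs(a - b) for a, b in zip(p, q))

print(euclidean((0, 0), (3, 4)))  # 5.0
print(manhattan((0, 0), (3, 4)))  # 7
```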
Slide 12: Cluster evaluation
• Intra-cluster cohesion (compactness):
  • Cohesion measures how near the data points in a cluster are to the cluster centroid.
  • Sum of squared error (SSE) is a commonly used measure.
• Inter-cluster separation (isolation):
  • Separation means that different cluster centroids should be far away from one another.
• In most applications, the key is still expert judgment.
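As a rough illustration of the cohesion measure, SSE can be computed as the sum of squared distances from each point to its cluster centroid. A minimal sketch with invented data values:

```python
def sse(cluster, centroid):
    # Sum of squared Euclidean distances from each point to the centroid.
    return sum(sum((a - c) ** 2 for a, c in zip(point, centroid))
               for point in cluster)

cluster = [(1.0, 1.0), (2.0, 1.0), (1.5, 2.0)]
centroid = (1.5, 4 / 3)  # the mean of the three points
print(sse(cluster, centroid))  # about 1.1667; a lower SSE means a tighter cluster
```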
Slide 13: How many clusters?
Slide 14: Clustering methods
Slide 15: Clustering methods (contd.)
Slide 16: Clustering: k-means
Slide 17: K-means clustering
Slide 18: K-means algorithm
Slide 19: K-means convergence criterion
Slide 20: K-means clustering: Step 1
Slide 21: K-means clustering: Step 2
Slide 22: k-means clustering example
Slide 23: k-means clustering example
Slide 24: k-means clustering example
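The algorithm and worked examples on these slides are shown as images; a minimal pure-Python sketch of the k-means loop (assign points to the nearest centroid, then recompute centroids) is given below. Seeding from the first k points is a simplification; real implementations use random or k-means++ initialization:

```python
def kmeans(points, k, iters=10):
    # Seed centroids from the first k points (a simplification; real
    # implementations use random or k-means++ initialization).
    centroids = [points[i] for i in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins the cluster of its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        # Update step: recompute each centroid as the mean of its cluster,
        # keeping the old centroid if a cluster ends up empty.
        centroids = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters

pts = [(1.0, 1.0), (1.5, 2.0), (8.0, 8.0), (9.0, 9.0)]
centroids, clusters = kmeans(pts, 2)
print(centroids)  # [(1.25, 1.5), (8.5, 8.5)]
```

Each pass over the data costs O(kn) distance computations, which is where the O(tkn) complexity quoted on the strengths slide comes from.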
Slide 25: K-means: Strengths
• Simple: easy to understand and to implement.
• Efficient: time complexity is O(tkn), where
  • n is the number of data points,
  • k is the number of clusters, and
  • t is the number of iterations.
• Since both k and t are typically small, k-means is considered a linear algorithm.
• K-means is the most popular clustering algorithm.
• Note: it terminates at a local optimum if SSE is used; the global optimum is hard to find due to complexity.
Slide 26: K-means: Weaknesses
• The algorithm is only applicable when the mean is defined.
  • For categorical data, the k-modes variant represents each centroid by the most frequent values.
• The user needs to specify k.
• The algorithm is sensitive to outliers.
  • Outliers are data points that are very far away from other data points.
  • Outliers could be errors in data recording, or special data points with very different values.
Slide 27: Outliers
Slide 28: Dealing with outliers
• Remove data points that are much further away from the centroids than other data points.
  • To be safe, we may want to monitor these possible outliers over a few iterations before deciding to remove them.
• Perform random sampling: by choosing a small subset of the data points, the chance of selecting an outlier is much smaller.
  • Assign the remaining data points to the clusters by distance or similarity comparison, or by classification.
Slide 29: Special data structures: Limitation
• The k-means algorithm is not suitable for discovering clusters that are not hyper-ellipsoids (or hyper-spheres).
Slide 30: Example: k-means for segmentation
Slide 31: Example: k-means for segmentation
Slide 32: Example: k-means for segmentation
Slide 33: K-means: Summary
• Despite its weaknesses, k-means is still the most popular algorithm due to its simplicity and efficiency.
• There is no clear evidence that any other clustering algorithm performs better in general.
• Comparing different clustering algorithms is a difficult task: no one knows the correct clusters!
Slide 34: Clustering method: Hierarchical
Slide 35: Hierarchical clustering
Slide 36: Example: Biological taxonomy
Slide 37: Types of hierarchical clustering
• Top-down (divisive) clustering:
  • starts with all data points in one cluster, the root; then
  • splits the root into a set of child clusters, each of which is recursively divided further;
  • stops when only singleton clusters of individual data points remain, i.e., each cluster contains only a single point.
• Bottom-up (agglomerative) clustering:
  • builds the dendrogram from the bottom level by
  • merging the most similar (or nearest) pair of clusters, and
  • stopping when all the data points are merged into a single cluster (i.e., the root cluster).
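The bottom-up procedure can be sketched in a few lines. Below is an illustrative single-linkage agglomerative clustering that stops once a target number of clusters remains (data and names are invented for the example):

```python
def single_linkage(clusters, target_k):
    # Bottom-up clustering: repeatedly merge the two clusters whose closest
    # members are nearest (single linkage), until target_k clusters remain.
    clusters = [list(c) for c in clusters]

    def dist(c1, c2):
        # Single linkage: distance between the closest pair across clusters.
        return min(sum((a - b) ** 2 for a, b in zip(p, q))
                   for p in c1 for q in c2)

    while len(clusters) > target_k:
        pairs = [(i, j) for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        i, j = min(pairs, key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i].extend(clusters.pop(j))
    return clusters

points = [[(0.0,)], [(1.0,)], [(5.0,)], [(6.0,)]]
print(single_linkage(points, 2))  # [[(0.0,), (1.0,)], [(5.0,), (6.0,)]]
```

Recording the order and distance of each merge, rather than stopping at target_k, is what produces the dendrogram described on the next slide.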
Slide 38: What is a Dendrogram?
Slide 39: Divisive hierarchical clustering
Slide 40: Agglomerative hierarchical clustering
Slide 41: Single linkage or Nearest neighbor
Slide 42: Complete linkage or Farthest neighbor
Slide 43: Divisive vs. Agglomerative
Slide 44: In summary
• Clustering has a long history and is still an active research area.
• There are a huge number of clustering algorithms, among them: density-based algorithms, sub-space clustering, scale-up methods, neural-network-based methods, fuzzy clustering, and co-clustering.
• Clustering is hard to evaluate, but very useful in practice.
• Clustering is highly application dependent (and to some extent subjective).
Slide 45: Machine learning and classification
• Machine learning studies how to automatically learn to make accurate predictions based on past observations.
• Classification problem: classify observations into a given set of categories.
Slide 46: Examples of classification problems
• Text categorization (e.g., spam filtering)
• Fraud detection
• Optical character recognition
• Machine vision (e.g., face detection)
• Natural-language processing (e.g., spoken language understanding)
• Market segmentation (e.g., predict whether a customer will respond to a promotion)
• Bioinformatics (e.g., classify proteins according to their function)
Slide 47: Classification algorithms in machine learning
• Decision trees
• Boosting
• Support vector machines
• Neural networks
• Nearest neighbor algorithm
• Naive Bayes
• Bagging
• Random forests
• We will discuss only decision trees.
Slide 48: Decision trees: An example (For Reference Only: Slides 48 to 62)
Slide 49: A decision tree classifier
Slide 50: How to build decision trees
• Choose a rule to split on.
• Divide the data into disjoint sets following the splitting rule.
Slide 51: Building decision trees
Slide 52: How to choose the splitting rule?
Slide 53: How to measure purity?
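The content of this slide is an image and is not reproduced here. One standard purity measure for decision-tree splits (not necessarily the one shown on the slide) is Gini impurity, sketched below:

```python
def gini(labels):
    # Gini impurity: 1 minus the sum of squared class proportions.
    # 0.0 means a pure node; 0.5 is the worst case with two classes.
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

print(gini(["yes", "yes", "yes", "yes"]))  # 0.0 (pure node)
print(gini(["yes", "yes", "no", "no"]))    # 0.5 (maximally mixed)
```

A splitting rule is then chosen to maximize the drop in impurity from the parent node to the weighted average of its children.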
Slide 54: Different types of error rates
• Training error: fraction of training examples misclassified.
• Test error: fraction of test examples misclassified.
• Generalization error: probability of misclassifying a new random example.
Slide 55: A possible classifier
Slide 56: Another classifier
Slide 57: Test size vs. accuracy
Slide 58: Overfitting: an example
Slide 59: Building a good classifier
Slide 60: Example
Slide 61: Good and bad classifiers
Slide 62: Controlling tree size
Slide 63: Good and bad of decision trees
• Very fast to train and evaluate.
• Relatively easy to interpret.
• BUT: accuracy is often not state-of-the-art.
Slide 64: Practical considerations
• There are many classification methods in machine learning.
• Each has advantages and disadvantages and is suited to specific domains.
• Practical considerations:
  • You want the training data to be like the test data.
  • Use your knowledge of the problem to know where to get training data, and what to expect the test data to be like.
Slide 65: Structural Equation Modeling
• Structural equation modeling (SEM) is a family of statistical methods for modeling complex relationships between one or more independent variables and one or more dependent variables.
• Though there are many ways to describe SEM, it is most commonly thought of as a hybrid between some form of analysis of variance (ANOVA)/regression and some form of factor analysis.
• In general, SEM allows one to perform some type of multilevel regression/ANOVA on factors.
• One should therefore be quite familiar with univariate and multivariate regression/ANOVA, as well as the basics of factor analysis, to implement SEM for one's data.
Slide 66: Some preliminary terminology
• Variables that are not influenced by any other variables in a model are called exogenous variables.
  • As an example, suppose we have two factors that cause changes in GPA: hours studying per week and IQ. Suppose there is no causal relationship between hours studying and IQ. Then both IQ and hours studying would be exogenous variables in the model.
• Variables that are influenced by other variables in a model are called endogenous variables. GPA would be an endogenous variable in the example above.
• A variable that is directly observed and measured is called a manifest variable (also called an indicator variable in some circles).
  • In the example above, all variables can be directly observed and thus qualify as manifest variables. A structural equation model that examines only manifest variables has a special name: path analysis.
Slide 67: Some preliminary terminology (contd.)
• A variable that is not directly measured is a latent variable.
  • The "factors" in a factor analysis are latent variables.
  • For example, suppose we were additionally interested in the impact of motivation on GPA. Motivation, as an internal, non-observable state, is indirectly assessed by a student's responses on a questionnaire, and thus it is a latent variable.
• Latent variables increase the complexity of a structural equation model because one needs to take into account all of the questionnaire items and measured responses that are used to quantify the "factor," or latent variable.
  • In this instance, each item on the questionnaire would be a single variable that is either significantly or insignificantly involved in a linear combination of variables that influence the variation in the latent factor of motivation.
Slide 68: Some more terminology: Moderation
• For the purposes of SEM, moderation refers to a situation involving three or more variables, such that the presence of one of those variables changes the relationship between the other two.
  • In other words, moderation exists when the association between two variables is not the same at all levels of a third variable.
  • One way to think of moderation is as an interaction between two variables in an ANOVA.
  • For example, the relationship between stress and psychological adjustment may differ at different levels of social support (i.e., this is the definition of an interaction).
  • In other words, stress may adversely affect adjustment more under conditions of low social support than under conditions of high social support. This would imply a two-way interaction between stress and social support if an ANOVA were performed.
Slide 69: What is moderation?
Figure 1. A Diagrammed Example of Moderation. Each individual effect of stress, social support, and the interaction of stress and social support can be separated and is said to be related to psychological adjustment. Note that there are no causal connections between stress and social support.
Figure 1 shows a conceptual diagram of moderation. Three direct effects are hypothesized to cause changes in psychological adjustment: a main effect of stress, a main effect of social support, and an interaction effect of stress and social support.
(Diagram: boxes for Stress, Social Support, and the Stress x Social Support interaction, each with an arrow pointing to Psychological Adjustment.)
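The interaction in Figure 1 can be estimated by adding a product term alongside the main effects in a regression. Below is an illustrative NumPy sketch on synthetic data (all coefficients are invented for the example):

```python
import numpy as np

# Synthetic data: the effect of stress on adjustment depends on support,
# with a true interaction coefficient of +0.5 (all numbers invented).
rng = np.random.default_rng(0)
n = 500
stress = rng.normal(size=n)
support = rng.normal(size=n)
noise = rng.normal(scale=0.1, size=n)
adjustment = 2.0 - 1.0 * stress + 0.3 * support + 0.5 * stress * support + noise

# Moderation is tested by including the product (interaction) term
# in the design matrix alongside the main effects.
X = np.column_stack([np.ones(n), stress, support, stress * support])
coef, *_ = np.linalg.lstsq(X, adjustment, rcond=None)
print(coef.round(2))  # approximately [2.0, -1.0, 0.3, 0.5]
```

A nonzero coefficient on the product term is the regression analogue of the ANOVA interaction described above: the slope of stress changes with the level of social support.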
Slide 70: Some more terminology: Mediation
• In SEM, mediation refers to a situation involving three or more variables, such that there is a causal process between all three.
• Note that this is distinct from moderation.
  • In the moderation example, three separate things in the model cause a change in psychological adjustment: stress, social support, and the combined effect of stress and social support that is not accounted for by each individual variable.
• Mediation describes a much different relationship that is generally more complex.
  • In a mediation relationship, there is a direct effect between an independent variable and a dependent variable.
  • There are also indirect effects between the independent variable and a mediator variable, and between the mediator variable and the dependent variable.
Slide 71: Mediation (contd.)
• The example in Figure 1 can be re-specified as a mediating process, as shown in Figure 2.
• Figure 2. A Diagram of a Mediation Model. Including indirect effects in a mediation model may change the direct effect of a single independent variable on a dependent variable.
(Diagram: a (revised) direct effect from Stress to Psychological Adjustment, plus indirect effects from Stress to Social Support and from Social Support to Psychological Adjustment.)
Slide 72: Mediation effect
• The main difference from the moderation model is that we now allow causal relationships between stress and social support, and between social support and psychological adjustment, to be expressed.
• Imagine that social support was not included in the model; we just wanted to see the direct effect of stress on psychological adjustment.
• We would get a measure of the direct effect by using regression or ANOVA. When we include social support as a mediator, that direct effect will change as a result of decomposing the causal process into indirect effects of stress on social support and of social support on psychological adjustment.
• The degree to which the direct effect changes when the mediating variable of social support is included is referred to as the mediational effect.
• Testing for mediation involves running a series of regression analyses for all of the causal pathways and some method of estimating the change in the direct effect. This technique is built into structural equation models that include mediator variables and will be discussed later in this document.
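The mediational effect described above can be illustrated with two regressions on synthetic data: the total effect of stress alone, and the direct effect once the mediator is included (the coefficients here are invented for illustration):

```python
import numpy as np

# Synthetic causal chain (coefficients invented): stress lowers social
# support, and support raises adjustment, alongside a direct effect.
rng = np.random.default_rng(1)
n = 1000
stress = rng.normal(size=n)
support = -0.6 * stress + rng.normal(scale=0.5, size=n)
adjustment = -0.4 * stress + 0.7 * support + rng.normal(scale=0.5, size=n)

def ols(y, *cols):
    # Least-squares fit with an intercept; returns the coefficient vector.
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

total = ols(adjustment, stress)[1]            # total effect of stress, about -0.82
direct = ols(adjustment, stress, support)[1]  # direct effect with the mediator, about -0.4
print(total - direct)                         # mediational (indirect) effect, about -0.42
```

The gap between the two stress coefficients is the mediational effect: here it matches the product of the two indirect paths, (-0.6) x 0.7 = -0.42.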
Slide 73: Moderation and Mediation
• In many respects, moderation and mediation models are the foundation of structural equation modeling.
• In fact, they can be considered simple structural equation models themselves.
• It is therefore very important to understand how to analyze such models before tackling more complex structural equation models that include latent variables.
Slide 74: Covariance and Correlation
• Covariance and correlation are the building blocks of how data are represented in the model specification that implements structural equation modeling.
• One should know how to obtain a correlation or covariance matrix, for example using PROC CORR in SAS.
• In practice, the covariance matrix serves as the dataset to be analyzed.
• In the context of SEM, covariances and correlations between variables are essential because they allow one to include a relationship between two variables that is not necessarily causal.
• In practice, most structural equation models contain both causal and non-causal relationships. Obtaining covariance estimates between variables allows one to better estimate direct and indirect effects involving other variables, particularly in complex models with many parameters to be estimated.
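The slides use PROC CORR in SAS; the same covariance and correlation matrices can be sketched in NumPy for illustration (the data values are invented):

```python
import numpy as np

# Five cases measured on three observed variables (data invented).
data = np.array([
    [2.0, 4.0, 1.0],
    [3.0, 5.0, 2.0],
    [4.0, 4.0, 2.0],
    [5.0, 7.0, 3.0],
    [6.0, 8.0, 3.0],
])

# Variables are in columns, so rowvar=False; np.cov uses the n-1 divisor.
cov = np.cov(data, rowvar=False)
corr = np.corrcoef(data, rowvar=False)
print(cov)
print(corr)
```

Either matrix summarizes the pairwise relationships; SEM fitting then asks whether the model-implied covariance matrix reproduces this sample matrix.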
Slide 75: Structural model
• A structural model is the part of the entire structural equation model diagram that one completes for every proposed model.
• It is used to relate all of the variables (both latent and manifest) that the model needs to account for.
• There are a few important rules to follow when creating a structural model.
Slide 76: Measurement model
• A measurement model is the part of the entire structural equation model diagram that one completes for every proposed model.
• It is essential if there are latent variables in the model.
• This part of the diagram is analogous to factor analysis:
  • One needs to include all individual items, variables, or observations that "load" onto the latent variable, along with their relationships, variances, and errors.
• There are a few important rules to follow when creating the measurement model, which we will discuss later in this document.
Slide 77: Finally
• Together, the structural model and the measurement model form the entire structural equation model.
• This model includes everything that has been measured, observed, or otherwise manipulated in the set of variables examined.
• A recursive structural equation model is one in which causation is directed in a single direction.
• A nonrecursive structural equation model has causation flowing in both directions at some parts of the model.
Slide 78: Research questions suitable for SEM
• SEM can conceptually be used to answer any research question involving the indirect or direct observation of one or more independent variables and one or more dependent variables.
• However, the primary goal of SEM is to determine and validate a proposed causal process and/or model.
• Therefore, SEM is a confirmatory technique.
• Like any other test or model, we have a sample and want to say something about the population from which the sample is drawn. We have a covariance matrix, based on the sample of collected measurements, to serve as our dataset.
• The empirical question of SEM is therefore whether the proposed model produces a population covariance matrix that is consistent with the sample covariance matrix.
• Because one must specify a priori a model that will undergo validation testing, there are many questions SEM can answer.
Slide 79: Limitations and Assumptions of SEM
• Because SEM is a confirmatory technique, one must plan accordingly.
  • One must specify a full model a priori and test that model based on the sample and the variables included in the measurements.
  • One must know the number of parameters to estimate, including covariances, path coefficients, and variances.
  • One must know all the relationships one wants to specify in the model. Then, and only then, can one begin analyses.
• Because SEM can model complex relationships between multivariate data, sample size is an important (but unfortunately underemphasized) issue.
  • Two popular rules of thumb are that one needs more than 200 observations, or at least 50 more than 8 times the number of variables in the model.
  • A larger sample size is always desirable for SEM.
Slide 80: Limitations and assumptions (contd.)
• Like other multivariate statistical methodologies, most of the estimation techniques used in SEM require multivariate normality.
  • Data need to be examined for univariate and multivariate outliers, and transformations on the variables can be made. However, there are some estimation methods that do not require normality.
• SEM techniques only examine first-order (linear) relationships between variables.
  • Linear relationships can be explored by creating bivariate scatterplots for all of your variables. Power transformations can be made if a relationship between two variables seems quadratic.
Slide 81: Limitations and assumptions (contd.)
• Multicollinearity among the independent variables can be an issue for manifest variables.
  • Most programs will inspect the determinant of a section of the covariance matrix, or of the whole covariance matrix.
  • A very small determinant may be indicative of extreme multicollinearity.
• The residuals of the covariances (not residual scores) need to be small and centered about zero.
  • Some goodness-of-fit tests (like the Lagrange multiplier test) remain robust against highly deviated or non-normal residuals.
Slide 82: Preparing data for analysis
• Assuming one has checked model assumptions, dealt with missing data, and imported the data into SAS, one should obtain a covariance matrix for the dataset.
  • In SAS, you can run PROC CORR and create an output file that contains the covariance matrix.
  • Alternatively, you can manually type in the covariance matrix or import it from a spreadsheet.
  • One must declare that the dataset is a covariance matrix; in SAS, for example, the data step would begin with data name (type=COV);
  • Care must be taken to ensure the decimal points of the entries are aligned.
Slide 83: Developing model hypothesis and specifications
• Often, the most difficult part of SEM is correctly specifying the model.
• One must know exactly the number of latent and manifest variables in the model, the number of variances and covariances to be calculated, and the number of parameters being estimated.
Slide 84: Building the model
• The most important part of an SEM analysis is the causal model. The following basic, general rules are used:
• Rule 1. Latent variables/factors are represented with circles, and measured/manifest variables are represented with squares.
• Rule 2. A line with an arrow in one direction shows a hypothesized direct relationship between two variables. It should originate at the causal variable and point to the variable that is caused. Absence of a line indicates that there is no causal relationship between the variables.
Slide 85: Building the model (contd.)
• Rule 3. A line with arrows in both directions should be curved; this represents a bi-directional relationship (i.e., a covariance).
  • Rule 3a. Covariance arrows should only be allowed for exogenous variables.
Slide 86: Building the model (contd.)
• Rule 4. For every endogenous variable, a residual term should be added to the model. Generally, a residual term is a circle with the letter E written in it, which stands for error.
  • Rule 4a. For latent variables that are also endogenous, the residual term is not called error in the lingo of SEM; it is called a disturbance. The "error term" here would therefore be a circle with a D written in it, standing for disturbance.
Slide 87: SEM
• SEM is a useful method for building causal relationship models.
• It is the next step after factor analysis (EFA and CFA).
• The reading posted on Canvas is very useful for practicing and understanding the details of PROC CALIS.
Slide 88: You have reached the end of the presentation.