XL-MINER: Data Exploration


- 1. Introduction to XLMiner™: Data Reduction and Exploration. XLMiner and Microsoft Office are registered trademarks of their respective owners.
- 2. Data Exploration and Reduction
  Data exploration and reduction is used when the data set to be mined is very large and may contain many variables that are highly correlated with each other, or unrelated to the outcome of interest. Using the tools in XLMiner, you can reduce the size of the data set, or explore it to formulate hypotheses worth testing. There are two techniques for this purpose:
  Principal Component Analysis: PCA is a mathematical procedure that transforms a number of correlated variables into a smaller number of uncorrelated variables, called principal components. The resulting data set has fewer variables but preserves most of the variability of the data, since the first principal component captures the maximum amount of variation and each subsequent component captures progressively less.
  Cluster Analysis: cluster analysis is also called data segmentation. Its primary objective is to assign objects to clusters such that objects within a cluster are markedly similar and objects in different clusters are markedly different.
  http://dataminingtools.net
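The PCA transformation described above can be sketched in a few lines. This is an illustrative NumPy implementation, not XLMiner's own; the function and variable names are invented for the example.

```python
# Illustrative PCA sketch: transform correlated variables into
# uncorrelated principal components via the covariance matrix.
import numpy as np

def pca(X, n_components):
    """Project X onto its first n_components principal components."""
    Xc = X - X.mean(axis=0)               # center each variable
    cov = np.cov(Xc, rowvar=False)        # covariance matrix of the variables
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]     # sort components by variance captured
    components = eigvecs[:, order[:n_components]]
    explained = eigvals[order] / eigvals.sum()  # fraction of variance per component
    return Xc @ components, explained

# Two strongly correlated variables collapse well onto one component.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
X = np.column_stack([x, 2 * x + rng.normal(scale=0.1, size=100)])
scores, explained = pca(X, 1)
```

Because the second variable is almost a linear function of the first, the first principal component accounts for nearly all of the variance, which is exactly the situation where reducing the data set to fewer variables loses little information.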
- 3. Data Exploration and Reduction: Principal Component Analysis
- 4. Data Exploration and Reduction
  Fixed #components: specify a fixed number of principal components here.
  Smallest #components explaining: this option lets you specify a percentage, and XLMiner will calculate the minimum number of principal components required to account for that percentage of variance. It is not selected in this example.
- 5. Data Exploration and Reduction: Output
- 6. Data Exploration and Reduction: Cluster Analysis
  Cluster analysis can be done in two ways:
  1. k-Means Clustering: the procedure starts from k initial cluster centers. Each object is assigned to the cluster with the nearest center, the centers are then recomputed as the means of their clusters, and the two steps repeat until the assignments stabilize.
  2. Hierarchical Cluster Analysis: hierarchical clustering itself can be done in two ways, agglomerative and divisive. In agglomerative clustering, as the name suggests, distinct objects are successively combined into groups of similar objects. In divisive clustering, the data set is successively split into finer groups.
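The two k-means steps described above (assign, then update) can be sketched in plain Python. This is a minimal illustration, not XLMiner's implementation, which adds multiple starts, seeds, and distance reporting.

```python
# Minimal k-means sketch: alternate between assigning points to the
# nearest centroid and moving each centroid to its cluster's mean.
import random

def kmeans(points, k, iterations=10, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # pick k initial centers from the data
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = tuple(sum(c) / len(members) for c in zip(*members))
    return centroids, clusters

# Two well-separated groups of points, divided into k = 2 clusters.
points = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
          (5.0, 5.0), (5.1, 4.9), (4.9, 5.1)]
centroids, clusters = kmeans(points, 2)
```

The seed parameter mirrors the idea behind XLMiner's "number of starts and seed" options: k-means can converge to different answers from different starting centroids, so fixing the seed makes a run reproducible and multiple starts let the best run be kept.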
- 7. Data Exploration and Reduction: k-Means Clustering
  Select the variables to be used as input. Deselect any variable that serves only as a label rather than an input (here, the TYPE variable).
- 8. Data Exploration and Reduction: k-Means Clustering
  Enter the number of clusters you want the data set to be divided into, and the number of iterations to be performed while forming the clusters. You may also specify the number of starts and a random seed.
- 9. Data Exploration and Reduction: k-Means Clustering (Output)
  XLMiner calculates the sum of squared distances for each start and chooses the smallest value as the best starting point.
- 10. Data Exploration and Reduction: k-Means Clustering (Output)
  This shows the distance of each row from each cluster. Note how every row is assigned to the cluster to which its distance is least.
- 11. Data Exploration and Reduction: Hierarchical Clustering
  In divisive hierarchical clustering, the mean of all the values is calculated and the data set is split into two around it; the mean of each resulting subset is then calculated and the subset split in turn. This process continues until the required number of clusters is formed. In agglomerative clustering, the process runs in the opposite direction: distinct objects are successively combined into groups of similar objects.
- 12. Data Exploration and Reduction: Hierarchical Clustering
- 13. Data Exploration and Reduction: Hierarchical Clustering
  Select "Normalize data" and then select one of the five clustering procedures available.
- 14. Data Exploration and Reduction: Hierarchical Clustering
  This output details the history of cluster formation. Initially, each individual case is considered its own cluster (with just itself as a member), so we start with # clusters = # cases (21 in the example above). At stage 1, clusters (i.e. cases) 10 and 13 were found to be closer together than any other two clusters, so they were joined into a cluster called cluster 10. We now have one cluster with two cases (cases 10 and 13) and 19 other clusters that still have just one case each. At stage 2, clusters 7 and 12 are found to be closer together than any other two clusters, so they are joined into cluster 7. The cluster ID is thus the lowest case number of the cases belonging to that cluster. This process continues until there is just one cluster; at various stages of the process there are different numbers of clusters, and a graph called a dendrogram lets you visualize this:
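The merge history described above (closest pair merged first, merged cluster labelled with its lowest case number) can be traced with a small sketch. This is an illustrative single-linkage agglomerative procedure on one-dimensional values, not XLMiner's code, and the input data is invented.

```python
# Sketch of agglomerative (single-linkage) clustering: repeatedly merge
# the two closest clusters, labelling each merged cluster with the
# lowest case number it contains, and record the merge history.

def agglomerate(values):
    """Merge the two closest clusters until one remains; return the history."""
    # Start with one cluster per case, identified by its 1-based case number.
    clusters = {i + 1: [v] for i, v in enumerate(values)}
    history = []
    while len(clusters) > 1:
        ids = sorted(clusters)
        # Find the pair of clusters with the smallest single-linkage
        # distance (closest pair of members across the two clusters).
        a, b = min(
            ((p, q) for i, p in enumerate(ids) for q in ids[i + 1:]),
            key=lambda pair: min(
                abs(x - y) for x in clusters[pair[0]] for y in clusters[pair[1]]
            ),
        )
        # The merged cluster keeps the lower of the two IDs, as in the
        # XLMiner output: the cluster ID is the lowest member case number.
        clusters[a].extend(clusters.pop(b))
        history.append((a, b))
    return history

# Five cases: 1 and 2 are closest, then 3 and 4, and 5 is an outlier.
history = agglomerate([1.0, 1.1, 5.0, 5.2, 9.0])
```

Each entry in the history is one stage of the output table: cases 1 and 2 merge first into cluster 1, cases 3 and 4 merge into cluster 3, and so on until a single cluster remains, which is exactly the structure a dendrogram draws.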
- 15. Data Exploration and Reduction: Hierarchical Clustering
- 16. Data Exploration and Reduction: Hierarchical Clustering
  This shows the assignment of cases to clusters (we selected 8 clusters).
- 17. Thank you
  For more, visit: http://dataminingtools.net
- 18. Visit more self-help tutorials
  Pick a tutorial of your choice and browse through it at your own pace. The tutorials section is free, self-guiding, and does not involve any additional support. Visit us at www.dataminingtools.net
