Minería de Datos en Sql Server 2008

Business Intelligence in SQL Server 2008 and Data Mining.

Ing. Eduardo Castro Martinez, PhD
Microsoft SQL Server MVP
http://ecastrom.blogspot.com
http://comunidadwindows.org


Slide notes
  • Data Mining Office Add-ins were introduced with SQL Server 2005, and a new version is available for SQL Server 2008 to take advantage of the improvements made to Analysis Services data mining. In this module, we’ll review how to use the Data Mining Add-ins, and then examine the changes made to mining structures as well as the new Time Series algorithm.
  • Data Mining Add-ins for Office allow you to perform a variety of data mining tasks. You can prepare data by applying data cleansing, and you can partition the data into training and test sets. Some of the add-in tools are focused on exploring your data, while other tools are built specifically for prediction purposes. The add-ins also include functionality for testing and validating each model. Point out that the add-ins are also useful as a client viewer for data mining models developed on the server.
  • This slide shows the data preparation tasks: explore data (to find anomalies), clean data (to handle outliers or erroneous data), and partition data to separate it into training and test sets. In the background is a view used to consolidate information from several tables. Transformations have been applied to enforce business rules. This logical table is then used as the source for data mining activities, whether using the add-ins or using BI Development Studio.
  • This slide identifies the exploration-oriented table analysis tools and the data mining algorithm associated with each tool.
  • This slide identifies the exploration-oriented data modeling tools and the data mining algorithm associated with each tool.
  • Model viewers are available not only for mining data models created by using the add-in, but also for mining models created on the server.
  • This slide shows the predictive tools and the algorithm related to each.
  • Prediction tools are also available in the Data Modeling ribbon of Excel. Here you see the algorithm associated with these predictive tools.
  • The Data Mining add-in also includes model testing and validation tools, such as an accuracy chart, a classification matrix, and a profit chart. Cross-validation is also new to Analysis Services data mining and will be discussed in more detail later in this module.
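  • As a rough illustration of how the profit chart combines its inputs (the numbers here are hypothetical, not from the deck): profit at p% of the population contacted = responders reached × revenue per individual − fixed cost − people contacted × individual cost. With a population of 50,000, a fixed cost of $5,000, an individual cost of $3, and revenue of $15 per responder, contacting the top 20% (10,000 people) and reaching, say, 2,800 responders yields 2,800 × $15 − $5,000 − 10,000 × $3 = $7,000. The chart plots this value across all percentages and reports the maximum profit and the corresponding probability threshold.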
  • In this section, we’ll review the improvements for mining structures in SSAS 2008. Specifically, we’ll look at setting up data partitions for training and testing data, how to use aliases with mining model columns, how to apply filters to data associated with a mining model, how to drill through to details when studying data mining results, and how to use the cross-validation report to assess the accuracy of a model or to compare multiple models to find the best model.
  • To create training and testing sets using random data for SSAS 2005, best practice was to use the Random Sample transformation in SSIS 2005. However, the package design was particularly cumbersome for structures with nested tables. In SSAS 2008, the process to generate random data sets for training and testing is built in. You can specify parameters for partitioning data into training and testing sets in the Data Mining Wizard or in the Properties pane of the mining structure. Analysis Services uses a random sampling algorithm to assign data to either the training or the testing data set. If you provide both a percentage and a maximum number of rows, the smaller value prevails. For example, you can specify 30% of the entire data set, not to exceed 1,000 rows if the data source continues to grow. When using the same data source view for multiple mining structures, you might want to keep the same partitioning strategy for each mining structure; set the HoldoutSeed property to the same value in each structure to yield comparable results in the training and testing data sets. You can also define partitioning using DMX, AMO, or XML DDL. Point out that partitioning is not available for a model using the Time Series algorithm.
  • For those who prefer to use DMX to create mining structures instead of the user interface, DMX now supports partitioning when the mining structure is created. Point out that HOLDOUT cannot be used with ALTER MINING STRUCTURE. The process to train the model, using INSERT INTO MINING STRUCTURE, is unchanged: the query executes, the data is randomly sampled, and a holdout store is created for each partition of the mining structure. In SSAS 2008, you can now query the structure to view the contents of the training and testing data sets.
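  • A minimal DMX sketch of the above, using a hypothetical [Targeted Mailing] structure (column names are illustrative, not from the deck; REPEATABLE pins the holdout seed):

    CREATE MINING STRUCTURE [Targeted Mailing] (
        [Customer Key] LONG KEY,
        [Age] LONG CONTINUOUS,
        [Gender] TEXT DISCRETE,
        [Bike Buyer] LONG DISCRETE
    )
    WITH HOLDOUT (30 PERCENT OR 1000 CASES REPEATABLE(42))

    // After INSERT INTO MINING STRUCTURE has populated the structure,
    // the two partitions can be inspected directly:
    SELECT * FROM MINING STRUCTURE [Targeted Mailing].CASES WHERE IsTrainingCase()
    SELECT * FROM MINING STRUCTURE [Targeted Mailing].CASES WHERE IsTestCase()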
  • In SSAS 2005, you could change the name of a mining model column in Business Intelligence Development Studio, but not in DMX. One reason you might want to alias a column is when you want to use the same column with different algorithms, but one algorithm supports continuous columns and the other does not. You can add a column to the mining structure more than once and set the Content property to a different value for each version of the column. Ignore the column in the model where the content type is unsupported, and include it as an input column in models supporting that content type. By enabling the use of an alias, you can use the same NATURAL PREDICTION JOIN for the models in the same mining structure because input columns are bound by name to the model column.
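  • A hedged DMX sketch of aliasing, assuming the hypothetical [Targeted Mailing] structure above also contains a discretized copy of Age named [Age Disc] (all model and column names are illustrative):

    // The Naive Bayes model reuses the discretized structure column under the alias [Age]
    ALTER MINING STRUCTURE [Targeted Mailing]
    ADD MINING MODEL [TM Naive Bayes] (
        [Customer Key],
        [Age Disc] AS [Age],
        [Bike Buyer] PREDICT
    ) USING Microsoft_Naive_Bayes

    // Because every model exposes an input column named [Age], the same
    // NATURAL PREDICTION JOIN works against any model in the structure:
    SELECT Predict([Bike Buyer])
    FROM [TM Naive Bayes]
    NATURAL PREDICTION JOIN
    (SELECT 35 AS [Age]) AS t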
  • Instead of creating separate data source views for your mining structure, you can create separate filtered models. Each model contains the same training and testing data, which allows you to compare model results. Why create filtered models? To achieve better overall accuracy by eliminating strong patterns of one attribute value (for example, North America versus Pacific), or to compare patterns in isolated subsets of data. You can create filters in the Model Filter dialog box or in the Properties pane of the mining model. In the case of discretized values, the bucket containing the specified value is selected; for example, Age = 23 returns the bucket containing ages 20-25. An example of a filter expression for a case table and a nested table: Gender = 'M' AND EXISTS(SELECT * FROM Products WHERE Model = 'Water Bottle'). Point out that NOT EXISTS is also valid. Mention the URL on the Resources slide for more information about filter syntax. You must process the mining structure to see the filter applied to the model.
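  • A minimal DMX sketch of adding a filtered model to the same hypothetical structure (the nested [Products] table and the model name are illustrative; the filter can reference structure columns that are not model columns):

    ALTER MINING STRUCTURE [Targeted Mailing]
    ADD MINING MODEL [TM Trees Male Water Bottle] (
        [Customer Key],
        [Age],
        [Bike Buyer] PREDICT
    ) USING Microsoft_Decision_Trees
    WITH FILTER([Gender] = 'M'
        AND EXISTS(SELECT * FROM [Products] WHERE [Model] = 'Water Bottle'))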
  • Mention that using drillthrough in a filtered model returns all cases matching the filter, whether used for training or testing.
  • As in SSAS 2005, the following algorithms do not support drill through: Naïve Bayes, Neural Network, and Logistic Regression. The Time Series algorithm supports drill through in a DMX query only; drill through is not supported in Business Intelligence Development Studio.
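  • A short DMX sketch of drilling through to a column that lives only in the structure (model and column names are hypothetical, and the model must allow drillthrough):

    // Returns the model's cases plus the structure-only email column
    SELECT [Customer Key], [Age], StructureColumn('Customer Email') AS [Email]
    FROM [TM Decision Tree].CASES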
  • Using parameters you specify, cross-validation automatically creates partitions of the data set of approximately equal size. For each partition, a mining model is trained on the entire data set with that partition removed, and then tested for accuracy using the partition that was excluded. If the variations across partitions are small, the model generalizes well; if there is too much variation, the model is not useful. Point out that cross-validation cannot be used with models built using the Time Series or Sequence Clustering algorithms. You can use the Cross Validation report in the Mining Accuracy Chart of Business Intelligence Development Studio, or use Analysis Services stored procedures to run an ad hoc cross-validation in SQL Server Management Studio.
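  • A hedged sketch of the ad hoc call from SQL Server Management Studio (structure, model, and target names are hypothetical; verify the exact argument list of the system stored procedure against Books Online):

    CALL SystemGetCrossValidationResults(
        [Targeted Mailing],      // mining structure
        [TM Decision Tree],      // model(s) to validate
        10,                      // fold count
        0,                       // max cases (0 = all)
        'Bike Buyer',            // target attribute
        NULL,                    // target state (NULL = test all states)
        NULL                     // target threshold (NULL = most probable prediction counts as correct)
    )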
  • More folds result in longer processing time.
  • This slide and the next outline the types of tests and their respective measures that are found on the cross-validation report. Different models will use different test types for this report. Point out that the report can be generated in Business Intelligence Development Studio, which will be shown in the demonstration, or by calling an Analysis Services stored procedure.
  • Data mining in SSAS 2008 was also improved by modifying the Time Series algorithm. In this section, we’ll review how time series support has been improved and look at the algorithm parameters for the Time Series algorithm.
  • In SSAS 2005, the ARTxp Time Series prediction algorithm (autoregressive tree model for multiple prior unknown states), built by Microsoft Research, was introduced. The purpose of this algorithm was to tackle a difficult business problem: how to accurately predict the next step in a series. It was less reliable for predicting 10 steps or further out. ARIMA (autoregressive integrated moving average) is a very common time series algorithm that is well understood by seasoned data miners. It provides good predictions when projecting beyond the next 10 steps. In SSAS 2008, the Microsoft Time Series algorithm blends the results of the two algorithms to leverage their short-term and long-term capabilities. In Standard Edition, you can configure your model to use one algorithm or the other, or both (which is the default). In Enterprise Edition, you can apply custom weighting to get the best prediction over a variable time span.
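  • For reference, a minimal DMX forecast query requesting the next five steps from a hypothetical time series model (model and column names are illustrative):

    // Request the next 5 predicted values of [Quantity] for each series
    SELECT [Model Region], PredictTimeSeries([Quantity], 5)
    FROM [Sales Forecast]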
  • The FORECAST_METHOD default value is MIXED. You can change this to ARIMA or ARTXP to use a single algorithm exclusively. The PREDICTION_SMOOTHING parameter affects the weighting of the ARTxp and ARIMA algorithms when MIXED mode is used. A value closer to 0 weights in favor of ARTxp, while a value closer to 1 weights in favor of ARIMA. For example, a value of 0.8 is weighted towards ARIMA, and the remaining 0.2 is used for ARTxp.
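  • A hedged DMX sketch of setting these parameters when a model is added to a hypothetical [Forecasting] structure (all names are illustrative):

    ALTER MINING STRUCTURE [Forecasting]
    ADD MINING MODEL [Sales Forecast] (
        [Time Key],
        [Model Region],
        [Quantity] PREDICT
    ) USING Microsoft_Time_Series (FORECAST_METHOD = 'MIXED', PREDICTION_SMOOTHING = 0.8)
    // 0.8 weights the blended forecast toward ARIMA (longer-term) and leaves 0.2 for ARTxp;
    // custom weighting is the Enterprise Edition capability mentioned above.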
  • Transcript of "Minería de Datos en Sql Server 2008"

Data Mining in SQL Server 2008
  Ing Eduardo Castro
  GrupoAsesor en Informática
  ecastro@grupoasesor.net

Eduardo Castro
  ecastro@grupoasesor.net
  MCITP Server Administrator
  MCTS Windows Server 2008 Active Directory
  MCTS Windows Server 2008 Network Infrastructure
  MCTS Windows Server 2008 Applications Infrastructure
  MCITP Enterprise Support
  MCTS Windows Vista
  MCITP Database Developer
  MCITP Database Administrator
  MCTS SQL Server
  MCITP Exchange Server 2007
  MCTS Office PerformancePoint Server
  MCTS Team Foundation Server
  MCPD Enterprise Application Developer
  MCTS .Net Framework 2.0: Distributed Applications
  MCT 2008
  International Association of Software Architects Chapter Leader
  IEEE Communications Society Board of Directors
  European Datawarehouse Research

Disclaimer
  The information contained in this slide deck represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication.
  This slide deck is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS DOCUMENT.
  Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this slide deck may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.
  Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this slide deck. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this slide deck does not give you any license to these patents, trademarks, copyrights, or other intellectual property.
  Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, email address, logo, person, place or event is intended or should be inferred.
  © 2008 Microsoft Corporation. All rights reserved.
  Microsoft, SQL Server, Office System, Visual Studio, SharePoint Server, Office PerformancePoint Server, .NET Framework, ProClarity Desktop Professional are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
  The names of actual companies and products mentioned herein may be the trademarks of their respective owners.

Overview
  Introducing Data Mining Office Add-Ins
  Understanding Data Mining Structure Improvements
  Using the New Time Series Algorithm

Introducing Data Mining Office Add-Ins
  Data Preparation Tasks
  Tools for Exploration
  Tools for Prediction
  Model Testing and Validation

Data Preparation Tasks

Tools for Exploration - Table Analysis Tools

Tools for Exploration - Data Modeling Tools
Tools for Exploration – Model Viewers
  Cluster Diagram
    Distribution of population
    Strength of similarities between clusters
  Other viewers:
    Decision tree
    Neural network
    Association rules
    Time series
  Cluster Profiles
    Distribution of values for each attribute
    Drill through to details
  Cluster Characteristics
    Attributes ordered by importance to cluster
    Probability of attribute appearing in cluster
  Cluster Discrimination
    Comparison of attributes between two clusters

Tools for Prediction - Table Analysis Tools
Tools for Prediction - Data Modeling Tools

Model Testing and Validation
  Accuracy Chart
    Measurement of model accuracy
    Lift chart comparing actual results to random guess and to perfect prediction
  Classification Matrix
    Shows correct and incorrect predictions
    Displays percentage and counts
  Profit Chart
    Estimation of profit by percentage of population contacted
    Input: population, fixed cost, individual cost, revenue per individual
    Output: maximum profit, probability threshold
  Cross Validation – more on this later
Demo 1: Using the Data Mining Excel Add-In

Understanding Data Mining Structure Improvements
  Data Partitioning for Training and Testing
  Mining Model Column Aliases
  Data Mining Filters
  Drill Through to Mining Structure Data
  Cross-Validation of a Mining Model

Data Partitioning for Training and Testing
  Specify as percentage or maximum number of cases
  Smaller value is used if both parameters specified
  Data is divided randomly between training and testing
  HoldoutSeed property enables consistent partitions across structures

Data Partitioning with DMX
  Create a structure with partitioning with the HOLDOUT keyword
  Query the structure to review partitions

Mining Model Column Aliases
  Assign a column alias to reuse a column in a structure
  Column content can be clarified
  Column can be more easily referenced in DMX
  Continuous and discretized versions of the same column can be used in separate models in the same structure

Data Mining Filters
  Specify a condition to apply to mining structure columns
  Filter creates subsets of training and testing data for a model
  Multiple conditions can be linked with AND/OR operators
  Conditions for continuous values use >, >=, <, <= operators
  Conditions for discrete values use =, !=, or IS NULL operators
  Conditions on nested tables can use EXISTS keyword and subquery

Data Mining Filters with DMX
  Add a filtered mining model to a structure

Drill Through to Mining Structure Data
  Add columns to the mining structure, but not to models
  Eliminates unnecessary data from model and improves processing time
  Supports drill through from mining model viewer or DMX for visibility into results

Cross-Validation of a Mining Model
  Purpose
    Validate the accuracy of a single model
    Compare models within the same mining structure
  Process
    Split mining structure into partitions of equal size
    Iteratively build models on all partitions excluding one partition such that all partitions are excluded once
    Measure accuracy of each model using the excluded partition
    Analyze results

Cross-Validation Parameters
  Fold Count
    Number of partitions to use
    Minimum 2, maximum 256
    Maximum 10 for a session mining structure
  Max Cases
    Total number of cases to include in cross-validation
    Cases divided across folds
    Value of 0 specifies all cases
  Target Attribute
    Predictable column
  Target State
    Target value for the target attribute
    Value of null specifies all states are to be tested
  Target Threshold
    Value between 0 and 1 for prediction probability above which a predicted state is considered correct
    Value of null specifies most probable prediction is considered correct
Cross-Validation Report

Cross-Validation Report

Demo 2: Creating a Clustering Model
Using the New Time Series Algorithm
  Better Time Series Support
  Time Series Algorithm Parameters

Better Time Series Support
  ARTxp algorithm
    Still included in Microsoft Time Series algorithm
    Best for prediction of next likely value in a series
  ARIMA algorithm
    Added to Microsoft Time Series algorithm
    Best for long-term predictions
  The new Microsoft Time Series algorithm
    Trains one model using ARTxp and a second model using ARIMA
    Blends the results to return the best prediction

Time Series Algorithm Parameters

Resources
  Model Filter Syntax and Examples, technet.microsoft.com/en-us/library/bb895186(SQL.100).aspx
  Cross-Validation, msdn2.microsoft.com/en-us/library/bb895174(SQL.100).aspx
  SQL Server Data Mining, www.sqlserverdatamining.com
  Jamie MacLennan’s blog, blogs.msdn.com/jamiemac/default.aspx
© 2008 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries.
The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.
